2026-03-09T20:33:33.065 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T20:33:33.074 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T20:33:33.102 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/647
branch: squid
description: orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client
  mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade
  ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef}
  1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse
  3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}}
email: null
first_in_suite: false
flavor: default
job_id: '647'
last_in_suite: false
machine_type: vps
meta:
- desc: 'setup ceph/reef '
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    cluster-conf:
      mgr:
        client mount timeout: 30
        debug client: 20
        debug mgr: 20
        debug ms: 1
        mon warn on pool no app: false
    conf:
      client:
        client mount timeout: 600
        debug client: 20
        debug ms: 1
        rados mon op timeout: 900
        rados osd op timeout: 900
      global:
        mon pg warn min per osd: 0
      mds:
        debug mds: 20
        debug mds balancer: 20
        debug ms: 1
        mds debug frag: true
        mds debug scatterstat: true
        mds op complaint time: 180
        mds verify scatter: true
        osd op complaint time: 180
        rados mon op timeout: 900
        rados osd op timeout: 900
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon down mkfs grace: 300
        mon op complaint time: 120
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore allocator: bitmap
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug ms: 1
        debug osd: 20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd mclock iops capacity threshold hdd: 49000
        osd objectstore: bluestore
        osd op complaint time: 180
    flavor: default
    fs: xfs
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - FS_DEGRADED
    - filesystem is degraded
    - FS_INLINE_DATA_DEPRECATED
    - FS_WITH_FAILED_MDS
    - MDS_ALL_DOWN
    - filesystem is offline
    - is offline because no MDS
    - MDS_DAMAGE
    - MDS_DEGRADED
    - MDS_FAILED
    - MDS_INSUFFICIENT_STANDBY
    - MDS_UP_LESS_THAN_MAX
    - online, but wants
    - filesystem is online with fewer MDS than max_mds
    - POOL_APP_NOT_ENABLED
    - do not have an application enabled
    - overall HEALTH_
    - Replacing daemon
    - deprecated feature inline_data
    - MGR_MODULE_ERROR
    - OSD_DOWN
    - osds down
    - overall HEALTH_
    - \(OSD_DOWN\)
    - \(OSD_
    - but it is still running
    - is not responding
    - MON_DOWN
    - PG_AVAILABILITY
    - PG_DEGRADED
    - Reduced data availability
    - Degraded data redundancy
    - pg .* is stuck inactive
    - pg .* is .*degraded
    - pg .* is stuck peering
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    bluestore: true
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd objectstore: bluestore
    fs: xfs
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  kclient:
    syntax: v1
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  thrashosds:
    bdev_inject_crash: 2
    bdev_inject_crash_probability: 0.5
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - client.0
  - osd.0
  - osd.1
  - osd.2
- - host.b
  - client.1
  - osd.3
  - osd.4
  - osd.5
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm07.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeOWpayEHUwW7KIeR5GVTTrr6pDRNpZ3+JNDPQURFPp7d+1QanHBaajlXS4fMGXRkfPNSBjvppD6aBOmSslnfc=
  vm10.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHkPxFJCEv7RCtpJvkRxYpFqQDoka8PTZ3iL5dBIg1oXMAKfwZjbd/E3qPl2ibQJxoKyLZ3M2yta7lGT5j3uWFE=
tasks:
- install:
    branch: reef
    exclude_packages:
    - ceph-volume
- print: '**** done install task...'
- cephadm:
    compiled_cephadm_branch: reef
    conf:
      osd:
        osd_class_default_list: '*'
        osd_class_load_list: '*'
    image: quay.ceph.io/ceph-ci/ceph:reef
    roleless: true
- print: '**** done end installing reef cephadm ...'
- cephadm.shell:
    host.a:
    - ceph config set mgr mgr/cephadm/use_repo_digest true --force
- print: '**** done cephadm.shell ceph config set mgr...'
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
- cephadm.shell:
    host.a:
    - ceph fs volume create cephfs --placement=4
    - ceph fs dump
- cephadm.shell:
    host.a:
    - ceph fs set cephfs max_mds 2
- cephadm.shell:
    host.a:
    - ceph fs set cephfs allow_standby_replay true
- cephadm.shell:
    host.a:
    - ceph fs set cephfs inline_data false
- cephadm.shell:
    host.a:
    - ceph fs dump
    - ceph --format=json fs dump | jq -e ".filesystems | length == 1"
    - while ! ceph --format=json mds versions | jq -e ". | add == 4"; do sleep 1; done
- fs.pre_upgrade_save: null
- ceph-fuse: null
- print: '**** done client'
- parallel:
  - upgrade-tasks
  - workload-tasks
- cephadm.shell:
    host.a:
    - ceph fs dump
- fs.post_upgrade_checks: null
teuthology:
  fragments_dropped:
  - /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/suites/orch/cephadm/mds_upgrade_sequence/tasks/3-upgrade-mgr-staggered.yaml
  meta: {}
  postmerge:
  - "local kernel = py_attrgetter(yaml).get('kernel')\nif kernel ~= nil then\n local\
    \ branch = py_attrgetter(kernel).get('branch')\n if branch and not kernel.branch:find\
    \ \"-all$\" then\n log.debug(\"removing default kernel specification: %s\"\
    , kernel)\n py_attrgetter(kernel).pop('branch', nil)\n py_attrgetter(kernel).pop('deb',\
    \ nil)\n py_attrgetter(kernel).pop('flavor', nil)\n py_attrgetter(kernel).pop('kdb',\
    \ nil)\n py_attrgetter(kernel).pop('koji', nil)\n py_attrgetter(kernel).pop('koji_task',\
    \ nil)\n py_attrgetter(kernel).pop('rpm', nil)\n py_attrgetter(kernel).pop('sha1',\
    \ nil)\n py_attrgetter(kernel).pop('tag', nil)\n end\nend\n"
  variables:
    fail_fs: false
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
upgrade-tasks:
  sequential:
  - cephadm.shell:
      env:
      - sha1
      host.a:
      - ceph config set mgr mgr/orchestrator/fail_fs false || true
  - cephadm.shell:
      env:
      - sha1
      host.a:
      - ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force
      - ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force
      - ceph config set global log_to_journald false --force
      - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1
  - cephadm.shell:
      env:
      - sha1
      host.a:
      - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch
        upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions
        ; ceph fs dump; ceph orch upgrade status ; ceph health detail ; sleep 30 ;
        done
      - ceph orch ps
      - ceph orch upgrade status
      - ceph health detail
      - ceph versions
      - echo "wait for servicemap items w/ changing names to refresh"
      - sleep 60
      - ceph orch ps
      - ceph versions
      - ceph versions | jq -e '.overall | length == 1'
      - ceph versions | jq -e '.overall | keys' | grep $sha1
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
workload-tasks:
  sequential:
  - workunit:
      clients:
        all:
        - suites/fsstress.sh
2026-03-09T20:33:33.102 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T20:33:33.102 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T20:33:33.102 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T20:33:33.103 INFO:teuthology.task.internal:Checking packages...
2026-03-09T20:33:33.103 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T20:33:33.103 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T20:33:33.103 INFO:teuthology.packaging:ref: None
2026-03-09T20:33:33.103 INFO:teuthology.packaging:tag: None
2026-03-09T20:33:33.103 INFO:teuthology.packaging:branch: squid
2026-03-09T20:33:33.103 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:33:33.103 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-09T20:33:33.925 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-09T20:33:33.926 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T20:33:33.927 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T20:33:33.927 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T20:33:33.934 INFO:teuthology.task.internal:Saving configuration
2026-03-09T20:33:33.943 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T20:33:33.958 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T20:33:33.965 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm07.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/647', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 20:32:19.168566', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:07', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeOWpayEHUwW7KIeR5GVTTrr6pDRNpZ3+JNDPQURFPp7d+1QanHBaajlXS4fMGXRkfPNSBjvppD6aBOmSslnfc='}
2026-03-09T20:33:33.970 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm10.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/647', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 20:32:19.169201', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:0a', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHkPxFJCEv7RCtpJvkRxYpFqQDoka8PTZ3iL5dBIg1oXMAKfwZjbd/E3qPl2ibQJxoKyLZ3M2yta7lGT5j3uWFE='}
2026-03-09T20:33:33.970 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T20:33:33.971 INFO:teuthology.task.internal:roles: ubuntu@vm07.local - ['host.a', 'client.0', 'osd.0', 'osd.1', 'osd.2']
2026-03-09T20:33:33.971 INFO:teuthology.task.internal:roles: ubuntu@vm10.local - ['host.b', 'client.1', 'osd.3', 'osd.4', 'osd.5']
2026-03-09T20:33:33.971 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T20:33:33.982 DEBUG:teuthology.task.console_log:vm07 does not support IPMI; excluding
2026-03-09T20:33:33.989 DEBUG:teuthology.task.console_log:vm10 does not support IPMI; excluding
2026-03-09T20:33:33.989 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f2bd1d12170>, signals=[15])
2026-03-09T20:33:33.989 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T20:33:33.990 INFO:teuthology.task.internal:Opening connections...
2026-03-09T20:33:33.990 DEBUG:teuthology.task.internal:connecting to ubuntu@vm07.local
2026-03-09T20:33:33.991 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:33:34.052 DEBUG:teuthology.task.internal:connecting to ubuntu@vm10.local
2026-03-09T20:33:34.053 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm10.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:33:34.112 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T20:33:34.114 DEBUG:teuthology.orchestra.run.vm07:> uname -m
2026-03-09T20:33:34.180 INFO:teuthology.orchestra.run.vm07.stdout:x86_64
2026-03-09T20:33:34.181 DEBUG:teuthology.orchestra.run.vm07:> cat /etc/os-release
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:NAME="CentOS Stream"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:VERSION="9"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:ID="centos"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:ID_LIKE="rhel fedora"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_ID="9"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:PLATFORM_ID="platform:el9"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:ANSI_COLOR="0;31"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:LOGO="fedora-logo-icon"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:HOME_URL="https://centos.org/"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T20:33:34.237 INFO:teuthology.orchestra.run.vm07.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T20:33:34.238 INFO:teuthology.lock.ops:Updating vm07.local on lock server
2026-03-09T20:33:34.244 DEBUG:teuthology.orchestra.run.vm10:> uname -m
2026-03-09T20:33:34.264 INFO:teuthology.orchestra.run.vm10.stdout:x86_64
2026-03-09T20:33:34.265 DEBUG:teuthology.orchestra.run.vm10:> cat /etc/os-release
2026-03-09T20:33:34.326 INFO:teuthology.orchestra.run.vm10.stdout:NAME="CentOS Stream"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:VERSION="9"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:ID="centos"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:ID_LIKE="rhel fedora"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:VERSION_ID="9"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:PLATFORM_ID="platform:el9"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:ANSI_COLOR="0;31"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:LOGO="fedora-logo-icon"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:HOME_URL="https://centos.org/"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T20:33:34.327 INFO:teuthology.orchestra.run.vm10.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T20:33:34.327 INFO:teuthology.lock.ops:Updating vm10.local on lock server
2026-03-09T20:33:34.333 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T20:33:34.335 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T20:33:34.336 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T20:33:34.336 DEBUG:teuthology.orchestra.run.vm07:> test '!' -e /home/ubuntu/cephtest
2026-03-09T20:33:34.338 DEBUG:teuthology.orchestra.run.vm10:> test '!' -e /home/ubuntu/cephtest
2026-03-09T20:33:34.386 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T20:33:34.387 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T20:33:34.387 DEBUG:teuthology.orchestra.run.vm07:> test -z $(ls -A /var/lib/ceph)
2026-03-09T20:33:34.397 DEBUG:teuthology.orchestra.run.vm10:> test -z $(ls -A /var/lib/ceph)
2026-03-09T20:33:34.417 INFO:teuthology.orchestra.run.vm07.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T20:33:34.446 INFO:teuthology.orchestra.run.vm10.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T20:33:34.447 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T20:33:34.456 DEBUG:teuthology.orchestra.run.vm07:> test -e /ceph-qa-ready
2026-03-09T20:33:34.475 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:33:34.683 DEBUG:teuthology.orchestra.run.vm10:> test -e /ceph-qa-ready
2026-03-09T20:33:34.701 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:33:34.881 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T20:33:34.882 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T20:33:34.882 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T20:33:34.884 DEBUG:teuthology.orchestra.run.vm10:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T20:33:34.903 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T20:33:34.904 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T20:33:34.905 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T20:33:34.905 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T20:33:34.945 DEBUG:teuthology.orchestra.run.vm10:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T20:33:34.965 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T20:33:34.966 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T20:33:34.966 DEBUG:teuthology.orchestra.run.vm07:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T20:33:35.020 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:33:35.021 DEBUG:teuthology.orchestra.run.vm10:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T20:33:35.043 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:33:35.043 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T20:33:35.063 DEBUG:teuthology.orchestra.run.vm10:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T20:33:35.090 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:33:35.100 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:33:35.113 INFO:teuthology.orchestra.run.vm10.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:33:35.121 INFO:teuthology.orchestra.run.vm10.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:33:35.123 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T20:33:35.124 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T20:33:35.124 DEBUG:teuthology.orchestra.run.vm07:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T20:33:35.144 DEBUG:teuthology.orchestra.run.vm10:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T20:33:35.185 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T20:33:35.187 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T20:33:35.187 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T20:33:35.213 DEBUG:teuthology.orchestra.run.vm10:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T20:33:35.239 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T20:33:35.295 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T20:33:35.356 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T20:33:35.356 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T20:33:35.419 DEBUG:teuthology.orchestra.run.vm10:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T20:33:35.440 DEBUG:teuthology.orchestra.run.vm10:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T20:33:35.500 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T20:33:35.500 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T20:33:35.563 DEBUG:teuthology.orchestra.run.vm07:> sudo service rsyslog restart
2026-03-09T20:33:35.565 DEBUG:teuthology.orchestra.run.vm10:> sudo service rsyslog restart
2026-03-09T20:33:35.593 INFO:teuthology.orchestra.run.vm07.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T20:33:35.636 INFO:teuthology.orchestra.run.vm10.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T20:33:36.072 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T20:33:36.075 INFO:teuthology.task.internal:Starting timer...
2026-03-09T20:33:36.075 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T20:33:36.078 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T20:33:36.081 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-09T20:33:36.081 INFO:teuthology.task.selinux:Excluding vm07: VMs are not yet supported
2026-03-09T20:33:36.081 INFO:teuthology.task.selinux:Excluding vm10: VMs are not yet supported
2026-03-09T20:33:36.081 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T20:33:36.081 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T20:33:36.081 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T20:33:36.081 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-09T20:33:36.082 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T20:33:36.083 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T20:33:36.084 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T20:33:36.633 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T20:33:36.640 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T20:33:36.641 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryee4wu8hq --limit vm07.local,vm10.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T20:35:20.361 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm07.local'), Remote(name='ubuntu@vm10.local')]
2026-03-09T20:35:20.362 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm07.local'
2026-03-09T20:35:20.362 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:35:20.434 DEBUG:teuthology.orchestra.run.vm07:> true
2026-03-09T20:35:20.512 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm07.local'
2026-03-09T20:35:20.513 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm10.local'
2026-03-09T20:35:20.513 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm10.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:35:20.586 DEBUG:teuthology.orchestra.run.vm10:> true
2026-03-09T20:35:20.668 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm10.local'
2026-03-09T20:35:20.668 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T20:35:20.670 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T20:35:20.670 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T20:35:20.671 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T20:35:20.672 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T20:35:20.672 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T20:35:20.712 INFO:teuthology.orchestra.run.vm07.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T20:35:20.734 INFO:teuthology.orchestra.run.vm07.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T20:35:20.748 INFO:teuthology.orchestra.run.vm10.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T20:35:20.770 INFO:teuthology.orchestra.run.vm10.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T20:35:20.771 INFO:teuthology.orchestra.run.vm07.stderr:sudo: ntpd: command not found
2026-03-09T20:35:20.788 INFO:teuthology.orchestra.run.vm07.stdout:506 Cannot talk to daemon
2026-03-09T20:35:20.803 INFO:teuthology.orchestra.run.vm10.stderr:sudo: ntpd: command not found
2026-03-09T20:35:20.809 INFO:teuthology.orchestra.run.vm07.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T20:35:20.816 INFO:teuthology.orchestra.run.vm10.stdout:506 Cannot talk to daemon
2026-03-09T20:35:20.827 INFO:teuthology.orchestra.run.vm07.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-09T20:35:20.832 INFO:teuthology.orchestra.run.vm10.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T20:35:20.852 INFO:teuthology.orchestra.run.vm10.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-09T20:35:20.889 INFO:teuthology.orchestra.run.vm07.stderr:bash: line 1: ntpq: command not found
2026-03-09T20:35:20.904 INFO:teuthology.orchestra.run.vm10.stderr:bash: line 1: ntpq: command not found
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm10.stdout:MS Name/IP address         Stratum Poll Reach LastRx Last sample
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm10.stdout:===============================================================================
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm10.stdout:^? 104-167-24-26.lunoxia.fc>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm10.stdout:^? 217.160.19.219                0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm10.stdout:^? vps-fra1.orleans.ddnss.de     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm10.stdout:^? 79.133.44.136                 0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm07.stdout:MS Name/IP address         Stratum Poll Reach LastRx Last sample
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm07.stdout:===============================================================================
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm07.stdout:^? 104-167-24-26.lunoxia.fc>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm07.stdout:^? 217.160.19.219                0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm07.stdout:^? vps-fra1.orleans.ddnss.de     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-09T20:35:20.964 INFO:teuthology.orchestra.run.vm07.stdout:^? 79.133.44.136                 0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-09T20:35:20.965 INFO:teuthology.run_tasks:Running task install...
2026-03-09T20:35:20.966 DEBUG:teuthology.task.install:project ceph
2026-03-09T20:35:20.966 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T20:35:20.967 DEBUG:teuthology.task.install:config {'branch': 'reef', 'exclude_packages': ['ceph-volume'], 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T20:35:20.967 INFO:teuthology.task.install:Using flavor: default
2026-03-09T20:35:20.969 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-09T20:35:20.969 INFO:teuthology.task.install:extra packages: []
2026-03-09T20:35:20.969 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': 'reef', 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': ['ceph-volume'], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-09T20:35:20.969 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T20:35:20.969 INFO:teuthology.packaging:ref: None
2026-03-09T20:35:20.969 INFO:teuthology.packaging:tag: None
2026-03-09T20:35:20.969 INFO:teuthology.packaging:branch: reef
2026-03-09T20:35:20.969 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:35:20.969 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=reef
2026-03-09T20:35:20.970 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': 'reef', 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': ['ceph-volume'], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-09T20:35:20.970 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T20:35:20.970 INFO:teuthology.packaging:ref: None
2026-03-09T20:35:20.970 INFO:teuthology.packaging:tag: None
2026-03-09T20:35:20.970 INFO:teuthology.packaging:branch: reef
2026-03-09T20:35:20.970 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:35:20.970 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=reef
2026-03-09T20:35:21.748 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/flavors/default/
2026-03-09T20:35:21.749 INFO:teuthology.task.install.rpm:Package version is 18.2.7-1055.gab47f43c
2026-03-09T20:35:21.811 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/flavors/default/
2026-03-09T20:35:21.811 INFO:teuthology.task.install.rpm:Package version is 18.2.7-1055.gab47f43c
2026-03-09T20:35:22.285 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-09T20:35:22.285 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T20:35:22.285 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-09T20:35:22.324 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-09T20:35:22.324 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T20:35:22.324 INFO:teuthology.packaging:ref: None
2026-03-09T20:35:22.324 INFO:teuthology.packaging:tag: None
2026-03-09T20:35:22.324 INFO:teuthology.packaging:branch: reef
2026-03-09T20:35:22.324 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:35:22.325 DEBUG:teuthology.orchestra.run.vm10:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;ref/reef/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-09T20:35:22.339 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=ceph source
packages baseurl=https://3.chacra.ceph.com/r/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-09T20:35:22.339 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:35:22.339 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-09T20:35:22.375 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-09T20:35:22.375 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch 2026-03-09T20:35:22.375 INFO:teuthology.packaging:ref: None 2026-03-09T20:35:22.375 INFO:teuthology.packaging:tag: None 2026-03-09T20:35:22.375 INFO:teuthology.packaging:branch: reef 2026-03-09T20:35:22.375 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:35:22.375 DEBUG:teuthology.orchestra.run.vm07:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;ref/reef/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-09T20:35:22.401 DEBUG:teuthology.orchestra.run.vm10:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-09T20:35:22.449 DEBUG:teuthology.orchestra.run.vm07:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf 
/etc/yum/pluginconf.d/priorities.conf.orig 2026-03-09T20:35:22.490 DEBUG:teuthology.orchestra.run.vm10:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-09T20:35:22.516 INFO:teuthology.orchestra.run.vm10.stdout:check_obsoletes = 1 2026-03-09T20:35:22.521 DEBUG:teuthology.orchestra.run.vm10:> sudo yum clean all 2026-03-09T20:35:22.540 DEBUG:teuthology.orchestra.run.vm07:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-09T20:35:22.575 INFO:teuthology.orchestra.run.vm07.stdout:check_obsoletes = 1 2026-03-09T20:35:22.577 DEBUG:teuthology.orchestra.run.vm07:> sudo yum clean all 2026-03-09T20:35:22.749 INFO:teuthology.orchestra.run.vm10.stdout:41 files removed 2026-03-09T20:35:22.786 DEBUG:teuthology.orchestra.run.vm10:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-09T20:35:22.809 INFO:teuthology.orchestra.run.vm07.stdout:41 files removed 2026-03-09T20:35:22.846 DEBUG:teuthology.orchestra.run.vm07:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd 
bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-09T20:35:24.182 INFO:teuthology.orchestra.run.vm10.stdout:ceph packages for x86_64 65 kB/s | 77 kB 00:01 2026-03-09T20:35:24.200 INFO:teuthology.orchestra.run.vm07.stdout:ceph packages for x86_64 67 kB/s | 77 kB 00:01 2026-03-09T20:35:25.140 INFO:teuthology.orchestra.run.vm10.stdout:ceph noarch packages 12 kB/s | 11 kB 00:00 2026-03-09T20:35:25.212 INFO:teuthology.orchestra.run.vm07.stdout:ceph noarch packages 11 kB/s | 11 kB 00:00 2026-03-09T20:35:26.099 INFO:teuthology.orchestra.run.vm10.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00 2026-03-09T20:35:26.187 INFO:teuthology.orchestra.run.vm07.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00 2026-03-09T20:35:27.241 INFO:teuthology.orchestra.run.vm10.stdout:CentOS Stream 9 - BaseOS 7.9 MB/s | 8.9 MB 00:01 2026-03-09T20:35:28.124 INFO:teuthology.orchestra.run.vm07.stdout:CentOS Stream 9 - BaseOS 4.6 MB/s | 8.9 MB 00:01 2026-03-09T20:35:30.744 INFO:teuthology.orchestra.run.vm10.stdout:CentOS Stream 9 - AppStream 9.9 MB/s | 27 MB 00:02 2026-03-09T20:35:31.848 INFO:teuthology.orchestra.run.vm07.stdout:CentOS Stream 9 - AppStream 9.1 MB/s | 27 MB 00:02 2026-03-09T20:35:36.803 INFO:teuthology.orchestra.run.vm07.stdout:CentOS Stream 9 - CRB 4.6 MB/s | 8.0 MB 00:01 2026-03-09T20:35:37.062 INFO:teuthology.orchestra.run.vm10.stdout:CentOS Stream 9 - CRB 2.5 MB/s | 8.0 MB 00:03 2026-03-09T20:35:37.923 INFO:teuthology.orchestra.run.vm07.stdout:CentOS Stream 9 - Extras packages 80 kB/s | 20 kB 00:00 2026-03-09T20:35:38.897 INFO:teuthology.orchestra.run.vm07.stdout:Extra Packages for Enterprise Linux 23 MB/s | 20 MB 00:00 2026-03-09T20:35:40.118 INFO:teuthology.orchestra.run.vm10.stdout:CentOS Stream 9 - Extras packages 9.5 kB/s | 20 kB 00:02 2026-03-09T20:35:41.109 INFO:teuthology.orchestra.run.vm10.stdout:Extra Packages for Enterprise Linux 22 MB/s | 20 MB 00:00 2026-03-09T20:35:43.722 INFO:teuthology.orchestra.run.vm07.stdout:lab-extras 64 kB/s | 50 kB 
00:00 2026-03-09T20:35:45.222 INFO:teuthology.orchestra.run.vm07.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T20:35:45.222 INFO:teuthology.orchestra.run.vm07.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T20:35:45.227 INFO:teuthology.orchestra.run.vm07.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-09T20:35:45.227 INFO:teuthology.orchestra.run.vm07.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-09T20:35:45.257 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout:======================================================================================= 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout:======================================================================================= 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout:Installing: 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 6.5 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 5.1 M 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 850 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 143 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 1.5 M 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 140 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 3.5 M 
2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 7.4 M 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 49 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 7.8 M 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 36 M 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: cephadm noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 226 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 31 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2 x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 710 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 126 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 162 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 322 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 302 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 100 k 2026-03-09T20:35:45.261 INFO:teuthology.orchestra.run.vm07.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 87 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror x86_64 
2:18.2.7-1055.gab47f43c.el9 ceph 3.0 M 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 172 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout:Upgrading: 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: librados2 x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 3.3 M 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: librbd1 x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 3.0 M 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout:Installing dependencies: 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 18 M 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 24 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 2.1 M 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 248 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 4.7 M 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 17 M 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 17 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 25 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-09T20:35:45.262 
INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: fmt x86_64 8.1.1-5.el9 epel 111 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 166 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 475 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: librgw2 x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 4.5 M 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libunwind 
x86_64 1.6.2-1.el9 epel 67 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-09T20:35:45.262 INFO:teuthology.orchestra.run.vm07.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 45 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 130 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot noarch 10.0.1-4.el9 
epel 173 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools 
noarch 8.12.0-2.el9 epel 79 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-09T20:35:45.263 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 
2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout:======================================================================================= 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout:Install 115 Packages 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout:Upgrade 2 Packages 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout:Total download size: 181 M 2026-03-09T20:35:45.264 INFO:teuthology.orchestra.run.vm07.stdout:Downloading Packages: 
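The `Dependencies resolved.` tables that dnf prints above (for vm07, and again below for vm10) use a fixed whitespace-delimited layout: name, arch, version, repository, size. A minimal sketch of pulling those columns out of captured stdout — the regex and function name are illustrative, not teuthology's own parsing:

```python
import re

# One row of a dnf transaction table: name, arch, version, repo, size.
# Assumes the whitespace-separated layout shown in the vm07/vm10 output.
# Header lines ("Package Arch Version ...") fail the numeric size match
# and are skipped automatically.
ROW = re.compile(
    r'^\s+(?P<name>\S+)\s+(?P<arch>\S+)\s+(?P<version>\S+)\s+'
    r'(?P<repo>\S+)\s+(?P<size>\d+(?:\.\d+)?\s*[kMG]?)\s*$'
)

def parse_dnf_rows(lines):
    """Yield (name, arch, version, repo, size) for each table row."""
    for line in lines:
        m = ROW.match(line)
        if m:
            yield m.group('name', 'arch', 'version', 'repo', 'size')

# Sample rows copied from the vm07 table above.
sample = [
    ' ceph-base    x86_64 2:18.2.7-1055.gab47f43c.el9 ceph        5.1 M',
    ' cephadm      noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 226 k',
]
rows = list(parse_dnf_rows(sample))
```

This is handy when post-processing archived job logs, e.g. to diff the package set actually installed against the package list the install task computed.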
2026-03-09T20:35:45.987 INFO:teuthology.orchestra.run.vm10.stdout:lab-extras 64 kB/s | 50 kB 00:00 2026-03-09T20:35:46.163 INFO:teuthology.orchestra.run.vm07.stdout:(1/117): ceph-18.2.7-1055.gab47f43c.el9.x86_64. 13 kB/s | 6.5 kB 00:00 2026-03-09T20:35:46.992 INFO:teuthology.orchestra.run.vm07.stdout:(2/117): ceph-fuse-18.2.7-1055.gab47f43c.el9.x8 1.0 MB/s | 850 kB 00:00 2026-03-09T20:35:47.115 INFO:teuthology.orchestra.run.vm07.stdout:(3/117): ceph-immutable-object-cache-18.2.7-105 1.1 MB/s | 143 kB 00:00 2026-03-09T20:35:47.512 INFO:teuthology.orchestra.run.vm10.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T20:35:47.512 INFO:teuthology.orchestra.run.vm10.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T20:35:47.516 INFO:teuthology.orchestra.run.vm10.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-09T20:35:47.517 INFO:teuthology.orchestra.run.vm10.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-09T20:35:47.548 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 
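The `Querying https://shaman.ceph.com/api/search?...` records earlier in the log show how teuthology resolves a ref to a ready build before writing the chacra repo file. A minimal sketch of constructing that same search URL (the helper name is illustrative; teuthology's packaging module does this internally):

```python
from urllib.parse import urlencode

# Builds the shaman build-search URL in the form teuthology logs.
# Parameter names and defaults are taken from the logged query string.
def shaman_search_url(project, ref, flavor='default',
                      distro='centos/9/x86_64'):
    params = {
        'status': 'ready',
        'project': project,
        'flavor': flavor,
        'distros': distro,  # urlencode percent-escapes the slashes
        'ref': ref,
    }
    return 'https://shaman.ceph.com/api/search?' + urlencode(params)

url = shaman_search_url('ceph', 'reef')
```

A GET on the resulting URL returns JSON build records whose chacra URL (here `https://3.chacra.ceph.com/r/ceph/reef/ab47f43c.../centos/9/flavors/default/`) becomes the `baseurl` of the `[ceph]`, `[ceph-noarch]`, and `[ceph-source]` repo sections written to `/etc/yum.repos.d/ceph.repo`.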
2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout:======================================================================================= 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: Package Arch Version Repository Size 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout:======================================================================================= 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout:Installing: 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 6.5 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-base x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 5.1 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-fuse x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 850 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-immutable-object-cache x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 143 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 1.5 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-cephadm noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 140 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-dashboard noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 3.5 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 7.4 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-rook noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 49 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-radosgw x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 7.8 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-test x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 36 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: cephadm 
noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 226 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs-devel x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 31 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs2 x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 710 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: librados-devel x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 126 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: python3-cephfs x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 162 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: python3-rados x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 322 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: python3-rbd x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 302 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: python3-rgw x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 100 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: rbd-fuse x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 87 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: rbd-mirror x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 3.0 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: rbd-nbd x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 172 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout:Upgrading: 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: librados2 x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 3.3 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: librbd1 x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 3.0 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout:Installing dependencies: 2026-03-09T20:35:47.553 
INFO:teuthology.orchestra.run.vm10.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-common x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 18 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-grafana-dashboards noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 24 k 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mds x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 2.1 M 2026-03-09T20:35:47.553 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 248 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mon x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 4.7 M 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: ceph-osd x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 17 M 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: ceph-prometheus-alerts noarch 2:18.2.7-1055.gab47f43c.el9 ceph-noarch 17 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: ceph-selinux x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 25 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: fmt x86_64 8.1.1-5.el9 epel 111 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: 
libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: libcephsqlite x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 166 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1 x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 475 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: librgw2 x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 4.5 M 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 
2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 45 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-common x86_64 2:18.2.7-1055.gab47f43c.el9 ceph 130 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 
k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-09T20:35:47.554 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-ply noarch 3.11-14.el9 
baseos 106 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 
2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout:Transaction Summary 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout:======================================================================================= 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout:Install 115 Packages 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout:Upgrade 2 Packages 2026-03-09T20:35:47.555 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:35:47.556 INFO:teuthology.orchestra.run.vm10.stdout:Total download size: 181 M 2026-03-09T20:35:47.556 INFO:teuthology.orchestra.run.vm10.stdout:Downloading Packages: 2026-03-09T20:35:47.728 INFO:teuthology.orchestra.run.vm07.stdout:(4/117): ceph-mds-18.2.7-1055.gab47f43c.el9.x86 3.4 MB/s | 2.1 MB 00:00 2026-03-09T20:35:47.760 INFO:teuthology.orchestra.run.vm07.stdout:(5/117): ceph-base-18.2.7-1055.gab47f43c.el9.x8 2.4 MB/s | 5.1 MB 00:02 2026-03-09T20:35:48.103 INFO:teuthology.orchestra.run.vm07.stdout:(6/117): ceph-mgr-18.2.7-1055.gab47f43c.el9.x86 3.9 MB/s | 1.5 MB 00:00 2026-03-09T20:35:48.707 INFO:teuthology.orchestra.run.vm07.stdout:(7/117): ceph-mon-18.2.7-1055.gab47f43c.el9.x86 5.0 MB/s | 4.7 MB 00:00 2026-03-09T20:35:49.211 
INFO:teuthology.orchestra.run.vm10.stdout:(1/117): ceph-18.2.7-1055.gab47f43c.el9.x86_64. 13 kB/s | 6.5 kB 00:00 2026-03-09T20:35:49.806 INFO:teuthology.orchestra.run.vm07.stdout:(8/117): ceph-common-18.2.7-1055.gab47f43c.el9. 4.4 MB/s | 18 MB 00:04 2026-03-09T20:35:49.922 INFO:teuthology.orchestra.run.vm07.stdout:(9/117): ceph-selinux-18.2.7-1055.gab47f43c.el9 217 kB/s | 25 kB 00:00 2026-03-09T20:35:50.031 INFO:teuthology.orchestra.run.vm10.stdout:(2/117): ceph-fuse-18.2.7-1055.gab47f43c.el9.x8 1.0 MB/s | 850 kB 00:00 2026-03-09T20:35:50.150 INFO:teuthology.orchestra.run.vm10.stdout:(3/117): ceph-immutable-object-cache-18.2.7-105 1.2 MB/s | 143 kB 00:00 2026-03-09T20:35:50.279 INFO:teuthology.orchestra.run.vm07.stdout:(10/117): ceph-radosgw-18.2.7-1055.gab47f43c.el 5.0 MB/s | 7.8 MB 00:01 2026-03-09T20:35:50.396 INFO:teuthology.orchestra.run.vm07.stdout:(11/117): libcephfs-devel-18.2.7-1055.gab47f43c 267 kB/s | 31 kB 00:00 2026-03-09T20:35:50.685 INFO:teuthology.orchestra.run.vm07.stdout:(12/117): libcephfs2-18.2.7-1055.gab47f43c.el9. 2.4 MB/s | 710 kB 00:00 2026-03-09T20:35:50.743 INFO:teuthology.orchestra.run.vm10.stdout:(4/117): ceph-mds-18.2.7-1055.gab47f43c.el9.x86 3.5 MB/s | 2.1 MB 00:00 2026-03-09T20:35:50.803 INFO:teuthology.orchestra.run.vm07.stdout:(13/117): libcephsqlite-18.2.7-1055.gab47f43c.e 1.4 MB/s | 166 kB 00:00 2026-03-09T20:35:50.920 INFO:teuthology.orchestra.run.vm07.stdout:(14/117): librados-devel-18.2.7-1055.gab47f43c. 
1.1 MB/s | 126 kB 00:00 2026-03-09T20:35:51.080 INFO:teuthology.orchestra.run.vm07.stdout:(15/117): libradosstriper1-18.2.7-1055.gab47f43 2.9 MB/s | 475 kB 00:00 2026-03-09T20:35:51.103 INFO:teuthology.orchestra.run.vm10.stdout:(5/117): ceph-mgr-18.2.7-1055.gab47f43c.el9.x86 4.1 MB/s | 1.5 MB 00:00 2026-03-09T20:35:51.426 INFO:teuthology.orchestra.run.vm10.stdout:(6/117): ceph-base-18.2.7-1055.gab47f43c.el9.x8 1.9 MB/s | 5.1 MB 00:02 2026-03-09T20:35:51.917 INFO:teuthology.orchestra.run.vm07.stdout:(16/117): librgw2-18.2.7-1055.gab47f43c.el9.x86 5.4 MB/s | 4.5 MB 00:00 2026-03-09T20:35:51.935 INFO:teuthology.orchestra.run.vm10.stdout:(7/117): ceph-mon-18.2.7-1055.gab47f43c.el9.x86 5.6 MB/s | 4.7 MB 00:00 2026-03-09T20:35:52.052 INFO:teuthology.orchestra.run.vm07.stdout:(17/117): python3-ceph-argparse-18.2.7-1055.gab 335 kB/s | 45 kB 00:00 2026-03-09T20:35:52.259 INFO:teuthology.orchestra.run.vm07.stdout:(18/117): python3-ceph-common-18.2.7-1055.gab47 627 kB/s | 130 kB 00:00 2026-03-09T20:35:52.386 INFO:teuthology.orchestra.run.vm07.stdout:(19/117): python3-cephfs-18.2.7-1055.gab47f43c. 
1.2 MB/s | 162 kB 00:00 2026-03-09T20:35:52.529 INFO:teuthology.orchestra.run.vm07.stdout:(20/117): python3-rados-18.2.7-1055.gab47f43c.e 2.2 MB/s | 322 kB 00:00 2026-03-09T20:35:52.649 INFO:teuthology.orchestra.run.vm07.stdout:(21/117): python3-rbd-18.2.7-1055.gab47f43c.el9 2.4 MB/s | 302 kB 00:00 2026-03-09T20:35:52.767 INFO:teuthology.orchestra.run.vm07.stdout:(22/117): python3-rgw-18.2.7-1055.gab47f43c.el9 849 kB/s | 100 kB 00:00 2026-03-09T20:35:52.907 INFO:teuthology.orchestra.run.vm07.stdout:(23/117): rbd-fuse-18.2.7-1055.gab47f43c.el9.x8 619 kB/s | 87 kB 00:00 2026-03-09T20:35:52.908 INFO:teuthology.orchestra.run.vm10.stdout:(8/117): ceph-radosgw-18.2.7-1055.gab47f43c.el9 8.0 MB/s | 7.8 MB 00:00 2026-03-09T20:35:53.027 INFO:teuthology.orchestra.run.vm10.stdout:(9/117): ceph-selinux-18.2.7-1055.gab47f43c.el9 211 kB/s | 25 kB 00:00 2026-03-09T20:35:53.199 INFO:teuthology.orchestra.run.vm10.stdout:(10/117): ceph-common-18.2.7-1055.gab47f43c.el9 4.1 MB/s | 18 MB 00:04 2026-03-09T20:35:53.316 INFO:teuthology.orchestra.run.vm10.stdout:(11/117): libcephfs-devel-18.2.7-1055.gab47f43c 264 kB/s | 31 kB 00:00 2026-03-09T20:35:53.444 INFO:teuthology.orchestra.run.vm10.stdout:(12/117): libcephfs2-18.2.7-1055.gab47f43c.el9. 5.4 MB/s | 710 kB 00:00 2026-03-09T20:35:53.551 INFO:teuthology.orchestra.run.vm07.stdout:(24/117): ceph-osd-18.2.7-1055.gab47f43c.el9.x8 3.1 MB/s | 17 MB 00:05 2026-03-09T20:35:53.561 INFO:teuthology.orchestra.run.vm07.stdout:(25/117): rbd-mirror-18.2.7-1055.gab47f43c.el9. 
4.6 MB/s | 3.0 MB 00:00 2026-03-09T20:35:53.563 INFO:teuthology.orchestra.run.vm10.stdout:(13/117): libcephsqlite-18.2.7-1055.gab47f43c.e 1.4 MB/s | 166 kB 00:00 2026-03-09T20:35:53.673 INFO:teuthology.orchestra.run.vm07.stdout:(26/117): rbd-nbd-18.2.7-1055.gab47f43c.el9.x86 1.4 MB/s | 172 kB 00:00 2026-03-09T20:35:53.678 INFO:teuthology.orchestra.run.vm07.stdout:(27/117): ceph-grafana-dashboards-18.2.7-1055.g 209 kB/s | 24 kB 00:00 2026-03-09T20:35:53.681 INFO:teuthology.orchestra.run.vm10.stdout:(14/117): librados-devel-18.2.7-1055.gab47f43c. 1.0 MB/s | 126 kB 00:00 2026-03-09T20:35:53.806 INFO:teuthology.orchestra.run.vm07.stdout:(28/117): ceph-mgr-cephadm-18.2.7-1055.gab47f43 1.0 MB/s | 140 kB 00:00 2026-03-09T20:35:53.910 INFO:teuthology.orchestra.run.vm10.stdout:(15/117): libradosstriper1-18.2.7-1055.gab47f43 2.0 MB/s | 475 kB 00:00 2026-03-09T20:35:54.354 INFO:teuthology.orchestra.run.vm07.stdout:(29/117): ceph-mgr-dashboard-18.2.7-1055.gab47f 5.2 MB/s | 3.5 MB 00:00 2026-03-09T20:35:54.478 INFO:teuthology.orchestra.run.vm07.stdout:(30/117): ceph-mgr-modules-core-18.2.7-1055.gab 2.0 MB/s | 248 kB 00:00 2026-03-09T20:35:54.594 INFO:teuthology.orchestra.run.vm07.stdout:(31/117): ceph-mgr-rook-18.2.7-1055.gab47f43c.e 425 kB/s | 49 kB 00:00 2026-03-09T20:35:54.713 INFO:teuthology.orchestra.run.vm07.stdout:(32/117): ceph-prometheus-alerts-18.2.7-1055.ga 141 kB/s | 17 kB 00:00 2026-03-09T20:35:54.834 INFO:teuthology.orchestra.run.vm07.stdout:(33/117): cephadm-18.2.7-1055.gab47f43c.el9.noa 1.8 MB/s | 226 kB 00:00 2026-03-09T20:35:54.855 INFO:teuthology.orchestra.run.vm10.stdout:(16/117): librgw2-18.2.7-1055.gab47f43c.el9.x86 4.8 MB/s | 4.5 MB 00:00 2026-03-09T20:35:54.973 INFO:teuthology.orchestra.run.vm10.stdout:(17/117): python3-ceph-argparse-18.2.7-1055.gab 383 kB/s | 45 kB 00:00 2026-03-09T20:35:55.092 INFO:teuthology.orchestra.run.vm10.stdout:(18/117): python3-ceph-common-18.2.7-1055.gab47 1.1 MB/s | 130 kB 00:00 2026-03-09T20:35:55.210 
INFO:teuthology.orchestra.run.vm10.stdout:(19/117): python3-cephfs-18.2.7-1055.gab47f43c. 1.3 MB/s | 162 kB 00:00 2026-03-09T20:35:55.333 INFO:teuthology.orchestra.run.vm10.stdout:(20/117): python3-rados-18.2.7-1055.gab47f43c.e 2.6 MB/s | 322 kB 00:00 2026-03-09T20:35:55.454 INFO:teuthology.orchestra.run.vm10.stdout:(21/117): python3-rbd-18.2.7-1055.gab47f43c.el9 2.4 MB/s | 302 kB 00:00 2026-03-09T20:35:55.624 INFO:teuthology.orchestra.run.vm10.stdout:(22/117): ceph-test-18.2.7-1055.gab47f43c.el9.x 14 MB/s | 36 MB 00:02 2026-03-09T20:35:55.625 INFO:teuthology.orchestra.run.vm10.stdout:(23/117): python3-rgw-18.2.7-1055.gab47f43c.el9 580 kB/s | 100 kB 00:00 2026-03-09T20:35:55.742 INFO:teuthology.orchestra.run.vm10.stdout:(24/117): rbd-fuse-18.2.7-1055.gab47f43c.el9.x8 740 kB/s | 87 kB 00:00 2026-03-09T20:35:56.451 INFO:teuthology.orchestra.run.vm10.stdout:(25/117): rbd-mirror-18.2.7-1055.gab47f43c.el9. 3.6 MB/s | 3.0 MB 00:00 2026-03-09T20:35:56.534 INFO:teuthology.orchestra.run.vm07.stdout:(34/117): ceph-mgr-diskprediction-local-18.2.7- 2.7 MB/s | 7.4 MB 00:02 2026-03-09T20:35:56.568 INFO:teuthology.orchestra.run.vm10.stdout:(26/117): ceph-grafana-dashboards-18.2.7-1055.g 208 kB/s | 24 kB 00:00 2026-03-09T20:35:56.602 INFO:teuthology.orchestra.run.vm10.stdout:(27/117): rbd-nbd-18.2.7-1055.gab47f43c.el9.x86 200 kB/s | 172 kB 00:00 2026-03-09T20:35:56.717 INFO:teuthology.orchestra.run.vm10.stdout:(28/117): ceph-mgr-cephadm-18.2.7-1055.gab47f43 938 kB/s | 140 kB 00:00 2026-03-09T20:35:56.717 INFO:teuthology.orchestra.run.vm07.stdout:(35/117): ceph-test-18.2.7-1055.gab47f43c.el9.x 5.4 MB/s | 36 MB 00:06 2026-03-09T20:35:56.975 INFO:teuthology.orchestra.run.vm10.stdout:(29/117): ceph-mgr-dashboard-18.2.7-1055.gab47f 9.5 MB/s | 3.5 MB 00:00 2026-03-09T20:35:57.100 INFO:teuthology.orchestra.run.vm10.stdout:(30/117): ceph-mgr-modules-core-18.2.7-1055.gab 1.9 MB/s | 248 kB 00:00 2026-03-09T20:35:57.218 INFO:teuthology.orchestra.run.vm10.stdout:(31/117): 
ceph-mgr-rook-18.2.7-1055.gab47f43c.e 419 kB/s | 49 kB 00:00 2026-03-09T20:35:57.357 INFO:teuthology.orchestra.run.vm10.stdout:(32/117): ceph-osd-18.2.7-1055.gab47f43c.el9.x8 2.8 MB/s | 17 MB 00:05 2026-03-09T20:35:57.358 INFO:teuthology.orchestra.run.vm10.stdout:(33/117): ceph-prometheus-alerts-18.2.7-1055.ga 119 kB/s | 17 kB 00:00 2026-03-09T20:35:57.474 INFO:teuthology.orchestra.run.vm10.stdout:(34/117): cephadm-18.2.7-1055.gab47f43c.el9.noa 1.9 MB/s | 226 kB 00:00 2026-03-09T20:35:57.555 INFO:teuthology.orchestra.run.vm10.stdout:(35/117): ledmon-libs-1.1.0-3.el9.x86_64.rpm 206 kB/s | 40 kB 00:00 2026-03-09T20:35:57.702 INFO:teuthology.orchestra.run.vm10.stdout:(36/117): libconfig-1.7.2-9.el9.x86_64.rpm 316 kB/s | 72 kB 00:00 2026-03-09T20:35:57.835 INFO:teuthology.orchestra.run.vm10.stdout:(37/117): libgfortran-11.5.0-14.el9.x86_64.rpm 2.8 MB/s | 794 kB 00:00 2026-03-09T20:35:57.837 INFO:teuthology.orchestra.run.vm10.stdout:(38/117): libquadmath-11.5.0-14.el9.x86_64.rpm 1.3 MB/s | 184 kB 00:00 2026-03-09T20:35:57.895 INFO:teuthology.orchestra.run.vm10.stdout:(39/117): mailcap-2.1.49-5.el9.noarch.rpm 561 kB/s | 33 kB 00:00 2026-03-09T20:35:57.957 INFO:teuthology.orchestra.run.vm10.stdout:(40/117): python3-cffi-1.14.5-5.el9.x86_64.rpm 2.1 MB/s | 253 kB 00:00 2026-03-09T20:35:58.024 INFO:teuthology.orchestra.run.vm10.stdout:(41/117): python3-ply-3.11-14.el9.noarch.rpm 1.6 MB/s | 106 kB 00:00 2026-03-09T20:35:58.054 INFO:teuthology.orchestra.run.vm10.stdout:(42/117): python3-cryptography-36.0.1-5.el9.x86 7.8 MB/s | 1.2 MB 00:00 2026-03-09T20:35:58.089 INFO:teuthology.orchestra.run.vm10.stdout:(43/117): python3-pycparser-2.20-6.el9.noarch.r 2.0 MB/s | 135 kB 00:00 2026-03-09T20:35:58.138 INFO:teuthology.orchestra.run.vm10.stdout:(44/117): python3-requests-2.25.1-10.el9.noarch 1.5 MB/s | 126 kB 00:00 2026-03-09T20:35:58.168 INFO:teuthology.orchestra.run.vm10.stdout:(45/117): python3-urllib3-1.26.5-7.el9.noarch.r 2.7 MB/s | 218 kB 00:00 2026-03-09T20:35:58.526 
INFO:teuthology.orchestra.run.vm10.stdout:(46/117): flexiblas-3.0.4-9.el9.x86_64.rpm 83 kB/s | 30 kB 00:00 2026-03-09T20:35:58.600 INFO:teuthology.orchestra.run.vm10.stdout:(47/117): boost-program-options-1.75.0-13.el9.x 225 kB/s | 104 kB 00:00 2026-03-09T20:35:58.731 INFO:teuthology.orchestra.run.vm10.stdout:(48/117): ceph-mgr-diskprediction-local-18.2.7- 3.7 MB/s | 7.4 MB 00:02 2026-03-09T20:35:58.733 INFO:teuthology.orchestra.run.vm10.stdout:(49/117): flexiblas-openblas-openmp-3.0.4-9.el9 112 kB/s | 15 kB 00:00 2026-03-09T20:35:58.838 INFO:teuthology.orchestra.run.vm10.stdout:(50/117): flexiblas-netlib-3.0.4-9.el9.x86_64.r 9.6 MB/s | 3.0 MB 00:00 2026-03-09T20:35:58.839 INFO:teuthology.orchestra.run.vm10.stdout:(51/117): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.5 MB/s | 160 kB 00:00 2026-03-09T20:35:58.840 INFO:teuthology.orchestra.run.vm10.stdout:(52/117): librabbitmq-0.11.0-7.el9.x86_64.rpm 420 kB/s | 45 kB 00:00 2026-03-09T20:35:58.891 INFO:teuthology.orchestra.run.vm10.stdout:(53/117): libstoragemgmt-1.10.1-1.el9.x86_64.rp 4.8 MB/s | 246 kB 00:00 2026-03-09T20:35:58.892 INFO:teuthology.orchestra.run.vm10.stdout:(54/117): libxslt-1.1.34-12.el9.x86_64.rpm 4.4 MB/s | 233 kB 00:00 2026-03-09T20:35:58.932 INFO:teuthology.orchestra.run.vm10.stdout:(55/117): librdkafka-1.6.1-102.el9.x86_64.rpm 6.9 MB/s | 662 kB 00:00 2026-03-09T20:35:58.951 INFO:teuthology.orchestra.run.vm10.stdout:(56/117): openblas-0.3.29-1.el9.x86_64.rpm 727 kB/s | 42 kB 00:00 2026-03-09T20:35:58.957 INFO:teuthology.orchestra.run.vm10.stdout:(57/117): lttng-ust-2.12.0-6.el9.x86_64.rpm 4.4 MB/s | 292 kB 00:00 2026-03-09T20:35:59.039 INFO:teuthology.orchestra.run.vm10.stdout:(58/117): python3-devel-3.9.25-3.el9.x86_64.rpm 2.9 MB/s | 244 kB 00:00 2026-03-09T20:35:59.245 INFO:teuthology.orchestra.run.vm10.stdout:(59/117): openblas-openmp-0.3.29-1.el9.x86_64.r 17 MB/s | 5.3 MB 00:00 2026-03-09T20:35:59.247 INFO:teuthology.orchestra.run.vm10.stdout:(60/117): python3-jinja2-2.11.3-8.el9.noarch.rp 1.2 MB/s 
| 249 kB 00:00 2026-03-09T20:35:59.393 INFO:teuthology.orchestra.run.vm10.stdout:(61/117): python3-babel-2.9.1-2.el9.noarch.rpm 13 MB/s | 6.0 MB 00:00 2026-03-09T20:35:59.395 INFO:teuthology.orchestra.run.vm10.stdout:(62/117): python3-jmespath-1.0.1-1.el9.noarch.r 315 kB/s | 48 kB 00:00 2026-03-09T20:35:59.396 INFO:teuthology.orchestra.run.vm10.stdout:(63/117): python3-libstoragemgmt-1.10.1-1.el9.x 1.2 MB/s | 177 kB 00:00 2026-03-09T20:35:59.441 INFO:teuthology.orchestra.run.vm10.stdout:(64/117): python3-mako-1.1.4-6.el9.noarch.rpm 3.6 MB/s | 172 kB 00:00 2026-03-09T20:35:59.442 INFO:teuthology.orchestra.run.vm10.stdout:(65/117): python3-markupsafe-1.1.1-12.el9.x86_6 758 kB/s | 35 kB 00:00 2026-03-09T20:35:59.491 INFO:teuthology.orchestra.run.vm10.stdout:(66/117): python3-pyasn1-0.4.8-7.el9.noarch.rpm 3.2 MB/s | 157 kB 00:00 2026-03-09T20:35:59.516 INFO:teuthology.orchestra.run.vm10.stdout:(67/117): python3-numpy-f2py-1.23.5-2.el9.x86_6 5.8 MB/s | 442 kB 00:00 2026-03-09T20:35:59.540 INFO:teuthology.orchestra.run.vm10.stdout:(68/117): python3-pyasn1-modules-0.4.8-7.el9.no 5.6 MB/s | 277 kB 00:00 2026-03-09T20:35:59.573 INFO:teuthology.orchestra.run.vm10.stdout:(69/117): python3-requests-oauthlib-1.3.0-12.el 945 kB/s | 54 kB 00:00 2026-03-09T20:35:59.619 INFO:teuthology.orchestra.run.vm10.stdout:(70/117): python3-toml-0.10.2-6.el9.noarch.rpm 912 kB/s | 42 kB 00:00 2026-03-09T20:35:59.699 INFO:teuthology.orchestra.run.vm10.stdout:(71/117): socat-1.7.4.1-8.el9.x86_64.rpm 3.7 MB/s | 303 kB 00:00 2026-03-09T20:35:59.722 INFO:teuthology.orchestra.run.vm10.stdout:(72/117): python3-numpy-1.23.5-2.el9.x86_64.rpm 19 MB/s | 6.1 MB 00:00 2026-03-09T20:35:59.732 INFO:teuthology.orchestra.run.vm10.stdout:(73/117): fmt-8.1.1-5.el9.x86_64.rpm 11 MB/s | 111 kB 00:00 2026-03-09T20:35:59.741 INFO:teuthology.orchestra.run.vm10.stdout:(74/117): gperftools-libs-2.9.1-3.el9.x86_64.rp 35 MB/s | 308 kB 00:00 2026-03-09T20:35:59.745 INFO:teuthology.orchestra.run.vm10.stdout:(75/117): 
xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.3 MB/s | 64 kB 00:00 2026-03-09T20:35:59.753 INFO:teuthology.orchestra.run.vm10.stdout:(76/117): libarrow-doc-9.0.0-15.el9.noarch.rpm 3.9 MB/s | 25 kB 00:00 2026-03-09T20:35:59.764 INFO:teuthology.orchestra.run.vm10.stdout:(77/117): liboath-2.6.12-1.el9.x86_64.rpm 4.2 MB/s | 49 kB 00:00 2026-03-09T20:35:59.767 INFO:teuthology.orchestra.run.vm10.stdout:(78/117): libunwind-1.6.2-1.el9.x86_64.rpm 23 MB/s | 67 kB 00:00 2026-03-09T20:35:59.787 INFO:teuthology.orchestra.run.vm10.stdout:(79/117): parquet-libs-9.0.0-15.el9.x86_64.rpm 41 MB/s | 838 kB 00:00 2026-03-09T20:35:59.800 INFO:teuthology.orchestra.run.vm10.stdout:(80/117): python3-asyncssh-2.13.2-5.el9.noarch. 47 MB/s | 548 kB 00:00 2026-03-09T20:35:59.805 INFO:teuthology.orchestra.run.vm10.stdout:(81/117): python3-autocommand-2.2.2-8.el9.noarc 5.0 MB/s | 29 kB 00:00 2026-03-09T20:35:59.812 INFO:teuthology.orchestra.run.vm10.stdout:(82/117): python3-backports-tarfile-1.2.0-1.el9 10 MB/s | 60 kB 00:00 2026-03-09T20:35:59.816 INFO:teuthology.orchestra.run.vm10.stdout:(83/117): python3-bcrypt-3.2.2-1.el9.x86_64.rpm 9.9 MB/s | 43 kB 00:00 2026-03-09T20:35:59.820 INFO:teuthology.orchestra.run.vm10.stdout:(84/117): python3-cachetools-4.2.4-1.el9.noarch 7.8 MB/s | 32 kB 00:00 2026-03-09T20:35:59.825 INFO:teuthology.orchestra.run.vm10.stdout:(85/117): python3-certifi-2023.05.07-4.el9.noar 3.3 MB/s | 14 kB 00:00 2026-03-09T20:35:59.830 INFO:teuthology.orchestra.run.vm10.stdout:(86/117): python3-cheroot-10.0.1-4.el9.noarch.r 36 MB/s | 173 kB 00:00 2026-03-09T20:35:59.839 INFO:teuthology.orchestra.run.vm10.stdout:(87/117): python3-cherrypy-18.6.1-2.el9.noarch. 
43 MB/s | 358 kB 00:00 2026-03-09T20:35:59.846 INFO:teuthology.orchestra.run.vm10.stdout:(88/117): python3-google-auth-2.45.0-1.el9.noar 40 MB/s | 254 kB 00:00 2026-03-09T20:35:59.850 INFO:teuthology.orchestra.run.vm10.stdout:(89/117): python3-jaraco-8.2.1-3.el9.noarch.rpm 4.0 MB/s | 11 kB 00:00 2026-03-09T20:35:59.852 INFO:teuthology.orchestra.run.vm10.stdout:(90/117): python3-jaraco-classes-3.2.1-5.el9.no 7.2 MB/s | 18 kB 00:00 2026-03-09T20:35:59.855 INFO:teuthology.orchestra.run.vm10.stdout:(91/117): python3-jaraco-collections-3.0.0-8.el 8.7 MB/s | 23 kB 00:00 2026-03-09T20:35:59.859 INFO:teuthology.orchestra.run.vm10.stdout:(92/117): python3-jaraco-context-6.0.1-3.el9.no 6.2 MB/s | 20 kB 00:00 2026-03-09T20:35:59.863 INFO:teuthology.orchestra.run.vm10.stdout:(93/117): python3-jaraco-functools-3.5.0-2.el9. 4.5 MB/s | 19 kB 00:00 2026-03-09T20:35:59.870 INFO:teuthology.orchestra.run.vm10.stdout:(94/117): python3-jaraco-text-4.0.0-2.el9.noarc 4.3 MB/s | 26 kB 00:00 2026-03-09T20:35:59.923 INFO:teuthology.orchestra.run.vm10.stdout:(95/117): python3-kubernetes-26.1.0-3.el9.noarc 19 MB/s | 1.0 MB 00:00 2026-03-09T20:35:59.930 INFO:teuthology.orchestra.run.vm10.stdout:(96/117): python3-logutils-0.3.5-21.el9.noarch. 
6.5 MB/s | 46 kB 00:00 2026-03-09T20:35:59.934 INFO:teuthology.orchestra.run.vm10.stdout:(97/117): python3-more-itertools-8.12.0-2.el9.n 24 MB/s | 79 kB 00:00 2026-03-09T20:35:59.948 INFO:teuthology.orchestra.run.vm10.stdout:(98/117): libarrow-9.0.0-15.el9.x86_64.rpm 21 MB/s | 4.4 MB 00:00 2026-03-09T20:35:59.951 INFO:teuthology.orchestra.run.vm10.stdout:(99/117): python3-natsort-7.1.1-5.el9.noarch.rp 3.5 MB/s | 58 kB 00:00 2026-03-09T20:35:59.958 INFO:teuthology.orchestra.run.vm10.stdout:(100/117): python3-portend-3.1.0-2.el9.noarch.r 2.3 MB/s | 16 kB 00:00 2026-03-09T20:35:59.961 INFO:teuthology.orchestra.run.vm10.stdout:(101/117): python3-pecan-1.4.2-3.el9.noarch.rpm 21 MB/s | 272 kB 00:00 2026-03-09T20:35:59.973 INFO:teuthology.orchestra.run.vm10.stdout:(102/117): python3-pyOpenSSL-21.0.0-1.el9.noarc 6.1 MB/s | 90 kB 00:00 2026-03-09T20:35:59.981 INFO:teuthology.orchestra.run.vm10.stdout:(103/117): python3-routes-2.5.1-5.el9.noarch.rp 25 MB/s | 188 kB 00:00 2026-03-09T20:35:59.985 INFO:teuthology.orchestra.run.vm10.stdout:(104/117): python3-rsa-4.9-2.el9.noarch.rpm 17 MB/s | 59 kB 00:00 2026-03-09T20:35:59.989 INFO:teuthology.orchestra.run.vm10.stdout:(105/117): python3-tempora-5.0.0-2.el9.noarch.r 8.6 MB/s | 36 kB 00:00 2026-03-09T20:35:59.994 INFO:teuthology.orchestra.run.vm10.stdout:(106/117): python3-typing-extensions-4.15.0-1.e 22 MB/s | 86 kB 00:00 2026-03-09T20:36:00.002 INFO:teuthology.orchestra.run.vm10.stdout:(107/117): python3-webob-1.8.8-2.el9.noarch.rpm 28 MB/s | 230 kB 00:00 2026-03-09T20:36:00.014 INFO:teuthology.orchestra.run.vm10.stdout:(108/117): python3-websocket-client-1.2.3-2.el9 7.8 MB/s | 90 kB 00:00 2026-03-09T20:36:00.027 INFO:teuthology.orchestra.run.vm10.stdout:(109/117): python3-werkzeug-2.0.3-3.el9.1.noarc 34 MB/s | 427 kB 00:00 2026-03-09T20:36:00.029 INFO:teuthology.orchestra.run.vm10.stdout:(110/117): python3-xmltodict-0.12.0-15.el9.noar 9.0 MB/s | 22 kB 00:00 2026-03-09T20:36:00.032 
INFO:teuthology.orchestra.run.vm10.stdout:(111/117): python3-zc-lockfile-2.0-10.el9.noarc 7.3 MB/s | 20 kB 00:00 2026-03-09T20:36:00.041 INFO:teuthology.orchestra.run.vm10.stdout:(112/117): re2-20211101-20.el9.x86_64.rpm 24 MB/s | 191 kB 00:00 2026-03-09T20:36:00.080 INFO:teuthology.orchestra.run.vm10.stdout:(113/117): thrift-0.15.0-4.el9.x86_64.rpm 40 MB/s | 1.6 MB 00:00 2026-03-09T20:36:00.197 INFO:teuthology.orchestra.run.vm10.stdout:(114/117): python3-repoze-lru-0.7-16.el9.noarch 131 kB/s | 31 kB 00:00 2026-03-09T20:36:00.756 INFO:teuthology.orchestra.run.vm10.stdout:(115/117): python3-scipy-1.9.3-2.el9.x86_64.rpm 16 MB/s | 19 MB 00:01 2026-03-09T20:36:01.135 INFO:teuthology.orchestra.run.vm10.stdout:(116/117): librados2-18.2.7-1055.gab47f43c.el9. 3.1 MB/s | 3.3 MB 00:01 2026-03-09T20:36:01.757 INFO:teuthology.orchestra.run.vm10.stdout:(117/117): librbd1-18.2.7-1055.gab47f43c.el9.x8 1.9 MB/s | 3.0 MB 00:01 2026-03-09T20:36:01.762 INFO:teuthology.orchestra.run.vm10.stdout:-------------------------------------------------------------------------------- 2026-03-09T20:36:01.762 INFO:teuthology.orchestra.run.vm10.stdout:Total 13 MB/s | 181 MB 00:14 2026-03-09T20:36:02.235 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction check 2026-03-09T20:36:02.285 INFO:teuthology.orchestra.run.vm10.stdout:Transaction check succeeded. 2026-03-09T20:36:02.285 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction test 2026-03-09T20:36:03.051 INFO:teuthology.orchestra.run.vm10.stdout:Transaction test succeeded. 
2026-03-09T20:36:03.051 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction
2026-03-09T20:36:03.934 INFO:teuthology.orchestra.run.vm10.stdout: Preparing : 1/1
2026-03-09T20:36:03.951 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/119
2026-03-09T20:36:03.966 INFO:teuthology.orchestra.run.vm10.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/119
2026-03-09T20:36:04.172 INFO:teuthology.orchestra.run.vm10.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/119
2026-03-09T20:36:04.174 INFO:teuthology.orchestra.run.vm10.stdout: Upgrading : librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 4/119
2026-03-09T20:36:04.230 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 4/119
2026-03-09T20:36:04.232 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 5/119
2026-03-09T20:36:04.265 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 5/119
2026-03-09T20:36:04.278 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 6/119
2026-03-09T20:36:04.310 INFO:teuthology.orchestra.run.vm10.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/119
2026-03-09T20:36:04.316 INFO:teuthology.orchestra.run.vm10.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/119
2026-03-09T20:36:04.325 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/119
2026-03-09T20:36:04.327 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 10/119
2026-03-09T20:36:04.424 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 10/119
2026-03-09T20:36:04.425 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86 11/119
2026-03-09T20:36:04.487 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86 11/119
2026-03-09T20:36:04.494 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 12/119
2026-03-09T20:36:04.523 INFO:teuthology.orchestra.run.vm10.stdout: Installing : liboath-2.6.12-1.el9.x86_64 13/119
2026-03-09T20:36:04.534 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 14/119
2026-03-09T20:36:04.539 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 15/119
2026-03-09T20:36:04.571 INFO:teuthology.orchestra.run.vm10.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 16/119
2026-03-09T20:36:04.590 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 17/119
2026-03-09T20:36:04.597 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 18/119
2026-03-09T20:36:04.608 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 19/119
2026-03-09T20:36:04.612 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 20/119
2026-03-09T20:36:04.621 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 21/119
2026-03-09T20:36:04.637 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el 22/119
2026-03-09T20:36:04.662 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_6 23/119
2026-03-09T20:36:04.714 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 24/119
2026-03-09T20:36:04.825 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 25/119
2026-03-09T20:36:04.853 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 26/119
2026-03-09T20:36:04.866 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-rsa-4.9-2.el9.noarch 27/119
2026-03-09T20:36:04.879 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 28/119
2026-03-09T20:36:04.886 INFO:teuthology.orchestra.run.vm10.stdout: Installing : librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_6 29/119
2026-03-09T20:36:04.938 INFO:teuthology.orchestra.run.vm10.stdout: Installing : re2-1:20211101-20.el9.x86_64 30/119
2026-03-09T20:36:04.948 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 31/119
2026-03-09T20:36:04.978 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 32/119
2026-03-09T20:36:05.021 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 33/119
2026-03-09T20:36:05.031 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 34/119
2026-03-09T20:36:05.042 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 35/119
2026-03-09T20:36:05.065 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 36/119
2026-03-09T20:36:05.082 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 37/119
2026-03-09T20:36:05.095 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 38/119
2026-03-09T20:36:05.171 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 39/119
2026-03-09T20:36:05.181 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 40/119
2026-03-09T20:36:05.192 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 41/119
2026-03-09T20:36:05.246 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 42/119
2026-03-09T20:36:05.697 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 43/119
2026-03-09T20:36:05.717 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 44/119
2026-03-09T20:36:05.723 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 45/119
2026-03-09T20:36:05.732 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 46/119
2026-03-09T20:36:05.738 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 47/119
2026-03-09T20:36:05.749 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 48/119
2026-03-09T20:36:05.753 INFO:teuthology.orchestra.run.vm10.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 49/119
2026-03-09T20:36:05.757 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 50/119
2026-03-09T20:36:05.768 INFO:teuthology.orchestra.run.vm10.stdout: Installing : fmt-8.1.1-5.el9.x86_64 51/119
2026-03-09T20:36:05.796 INFO:teuthology.orchestra.run.vm10.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 52/119
2026-03-09T20:36:05.802 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 53/119
2026-03-09T20:36:05.811 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 54/119
2026-03-09T20:36:05.818 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 55/119
2026-03-09T20:36:05.830 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 56/119
2026-03-09T20:36:05.838 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 57/119
2026-03-09T20:36:05.880 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 58/119
2026-03-09T20:36:06.189 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 59/119
2026-03-09T20:36:06.223 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 60/119
2026-03-09T20:36:06.230 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 61/119
2026-03-09T20:36:06.303 INFO:teuthology.orchestra.run.vm10.stdout: Installing : openblas-0.3.29-1.el9.x86_64 62/119
2026-03-09T20:36:06.306 INFO:teuthology.orchestra.run.vm10.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 63/119
2026-03-09T20:36:06.340 INFO:teuthology.orchestra.run.vm10.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 64/119
2026-03-09T20:36:06.762 INFO:teuthology.orchestra.run.vm10.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 65/119
2026-03-09T20:36:06.857 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 66/119
2026-03-09T20:36:07.730 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 67/119
2026-03-09T20:36:07.759 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 68/119
2026-03-09T20:36:07.766 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 69/119
2026-03-09T20:36:07.771 INFO:teuthology.orchestra.run.vm10.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 70/119
2026-03-09T20:36:07.927 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 71/119
2026-03-09T20:36:07.929 INFO:teuthology.orchestra.run.vm10.stdout: Upgrading : librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 72/119
2026-03-09T20:36:07.960 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 72/119
2026-03-09T20:36:07.963 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64 73/119
2026-03-09T20:36:07.972 INFO:teuthology.orchestra.run.vm10.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 74/119
2026-03-09T20:36:08.191 INFO:teuthology.orchestra.run.vm10.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 75/119
2026-03-09T20:36:08.193 INFO:teuthology.orchestra.run.vm10.stdout: Installing : librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 76/119
2026-03-09T20:36:08.212 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 76/119
2026-03-09T20:36:08.221 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64 77/119
2026-03-09T20:36:08.238 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-ply-3.11-14.el9.noarch 78/119
2026-03-09T20:36:08.258 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 79/119
2026-03-09T20:36:08.353 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 80/119
2026-03-09T20:36:08.367 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 81/119
2026-03-09T20:36:08.399 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 82/119
2026-03-09T20:36:08.440 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 83/119
2026-03-09T20:36:08.504 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 84/119
2026-03-09T20:36:08.515 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 85/119
2026-03-09T20:36:08.520 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 86/119
2026-03-09T20:36:08.525 INFO:teuthology.orchestra.run.vm10.stdout: Installing : mailcap-2.1.49-5.el9.noarch 87/119
2026-03-09T20:36:08.527 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 88/119
2026-03-09T20:36:08.547 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 89/119
2026-03-09T20:36:08.547 INFO:teuthology.orchestra.run.vm10.stdout:Creating group 'libstoragemgmt' with GID 994.
2026-03-09T20:36:08.547 INFO:teuthology.orchestra.run.vm10.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 994 and GID 994.
2026-03-09T20:36:08.547 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:08.561 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 89/119
2026-03-09T20:36:08.594 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 89/119
2026-03-09T20:36:08.594 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-09T20:36:08.594 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:08.620 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 90/119
2026-03-09T20:36:08.685 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 91/119
2026-03-09T20:36:08.688 INFO:teuthology.orchestra.run.vm10.stdout: Installing : cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 91/119
2026-03-09T20:36:08.695 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.e 92/119
2026-03-09T20:36:08.732 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c. 93/119
2026-03-09T20:36:08.736 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-ceph-common-2:18.2.7-1055.gab47f43c.el9. 94/119
2026-03-09T20:36:09.794 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 95/119
2026-03-09T20:36:09.813 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 95/119
2026-03-09T20:36:10.144 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 95/119
2026-03-09T20:36:10.151 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 96/119
2026-03-09T20:36:10.194 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 96/119
2026-03-09T20:36:10.194 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-09T20:36:10.194 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-09T20:36:10.194 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:10.208 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 97/119
2026-03-09T20:36:17.421 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 97/119
2026-03-09T20:36:17.421 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /sys
2026-03-09T20:36:17.421 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /proc
2026-03-09T20:36:17.421 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /mnt
2026-03-09T20:36:17.421 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /var/tmp
2026-03-09T20:36:17.421 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /home
2026-03-09T20:36:17.421 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /root
2026-03-09T20:36:17.421 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /tmp
2026-03-09T20:36:17.421 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:17.457 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noa 98/119
2026-03-09T20:36:18.000 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noa 98/119
2026-03-09T20:36:18.007 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.n 99/119
2026-03-09T20:36:18.555 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.n 99/119
2026-03-09T20:36:18.557 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-mgr-diskprediction-local-2:18.2.7-1055.gab4 100/119
2026-03-09T20:36:18.625 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.7-1055.gab4 100/119
2026-03-09T20:36:18.701 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el 101/119
2026-03-09T20:36:18.704 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 102/119
2026-03-09T20:36:18.730 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 102/119
2026-03-09T20:36:18.730 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:36:18.730 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-09T20:36:18.730 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-09T20:36:18.730 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-09T20:36:18.730 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:18.744 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 103/119
2026-03-09T20:36:18.855 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 103/119
2026-03-09T20:36:18.858 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 104/119
2026-03-09T20:36:18.882 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 104/119
2026-03-09T20:36:18.882 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:36:18.882 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T20:36:18.882 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-09T20:36:18.882 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-09T20:36:18.882 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:19.112 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 105/119
2026-03-09T20:36:19.137 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 105/119
2026-03-09T20:36:19.137 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:36:19.137 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T20:36:19.137 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-09T20:36:19.137 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-09T20:36:19.137 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:19.970 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 106/119
2026-03-09T20:36:19.997 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 106/119
2026-03-09T20:36:19.997 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:36:19.997 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-09T20:36:19.998 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-09T20:36:19.998 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-09T20:36:19.998 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:20.403 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-2:18.2.7-1055.gab47f43c.el9.x86_64 107/119
2026-03-09T20:36:20.408 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 108/119
2026-03-09T20:36:20.432 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 108/119
2026-03-09T20:36:20.432 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:36:20.432 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T20:36:20.432 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-09T20:36:20.432 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-09T20:36:20.432 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:20.444 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-immutable-object-cache-2:18.2.7-1055.gab47f 109/119
2026-03-09T20:36:20.468 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.7-1055.gab47f 109/119
2026-03-09T20:36:20.468 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:36:20.468 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-09T20:36:20.469 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:20.621 INFO:teuthology.orchestra.run.vm10.stdout: Installing : rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 110/119
2026-03-09T20:36:20.642 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 110/119
2026-03-09T20:36:20.642 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:36:20.642 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-09T20:36:20.642 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-09T20:36:20.642 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-09T20:36:20.642 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:36:22.626 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64 111/119
2026-03-09T20:36:22.638 INFO:teuthology.orchestra.run.vm10.stdout: Installing : rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 112/119
2026-03-09T20:36:22.645 INFO:teuthology.orchestra.run.vm10.stdout: Installing : rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64 113/119
2026-03-09T20:36:22.688 INFO:teuthology.orchestra.run.vm10.stdout: Installing : libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_ 114/119
2026-03-09T20:36:22.694 INFO:teuthology.orchestra.run.vm10.stdout: Installing : ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 115/119
2026-03-09T20:36:22.714 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 116/119
2026-03-09T20:36:22.719 INFO:teuthology.orchestra.run.vm10.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 117/119
2026-03-09T20:36:22.719 INFO:teuthology.orchestra.run.vm10.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 118/119
2026-03-09T20:36:22.734 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 118/119
2026-03-09T20:36:22.734 INFO:teuthology.orchestra.run.vm10.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 119/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 119/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-2:18.2.7-1055.gab47f43c.el9.x86_64 1/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 2/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 3/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 4/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-immutable-object-cache-2:18.2.7-1055.gab47f 5/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 6/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 7/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 8/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 9/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 10/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 11/119
2026-03-09T20:36:24.018 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64 12/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_ 13/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 14/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 15/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_6 16/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86 17/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 18/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el 19/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-ceph-common-2:18.2.7-1055.gab47f43c.el9. 20/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_6 21/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 22/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64 23/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64 24/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 25/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 26/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64 27/119
2026-03-09T20:36:24.024 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c. 28/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noa 29/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.n 30/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.7-1055.gab4 31/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el 32/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 33/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.e 34/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 35/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 36/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 37/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 38/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 39/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 40/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 41/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 42/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-ply-3.11-14.el9.noarch 43/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 44/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 45/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 46/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 47/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 48/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 49/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 50/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 51/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 52/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 53/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 54/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 55/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 56/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 57/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 58/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 59/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 60/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 61/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 62/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 63/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 64/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 65/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 66/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 67/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 68/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 69/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 70/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 71/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 72/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 73/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 74/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 75/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 76/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 77/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 78/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 79/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 80/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 81/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 82/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 83/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 84/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 85/119
2026-03-09T20:36:24.025 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 86/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 87/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 88/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 89/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 90/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 91/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 92/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 93/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 94/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 95/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 96/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 97/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 98/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 99/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 100/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 101/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 102/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 103/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 104/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 105/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 106/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 107/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 108/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 109/119
2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 110/119 2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 111/119 2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 112/119 2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 113/119 2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : re2-1:20211101-20.el9.x86_64 114/119 2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 115/119 2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 116/119 2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 117/119 2026-03-09T20:36:24.026 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 118/119 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 119/119 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout:Upgraded: 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout:Installed: 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: 
ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-immutable-object-cache-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-diskprediction-local-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 
INFO:teuthology.orchestra.run.vm10.stdout: ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-09T20:36:24.131 INFO:teuthology.orchestra.run.vm10.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: 
librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: 
python3-ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:36:24.132 
INFO:teuthology.orchestra.run.vm10.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T20:36:24.132 INFO:teuthology.orchestra.run.vm10.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 
2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T20:36:24.133 
INFO:teuthology.orchestra.run.vm10.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:36:24.133 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T20:36:24.219 DEBUG:teuthology.parallel:result is None 2026-03-09T20:41:01.538 INFO:teuthology.orchestra.run.vm07.stdout:[MIRROR] ledmon-libs-1.1.0-3.el9.x86_64.rpm: Curl error (28): Timeout was reached for http://ftp.nsc.ru/pub/centos-9/9-stream/BaseOS/x86_64/os/Packages/ledmon-libs-1.1.0-3.el9.x86_64.rpm [Operation too slow. Less than 1000 bytes/sec transferred the last 300 seconds] 2026-03-09T20:41:01.686 INFO:teuthology.orchestra.run.vm07.stdout:(36/117): ledmon-libs-1.1.0-3.el9.x86_64.rpm 135 B/s | 40 kB 05:06 2026-03-09T20:41:01.788 INFO:teuthology.orchestra.run.vm07.stdout:(37/117): libquadmath-11.5.0-14.el9.x86_64.rpm 1.8 MB/s | 184 kB 00:00 2026-03-09T20:41:01.819 INFO:teuthology.orchestra.run.vm07.stdout:(38/117): mailcap-2.1.49-5.el9.noarch.rpm 1.1 MB/s | 33 kB 00:00 2026-03-09T20:41:01.897 INFO:teuthology.orchestra.run.vm07.stdout:(39/117): python3-cffi-1.14.5-5.el9.x86_64.rpm 3.2 MB/s | 253 kB 00:00 2026-03-09T20:41:01.943 INFO:teuthology.orchestra.run.vm07.stdout:[MIRROR] libconfig-1.7.2-9.el9.x86_64.rpm: Curl error (28): Timeout was reached for http://ftp.nsc.ru/pub/centos-9/9-stream/BaseOS/x86_64/os/Packages/libconfig-1.7.2-9.el9.x86_64.rpm [Operation too slow. 
Less than 1000 bytes/sec transferred the last 300 seconds] 2026-03-09T20:41:01.981 INFO:teuthology.orchestra.run.vm07.stdout:(40/117): python3-cryptography-36.0.1-5.el9.x86 15 MB/s | 1.2 MB 00:00 2026-03-09T20:41:02.015 INFO:teuthology.orchestra.run.vm07.stdout:(41/117): python3-ply-3.11-14.el9.noarch.rpm 3.1 MB/s | 106 kB 00:00 2026-03-09T20:41:02.050 INFO:teuthology.orchestra.run.vm07.stdout:(42/117): python3-pycparser-2.20-6.el9.noarch.r 3.7 MB/s | 135 kB 00:00 2026-03-09T20:41:02.080 INFO:teuthology.orchestra.run.vm07.stdout:(43/117): python3-requests-2.25.1-10.el9.noarch 4.1 MB/s | 126 kB 00:00 2026-03-09T20:41:02.089 INFO:teuthology.orchestra.run.vm07.stdout:(44/117): libconfig-1.7.2-9.el9.x86_64.rpm 241 B/s | 72 kB 05:05 2026-03-09T20:41:02.112 INFO:teuthology.orchestra.run.vm07.stdout:(45/117): python3-urllib3-1.26.5-7.el9.noarch.r 6.9 MB/s | 218 kB 00:00 2026-03-09T20:41:02.395 INFO:teuthology.orchestra.run.vm07.stdout:(46/117): flexiblas-3.0.4-9.el9.x86_64.rpm 105 kB/s | 30 kB 00:00 2026-03-09T20:41:02.499 INFO:teuthology.orchestra.run.vm07.stdout:(47/117): boost-program-options-1.75.0-13.el9.x 254 kB/s | 104 kB 00:00 2026-03-09T20:41:02.616 INFO:teuthology.orchestra.run.vm07.stdout:(48/117): flexiblas-openblas-openmp-3.0.4-9.el9 127 kB/s | 15 kB 00:00 2026-03-09T20:41:02.779 INFO:teuthology.orchestra.run.vm07.stdout:(49/117): libpmemobj-1.12.1-1.el9.x86_64.rpm 985 kB/s | 160 kB 00:00 2026-03-09T20:41:02.872 INFO:teuthology.orchestra.run.vm07.stdout:(50/117): librabbitmq-0.11.0-7.el9.x86_64.rpm 488 kB/s | 45 kB 00:00 2026-03-09T20:41:02.921 INFO:teuthology.orchestra.run.vm07.stdout:[MIRROR] libgfortran-11.5.0-14.el9.x86_64.rpm: Curl error (28): Timeout was reached for http://ftp.nsc.ru/pub/centos-9/9-stream/BaseOS/x86_64/os/Packages/libgfortran-11.5.0-14.el9.x86_64.rpm [Operation too slow. 
Less than 1000 bytes/sec transferred the last 300 seconds] 2026-03-09T20:41:02.965 INFO:teuthology.orchestra.run.vm07.stdout:(51/117): flexiblas-netlib-3.0.4-9.el9.x86_64.r 5.2 MB/s | 3.0 MB 00:00 2026-03-09T20:41:03.053 INFO:teuthology.orchestra.run.vm07.stdout:(52/117): libstoragemgmt-1.10.1-1.el9.x86_64.rp 2.7 MB/s | 246 kB 00:00 2026-03-09T20:41:03.060 INFO:teuthology.orchestra.run.vm07.stdout:(53/117): libgfortran-11.5.0-14.el9.x86_64.rpm 2.6 kB/s | 794 kB 05:06 2026-03-09T20:41:03.111 INFO:teuthology.orchestra.run.vm07.stdout:(54/117): librdkafka-1.6.1-102.el9.x86_64.rpm 2.7 MB/s | 662 kB 00:00 2026-03-09T20:41:03.157 INFO:teuthology.orchestra.run.vm07.stdout:(55/117): libxslt-1.1.34-12.el9.x86_64.rpm 2.2 MB/s | 233 kB 00:00 2026-03-09T20:41:03.215 INFO:teuthology.orchestra.run.vm07.stdout:(56/117): openblas-0.3.29-1.el9.x86_64.rpm 409 kB/s | 42 kB 00:00 2026-03-09T20:41:03.473 INFO:teuthology.orchestra.run.vm07.stdout:(57/117): openblas-openmp-0.3.29-1.el9.x86_64.r 17 MB/s | 5.3 MB 00:00 2026-03-09T20:41:03.552 INFO:teuthology.orchestra.run.vm07.stdout:(58/117): lttng-ust-2.12.0-6.el9.x86_64.rpm 595 kB/s | 292 kB 00:00 2026-03-09T20:41:03.634 INFO:teuthology.orchestra.run.vm07.stdout:(59/117): python3-devel-3.9.25-3.el9.x86_64.rpm 1.5 MB/s | 244 kB 00:00 2026-03-09T20:41:03.655 INFO:teuthology.orchestra.run.vm07.stdout:(60/117): python3-babel-2.9.1-2.el9.noarch.rpm 14 MB/s | 6.0 MB 00:00 2026-03-09T20:41:03.694 INFO:teuthology.orchestra.run.vm07.stdout:(61/117): python3-jinja2-2.11.3-8.el9.noarch.rp 1.7 MB/s | 249 kB 00:00 2026-03-09T20:41:03.716 INFO:teuthology.orchestra.run.vm07.stdout:(62/117): python3-jmespath-1.0.1-1.el9.noarch.r 586 kB/s | 48 kB 00:00 2026-03-09T20:41:03.739 INFO:teuthology.orchestra.run.vm07.stdout:(63/117): python3-libstoragemgmt-1.10.1-1.el9.x 2.1 MB/s | 177 kB 00:00 2026-03-09T20:41:03.794 INFO:teuthology.orchestra.run.vm07.stdout:(64/117): python3-markupsafe-1.1.1-12.el9.x86_6 447 kB/s | 35 kB 00:00 2026-03-09T20:41:03.828 
INFO:teuthology.orchestra.run.vm07.stdout:(65/117): python3-mako-1.1.4-6.el9.noarch.rpm 1.3 MB/s | 172 kB 00:00 2026-03-09T20:41:03.917 INFO:teuthology.orchestra.run.vm07.stdout:(66/117): python3-numpy-f2py-1.23.5-2.el9.x86_6 3.5 MB/s | 442 kB 00:00 2026-03-09T20:41:03.947 INFO:teuthology.orchestra.run.vm07.stdout:(67/117): python3-pyasn1-0.4.8-7.el9.noarch.rpm 1.3 MB/s | 157 kB 00:00 2026-03-09T20:41:04.024 INFO:teuthology.orchestra.run.vm07.stdout:(68/117): python3-pyasn1-modules-0.4.8-7.el9.no 2.5 MB/s | 277 kB 00:00 2026-03-09T20:41:04.037 INFO:teuthology.orchestra.run.vm07.stdout:(69/117): python3-requests-oauthlib-1.3.0-12.el 592 kB/s | 54 kB 00:00 2026-03-09T20:41:04.135 INFO:teuthology.orchestra.run.vm07.stdout:(70/117): python3-numpy-1.23.5-2.el9.x86_64.rpm 15 MB/s | 6.1 MB 00:00 2026-03-09T20:41:04.135 INFO:teuthology.orchestra.run.vm07.stdout:(71/117): python3-toml-0.10.2-6.el9.noarch.rpm 427 kB/s | 42 kB 00:00 2026-03-09T20:41:04.233 INFO:teuthology.orchestra.run.vm07.stdout:(72/117): xmlstarlet-1.6.1-20.el9.x86_64.rpm 650 kB/s | 64 kB 00:00 2026-03-09T20:41:04.241 INFO:teuthology.orchestra.run.vm07.stdout:(73/117): fmt-8.1.1-5.el9.x86_64.rpm 15 MB/s | 111 kB 00:00 2026-03-09T20:41:04.242 INFO:teuthology.orchestra.run.vm07.stdout:(74/117): socat-1.7.4.1-8.el9.x86_64.rpm 2.8 MB/s | 303 kB 00:00 2026-03-09T20:41:04.249 INFO:teuthology.orchestra.run.vm07.stdout:(75/117): gperftools-libs-2.9.1-3.el9.x86_64.rp 36 MB/s | 308 kB 00:00 2026-03-09T20:41:04.253 INFO:teuthology.orchestra.run.vm07.stdout:(76/117): libarrow-doc-9.0.0-15.el9.noarch.rpm 7.0 MB/s | 25 kB 00:00 2026-03-09T20:41:04.257 INFO:teuthology.orchestra.run.vm07.stdout:(77/117): liboath-2.6.12-1.el9.x86_64.rpm 12 MB/s | 49 kB 00:00 2026-03-09T20:41:04.261 INFO:teuthology.orchestra.run.vm07.stdout:(78/117): libunwind-1.6.2-1.el9.x86_64.rpm 18 MB/s | 67 kB 00:00 2026-03-09T20:41:04.310 INFO:teuthology.orchestra.run.vm07.stdout:(79/117): parquet-libs-9.0.0-15.el9.x86_64.rpm 17 MB/s | 838 kB 00:00 
2026-03-09T20:41:04.326 INFO:teuthology.orchestra.run.vm07.stdout:(80/117): libarrow-9.0.0-15.el9.x86_64.rpm 53 MB/s | 4.4 MB 00:00 2026-03-09T20:41:04.328 INFO:teuthology.orchestra.run.vm07.stdout:(81/117): python3-autocommand-2.2.2-8.el9.noarc 12 MB/s | 29 kB 00:00 2026-03-09T20:41:04.331 INFO:teuthology.orchestra.run.vm07.stdout:(82/117): python3-backports-tarfile-1.2.0-1.el9 21 MB/s | 60 kB 00:00 2026-03-09T20:41:04.334 INFO:teuthology.orchestra.run.vm07.stdout:(83/117): python3-bcrypt-3.2.2-1.el9.x86_64.rpm 17 MB/s | 43 kB 00:00 2026-03-09T20:41:04.337 INFO:teuthology.orchestra.run.vm07.stdout:(84/117): python3-asyncssh-2.13.2-5.el9.noarch. 20 MB/s | 548 kB 00:00 2026-03-09T20:41:04.338 INFO:teuthology.orchestra.run.vm07.stdout:(85/117): python3-cachetools-4.2.4-1.el9.noarch 9.5 MB/s | 32 kB 00:00 2026-03-09T20:41:04.339 INFO:teuthology.orchestra.run.vm07.stdout:(86/117): python3-certifi-2023.05.07-4.el9.noar 6.4 MB/s | 14 kB 00:00 2026-03-09T20:41:04.342 INFO:teuthology.orchestra.run.vm07.stdout:(87/117): python3-cheroot-10.0.1-4.el9.noarch.r 43 MB/s | 173 kB 00:00 2026-03-09T20:41:04.348 INFO:teuthology.orchestra.run.vm07.stdout:(88/117): python3-google-auth-2.45.0-1.el9.noar 37 MB/s | 254 kB 00:00 2026-03-09T20:41:04.352 INFO:teuthology.orchestra.run.vm07.stdout:(89/117): python3-cherrypy-18.6.1-2.el9.noarch. 
28 MB/s | 358 kB 00:00 2026-03-09T20:41:04.352 INFO:teuthology.orchestra.run.vm07.stdout:(90/117): python3-jaraco-8.2.1-3.el9.noarch.rpm 2.7 MB/s | 11 kB 00:00 2026-03-09T20:41:04.354 INFO:teuthology.orchestra.run.vm07.stdout:(91/117): python3-jaraco-classes-3.2.1-5.el9.no 7.8 MB/s | 18 kB 00:00 2026-03-09T20:41:04.355 INFO:teuthology.orchestra.run.vm07.stdout:(92/117): python3-jaraco-collections-3.0.0-8.el 9.8 MB/s | 23 kB 00:00 2026-03-09T20:41:04.357 INFO:teuthology.orchestra.run.vm07.stdout:(93/117): python3-jaraco-context-6.0.1-3.el9.no 8.7 MB/s | 20 kB 00:00 2026-03-09T20:41:04.357 INFO:teuthology.orchestra.run.vm07.stdout:(94/117): python3-jaraco-functools-3.5.0-2.el9. 8.6 MB/s | 19 kB 00:00 2026-03-09T20:41:04.359 INFO:teuthology.orchestra.run.vm07.stdout:(95/117): python3-jaraco-text-4.0.0-2.el9.noarc 11 MB/s | 26 kB 00:00 2026-03-09T20:41:04.362 INFO:teuthology.orchestra.run.vm07.stdout:(96/117): python3-logutils-0.3.5-21.el9.noarch. 17 MB/s | 46 kB 00:00 2026-03-09T20:41:04.367 INFO:teuthology.orchestra.run.vm07.stdout:(97/117): python3-more-itertools-8.12.0-2.el9.n 17 MB/s | 79 kB 00:00 2026-03-09T20:41:04.370 INFO:teuthology.orchestra.run.vm07.stdout:(98/117): python3-natsort-7.1.1-5.el9.noarch.rp 19 MB/s | 58 kB 00:00 2026-03-09T20:41:04.373 INFO:teuthology.orchestra.run.vm07.stdout:(99/117): python3-kubernetes-26.1.0-3.el9.noarc 67 MB/s | 1.0 MB 00:00 2026-03-09T20:41:04.376 INFO:teuthology.orchestra.run.vm07.stdout:(100/117): python3-portend-3.1.0-2.el9.noarch.r 5.7 MB/s | 16 kB 00:00 2026-03-09T20:41:04.379 INFO:teuthology.orchestra.run.vm07.stdout:(101/117): python3-pecan-1.4.2-3.el9.noarch.rpm 32 MB/s | 272 kB 00:00 2026-03-09T20:41:04.380 INFO:teuthology.orchestra.run.vm07.stdout:(102/117): python3-pyOpenSSL-21.0.0-1.el9.noarc 24 MB/s | 90 kB 00:00 2026-03-09T20:41:04.381 INFO:teuthology.orchestra.run.vm07.stdout:(103/117): python3-repoze-lru-0.7-16.el9.noarch 12 MB/s | 31 kB 00:00 2026-03-09T20:41:04.385 
INFO:teuthology.orchestra.run.vm07.stdout:(104/117): python3-routes-2.5.1-5.el9.noarch.rp 36 MB/s | 188 kB 00:00
2026-03-09T20:41:04.385 INFO:teuthology.orchestra.run.vm07.stdout:(105/117): python3-rsa-4.9-2.el9.noarch.rpm 15 MB/s | 59 kB 00:00
2026-03-09T20:41:04.387 INFO:teuthology.orchestra.run.vm07.stdout:(106/117): python3-tempora-5.0.0-2.el9.noarch.r 14 MB/s | 36 kB 00:00
2026-03-09T20:41:04.389 INFO:teuthology.orchestra.run.vm07.stdout:(107/117): python3-typing-extensions-4.15.0-1.e 24 MB/s | 86 kB 00:00
2026-03-09T20:41:04.392 INFO:teuthology.orchestra.run.vm07.stdout:(108/117): python3-webob-1.8.8-2.el9.noarch.rpm 50 MB/s | 230 kB 00:00
2026-03-09T20:41:04.394 INFO:teuthology.orchestra.run.vm07.stdout:(109/117): python3-websocket-client-1.2.3-2.el9 22 MB/s | 90 kB 00:00
2026-03-09T20:41:04.397 INFO:teuthology.orchestra.run.vm07.stdout:(110/117): python3-xmltodict-0.12.0-15.el9.noar 7.2 MB/s | 22 kB 00:00
2026-03-09T20:41:04.399 INFO:teuthology.orchestra.run.vm07.stdout:(111/117): python3-werkzeug-2.0.3-3.el9.1.noarc 59 MB/s | 427 kB 00:00
2026-03-09T20:41:04.400 INFO:teuthology.orchestra.run.vm07.stdout:(112/117): python3-zc-lockfile-2.0-10.el9.noarc 6.3 MB/s | 20 kB 00:00
2026-03-09T20:41:04.405 INFO:teuthology.orchestra.run.vm07.stdout:(113/117): re2-20211101-20.el9.x86_64.rpm 37 MB/s | 191 kB 00:00
2026-03-09T20:41:04.432 INFO:teuthology.orchestra.run.vm07.stdout:(114/117): thrift-0.15.0-4.el9.x86_64.rpm 50 MB/s | 1.6 MB 00:00
2026-03-09T20:41:04.994 INFO:teuthology.orchestra.run.vm07.stdout:(115/117): python3-scipy-1.9.3-2.el9.x86_64.rpm 20 MB/s | 19 MB 00:00
2026-03-09T20:41:05.851 INFO:teuthology.orchestra.run.vm07.stdout:(116/117): librbd1-18.2.7-1055.gab47f43c.el9.x8 2.1 MB/s | 3.0 MB 00:01
2026-03-09T20:41:05.912 INFO:teuthology.orchestra.run.vm07.stdout:(117/117): librados2-18.2.7-1055.gab47f43c.el9. 2.2 MB/s | 3.3 MB 00:01
2026-03-09T20:41:05.913 INFO:teuthology.orchestra.run.vm07.stdout:--------------------------------------------------------------------------------
2026-03-09T20:41:05.913 INFO:teuthology.orchestra.run.vm07.stdout:Total 577 kB/s | 181 MB 05:20
2026-03-09T20:41:06.272 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-03-09T20:41:06.315 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-03-09T20:41:06.315 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-03-09T20:41:07.017 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-03-09T20:41:07.017 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-03-09T20:41:07.774 INFO:teuthology.orchestra.run.vm07.stdout:  Preparing        :                                                    1/1
2026-03-09T20:41:07.788 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-more-itertools-8.12.0-2.el9.noarch       1/119
2026-03-09T20:41:07.799 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : thrift-0.15.0-4.el9.x86_64                       2/119
2026-03-09T20:41:07.955 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : lttng-ust-2.12.0-6.el9.x86_64                    3/119
2026-03-09T20:41:07.957 INFO:teuthology.orchestra.run.vm07.stdout:  Upgrading        : librados2-2:18.2.7-1055.gab47f43c.el9.x86_64     4/119
2026-03-09T20:41:08.000 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: librados2-2:18.2.7-1055.gab47f43c.el9.x86_64     4/119
2026-03-09T20:41:08.001 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64    5/119
2026-03-09T20:41:08.026 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64    5/119
2026-03-09T20:41:08.034 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 6/119
2026-03-09T20:41:08.068 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : librdkafka-1.6.1-102.el9.x86_64                  7/119
2026-03-09T20:41:08.070 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : librabbitmq-0.11.0-7.el9.x86_64                  8/119
2026-03-09T20:41:08.078 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-jaraco-8.2.1-3.el9.noarch                9/119
2026-03-09T20:41:08.080 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 10/119
2026-03-09T20:41:08.110 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 10/119
2026-03-09T20:41:08.111 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86 11/119
2026-03-09T20:41:08.153 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86 11/119
2026-03-09T20:41:08.158 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-werkzeug-2.0.3-3.el9.1.noarch            12/119
2026-03-09T20:41:08.182 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : liboath-2.6.12-1.el9.x86_64                      13/119
2026-03-09T20:41:08.190 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-pyasn1-0.4.8-7.el9.noarch                14/119
2026-03-09T20:41:08.193 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-markupsafe-1.1.1-12.el9.x86_64           15/119
2026-03-09T20:41:08.218 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : flexiblas-3.0.4-9.el9.x86_64                     16/119
2026-03-09T20:41:08.233 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-urllib3-1.26.5-7.el9.noarch              17/119
2026-03-09T20:41:08.238 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-requests-2.25.1-10.el9.noarch            18/119
2026-03-09T20:41:08.244 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libquadmath-11.5.0-14.el9.x86_64                 19/119
2026-03-09T20:41:08.246 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libgfortran-11.5.0-14.el9.x86_64                 20/119
2026-03-09T20:41:08.252 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ledmon-libs-1.1.0-3.el9.x86_64                   21/119
2026-03-09T20:41:08.261 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el 22/119
2026-03-09T20:41:08.274 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_6 23/119
2026-03-09T20:41:08.302 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-requests-oauthlib-1.3.0-12.el9.noarch    24/119
2026-03-09T20:41:08.362 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-mako-1.1.4-6.el9.noarch                  25/119
2026-03-09T20:41:08.378 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-pyasn1-modules-0.4.8-7.el9.noarch        26/119
2026-03-09T20:41:08.385 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-rsa-4.9-2.el9.noarch                     27/119
2026-03-09T20:41:08.394 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-jaraco-classes-3.2.1-5.el9.noarch        28/119
2026-03-09T20:41:08.399 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_6 29/119
2026-03-09T20:41:08.431 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : re2-1:20211101-20.el9.x86_64                     30/119
2026-03-09T20:41:08.437 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libarrow-9.0.0-15.el9.x86_64                     31/119
2026-03-09T20:41:08.454 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-zc-lockfile-2.0-10.el9.noarch            32/119
2026-03-09T20:41:08.479 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-websocket-client-1.2.3-2.el9.noarch      33/119
2026-03-09T20:41:08.485 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-webob-1.8.8-2.el9.noarch                 34/119
2026-03-09T20:41:08.491 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-typing-extensions-4.15.0-1.el9.noarch    35/119
2026-03-09T20:41:08.505 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-repoze-lru-0.7-16.el9.noarch             36/119
2026-03-09T20:41:08.516 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-routes-2.5.1-5.el9.noarch                37/119
2026-03-09T20:41:08.527 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-natsort-7.1.1-5.el9.noarch               38/119
2026-03-09T20:41:08.587 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-logutils-0.3.5-21.el9.noarch             39/119
2026-03-09T20:41:08.595 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-pecan-1.4.2-3.el9.noarch                 40/119
2026-03-09T20:41:08.605 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-certifi-2023.05.07-4.el9.noarch          41/119
2026-03-09T20:41:08.650 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-cachetools-4.2.4-1.el9.noarch            42/119
2026-03-09T20:41:09.014 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-google-auth-1:2.45.0-1.el9.noarch        43/119
2026-03-09T20:41:09.030 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-kubernetes-1:26.1.0-3.el9.noarch         44/119
2026-03-09T20:41:09.035 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-backports-tarfile-1.2.0-1.el9.noarch     45/119
2026-03-09T20:41:09.042 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-jaraco-context-6.0.1-3.el9.noarch        46/119
2026-03-09T20:41:09.047 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-autocommand-2.2.2-8.el9.noarch           47/119
2026-03-09T20:41:09.054 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libunwind-1.6.2-1.el9.x86_64                     48/119
2026-03-09T20:41:09.057 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : gperftools-libs-2.9.1-3.el9.x86_64               49/119
2026-03-09T20:41:09.060 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libarrow-doc-9.0.0-15.el9.noarch                 50/119
2026-03-09T20:41:09.070 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : fmt-8.1.1-5.el9.x86_64                           51/119
2026-03-09T20:41:09.077 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : socat-1.7.4.1-8.el9.x86_64                       52/119
2026-03-09T20:41:09.081 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-toml-0.10.2-6.el9.noarch                 53/119
2026-03-09T20:41:09.089 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-jaraco-functools-3.5.0-2.el9.noarch      54/119
2026-03-09T20:41:09.093 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-jaraco-text-4.0.0-2.el9.noarch           55/119
2026-03-09T20:41:09.102 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-jaraco-collections-3.0.0-8.el9.noarch    56/119
2026-03-09T20:41:09.107 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-tempora-5.0.0-2.el9.noarch               57/119
2026-03-09T20:41:09.145 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-portend-3.1.0-2.el9.noarch               58/119
2026-03-09T20:41:09.405 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-devel-3.9.25-3.el9.x86_64                59/119
2026-03-09T20:41:09.434 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-babel-2.9.1-2.el9.noarch                 60/119
2026-03-09T20:41:09.440 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-jinja2-2.11.3-8.el9.noarch               61/119
2026-03-09T20:41:09.500 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : openblas-0.3.29-1.el9.x86_64                     62/119
2026-03-09T20:41:09.503 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : openblas-openmp-0.3.29-1.el9.x86_64              63/119
2026-03-09T20:41:09.526 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64     64/119
2026-03-09T20:41:09.892 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : flexiblas-netlib-3.0.4-9.el9.x86_64              65/119
2026-03-09T20:41:09.977 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-numpy-1:1.23.5-2.el9.x86_64              66/119
2026-03-09T20:41:10.710 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-numpy-f2py-1:1.23.5-2.el9.x86_64         67/119
2026-03-09T20:41:10.735 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-scipy-1.9.3-2.el9.x86_64                 68/119
2026-03-09T20:41:10.741 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libxslt-1.1.34-12.el9.x86_64                     69/119
2026-03-09T20:41:10.744 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : xmlstarlet-1.6.1-20.el9.x86_64                   70/119
2026-03-09T20:41:10.885 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libpmemobj-1.12.1-1.el9.x86_64                   71/119
2026-03-09T20:41:10.887 INFO:teuthology.orchestra.run.vm07.stdout:  Upgrading        : librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64       72/119
2026-03-09T20:41:10.914 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64       72/119
2026-03-09T20:41:10.917 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64   73/119
2026-03-09T20:41:10.926 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : boost-program-options-1.75.0-13.el9.x86_64       74/119
2026-03-09T20:41:11.125 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : parquet-libs-9.0.0-15.el9.x86_64                 75/119
2026-03-09T20:41:11.127 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64       76/119
2026-03-09T20:41:11.143 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64       76/119
2026-03-09T20:41:11.151 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64   77/119
2026-03-09T20:41:11.167 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-ply-3.11-14.el9.noarch                   78/119
2026-03-09T20:41:11.186 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-pycparser-2.20-6.el9.noarch              79/119
2026-03-09T20:41:11.273 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-cffi-1.14.5-5.el9.x86_64                 80/119
2026-03-09T20:41:11.286 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-cryptography-36.0.1-5.el9.x86_64         81/119
2026-03-09T20:41:11.315 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-pyOpenSSL-21.0.0-1.el9.noarch            82/119
2026-03-09T20:41:11.351 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-cheroot-10.0.1-4.el9.noarch              83/119
2026-03-09T20:41:11.412 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-cherrypy-18.6.1-2.el9.noarch             84/119
2026-03-09T20:41:11.422 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-asyncssh-2.13.2-5.el9.noarch             85/119
2026-03-09T20:41:11.426 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-bcrypt-3.2.2-1.el9.x86_64                86/119
2026-03-09T20:41:11.431 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : mailcap-2.1.49-5.el9.noarch                      87/119
2026-03-09T20:41:11.434 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libconfig-1.7.2-9.el9.x86_64                     88/119
2026-03-09T20:41:11.452 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64               89/119
2026-03-09T20:41:11.452 INFO:teuthology.orchestra.run.vm07.stdout:Creating group 'libstoragemgmt' with GID 994.
2026-03-09T20:41:11.452 INFO:teuthology.orchestra.run.vm07.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 994 and GID 994.
2026-03-09T20:41:11.452 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:11.464 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libstoragemgmt-1.10.1-1.el9.x86_64               89/119
2026-03-09T20:41:11.490 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64               89/119
2026-03-09T20:41:11.490 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-09T20:41:11.490 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:11.506 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-libstoragemgmt-1.10.1-1.el9.x86_64       90/119
2026-03-09T20:41:11.556 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: cephadm-2:18.2.7-1055.gab47f43c.el9.noarch       91/119
2026-03-09T20:41:11.558 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : cephadm-2:18.2.7-1055.gab47f43c.el9.noarch       91/119
2026-03-09T20:41:11.563 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.e 92/119
2026-03-09T20:41:11.591 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c. 93/119
2026-03-09T20:41:11.594 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-ceph-common-2:18.2.7-1055.gab47f43c.el9. 94/119
2026-03-09T20:41:12.533 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64   95/119
2026-03-09T20:41:12.538 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64   95/119
2026-03-09T20:41:12.835 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64   95/119
2026-03-09T20:41:12.840 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64     96/119
2026-03-09T20:41:12.880 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64     96/119
2026-03-09T20:41:12.880 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-09T20:41:12.880 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-09T20:41:12.880 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:12.885 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64  97/119
2026-03-09T20:41:18.886 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64  97/119
2026-03-09T20:41:18.886 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /sys
2026-03-09T20:41:18.886 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /proc
2026-03-09T20:41:18.886 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /mnt
2026-03-09T20:41:18.886 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /var/tmp
2026-03-09T20:41:18.886 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /home
2026-03-09T20:41:18.886 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /root
2026-03-09T20:41:18.886 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /tmp
2026-03-09T20:41:18.886 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:18.917 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noa 98/119
2026-03-09T20:41:19.432 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noa 98/119
2026-03-09T20:41:19.440 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.n 99/119
2026-03-09T20:41:19.954 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.n 99/119
2026-03-09T20:41:19.991 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-mgr-diskprediction-local-2:18.2.7-1055.gab4 100/119
2026-03-09T20:41:20.050 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.7-1055.gab4 100/119
2026-03-09T20:41:20.209 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el 101/119
2026-03-09T20:41:20.212 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64      102/119
2026-03-09T20:41:20.237 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64      102/119
2026-03-09T20:41:20.238 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:41:20.238 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-09T20:41:20.238 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-09T20:41:20.238 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-09T20:41:20.238 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:20.251 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 103/119
2026-03-09T20:41:20.354 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 103/119
2026-03-09T20:41:20.357 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64      104/119
2026-03-09T20:41:20.377 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64      104/119
2026-03-09T20:41:20.377 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:41:20.377 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T20:41:20.377 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-09T20:41:20.377 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-09T20:41:20.377 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:20.601 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64      105/119
2026-03-09T20:41:20.622 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64      105/119
2026-03-09T20:41:20.622 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:41:20.622 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T20:41:20.622 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-09T20:41:20.622 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-09T20:41:20.622 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:21.397 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64      106/119
2026-03-09T20:41:21.423 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64      106/119
2026-03-09T20:41:21.423 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:41:21.423 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-09T20:41:21.423 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-09T20:41:21.423 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-09T20:41:21.423 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:21.812 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-2:18.2.7-1055.gab47f43c.el9.x86_64          107/119
2026-03-09T20:41:21.816 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64  108/119
2026-03-09T20:41:21.837 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64  108/119
2026-03-09T20:41:21.837 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:41:21.837 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T20:41:21.837 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-09T20:41:21.837 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-09T20:41:21.837 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:21.848 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-immutable-object-cache-2:18.2.7-1055.gab47f 109/119
2026-03-09T20:41:21.869 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: ceph-immutable-object-cache-2:18.2.7-1055.gab47f 109/119
2026-03-09T20:41:21.869 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:41:21.869 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-09T20:41:21.869 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:22.022 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64    110/119
2026-03-09T20:41:22.042 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64    110/119
2026-03-09T20:41:22.042 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T20:41:22.042 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-09T20:41:22.042 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-09T20:41:22.042 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-09T20:41:22.042 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:24.097 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64     111/119
2026-03-09T20:41:24.107 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64      112/119
2026-03-09T20:41:24.112 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64       113/119
2026-03-09T20:41:24.153 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_ 114/119
2026-03-09T20:41:24.160 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64     115/119
2026-03-09T20:41:24.168 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-xmltodict-0.12.0-15.el9.noarch           116/119
2026-03-09T20:41:24.173 INFO:teuthology.orchestra.run.vm07.stdout:  Installing       : python3-jmespath-1.0.1-1.el9.noarch              117/119
2026-03-09T20:41:24.173 INFO:teuthology.orchestra.run.vm07.stdout:  Cleanup          : librbd1-2:16.2.4-5.el9.x86_64                   118/119
2026-03-09T20:41:24.189 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64                   118/119
2026-03-09T20:41:24.189 INFO:teuthology.orchestra.run.vm07.stdout:  Cleanup          : librados2-2:16.2.4-5.el9.x86_64                 119/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Running scriptlet: librados2-2:16.2.4-5.el9.x86_64                 119/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-2:18.2.7-1055.gab47f43c.el9.x86_64          1/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64     2/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64   3/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64     4/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-immutable-object-cache-2:18.2.7-1055.gab47f 5/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64      6/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64      7/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64      8/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64      9/119
2026-03-09T20:41:25.387 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64  10/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64  11/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64     12/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_ 13/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64    14/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 15/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_6 16/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86 17/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64       18/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el 19/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-ceph-common-2:18.2.7-1055.gab47f43c.el9. 20/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_6 21/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 22/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64   23/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64   24/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64      25/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64    26/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64       27/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c. 28/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noa 29/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.n 30/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-mgr-diskprediction-local-2:18.2.7-1055.gab4 31/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el 32/119
2026-03-09T20:41:25.388 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 33/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.e 34/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : cephadm-2:18.2.7-1055.gab47f43c.el9.noarch       35/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : ledmon-libs-1.1.0-3.el9.x86_64                   36/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libconfig-1.7.2-9.el9.x86_64                     37/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libgfortran-11.5.0-14.el9.x86_64                 38/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libquadmath-11.5.0-14.el9.x86_64                 39/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : mailcap-2.1.49-5.el9.noarch                      40/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-cffi-1.14.5-5.el9.x86_64                 41/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-cryptography-36.0.1-5.el9.x86_64         42/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-ply-3.11-14.el9.noarch                   43/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-pycparser-2.20-6.el9.noarch              44/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-requests-2.25.1-10.el9.noarch            45/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-urllib3-1.26.5-7.el9.noarch              46/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : boost-program-options-1.75.0-13.el9.x86_64       47/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : flexiblas-3.0.4-9.el9.x86_64                     48/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : flexiblas-netlib-3.0.4-9.el9.x86_64              49/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64     50/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libpmemobj-1.12.1-1.el9.x86_64                   51/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : librabbitmq-0.11.0-7.el9.x86_64                  52/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : librdkafka-1.6.1-102.el9.x86_64                  53/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libstoragemgmt-1.10.1-1.el9.x86_64               54/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libxslt-1.1.34-12.el9.x86_64                     55/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : lttng-ust-2.12.0-6.el9.x86_64                    56/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : openblas-0.3.29-1.el9.x86_64                     57/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : openblas-openmp-0.3.29-1.el9.x86_64              58/119
2026-03-09T20:41:25.389 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-babel-2.9.1-2.el9.noarch                 59/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-devel-3.9.25-3.el9.x86_64                60/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-jinja2-2.11.3-8.el9.noarch               61/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-jmespath-1.0.1-1.el9.noarch              62/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-libstoragemgmt-1.10.1-1.el9.x86_64       63/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-mako-1.1.4-6.el9.noarch                  64/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-markupsafe-1.1.1-12.el9.x86_64           65/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-numpy-1:1.23.5-2.el9.x86_64              66/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-numpy-f2py-1:1.23.5-2.el9.x86_64         67/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-pyasn1-0.4.8-7.el9.noarch                68/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-pyasn1-modules-0.4.8-7.el9.noarch        69/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-requests-oauthlib-1.3.0-12.el9.noarch    70/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-scipy-1.9.3-2.el9.x86_64                 71/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-toml-0.10.2-6.el9.noarch                 72/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : socat-1.7.4.1-8.el9.x86_64                       73/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : xmlstarlet-1.6.1-20.el9.x86_64                   74/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : fmt-8.1.1-5.el9.x86_64                           75/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : gperftools-libs-2.9.1-3.el9.x86_64               76/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libarrow-9.0.0-15.el9.x86_64                     77/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libarrow-doc-9.0.0-15.el9.noarch                 78/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : liboath-2.6.12-1.el9.x86_64                      79/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : libunwind-1.6.2-1.el9.x86_64                     80/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : parquet-libs-9.0.0-15.el9.x86_64                 81/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-asyncssh-2.13.2-5.el9.noarch             82/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-autocommand-2.2.2-8.el9.noarch           83/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-backports-tarfile-1.2.0-1.el9.noarch     84/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-bcrypt-3.2.2-1.el9.x86_64                85/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-cachetools-4.2.4-1.el9.noarch            86/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-certifi-2023.05.07-4.el9.noarch          87/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-cheroot-10.0.1-4.el9.noarch              88/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-cherrypy-18.6.1-2.el9.noarch             89/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-google-auth-1:2.45.0-1.el9.noarch        90/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-jaraco-8.2.1-3.el9.noarch                91/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-jaraco-classes-3.2.1-5.el9.noarch        92/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-jaraco-collections-3.0.0-8.el9.noarch    93/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-jaraco-context-6.0.1-3.el9.noarch        94/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-jaraco-functools-3.5.0-2.el9.noarch      95/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-jaraco-text-4.0.0-2.el9.noarch           96/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-kubernetes-1:26.1.0-3.el9.noarch         97/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-logutils-0.3.5-21.el9.noarch             98/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-more-itertools-8.12.0-2.el9.noarch       99/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-natsort-7.1.1-5.el9.noarch               100/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-pecan-1.4.2-3.el9.noarch                 101/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-portend-3.1.0-2.el9.noarch               102/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-pyOpenSSL-21.0.0-1.el9.noarch            103/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-repoze-lru-0.7-16.el9.noarch             104/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-routes-2.5.1-5.el9.noarch                105/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-rsa-4.9-2.el9.noarch                     106/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-tempora-5.0.0-2.el9.noarch               107/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-typing-extensions-4.15.0-1.el9.noarch    108/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout:  Verifying        : python3-webob-1.8.8-2.el9.noarch                 109/119
2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 110/119 2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 111/119 2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 112/119 2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 113/119 2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : re2-1:20211101-20.el9.x86_64 114/119 2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 115/119 2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 116/119 2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 117/119 2026-03-09T20:41:25.390 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 118/119 2026-03-09T20:41:25.479 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 119/119 2026-03-09T20:41:25.479 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:41:25.479 INFO:teuthology.orchestra.run.vm07.stdout:Upgraded: 2026-03-09T20:41:25.479 INFO:teuthology.orchestra.run.vm07.stdout: librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout:Installed: 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: 
ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 
INFO:teuthology.orchestra.run.vm07.stdout: ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: 
librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T20:41:25.480 INFO:teuthology.orchestra.run.vm07.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: 
python3-ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:41:25.481 
INFO:teuthology.orchestra.run.vm07.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T20:41:25.481 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 
2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T20:41:25.482 
INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:41:25.482 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-03-09T20:41:25.567 DEBUG:teuthology.parallel:result is None
2026-03-09T20:41:25.568 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T20:41:25.568 INFO:teuthology.packaging:ref: None
2026-03-09T20:41:25.568 INFO:teuthology.packaging:tag: None
2026-03-09T20:41:25.568 INFO:teuthology.packaging:branch: reef
2026-03-09T20:41:25.568 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:41:25.568 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=reef
2026-03-09T20:41:26.361 DEBUG:teuthology.orchestra.run.vm07:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-09T20:41:26.381 INFO:teuthology.orchestra.run.vm07.stdout:18.2.7-1055.gab47f43c.el9
2026-03-09T20:41:26.381 INFO:teuthology.packaging:The installed version of ceph is 18.2.7-1055.gab47f43c.el9
2026-03-09T20:41:26.381 INFO:teuthology.task.install:The correct ceph version 18.2.7-1055.gab47f43c is installed.
2026-03-09T20:41:26.382 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T20:41:26.382 INFO:teuthology.packaging:ref: None
2026-03-09T20:41:26.382 INFO:teuthology.packaging:tag: None
2026-03-09T20:41:26.382 INFO:teuthology.packaging:branch: reef
2026-03-09T20:41:26.382 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:41:26.382 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=reef
2026-03-09T20:41:27.147 DEBUG:teuthology.orchestra.run.vm10:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-09T20:41:27.166 INFO:teuthology.orchestra.run.vm10.stdout:18.2.7-1055.gab47f43c.el9
2026-03-09T20:41:27.166 INFO:teuthology.packaging:The installed version of ceph is 18.2.7-1055.gab47f43c.el9
2026-03-09T20:41:27.166 INFO:teuthology.task.install:The correct ceph version 18.2.7-1055.gab47f43c is installed.
2026-03-09T20:41:27.167 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-09T20:41:27.167 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T20:41:27.167 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T20:41:27.191 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T20:41:27.191 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T20:41:27.231 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-09T20:41:27.232 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T20:41:27.232 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T20:41:27.253 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T20:41:27.317 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T20:41:27.317 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T20:41:27.339 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T20:41:27.400 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-09T20:41:27.401 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T20:41:27.401 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T20:41:27.422 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T20:41:27.483 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T20:41:27.483 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T20:41:27.506 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T20:41:27.569 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-09T20:41:27.570 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T20:41:27.570 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T20:41:27.592 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T20:41:27.656 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T20:41:27.656 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T20:41:27.678 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T20:41:27.740 INFO:teuthology.run_tasks:Running task print...
2026-03-09T20:41:27.742 INFO:teuthology.task.print:**** done install task...
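The install task's pass/fail decision above hinges on comparing the `rpm -q ceph --qf '%{VERSION}-%{RELEASE}'` output (`18.2.7-1055.gab47f43c.el9`) against the expected build (`18.2.7-1055.gab47f43c`). A minimal sketch of that comparison, assuming the only difference is a trailing dist tag; the helper names are illustrative, not teuthology's actual API:

```python
import re

def strip_dist_tag(version_release: str) -> str:
    """Drop a trailing distro tag such as '.el9' (assumed format)."""
    return re.sub(r"\.el\d+$", "", version_release)

def version_ok(installed: str, expected: str) -> bool:
    # The RELEASE field carries the build's abbreviated git sha,
    # so a string match pins the exact build, not just the version.
    return strip_dist_tag(installed) == expected

# Values from the log above.
installed = "18.2.7-1055.gab47f43c.el9"   # rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
expected = "18.2.7-1055.gab47f43c"
print(version_ok(installed, expected))    # → True
```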
2026-03-09T20:41:27.742 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-09T20:41:27.782 INFO:tasks.cephadm:Config: {'compiled_cephadm_branch': 'reef', 'conf': {'osd': {'osd_class_default_list': '*', 'osd_class_load_list': '*', 'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd op complaint time': 180}, 'client': {'client mount timeout': 600, 'debug client': 20, 'debug ms': 1, 'rados mon op timeout': 900, 'rados osd op timeout': 900}, 'global': {'mon pg warn min per osd': 0}, 'mds': {'debug mds': 20, 'debug mds balancer': 20, 'debug ms': 1, 'mds debug frag': True, 'mds debug scatterstat': True, 'mds op complaint time': 180, 'mds verify scatter': True, 'osd op complaint time': 180, 'rados mon op timeout': 900, 'rados osd op timeout': 900}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon down mkfs grace': 300, 'mon op complaint time': 120}}, 'image': 'quay.ceph.io/ceph-ci/ceph:reef', 'roleless': True, 'cluster-conf': {'mgr': {'client mount timeout': 30, 'debug client': 20, 'debug mgr': 20, 'debug ms': 1, 'mon warn on pool no app': False}}, 'flavor': 'default', 'fs': 'xfs', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'FS_DEGRADED', 'filesystem is degraded', 'FS_INLINE_DATA_DEPRECATED', 'FS_WITH_FAILED_MDS', 'MDS_ALL_DOWN', 'filesystem is offline', 'is offline because no MDS', 'MDS_DAMAGE', 'MDS_DEGRADED', 'MDS_FAILED', 'MDS_INSUFFICIENT_STANDBY', 'MDS_UP_LESS_THAN_MAX', 'online, but wants', 'filesystem is online with fewer MDS than max_mds', 'POOL_APP_NOT_ENABLED', 'do not have an application enabled', 'overall HEALTH_', 'Replacing daemon', 'deprecated feature inline_data', 'MGR_MODULE_ERROR', 'OSD_DOWN', 'osds down', 'overall HEALTH_', '\\(OSD_DOWN\\)', '\\(OSD_', 'but it is still running', 'is not responding', 'MON_DOWN', 'PG_AVAILABILITY', 'PG_DEGRADED', 'Reduced data availability', 'Degraded data redundancy', 'pg .* is stuck inactive', 'pg .* is .*degraded', 'pg .* is stuck peering'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-09T20:41:27.782 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:reef
2026-03-09T20:41:27.782 INFO:tasks.cephadm:Cluster fsid is 589eab88-1bf8-11f1-9e50-71f3ab1833c4
2026-03-09T20:41:27.782 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-09T20:41:27.782 INFO:tasks.cephadm:No mon roles; fabricating mons
2026-03-09T20:41:27.783 INFO:tasks.cephadm:Monitor IPs: {'mon.vm07': '192.168.123.107', 'mon.vm10': '192.168.123.110'}
2026-03-09T20:41:27.783 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-09T20:41:27.783 DEBUG:teuthology.orchestra.run.vm07:> sudo hostname $(hostname -s)
2026-03-09T20:41:27.804 DEBUG:teuthology.orchestra.run.vm10:> sudo hostname $(hostname -s)
2026-03-09T20:41:27.828 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra for reef
2026-03-09T20:41:27.828 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:41:28.439 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-09T20:41:29.289 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref reef, sha1 ab47f43c099b2cbae6e21342fe673ce251da54d6 from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&ref=reef
2026-03-09T20:41:29.290 INFO:tasks.cephadm:Discovered cachra url: https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm
2026-03-09T20:41:29.290 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm
2026-03-09T20:41:29.291
DEBUG:teuthology.orchestra.run.vm07:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T20:41:30.531 INFO:teuthology.orchestra.run.vm07.stdout:-rw-r--r--. 1 ubuntu ubuntu 217462 Mar 9 20:41 /home/ubuntu/cephtest/cephadm
2026-03-09T20:41:30.531 DEBUG:teuthology.orchestra.run.vm10:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T20:41:31.770 INFO:teuthology.orchestra.run.vm10.stdout:-rw-r--r--. 1 ubuntu ubuntu 217462 Mar 9 20:41 /home/ubuntu/cephtest/cephadm
2026-03-09T20:41:31.771 DEBUG:teuthology.orchestra.run.vm07:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T20:41:31.785 DEBUG:teuthology.orchestra.run.vm10:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T20:41:31.803 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:reef on all hosts...
2026-03-09T20:41:31.803 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef pull
2026-03-09T20:41:31.827 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef pull
2026-03-09T20:41:31.973 INFO:teuthology.orchestra.run.vm10.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:reef...
2026-03-09T20:41:31.981 INFO:teuthology.orchestra.run.vm07.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:reef...
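The `test -s ... && test $(stat -c%s ...) -gt 1000 && chmod +x` guard above rejects empty or implausibly small downloads (for instance an HTML error page returned by curl) before marking the cephadm binary executable. A rough Python equivalent of that shell pattern, with the 1000-byte threshold taken from the log and the function name purely illustrative:

```python
import os
import stat

MIN_SIZE = 1000  # threshold from the log's shell guard: `test ... -gt 1000`

def mark_executable_if_plausible(path: str) -> None:
    """Fail loudly if the downloaded file is empty or suspiciously small,
    otherwise add execute permission for everyone (equivalent of chmod a+x)."""
    size = os.stat(path).st_size
    if size <= MIN_SIZE:
        raise RuntimeError(f"{path}: only {size} bytes; download likely failed")
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```

A size floor is a cheap sanity check; it cannot verify integrity the way a checksum would, but it catches the common failure mode of a mirror serving an error page.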
2026-03-09T20:42:16.517 INFO:teuthology.orchestra.run.vm10.stdout:{
2026-03-09T20:42:16.517 INFO:teuthology.orchestra.run.vm10.stdout: "ceph_version": "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)",
2026-03-09T20:42:16.517 INFO:teuthology.orchestra.run.vm10.stdout: "image_id": "b6fe7eb6a9d0f9b143033c43e1cdf7ef0918719fc7cff0dd0e2c113bb482fdd6",
2026-03-09T20:42:16.517 INFO:teuthology.orchestra.run.vm10.stdout: "repo_digests": [
2026-03-09T20:42:16.517 INFO:teuthology.orchestra.run.vm10.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40"
2026-03-09T20:42:16.517 INFO:teuthology.orchestra.run.vm10.stdout: ]
2026-03-09T20:42:16.517 INFO:teuthology.orchestra.run.vm10.stdout:}
2026-03-09T20:42:18.269 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T20:42:18.269 INFO:teuthology.orchestra.run.vm07.stdout: "ceph_version": "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)",
2026-03-09T20:42:18.269 INFO:teuthology.orchestra.run.vm07.stdout: "image_id": "b6fe7eb6a9d0f9b143033c43e1cdf7ef0918719fc7cff0dd0e2c113bb482fdd6",
2026-03-09T20:42:18.269 INFO:teuthology.orchestra.run.vm07.stdout: "repo_digests": [
2026-03-09T20:42:18.269 INFO:teuthology.orchestra.run.vm07.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40"
2026-03-09T20:42:18.269 INFO:teuthology.orchestra.run.vm07.stdout: ]
2026-03-09T20:42:18.269 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T20:42:18.281 DEBUG:teuthology.orchestra.run.vm07:> sudo mkdir -p /etc/ceph
2026-03-09T20:42:18.304 DEBUG:teuthology.orchestra.run.vm10:> sudo mkdir -p /etc/ceph
2026-03-09T20:42:18.328 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 777 /etc/ceph
2026-03-09T20:42:18.367 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod 777 /etc/ceph
2026-03-09T20:42:18.390 INFO:tasks.cephadm:Writing seed config...
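`cephadm ... pull` prints the image metadata seen above as JSON on stdout. A small sketch of pulling out the fields teuthology reports (version string, image id, repo digest), using the vm10 output verbatim; the token-splitting of `ceph_version` is an assumption about its fixed "ceph version X (sha1) codename (stable)" shape:

```python
import json

# Verbatim `cephadm pull` stdout from the log above.
pull_output = """{
  "ceph_version": "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)",
  "image_id": "b6fe7eb6a9d0f9b143033c43e1cdf7ef0918719fc7cff0dd0e2c113bb482fdd6",
  "repo_digests": ["quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40"]
}"""

info = json.loads(pull_output)
version = info["ceph_version"].split()[2]  # third token of "ceph version X ..."
digest = info["repo_digests"][0]           # pinned content-addressed reference
print(version)  # → 18.2.7-1055-gab47f43c
```

Deploying by `repo_digests` rather than the mutable `:reef` tag is what makes a run reproducible even if the tag is later repointed.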
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] osd_class_default_list = *
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] osd_class_load_list = *
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] bdev async discard = True
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] bdev enable discard = True
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] bluestore allocator = bitmap
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] bluestore block size = 96636764160
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] bluestore fsck on mount = True
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] debug bluefs = 1/20
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] debug bluestore = 1/20
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] debug rocksdb = 4/10
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] mon osd backfillfull_ratio = 0.85
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] mon osd full ratio = 0.9
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] mon osd nearfull ratio = 0.8
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] osd failsafe full ratio = 0.95
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] osd objectstore = bluestore
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [osd] osd op complaint time = 180
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [client] client mount timeout = 600
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [client] debug client = 20
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [client] debug ms = 1
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [client] rados mon op timeout = 900
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [client] rados osd op timeout = 900
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [global] mon pg warn min per osd = 0
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [mds] debug mds = 20
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [mds] debug mds balancer = 20
2026-03-09T20:42:18.390 INFO:tasks.cephadm: override: [mds] debug ms = 1
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mds] mds debug frag = True
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mds] mds debug scatterstat = True
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mds] mds op complaint time = 180
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mds] mds verify scatter = True
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mds] osd op complaint time = 180
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mds] rados mon op timeout = 900
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mds] rados osd op timeout = 900
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mon] mon down mkfs grace = 300
2026-03-09T20:42:18.391 INFO:tasks.cephadm: override: [mon] mon op complaint time = 120
2026-03-09T20:42:18.391 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T20:42:18.391 DEBUG:teuthology.orchestra.run.vm07:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-09T20:42:18.422 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000

# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd

# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true

# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false

# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off

# tests delete pools
mon allow pool delete = true

fsid = 589eab88-1bf8-11f1-9e50-71f3ab1833c4
mon pg warn min per osd = 0

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true

# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true

osd_class_default_list = *
osd_class_load_list = *
bdev async discard = True
bdev enable discard = True
bluestore allocator = bitmap
bluestore block size = 96636764160
bluestore fsck on mount = True
debug bluefs = 1/20
debug bluestore = 1/20
debug ms = 1
debug osd = 20
debug rocksdb = 4/10
mon osd backfillfull_ratio = 0.85
mon osd full ratio = 0.9
mon osd nearfull ratio = 0.8
osd failsafe full ratio = 0.95
osd mclock iops capacity threshold hdd = 49000
osd objectstore = bluestore
osd op complaint time = 180

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10

# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m

# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false

debug mon = 20
debug ms = 1
debug paxos = 20
mon down mkfs grace = 300
mon op complaint time = 120

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true

[client]
client mount timeout = 600
debug client = 20
debug ms = 1
rados mon op timeout = 900
rados osd op timeout = 900

[mds]
debug mds = 20
debug mds balancer = 20
debug ms = 1
mds debug frag = True
mds debug scatterstat = True
mds op complaint time = 180
mds verify scatter = True
osd op complaint time = 180
rados mon op timeout = 900
rados osd op timeout = 900
2026-03-09T20:42:18.422 DEBUG:teuthology.orchestra.run.vm07:mon.vm07> sudo journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm07.service
2026-03-09T20:42:18.463 INFO:tasks.cephadm:Bootstrapping...
2026-03-09T20:42:18.463 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef -v bootstrap --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 192.168.123.107 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-09T20:42:18.571 INFO:teuthology.orchestra.run.vm07.stdout:--------------------------------------------------------------------------------
2026-03-09T20:42:18.572 INFO:teuthology.orchestra.run.vm07.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:reef', '-v', 'bootstrap', '--fsid', '589eab88-1bf8-11f1-9e50-71f3ab1833c4', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-ip', '192.168.123.107', '--skip-admin-label']
2026-03-09T20:42:18.588 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stdout 5.8.0
2026-03-09T20:42:18.588 INFO:teuthology.orchestra.run.vm07.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-09T20:42:18.588 INFO:teuthology.orchestra.run.vm07.stdout:Verifying podman|docker is present...
2026-03-09T20:42:18.604 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stdout 5.8.0
2026-03-09T20:42:18.604 INFO:teuthology.orchestra.run.vm07.stdout:Verifying lvm2 is present...
2026-03-09T20:42:18.604 INFO:teuthology.orchestra.run.vm07.stdout:Verifying time synchronization is in place...
2026-03-09T20:42:18.610 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T20:42:18.610 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T20:42:18.615 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T20:42:18.615 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive
2026-03-09T20:42:18.620 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout enabled
2026-03-09T20:42:18.624 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout active
2026-03-09T20:42:18.624 INFO:teuthology.orchestra.run.vm07.stdout:Unit chronyd.service is enabled and running
2026-03-09T20:42:18.624 INFO:teuthology.orchestra.run.vm07.stdout:Repeating the final host check...
2026-03-09T20:42:18.641 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stdout 5.8.0
2026-03-09T20:42:18.641 INFO:teuthology.orchestra.run.vm07.stdout:podman (/bin/podman) version 5.8.0 is present
2026-03-09T20:42:18.641 INFO:teuthology.orchestra.run.vm07.stdout:systemctl is present
2026-03-09T20:42:18.641 INFO:teuthology.orchestra.run.vm07.stdout:lvcreate is present
2026-03-09T20:42:18.646 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T20:42:18.646 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T20:42:18.651 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T20:42:18.651 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout inactive
2026-03-09T20:42:18.655 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout enabled
2026-03-09T20:42:18.660 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stdout active
2026-03-09T20:42:18.660 INFO:teuthology.orchestra.run.vm07.stdout:Unit chronyd.service is enabled and running
2026-03-09T20:42:18.660 INFO:teuthology.orchestra.run.vm07.stdout:Host looks OK
2026-03-09T20:42:18.660 INFO:teuthology.orchestra.run.vm07.stdout:Cluster fsid: 589eab88-1bf8-11f1-9e50-71f3ab1833c4
2026-03-09T20:42:18.660 INFO:teuthology.orchestra.run.vm07.stdout:Acquiring lock 139796604619984 on /run/cephadm/589eab88-1bf8-11f1-9e50-71f3ab1833c4.lock
2026-03-09T20:42:18.660 INFO:teuthology.orchestra.run.vm07.stdout:Lock 139796604619984 acquired on /run/cephadm/589eab88-1bf8-11f1-9e50-71f3ab1833c4.lock
2026-03-09T20:42:18.660 INFO:teuthology.orchestra.run.vm07.stdout:Verifying IP 192.168.123.107 port 3300 ...
2026-03-09T20:42:18.661 INFO:teuthology.orchestra.run.vm07.stdout:Verifying IP 192.168.123.107 port 6789 ...
2026-03-09T20:42:18.661 INFO:teuthology.orchestra.run.vm07.stdout:Base mon IP(s) is [192.168.123.107:3300, 192.168.123.107:6789], mon addrv is [v2:192.168.123.107:3300,v1:192.168.123.107:6789]
2026-03-09T20:42:18.663 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.107 metric 100
2026-03-09T20:42:18.663 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.107 metric 100
2026-03-09T20:42:18.665 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-09T20:42:18.665 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium
2026-03-09T20:42:18.668 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-09T20:42:18.668 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-09T20:42:18.668 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T20:42:18.668 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000
2026-03-09T20:42:18.668 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:7/64 scope link noprefixroute
2026-03-09T20:42:18.668 INFO:teuthology.orchestra.run.vm07.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T20:42:18.668 INFO:teuthology.orchestra.run.vm07.stdout:Mon IP `192.168.123.107` is in CIDR network `192.168.123.0/24`
2026-03-09T20:42:18.668 INFO:teuthology.orchestra.run.vm07.stdout:Mon IP `192.168.123.107` is in CIDR network `192.168.123.0/24`
2026-03-09T20:42:18.668 INFO:teuthology.orchestra.run.vm07.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24']
2026-03-09T20:42:18.668 INFO:teuthology.orchestra.run.vm07.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-09T20:42:18.669 INFO:teuthology.orchestra.run.vm07.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:reef...
2026-03-09T20:42:19.999 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stdout b6fe7eb6a9d0f9b143033c43e1cdf7ef0918719fc7cff0dd0e2c113bb482fdd6
2026-03-09T20:42:19.999 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:reef...
2026-03-09T20:42:19.999 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Getting image source signatures
2026-03-09T20:42:19.999 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Copying blob sha256:8c0f38fb8a72d42ac81f075843e5360929f695c9f93c12951e7539b9ed9b1b5f
2026-03-09T20:42:19.999 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92
2026-03-09T20:42:19.999 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Copying config sha256:b6fe7eb6a9d0f9b143033c43e1cdf7ef0918719fc7cff0dd0e2c113bb482fdd6
2026-03-09T20:42:19.999 INFO:teuthology.orchestra.run.vm07.stdout:/bin/podman: stderr Writing manifest to image destination
2026-03-09T20:42:20.131 INFO:teuthology.orchestra.run.vm07.stdout:ceph: stdout ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)
2026-03-09T20:42:20.132 INFO:teuthology.orchestra.run.vm07.stdout:Ceph version: ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)
2026-03-09T20:42:20.132 INFO:teuthology.orchestra.run.vm07.stdout:Extracting ceph user uid/gid from container image...
2026-03-09T20:42:20.235 INFO:teuthology.orchestra.run.vm07.stdout:stat: stdout 167 167
2026-03-09T20:42:20.235 INFO:teuthology.orchestra.run.vm07.stdout:Creating initial keys...
2026-03-09T20:42:20.334 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-authtool: stdout AQCsMK9pUYDRERAAUxGAeVp1fFJW2ZAc13ukWA==
2026-03-09T20:42:20.428 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-authtool: stdout AQCsMK9pM090FxAAgA0hv0A34qonFgrcBPe80A==
2026-03-09T20:42:20.511 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph-authtool: stdout AQCsMK9pI/V6HRAAnjgiKpyI6AQtZOio781VgQ==
2026-03-09T20:42:20.512 INFO:teuthology.orchestra.run.vm07.stdout:Creating initial monmap...
2026-03-09T20:42:20.640 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T20:42:20.640 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = pacific
2026-03-09T20:42:20.640 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 589eab88-1bf8-11f1-9e50-71f3ab1833c4
2026-03-09T20:42:20.640 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T20:42:20.640 INFO:teuthology.orchestra.run.vm07.stdout:monmaptool for vm07 [v2:192.168.123.107:3300,v1:192.168.123.107:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T20:42:20.640 INFO:teuthology.orchestra.run.vm07.stdout:setting min_mon_release = pacific
2026-03-09T20:42:20.640 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: set fsid to 589eab88-1bf8-11f1-9e50-71f3ab1833c4
2026-03-09T20:42:20.640 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T20:42:20.640 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:42:20.640 INFO:teuthology.orchestra.run.vm07.stdout:Creating mon...
2026-03-09T20:42:20.772 INFO:teuthology.orchestra.run.vm07.stdout:create mon.vm07 on
2026-03-09T20:42:20.924 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
2026-03-09T20:42:21.044 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-09T20:42:21.178 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4.target → /etc/systemd/system/ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4.target.
2026-03-09T20:42:21.178 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4.target → /etc/systemd/system/ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4.target.
2026-03-09T20:42:21.313 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm07
2026-03-09T20:42:21.313 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to reset failed state of unit ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm07.service: Unit ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm07.service not loaded.
2026-03-09T20:42:21.438 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4.target.wants/ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm07.service → /etc/systemd/system/ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@.service.
2026-03-09T20:42:21.599 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present
2026-03-09T20:42:21.599 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to enable service . firewalld.service is not available
2026-03-09T20:42:21.599 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mon to start...
2026-03-09T20:42:21.599 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mon...
2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout cluster: 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout id: 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout services: 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum vm07 (age 0.133667s) 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout data: 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout pgs: 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.717+0000 7f2ea4ce6640 1 Processor -- start 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.718+0000 7f2ea4ce6640 1 -- start start 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.718+0000 
7f2ea4ce6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea01086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.718+0000 7f2ea4ce6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2ea0108cc0 con 0x7f2ea01082f0 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.718+0000 7f2e9e575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea01086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.718+0000 7f2e9e575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea01086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53466/0 (socket says 192.168.123.107:53466) 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.718+0000 7f2e9e575640 1 -- 192.168.123.107:0/2211985739 learned_addr learned my addr 192.168.123.107:0/2211985739 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.720+0000 7f2e9e575640 1 -- 192.168.123.107:0/2211985739 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2ea01094a0 con 0x7f2ea01082f0 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.720+0000 7f2e9e575640 1 --2- 192.168.123.107:0/2211985739 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea01086f0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f2e88009b80 tx=0x7f2e8802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c91f3aa71e0dc810 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.720+0000 7f2e9d573640 1 -- 192.168.123.107:0/2211985739 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2e8802fa10 con 0x7f2ea01082f0 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.720+0000 7f2e9d573640 1 -- 192.168.123.107:0/2211985739 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7f2e8802fb70 con 0x7f2ea01082f0 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.720+0000 7f2e9d573640 1 -- 192.168.123.107:0/2211985739 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2e8802fe40 con 0x7f2ea01082f0 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.721+0000 7f2ea4ce6640 1 -- 192.168.123.107:0/2211985739 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 msgr2=0x7f2ea01086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.721+0000 7f2ea4ce6640 1 --2- 192.168.123.107:0/2211985739 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea01086f0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f2e88009b80 tx=0x7f2e8802f190 comp rx=0 tx=0).stop 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.721+0000 7f2ea4ce6640 1 -- 192.168.123.107:0/2211985739 
shutdown_connections 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.721+0000 7f2ea4ce6640 1 --2- 192.168.123.107:0/2211985739 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea01086f0 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.721+0000 7f2ea4ce6640 1 -- 192.168.123.107:0/2211985739 >> 192.168.123.107:0/2211985739 conn(0x7f2ea007ba00 msgr2=0x7f2ea01066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.721+0000 7f2ea4ce6640 1 -- 192.168.123.107:0/2211985739 shutdown_connections 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.721+0000 7f2ea4ce6640 1 -- 192.168.123.107:0/2211985739 wait complete. 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.721+0000 7f2ea4ce6640 1 Processor -- start 2026-03-09T20:42:21.798 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.721+0000 7f2ea4ce6640 1 -- start start 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.722+0000 7f2ea4ce6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea019d5a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.722+0000 7f2ea4ce6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2ea019dae0 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.722+0000 7f2e9e575640 1 
--2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea019d5a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.722+0000 7f2e9e575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea019d5a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53480/0 (socket says 192.168.123.107:53480) 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.722+0000 7f2e9e575640 1 -- 192.168.123.107:0/3165995938 learned_addr learned my addr 192.168.123.107:0/3165995938 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.722+0000 7f2e9e575640 1 -- 192.168.123.107:0/3165995938 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2e880095d0 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.722+0000 7f2e9e575640 1 --2- 192.168.123.107:0/3165995938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea019d5a0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f2e88002410 tx=0x7f2e880047c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.722+0000 7f2e877fe640 1 -- 192.168.123.107:0/3165995938 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2e88047070 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.722+0000 7f2e877fe640 1 -- 192.168.123.107:0/3165995938 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7f2e88003cc0 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.722+0000 7f2e877fe640 1 -- 192.168.123.107:0/3165995938 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2e88037560 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.723+0000 7f2ea4ce6640 1 -- 192.168.123.107:0/3165995938 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2ea019dce0 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.723+0000 7f2ea4ce6640 1 -- 192.168.123.107:0/3165995938 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2ea019e180 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.723+0000 7f2e877fe640 1 -- 192.168.123.107:0/3165995938 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 1) v1 ==== 672+0+0 (secure 0 0 0) 0x7f2e880379c0 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.723+0000 7f2e877fe640 1 -- 192.168.123.107:0/3165995938 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f2e88043a40 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.724+0000 7f2e857fa640 1 -- 192.168.123.107:0/3165995938 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 
-- 0x7f2ea0109810 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.725+0000 7f2e877fe640 1 -- 192.168.123.107:0/3165995938 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+74806 (secure 0 0 0) 0x7f2e8803c070 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.755+0000 7f2e857fa640 1 -- 192.168.123.107:0/3165995938 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "status"} v 0) v1 -- 0x7f2ea0061980 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.755+0000 7f2e877fe640 1 -- 192.168.123.107:0/3165995938 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "status"}]=0 v0) v1 ==== 54+0+320 (secure 0 0 0) 0x7f2e88037ca0 con 0x7f2ea01082f0 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.755+0000 7f2e857fa640 1 -- 192.168.123.107:0/3165995938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 msgr2=0x7f2ea019d5a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.755+0000 7f2e857fa640 1 --2- 192.168.123.107:0/3165995938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea019d5a0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f2e88002410 tx=0x7f2e880047c0 comp rx=0 tx=0).stop 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.756+0000 7f2e857fa640 1 -- 192.168.123.107:0/3165995938 shutdown_connections 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.756+0000 7f2e857fa640 1 --2- 
192.168.123.107:0/3165995938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ea01082f0 0x7f2ea019d5a0 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.756+0000 7f2e857fa640 1 -- 192.168.123.107:0/3165995938 >> 192.168.123.107:0/3165995938 conn(0x7f2ea007ba00 msgr2=0x7f2ea0191bd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.756+0000 7f2e857fa640 1 -- 192.168.123.107:0/3165995938 shutdown_connections 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.756+0000 7f2e857fa640 1 -- 192.168.123.107:0/3165995938 wait complete. 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:mon is available 2026-03-09T20:42:21.799 INFO:teuthology.orchestra.run.vm07.stdout:Assimilating anything we can from ceph.conf... 
2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout fsid = 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.107:3300,v1:192.168.123.107:6789] 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_backfillfull_ratio = 0.85 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_full_ratio = 0.9 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_nearfull_ratio = 0.8 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 
osd_map_max_advance = 10 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.901+0000 7f8ba7bc3640 1 Processor -- start 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.902+0000 7f8ba7bc3640 1 -- start start 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.902+0000 7f8ba7bc3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba01086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.902+0000 7f8ba7bc3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8ba0108cc0 con 0x7f8ba01082f0 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.902+0000 7f8ba5938640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba01086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.902+0000 7f8ba5938640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba01086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53488/0 (socket says 192.168.123.107:53488) 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.902+0000 7f8ba5938640 1 -- 192.168.123.107:0/2937755142 learned_addr learned my addr 
192.168.123.107:0/2937755142 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.903+0000 7f8ba5938640 1 -- 192.168.123.107:0/2937755142 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8ba0109490 con 0x7f8ba01082f0 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.903+0000 7f8ba5938640 1 --2- 192.168.123.107:0/2937755142 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba01086f0 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7f8b88009b80 tx=0x7f8b8802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=1ed3dcc75338498 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.904+0000 7f8ba4936640 1 -- 192.168.123.107:0/2937755142 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f8b8802fc20 con 0x7f8ba01082f0 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.904+0000 7f8ba4936640 1 -- 192.168.123.107:0/2937755142 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7f8b8802fd80 con 0x7f8ba01082f0 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.904+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2937755142 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 msgr2=0x7f8ba01086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:21.979 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.904+0000 7f8ba7bc3640 1 --2- 192.168.123.107:0/2937755142 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba01086f0 secure :-1 s=READY pgs=3 cs=0 l=1 
rev1=1 crypto rx=0x7f8b88009b80 tx=0x7f8b8802f190 comp rx=0 tx=0).stop 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.904+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2937755142 shutdown_connections 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.904+0000 7f8ba7bc3640 1 --2- 192.168.123.107:0/2937755142 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba01086f0 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.904+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2937755142 >> 192.168.123.107:0/2937755142 conn(0x7f8ba007b8c0 msgr2=0x7f8ba01066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.904+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2937755142 shutdown_connections 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.904+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2937755142 wait complete. 
2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.905+0000 7f8ba7bc3640 1 Processor -- start 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.905+0000 7f8ba7bc3640 1 -- start start 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.905+0000 7f8ba7bc3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba019d600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.905+0000 7f8ba5938640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba019d600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.905+0000 7f8ba5938640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba019d600 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53500/0 (socket says 192.168.123.107:53500) 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.905+0000 7f8ba5938640 1 -- 192.168.123.107:0/2195911727 learned_addr learned my addr 192.168.123.107:0/2195911727 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.905+0000 7f8ba7bc3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8b88037440 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.906+0000 7f8ba5938640 1 -- 192.168.123.107:0/2195911727 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8b880095d0 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.906+0000 7f8ba5938640 1 --2- 192.168.123.107:0/2195911727 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba019d600 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7f8b880090b0 tx=0x7f8b880047c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.906+0000 7f8b96ffd640 1 -- 192.168.123.107:0/2195911727 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f8b880039c0 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.906+0000 7f8b96ffd640 1 -- 192.168.123.107:0/2195911727 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7f8b88003b20 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.906+0000 7f8b96ffd640 1 -- 192.168.123.107:0/2195911727 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f8b88003dd0 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.906+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2195911727 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8ba019db40 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.906+0000 7f8ba7bc3640 1 -- 
192.168.123.107:0/2195911727 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8ba019dfe0 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.907+0000 7f8b96ffd640 1 -- 192.168.123.107:0/2195911727 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 1) v1 ==== 672+0+0 (secure 0 0 0) 0x7f8b8804d760 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.907+0000 7f8b96ffd640 1 -- 192.168.123.107:0/2195911727 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f8b880435e0 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.907+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2195911727 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8b68005350 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.908+0000 7f8b96ffd640 1 -- 192.168.123.107:0/2195911727 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+74806 (secure 0 0 0) 0x7f8b88037e60 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.936+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2195911727 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) v1 -- 0x7f8b68003c00 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.940+0000 7f8b96ffd640 1 -- 192.168.123.107:0/2195911727 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v2) v1 ==== 
70+0+471 (secure 0 0 0) 0x7f8b8803c070 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.940+0000 7f8b96ffd640 1 -- 192.168.123.107:0/2195911727 <== mon.0 v2:192.168.123.107:3300/0 8 ==== config(26 keys) v1 ==== 1003+0+0 (secure 0 0 0) 0x7f8b88041440 con 0x7f8ba01082f0 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.941+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2195911727 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 msgr2=0x7f8ba019d600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.941+0000 7f8ba7bc3640 1 --2- 192.168.123.107:0/2195911727 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba019d600 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7f8b880090b0 tx=0x7f8b880047c0 comp rx=0 tx=0).stop 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.941+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2195911727 shutdown_connections 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.941+0000 7f8ba7bc3640 1 --2- 192.168.123.107:0/2195911727 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ba01082f0 0x7f8ba019d600 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.941+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2195911727 >> 192.168.123.107:0/2195911727 conn(0x7f8ba007b8c0 msgr2=0x7f8ba0191bb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.941+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2195911727 
shutdown_connections 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:21.941+0000 7f8ba7bc3640 1 -- 192.168.123.107:0/2195911727 wait complete. 2026-03-09T20:42:21.980 INFO:teuthology.orchestra.run.vm07.stdout:Generating new minimal ceph.conf... 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.075+0000 7fd4dc890640 1 Processor -- start 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.075+0000 7fd4dc890640 1 -- start start 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.076+0000 7fd4dc890640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d41086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.076+0000 7fd4dc890640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd4d4108cc0 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.076+0000 7fd4da605640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d41086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.076+0000 7fd4da605640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d41086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53504/0 (socket says 192.168.123.107:53504) 2026-03-09T20:42:22.160 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.076+0000 7fd4da605640 1 -- 192.168.123.107:0/2167706762 learned_addr learned my addr 192.168.123.107:0/2167706762 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.076+0000 7fd4da605640 1 -- 192.168.123.107:0/2167706762 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd4d4109490 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.077+0000 7fd4da605640 1 --2- 192.168.123.107:0/2167706762 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d41086f0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7fd4c0009b80 tx=0x7fd4c002f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=9e3b21377f196e64 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.077+0000 7fd4d9603640 1 -- 192.168.123.107:0/2167706762 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd4c002fc20 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.077+0000 7fd4d9603640 1 -- 192.168.123.107:0/2167706762 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(26 keys) v1 ==== 1003+0+0 (secure 0 0 0) 0x7fd4c002fd80 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.077+0000 7fd4d9603640 1 -- 192.168.123.107:0/2167706762 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd4c0035680 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.078+0000 7fd4dc890640 1 -- 
192.168.123.107:0/2167706762 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 msgr2=0x7fd4d41086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.078+0000 7fd4dc890640 1 --2- 192.168.123.107:0/2167706762 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d41086f0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7fd4c0009b80 tx=0x7fd4c002f190 comp rx=0 tx=0).stop 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.078+0000 7fd4dc890640 1 -- 192.168.123.107:0/2167706762 shutdown_connections 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.078+0000 7fd4dc890640 1 --2- 192.168.123.107:0/2167706762 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d41086f0 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.078+0000 7fd4dc890640 1 -- 192.168.123.107:0/2167706762 >> 192.168.123.107:0/2167706762 conn(0x7fd4d407b8c0 msgr2=0x7fd4d41066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.078+0000 7fd4dc890640 1 -- 192.168.123.107:0/2167706762 shutdown_connections 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.078+0000 7fd4dc890640 1 -- 192.168.123.107:0/2167706762 wait complete. 
2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.078+0000 7fd4dc890640 1 Processor -- start 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.079+0000 7fd4dc890640 1 -- start start 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.079+0000 7fd4dc890640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d419de10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.079+0000 7fd4dc890640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd4d419e350 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.079+0000 7fd4da605640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d419de10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.079+0000 7fd4da605640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d419de10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53520/0 (socket says 192.168.123.107:53520) 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.079+0000 7fd4da605640 1 -- 192.168.123.107:0/2225061406 learned_addr learned my addr 192.168.123.107:0/2225061406 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:22.160 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.079+0000 7fd4da605640 1 -- 192.168.123.107:0/2225061406 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd4c00095d0 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.080+0000 7fd4da605640 1 --2- 192.168.123.107:0/2225061406 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d419de10 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fd4c0006fd0 tx=0x7fd4c0004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.080+0000 7fd4c77fe640 1 -- 192.168.123.107:0/2225061406 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd4c0004430 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.080+0000 7fd4c77fe640 1 -- 192.168.123.107:0/2225061406 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(26 keys) v1 ==== 1003+0+0 (secure 0 0 0) 0x7fd4c0004590 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.080+0000 7fd4c77fe640 1 -- 192.168.123.107:0/2225061406 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd4c0037980 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.080+0000 7fd4dc890640 1 -- 192.168.123.107:0/2225061406 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd4d419e550 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.080+0000 7fd4dc890640 1 -- 
192.168.123.107:0/2225061406 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd4d419e9f0 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.081+0000 7fd4c77fe640 1 -- 192.168.123.107:0/2225061406 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 1) v1 ==== 672+0+0 (secure 0 0 0) 0x7fd4c0003710 con 0x7fd4d41082f0 2026-03-09T20:42:22.160 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.081+0000 7fd4c77fe640 1 -- 192.168.123.107:0/2225061406 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7fd4c0040690 con 0x7fd4d41082f0 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.081+0000 7fd4dc890640 1 -- 192.168.123.107:0/2225061406 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd4d410c6c0 con 0x7fd4d41082f0 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.083+0000 7fd4c77fe640 1 -- 192.168.123.107:0/2225061406 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+74806 (secure 0 0 0) 0x7fd4c003c070 con 0x7fd4d41082f0 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.114+0000 7fd4dc890640 1 -- 192.168.123.107:0/2225061406 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) v1 -- 0x7fd4d41a12d0 con 0x7fd4d41082f0 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.114+0000 7fd4c77fe640 1 -- 192.168.123.107:0/2225061406 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v2) v1 
==== 76+0+181 (secure 0 0 0) 0x7fd4c0003ae0 con 0x7fd4d41082f0 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.115+0000 7fd4dc890640 1 -- 192.168.123.107:0/2225061406 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 msgr2=0x7fd4d419de10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.115+0000 7fd4dc890640 1 --2- 192.168.123.107:0/2225061406 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d419de10 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fd4c0006fd0 tx=0x7fd4c0004270 comp rx=0 tx=0).stop 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.115+0000 7fd4dc890640 1 -- 192.168.123.107:0/2225061406 shutdown_connections 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.115+0000 7fd4dc890640 1 --2- 192.168.123.107:0/2225061406 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4d41082f0 0x7fd4d419de10 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.115+0000 7fd4dc890640 1 -- 192.168.123.107:0/2225061406 >> 192.168.123.107:0/2225061406 conn(0x7fd4d407b8c0 msgr2=0x7fd4d4105df0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.115+0000 7fd4dc890640 1 -- 192.168.123.107:0/2225061406 shutdown_connections 2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:22.115+0000 7fd4dc890640 1 -- 192.168.123.107:0/2225061406 wait complete. 
2026-03-09T20:42:22.161 INFO:teuthology.orchestra.run.vm07.stdout:Restarting the monitor... 2026-03-09T20:42:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 systemd[1]: Starting Ceph mon.vm07 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T20:42:22.775 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 podman[49106]: 2026-03-09 20:42:22.409074051 +0000 UTC m=+0.009136243 image pull b6fe7eb6a9d0f9b143033c43e1cdf7ef0918719fc7cff0dd0e2c113bb482fdd6 quay.ceph.io/ceph-ci/ceph:reef 2026-03-09T20:42:22.775 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 podman[49106]: 2026-03-09 20:42:22.683133078 +0000 UTC m=+0.283195260 container create f3e88bdaa0dd6d867afcbeb0f1ad2c0f94d78f7a28f3c148e3b4c7d4cffd0613 (image=quay.ceph.io/ceph-ci/ceph:reef, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6) 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 podman[49106]: 2026-03-09 20:42:22.929545815 +0000 UTC m=+0.529608007 container init f3e88bdaa0dd6d867afcbeb0f1ad2c0f94d78f7a28f3c148e3b4c7d4cffd0613 (image=quay.ceph.io/ceph-ci/ceph:reef, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07, OSD_FLAVOR=default, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 podman[49106]: 2026-03-09 20:42:22.932791431 +0000 UTC m=+0.532853623 container start f3e88bdaa0dd6d867afcbeb0f1ad2c0f94d78f7a28f3c148e3b4c7d4cffd0613 (image=quay.ceph.io/ceph-ci/ceph:reef, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable), process ceph-mon, pid 2 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: 
pidfile_write: ignore empty --pid-file 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: load: jerasure load: lrc 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: RocksDB version: 7.9.2 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Git sha 0 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Compile date 2026-02-26 02:56:47 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: DB SUMMARY 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: DB Session ID: VPCHM0NDPAKROZNZ74PB 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: CURRENT file: CURRENT 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm07/store.db dir, Total Num: 1, files: 000008.sst 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm07/store.db: 000009.log size: 88970 ; 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.error_if_exists: 0 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.create_if_missing: 0 2026-03-09T20:42:23.135 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.paranoid_checks: 1 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.env: 0x557ff6f83ee0 2026-03-09T20:42:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.info_log: 0x557ff86e6ea0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.statistics: (nil) 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.use_fsync: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_log_file_size: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: 
Options.keep_log_file_num: 1000 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.allow_fallocate: 1 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.use_direct_reads: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.db_log_dir: 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.wal_dir: 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 
vm07 ceph-mon[49120]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.write_buffer_manager: 0x557ff86f63c0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: 
Options.unordered_write: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.row_cache: None 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.wal_filter: None 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.two_write_queues: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.wal_compression: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.atomic_flush: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T20:42:23.136 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.log_readahead_size: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T20:42:23.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_background_jobs: 2 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_background_compactions: -1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_subcompactions: 1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 
vm07 ceph-mon[49120]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_open_files: -1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: 
Options.max_background_flushes: -1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Compression algorithms supported: 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: kZSTD supported: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: kXpressCompression supported: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: kBZip2Compression supported: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: kLZ4Compression supported: 1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: kZlibCompression supported: 1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: kSnappyCompression supported: 1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm07/store.db/MANIFEST-000010 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: [db/column_family.cc:630] 
--------------- Options for column family [default]: 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.merge_operator: 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_filter: None 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ff86e6ac0) 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: cache_index_and_filter_blocks: 1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: pin_top_level_index_and_filter: 1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: index_type: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: data_block_index_type: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 
index_shortening: 1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: checksum: 4 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: no_block_cache: 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_cache: 0x557ff87089b0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_cache_name: BinnedLRUCache 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_cache_options: 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: capacity : 536870912 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: num_shard_bits : 4 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: strict_capacity_limit : 0 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: high_pri_pool_ratio: 0.000 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_cache_compressed: (nil) 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: persistent_cache: (nil) 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_size: 4096 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_size_deviation: 10 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_restart_interval: 16 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: index_block_restart_interval: 1 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: metadata_block_size: 4096 2026-03-09T20:42:23.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout: partition_filters: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: use_delta_encoding: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: filter_policy: bloomfilter 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: whole_key_filtering: 1 2026-03-09T20:42:23.138 
INFO:journalctl@ceph.mon.vm07.vm07.stdout: verify_compression: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: read_amp_bytes_per_bit: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: format_version: 5 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: enable_index_compression: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_align: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: max_auto_readahead_size: 262144 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: prepopulate_block_cache: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: initial_auto_readahead_size: 8192 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compression: NoCompression 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.num_levels: 7 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: 
Options.min_write_buffer_number_to_merge: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: 
Options.compression_opts.window_bits: -14 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T20:42:23.138 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: 
Options.max_compaction_bytes: 1677721600 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.inplace_update_support: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.bloom_locality: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.max_successive_merges: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 
ceph-mon[49120]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.ttl: 2592000 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.enable_blob_files: false 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.min_blob_size: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: 
Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm07/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4e1294e8-4400-4e9a-9a02-67a268a55194 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773088942957840, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-09T20:42:23.139 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:22 vm07 ceph-mon[49120]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T20:42:23.145 INFO:teuthology.orchestra.run.vm07.stdout:Setting public_network to 192.168.123.0/24 in global config section 
2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.266+0000 7f744e52c640 1 Processor -- start 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.267+0000 7f744e52c640 1 -- start start 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.267+0000 7f744e52c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744807a310 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.267+0000 7f744e52c640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f744807c1a0 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.268+0000 7f7447fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744807a310 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.268+0000 7f7447fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744807a310 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53530/0 (socket says 192.168.123.107:53530) 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.268+0000 7f7447fff640 1 -- 192.168.123.107:0/4182120351 learned_addr learned my addr 192.168.123.107:0/4182120351 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:23.358 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.269+0000 7f7447fff640 1 -- 192.168.123.107:0/4182120351 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f744807a850 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.269+0000 7f7447fff640 1 --2- 192.168.123.107:0/4182120351 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744807a310 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f7434009b80 tx=0x7f743402f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=a62dd4d9fd2dccaa server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.270+0000 7f7446ffd640 1 -- 192.168.123.107:0/4182120351 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f743402fa10 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.270+0000 7f7446ffd640 1 -- 192.168.123.107:0/4182120351 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(26 keys) v1 ==== 1003+0+0 (secure 0 0 0) 0x7f7434037440 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.270+0000 7f7446ffd640 1 -- 192.168.123.107:0/4182120351 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f74340354e0 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.270+0000 7f744e52c640 1 -- 192.168.123.107:0/4182120351 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 msgr2=0x7f744807a310 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:23.270+0000 7f744e52c640 1 --2- 192.168.123.107:0/4182120351 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744807a310 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f7434009b80 tx=0x7f743402f190 comp rx=0 tx=0).stop 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.270+0000 7f744e52c640 1 -- 192.168.123.107:0/4182120351 shutdown_connections 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.270+0000 7f744e52c640 1 --2- 192.168.123.107:0/4182120351 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744807a310 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.270+0000 7f744e52c640 1 -- 192.168.123.107:0/4182120351 >> 192.168.123.107:0/4182120351 conn(0x7f7448101870 msgr2=0x7f7448103cd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.270+0000 7f744e52c640 1 -- 192.168.123.107:0/4182120351 shutdown_connections 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.271+0000 7f744e52c640 1 -- 192.168.123.107:0/4182120351 wait complete. 
2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.271+0000 7f744e52c640 1 Processor -- start 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.271+0000 7f744e52c640 1 -- start start 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.271+0000 7f744e52c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744819ddb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.271+0000 7f744e52c640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f744819e2f0 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.272+0000 7f7447fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744819ddb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.272+0000 7f7447fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744819ddb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53542/0 (socket says 192.168.123.107:53542) 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.272+0000 7f7447fff640 1 -- 192.168.123.107:0/2070393088 learned_addr learned my addr 192.168.123.107:0/2070393088 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:23.358 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.272+0000 7f7447fff640 1 -- 192.168.123.107:0/2070393088 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f74340095d0 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.272+0000 7f7447fff640 1 --2- 192.168.123.107:0/2070393088 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744819ddb0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f7434009b50 tx=0x7f7434037850 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.272+0000 7f74457fa640 1 -- 192.168.123.107:0/2070393088 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f74340357a0 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.272+0000 7f74457fa640 1 -- 192.168.123.107:0/2070393088 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(26 keys) v1 ==== 1003+0+0 (secure 0 0 0) 0x7f7434035d50 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.272+0000 7f74457fa640 1 -- 192.168.123.107:0/2070393088 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7434040c80 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.272+0000 7f744e52c640 1 -- 192.168.123.107:0/2070393088 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f744819e4f0 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.273+0000 7f744e52c640 1 -- 
192.168.123.107:0/2070393088 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f744819e990 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.273+0000 7f74457fa640 1 -- 192.168.123.107:0/2070393088 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 1) v1 ==== 672+0+0 (secure 0 0 0) 0x7f7434035900 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.273+0000 7f74457fa640 1 -- 192.168.123.107:0/2070393088 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f7434049a30 con 0x7f744807bda0 2026-03-09T20:42:23.358 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.273+0000 7f744e52c640 1 -- 192.168.123.107:0/2070393088 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f744810c2c0 con 0x7f744807bda0 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.275+0000 7f74457fa640 1 -- 192.168.123.107:0/2070393088 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+74806 (secure 0 0 0) 0x7f7434044020 con 0x7f744807bda0 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.305+0000 7f744e52c640 1 -- 192.168.123.107:0/2070393088 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command([{prefix=config set, name=public_network}] v 0) v1 -- 0x7f74481a1230 con 0x7f744807bda0 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.308+0000 7f74457fa640 1 -- 192.168.123.107:0/2070393088 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{prefix=config set, name=public_network}]=0
v3) v1 ==== 130+0+0 (secure 0 0 0) 0x7f74340580c0 con 0x7f744807bda0 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.308+0000 7f74457fa640 1 -- 192.168.123.107:0/2070393088 <== mon.0 v2:192.168.123.107:3300/0 8 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f7434049400 con 0x7f744807bda0 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.309+0000 7f744e52c640 1 -- 192.168.123.107:0/2070393088 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 msgr2=0x7f744819ddb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.310+0000 7f744e52c640 1 --2- 192.168.123.107:0/2070393088 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744819ddb0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f7434009b50 tx=0x7f7434037850 comp rx=0 tx=0).stop 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.310+0000 7f744e52c640 1 -- 192.168.123.107:0/2070393088 shutdown_connections 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.310+0000 7f744e52c640 1 --2- 192.168.123.107:0/2070393088 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f744807bda0 0x7f744819ddb0 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.310+0000 7f744e52c640 1 -- 192.168.123.107:0/2070393088 >> 192.168.123.107:0/2070393088 conn(0x7f7448101870 msgr2=0x7f7448102310 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.310+0000 7f744e52c640 1 -- 
192.168.123.107:0/2070393088 shutdown_connections 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.310+0000 7f744e52c640 1 -- 192.168.123.107:0/2070393088 wait complete. 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:Creating mgr... 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-09T20:42:23.359 INFO:teuthology.orchestra.run.vm07.stdout:Verifying port 0.0.0.0:8443 ... 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 bash[49106]: f3e88bdaa0dd6d867afcbeb0f1ad2c0f94d78f7a28f3c148e3b4c7d4cffd0613 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773088943138403, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 84616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 285, "table_properties": {"data_size": 82738, "index_size": 203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 13151, "raw_average_key_size": 51, "raw_value_size": 75663, "raw_average_value_size": 295, "num_data_blocks": 9, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", 
"prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773088942, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4e1294e8-4400-4e9a-9a02-67a268a55194", "db_session_id": "VPCHM0NDPAKROZNZ74PB", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773088943138795, "job": 1, "event": "recovery_finished"} 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm07/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557ff870ae00 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: rocksdb: DB pointer 0x557ff8718000 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: ** DB Stats ** 2026-03-09T20:42:23.431 
INFO:journalctl@ceph.mon.vm07.vm07.stdout: Uptime(secs): 0.2 total, 0.2 interval 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: ** Compaction Stats [default] ** 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: L0 2/0 84.51 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.4 0.18 0.00 1 0.180 0 0 0.0 0.0 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Sum 2/0 84.51 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.4 0.18 0.00 1 0.180 0 0 0.0 0.0 2026-03-09T20:42:23.431 
INFO:journalctl@ceph.mon.vm07.vm07.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.4 0.18 0.00 1 0.180 0 0 0.0 0.0 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: ** Compaction Stats [default] ** 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 0.18 0.00 1 0.180 0 0 0.0 0.0 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Uptime(secs): 0.2 total, 0.2 interval 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Cumulative compaction: 0.00 
GB write, 0.43 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Interval compaction: 0.00 GB write, 0.43 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Block cache BinnedLRUCache@0x557ff87089b0#2 capacity: 512.00 MB usage: 10.33 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Block cache entry stats(count,size,portion): DataBlock(3,9.11 KB,0.00173748%) FilterBlock(2,0.83 KB,0.000157952%) IndexBlock(2,0.39 KB,7.45058e-05%) Misc(1,0.00 KB,0%) 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 systemd[1]: Started Ceph mon.vm07 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: starting mon.vm07 rank 0 at public addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] at bind addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon_data /var/lib/ceph/mon/ceph-vm07 fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:42:23.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???) 
e1 preinit fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).mds e1 new map 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).mds e1 print_map 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout: e1 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout: legacy client fscid: -1 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout: No filesystems configured 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 
2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).mgr e0 loading version 1 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).mgr e1 active server: (0) 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mon.vm07 is new leader, mons vm07 in quorum (ranks 0) 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: monmap e1: 1 mons at {vm07=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0]} removed_ranks: {} disallowed_leaders: {} 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: fsmap 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: osdmap e1: 0 total, 0 up, 0 in 2026-03-09T20:42:23.432 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:23 vm07 ceph-mon[49120]: mgrmap e1: no daemons active 2026-03-09T20:42:23.508 INFO:teuthology.orchestra.run.vm07.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mgr.vm07.xjrvch 2026-03-09T20:42:23.508 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Failed to reset failed state of unit ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mgr.vm07.xjrvch.service: Unit 
ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mgr.vm07.xjrvch.service not loaded. 2026-03-09T20:42:23.622 INFO:teuthology.orchestra.run.vm07.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4.target.wants/ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mgr.vm07.xjrvch.service → /etc/systemd/system/ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@.service. 2026-03-09T20:42:23.782 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present 2026-03-09T20:42:23.782 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T20:42:23.782 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present 2026-03-09T20:42:23.782 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to open ports <[9283, 8765, 8443]>. firewalld.service is not available 2026-03-09T20:42:23.782 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr to start... 2026-03-09T20:42:23.782 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr... 
2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsid": "589eab88-1bf8-11f1-9e50-71f3ab1833c4", 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 0 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "vm07" 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-09T20:42:24.034 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T20:42:24.034 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T20:42:24.036 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T20:42:21.623033+0000", 
2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.925+0000 7f0aa6378640 1 Processor -- start 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.926+0000 7f0aa6378640 1 -- start start 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.926+0000 7f0aa6378640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa0071c40 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.926+0000 7f0aa6378640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0aa0072210 con 0x7f0aa0071840 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.926+0000 7f0aa5376640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa0071c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.926+0000 7f0aa5376640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa0071c40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:53554/0 (socket says 192.168.123.107:53554) 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.926+0000 7f0aa5376640 1 -- 192.168.123.107:0/543658111 learned_addr learned my addr 192.168.123.107:0/543658111 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.927+0000 7f0aa5376640 1 -- 192.168.123.107:0/543658111 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0aa0072350 con 0x7f0aa0071840 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.928+0000 7f0aa5376640 1 --2- 192.168.123.107:0/543658111 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa0071c40 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f0a9c00a9c0 tx=0x7f0a9c033650 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ee9a2d005d9c0a87 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.928+0000 7f0a8ffff640 1 -- 192.168.123.107:0/543658111 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0a9c037450 con 0x7f0aa0071840 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.928+0000 7f0a8ffff640 1 -- 192.168.123.107:0/543658111 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f0a9c036030 con 0x7f0aa0071840 2026-03-09T20:42:24.036 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.928+0000 7f0a8ffff640 1 -- 192.168.123.107:0/543658111 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0a9c03ca00 con 0x7f0aa0071840 2026-03-09T20:42:24.037 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.929+0000 7f0aa6378640 1 -- 192.168.123.107:0/543658111 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 msgr2=0x7f0aa0071c40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.929+0000 7f0aa6378640 1 --2- 192.168.123.107:0/543658111 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa0071c40 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f0a9c00a9c0 tx=0x7f0a9c033650 comp rx=0 tx=0).stop 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.929+0000 7f0aa6378640 1 -- 192.168.123.107:0/543658111 shutdown_connections 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.929+0000 7f0aa6378640 1 --2- 192.168.123.107:0/543658111 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa0071c40 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.929+0000 7f0aa6378640 1 -- 192.168.123.107:0/543658111 >> 192.168.123.107:0/543658111 conn(0x7f0aa006d080 msgr2=0x7f0aa006f4a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.932+0000 7f0aa6378640 1 -- 192.168.123.107:0/543658111 shutdown_connections 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.932+0000 7f0aa6378640 1 -- 192.168.123.107:0/543658111 wait complete. 
2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.932+0000 7f0aa6378640 1 Processor -- start 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.932+0000 7f0aa6378640 1 -- start start 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.932+0000 7f0aa6378640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa01b3720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.932+0000 7f0aa6378640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0aa01b3c60 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.933+0000 7f0aa5376640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa01b3720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.933+0000 7f0aa5376640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa01b3720 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53566/0 (socket says 192.168.123.107:53566) 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.933+0000 7f0aa5376640 1 -- 192.168.123.107:0/3419795149 learned_addr learned my addr 192.168.123.107:0/3419795149 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:24.037 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.934+0000 7f0aa5376640 1 -- 192.168.123.107:0/3419795149 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0a9c00a670 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.934+0000 7f0aa5376640 1 --2- 192.168.123.107:0/3419795149 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa01b3720 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f0a9c033b80 tx=0x7f0a9c037980 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.935+0000 7f0a8e7fc640 1 -- 192.168.123.107:0/3419795149 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0a9c037b40 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.935+0000 7f0a8e7fc640 1 -- 192.168.123.107:0/3419795149 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f0a9c037ca0 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.935+0000 7f0a8e7fc640 1 -- 192.168.123.107:0/3419795149 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0a9c0475a0 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.935+0000 7f0aa6378640 1 -- 192.168.123.107:0/3419795149 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0aa01b3e00 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.935+0000 7f0aa6378640 1 -- 
192.168.123.107:0/3419795149 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0aa01b41c0 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.937+0000 7f0aa6378640 1 -- 192.168.123.107:0/3419795149 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0a68005350 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.937+0000 7f0a8e7fc640 1 -- 192.168.123.107:0/3419795149 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 1) v1 ==== 672+0+0 (secure 0 0 0) 0x7f0a9c045030 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.937+0000 7f0a8e7fc640 1 -- 192.168.123.107:0/3419795149 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f0a9c051520 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.940+0000 7f0a8e7fc640 1 -- 192.168.123.107:0/3419795149 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+74806 (secure 0 0 0) 0x7f0a9c043070 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.977+0000 7f0aa6378640 1 -- 192.168.123.107:0/3419795149 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7f0a680051c0 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.978+0000 7f0a8e7fc640 1 -- 192.168.123.107:0/3419795149 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 
v0) v1 ==== 79+0+1241 (secure 0 0 0) 0x7f0a9c03d880 con 0x7f0aa0071840 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.983+0000 7f0a6ffff640 1 -- 192.168.123.107:0/3419795149 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 msgr2=0x7f0aa01b3720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.983+0000 7f0a6ffff640 1 --2- 192.168.123.107:0/3419795149 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa01b3720 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f0a9c033b80 tx=0x7f0a9c037980 comp rx=0 tx=0).stop 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.983+0000 7f0a6ffff640 1 -- 192.168.123.107:0/3419795149 shutdown_connections 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.983+0000 7f0a6ffff640 1 --2- 192.168.123.107:0/3419795149 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0aa0071840 0x7f0aa01b3720 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.983+0000 7f0a6ffff640 1 -- 192.168.123.107:0/3419795149 >> 192.168.123.107:0/3419795149 conn(0x7f0aa006d080 msgr2=0x7f0aa006eac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.985+0000 7f0a6ffff640 1 -- 192.168.123.107:0/3419795149 shutdown_connections 2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:23.985+0000 7f0a6ffff640 1 -- 192.168.123.107:0/3419795149 wait complete. 
2026-03-09T20:42:24.037 INFO:teuthology.orchestra.run.vm07.stdout:mgr not available, waiting (1/15)... 2026-03-09T20:42:24.575 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:24 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2070393088' entity='client.admin' 2026-03-09T20:42:24.575 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:24 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3419795149' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsid": "589eab88-1bf8-11f1-9e50-71f3ab1833c4", 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T20:42:26.243 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 0 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "vm07" 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: 
stdout ], 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T20:42:26.244 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T20:42:26.245 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T20:42:26.245 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T20:42:26.245 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:26.245 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T20:42:26.245 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T20:42:26.245 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T20:42:26.245 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T20:42:26.245 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 
2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T20:42:21.623033+0000", 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.174+0000 7fe7bda7a640 1 Processor -- start 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.174+0000 7fe7bda7a640 1 -- start start 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.174+0000 7fe7bda7a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b8071820 0x7fe7b8071c20 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.174+0000 7fe7bda7a640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe7b80721f0 con 0x7fe7b8071820 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.174+0000 7fe7bca78640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b8071820 0x7fe7b8071c20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.174+0000 7fe7bca78640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b8071820 0x7fe7b8071c20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53580/0 (socket says 192.168.123.107:53580) 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.174+0000 7fe7bca78640 1 -- 192.168.123.107:0/113348209 learned_addr learned my addr 192.168.123.107:0/113348209 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.174+0000 7fe7bca78640 1 -- 192.168.123.107:0/113348209 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe7b8072330 con 0x7fe7b8071820 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.175+0000 7fe7bca78640 1 --2- 192.168.123.107:0/113348209 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b8071820 0x7fe7b8071c20 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fe7a80089a0 tx=0x7fe7a8031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=dfec85dfc2e3c8ee server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.175+0000 7fe7b77fe640 1 -- 192.168.123.107:0/113348209 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe7a8031e50 con 0x7fe7b8071820 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.175+0000 7fe7b77fe640 1 -- 192.168.123.107:0/113348209 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 
keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fe7a8035070 con 0x7fe7b8071820 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.175+0000 7fe7bda7a640 1 -- 192.168.123.107:0/113348209 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b8071820 msgr2=0x7fe7b8071c20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.175+0000 7fe7bda7a640 1 --2- 192.168.123.107:0/113348209 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b8071820 0x7fe7b8071c20 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fe7a80089a0 tx=0x7fe7a8031440 comp rx=0 tx=0).stop 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.175+0000 7fe7bda7a640 1 -- 192.168.123.107:0/113348209 shutdown_connections 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.175+0000 7fe7bda7a640 1 --2- 192.168.123.107:0/113348209 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b8071820 0x7fe7b8071c20 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.175+0000 7fe7bda7a640 1 -- 192.168.123.107:0/113348209 >> 192.168.123.107:0/113348209 conn(0x7fe7b806d060 msgr2=0x7fe7b806f480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.176+0000 7fe7bda7a640 1 -- 192.168.123.107:0/113348209 shutdown_connections 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.176+0000 7fe7bda7a640 1 -- 192.168.123.107:0/113348209 wait complete. 
2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.176+0000 7fe7bda7a640 1 Processor -- start 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.176+0000 7fe7bda7a640 1 -- start start 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.176+0000 7fe7bda7a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b81a2460 0x7fe7b81a2880 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.176+0000 7fe7bda7a640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe7a803b720 con 0x7fe7b81a2460 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.176+0000 7fe7bca78640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b81a2460 0x7fe7b81a2880 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.176+0000 7fe7bca78640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b81a2460 0x7fe7b81a2880 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53586/0 (socket says 192.168.123.107:53586) 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.176+0000 7fe7bca78640 1 -- 192.168.123.107:0/2488516462 learned_addr learned my addr 192.168.123.107:0/2488516462 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:26.246 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.176+0000 7fe7bca78640 1 -- 192.168.123.107:0/2488516462 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe7a8008650 con 0x7fe7b81a2460 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.177+0000 7fe7bca78640 1 --2- 192.168.123.107:0/2488516462 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b81a2460 0x7fe7b81a2880 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fe7a8008970 tx=0x7fe7a8008c70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.177+0000 7fe7b5ffb640 1 -- 192.168.123.107:0/2488516462 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe7a803c480 con 0x7fe7b81a2460 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.177+0000 7fe7bda7a640 1 -- 192.168.123.107:0/2488516462 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe7b81a2dc0 con 0x7fe7b81a2460 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.177+0000 7fe7bda7a640 1 -- 192.168.123.107:0/2488516462 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe7b81a3950 con 0x7fe7b81a2460 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.177+0000 7fe7b5ffb640 1 -- 192.168.123.107:0/2488516462 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fe7a8035040 con 0x7fe7b81a2460 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.177+0000 7fe7b5ffb640 1 
-- 192.168.123.107:0/2488516462 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe7a800b5c0 con 0x7fe7b81a2460 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.178+0000 7fe7b5ffb640 1 -- 192.168.123.107:0/2488516462 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 1) v1 ==== 672+0+0 (secure 0 0 0) 0x7fe7a8046400 con 0x7fe7b81a2460 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.178+0000 7fe7b5ffb640 1 -- 192.168.123.107:0/2488516462 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7fe7a8037070 con 0x7fe7b81a2460 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.178+0000 7fe7bda7a640 1 -- 192.168.123.107:0/2488516462 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe7b8111070 con 0x7fe7b81a2460 2026-03-09T20:42:26.246 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.180+0000 7fe7b5ffb640 1 -- 192.168.123.107:0/2488516462 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+74806 (secure 0 0 0) 0x7fe7a804b050 con 0x7fe7b81a2460 2026-03-09T20:42:26.247 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.211+0000 7fe7bda7a640 1 -- 192.168.123.107:0/2488516462 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7fe7b8110810 con 0x7fe7b81a2460 2026-03-09T20:42:26.247 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.212+0000 7fe7b5ffb640 1 -- 192.168.123.107:0/2488516462 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 
v0) v1 ==== 79+0+1241 (secure 0 0 0) 0x7fe7a8042070 con 0x7fe7b81a2460 2026-03-09T20:42:26.247 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.213+0000 7fe7877fe640 1 -- 192.168.123.107:0/2488516462 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b81a2460 msgr2=0x7fe7b81a2880 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:26.247 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.213+0000 7fe7877fe640 1 --2- 192.168.123.107:0/2488516462 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b81a2460 0x7fe7b81a2880 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fe7a8008970 tx=0x7fe7a8008c70 comp rx=0 tx=0).stop 2026-03-09T20:42:26.247 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.214+0000 7fe7877fe640 1 -- 192.168.123.107:0/2488516462 shutdown_connections 2026-03-09T20:42:26.247 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.214+0000 7fe7877fe640 1 --2- 192.168.123.107:0/2488516462 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe7b81a2460 0x7fe7b81a2880 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:26.247 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.214+0000 7fe7877fe640 1 -- 192.168.123.107:0/2488516462 >> 192.168.123.107:0/2488516462 conn(0x7fe7b806d060 msgr2=0x7fe7b806e900 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:26.247 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.214+0000 7fe7877fe640 1 -- 192.168.123.107:0/2488516462 shutdown_connections 2026-03-09T20:42:26.247 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:26.214+0000 7fe7877fe640 1 -- 192.168.123.107:0/2488516462 wait complete. 
2026-03-09T20:42:26.247 INFO:teuthology.orchestra.run.vm07.stdout:mgr not available, waiting (2/15)... 2026-03-09T20:42:26.366 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:26 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2488516462' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:42:27.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: Activating manager daemon vm07.xjrvch 2026-03-09T20:42:27.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: mgrmap e2: vm07.xjrvch(active, starting, since 0.00538328s) 2026-03-09T20:42:27.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: from='mgr.14100 192.168.123.107:0/1770297841' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:42:27.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: from='mgr.14100 192.168.123.107:0/1770297841' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:42:27.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: from='mgr.14100 192.168.123.107:0/1770297841' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:42:27.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: from='mgr.14100 192.168.123.107:0/1770297841' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:42:27.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: from='mgr.14100 192.168.123.107:0/1770297841' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm07.xjrvch", "id": "vm07.xjrvch"}]: dispatch 2026-03-09T20:42:27.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: Manager daemon vm07.xjrvch is now available 2026-03-09T20:42:27.634 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: from='mgr.14100 192.168.123.107:0/1770297841' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:42:27.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: from='mgr.14100 192.168.123.107:0/1770297841' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/trash_purge_schedule"}]: dispatch 2026-03-09T20:42:27.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: from='mgr.14100 192.168.123.107:0/1770297841' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:27.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: from='mgr.14100 192.168.123.107:0/1770297841' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:27.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:27 vm07 ceph-mon[49120]: from='mgr.14100 192.168.123.107:0/1770297841' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsid": "589eab88-1bf8-11f1-9e50-71f3ab1833c4", 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 
2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 0 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "vm07" 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T20:42:29.282 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T20:42:29.282 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 
"num_standbys": 0, 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T20:42:21.623033+0000", 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.362+0000 7f09b5fd4640 1 Processor -- start 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.362+0000 7f09b5fd4640 1 -- start start 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.363+0000 7f09b5fd4640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b01086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.363+0000 7f09b5fd4640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f09b0108cc0 con 0x7f09b01082f0 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.363+0000 7f09af7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b01086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.363+0000 7f09af7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b01086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38572/0 (socket says 192.168.123.107:38572) 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.363+0000 7f09af7fe640 1 -- 192.168.123.107:0/3644541824 learned_addr learned my addr 192.168.123.107:0/3644541824 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.364+0000 7f09af7fe640 1 -- 192.168.123.107:0/3644541824 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f09b01094a0 con 0x7f09b01082f0 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.364+0000 7f09af7fe640 1 --2- 192.168.123.107:0/3644541824 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b01086f0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f09a0009b80 
tx=0x7f09a002f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ccfae50ea3c00470 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.364+0000 7f09ae7fc640 1 -- 192.168.123.107:0/3644541824 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f09a002fa10 con 0x7f09b01082f0 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.364+0000 7f09ae7fc640 1 -- 192.168.123.107:0/3644541824 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f09a0037440 con 0x7f09b01082f0 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.364+0000 7f09ae7fc640 1 -- 192.168.123.107:0/3644541824 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f09a0035540 con 0x7f09b01082f0 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.365+0000 7f09b5fd4640 1 -- 192.168.123.107:0/3644541824 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 msgr2=0x7f09b01086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.365+0000 7f09b5fd4640 1 --2- 192.168.123.107:0/3644541824 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b01086f0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f09a0009b80 tx=0x7f09a002f190 comp rx=0 tx=0).stop 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.365+0000 7f09b5fd4640 1 -- 192.168.123.107:0/3644541824 shutdown_connections 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.365+0000 7f09b5fd4640 1 --2- 
192.168.123.107:0/3644541824 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b01086f0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.365+0000 7f09b5fd4640 1 -- 192.168.123.107:0/3644541824 >> 192.168.123.107:0/3644541824 conn(0x7f09b007ba00 msgr2=0x7f09b01066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:29.284 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.365+0000 7f09b5fd4640 1 -- 192.168.123.107:0/3644541824 shutdown_connections 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.365+0000 7f09b5fd4640 1 -- 192.168.123.107:0/3644541824 wait complete. 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.366+0000 7f09b5fd4640 1 Processor -- start 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.366+0000 7f09b5fd4640 1 -- start start 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.366+0000 7f09b5fd4640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b019e140 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.366+0000 7f09b5fd4640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f09b019e680 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.367+0000 7f09af7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b019e140 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto 
rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.367+0000 7f09af7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b019e140 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38574/0 (socket says 192.168.123.107:38574) 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.367+0000 7f09af7fe640 1 -- 192.168.123.107:0/2382698007 learned_addr learned my addr 192.168.123.107:0/2382698007 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.367+0000 7f09af7fe640 1 -- 192.168.123.107:0/2382698007 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f09a00095d0 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.367+0000 7f09af7fe640 1 --2- 192.168.123.107:0/2382698007 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b019e140 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f09a0009cb0 tx=0x7f09a0037870 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.367+0000 7f09acff9640 1 -- 192.168.123.107:0/2382698007 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f09a0037b20 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.367+0000 7f09acff9640 1 -- 192.168.123.107:0/2382698007 <== mon.0 v2:192.168.123.107:3300/0 2 ==== 
config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f09a0035c80 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.367+0000 7f09acff9640 1 -- 192.168.123.107:0/2382698007 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f09a0041da0 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.367+0000 7f09b5fd4640 1 -- 192.168.123.107:0/2382698007 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f09b019e880 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.368+0000 7f09b5fd4640 1 -- 192.168.123.107:0/2382698007 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f09b019ed20 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.368+0000 7f09b5fd4640 1 -- 192.168.123.107:0/2382698007 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0974005350 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.369+0000 7f09acff9640 1 -- 192.168.123.107:0/2382698007 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 3) v1 ==== 49253+0+0 (secure 0 0 0) 0x7f09a003e030 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.370+0000 7f09acff9640 1 --2- 192.168.123.107:0/2382698007 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f098403d020 0x7f098403f4e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: 
stderr 2026-03-09T20:42:28.370+0000 7f09acff9640 1 -- 192.168.123.107:0/2382698007 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f09a0075470 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.371+0000 7f09aeffd640 1 --2- 192.168.123.107:0/2382698007 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f098403d020 0x7f098403f4e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.371+0000 7f09aeffd640 1 --2- 192.168.123.107:0/2382698007 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f098403d020 0x7f098403f4e0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f099c0099c0 tx=0x7f099c006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.372+0000 7f09acff9640 1 -- 192.168.123.107:0/2382698007 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f09a0036580 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.487+0000 7f09b5fd4640 1 -- 192.168.123.107:0/2382698007 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7f09740051c0 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.488+0000 7f09acff9640 1 -- 192.168.123.107:0/2382698007 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "status", 
"format": "json-pretty"}]=0 v0) v1 ==== 79+0+1240 (secure 0 0 0) 0x7f09a003c030 con 0x7f09b01082f0 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.491+0000 7f09b5fd4640 1 -- 192.168.123.107:0/2382698007 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f098403d020 msgr2=0x7f098403f4e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.492+0000 7f09b5fd4640 1 --2- 192.168.123.107:0/2382698007 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f098403d020 0x7f098403f4e0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f099c0099c0 tx=0x7f099c006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.492+0000 7f09b5fd4640 1 -- 192.168.123.107:0/2382698007 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 msgr2=0x7f09b019e140 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.492+0000 7f09b5fd4640 1 --2- 192.168.123.107:0/2382698007 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b019e140 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f09a0009cb0 tx=0x7f09a0037870 comp rx=0 tx=0).stop 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.492+0000 7f09b5fd4640 1 -- 192.168.123.107:0/2382698007 shutdown_connections 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.492+0000 7f09b5fd4640 1 --2- 192.168.123.107:0/2382698007 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f098403d020 0x7f098403f4e0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.492+0000 7f09b5fd4640 1 --2- 192.168.123.107:0/2382698007 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f09b01082f0 0x7f09b019e140 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.492+0000 7f09b5fd4640 1 -- 192.168.123.107:0/2382698007 >> 192.168.123.107:0/2382698007 conn(0x7f09b007ba00 msgr2=0x7f09b0105d20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.492+0000 7f09b5fd4640 1 -- 192.168.123.107:0/2382698007 shutdown_connections 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:28.492+0000 7f09b5fd4640 1 -- 192.168.123.107:0/2382698007 wait complete. 2026-03-09T20:42:29.285 INFO:teuthology.orchestra.run.vm07.stdout:mgr is available 2026-03-09T20:42:29.525 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:29 vm07 ceph-mon[49120]: mgrmap e3: vm07.xjrvch(active, since 1.0094s) 2026-03-09T20:42:29.525 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:29 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/2382698007' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout fsid = 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 2026-03-09T20:42:29.537 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_backfillfull_ratio = 0.85 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_full_ratio = 0.9 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout mon_osd_nearfull_ratio = 0.8 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 
2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.396+0000 7fb77109e640 1 Processor -- start 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.396+0000 7fb77109e640 1 -- start start 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.396+0000 7fb77109e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c1082f0 0x7fb76c1086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.396+0000 7fb77109e640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb76c108cc0 con 0x7fb76c1082f0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.397+0000 7fb76ad76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c1082f0 0x7fb76c1086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.397+0000 7fb76ad76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c1082f0 0x7fb76c1086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38588/0 (socket says 192.168.123.107:38588) 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.397+0000 7fb76ad76640 1 -- 192.168.123.107:0/1410878765 learned_addr learned my addr 192.168.123.107:0/1410878765 
(peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.397+0000 7fb76ad76640 1 -- 192.168.123.107:0/1410878765 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb76c109490 con 0x7fb76c1082f0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.397+0000 7fb76ad76640 1 --2- 192.168.123.107:0/1410878765 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c1082f0 0x7fb76c1086f0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fb760009b30 tx=0x7fb76002f140 comp rx=0 tx=0).ready entity=mon.0 client_cookie=e7092f357bd3c516 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.398+0000 7fb769d74640 1 -- 192.168.123.107:0/1410878765 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb76002fbd0 con 0x7fb76c1082f0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.398+0000 7fb769d74640 1 -- 192.168.123.107:0/1410878765 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fb76002fd30 con 0x7fb76c1082f0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.398+0000 7fb769d74640 1 -- 192.168.123.107:0/1410878765 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb760035700 con 0x7fb76c1082f0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.398+0000 7fb77109e640 1 -- 192.168.123.107:0/1410878765 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c1082f0 msgr2=0x7fb76c1086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:29.538 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.398+0000 7fb77109e640 1 --2- 192.168.123.107:0/1410878765 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c1082f0 0x7fb76c1086f0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fb760009b30 tx=0x7fb76002f140 comp rx=0 tx=0).stop 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.398+0000 7fb77109e640 1 -- 192.168.123.107:0/1410878765 shutdown_connections 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.398+0000 7fb77109e640 1 --2- 192.168.123.107:0/1410878765 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c1082f0 0x7fb76c1086f0 secure :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fb760009b30 tx=0x7fb76002f140 comp rx=0 tx=0).stop 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.398+0000 7fb77109e640 1 -- 192.168.123.107:0/1410878765 >> 192.168.123.107:0/1410878765 conn(0x7fb76c07b8c0 msgr2=0x7fb76c1066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.398+0000 7fb77109e640 1 -- 192.168.123.107:0/1410878765 shutdown_connections 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.398+0000 7fb77109e640 1 -- 192.168.123.107:0/1410878765 wait complete. 
2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.399+0000 7fb77109e640 1 Processor -- start 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.399+0000 7fb77109e640 1 -- start start 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.399+0000 7fb77109e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c19e320 0x7fb76c19e740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.399+0000 7fb77109e640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb76c19ec80 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.399+0000 7fb76ad76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c19e320 0x7fb76c19e740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.399+0000 7fb76ad76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c19e320 0x7fb76c19e740 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38590/0 (socket says 192.168.123.107:38590) 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.399+0000 7fb76ad76640 1 -- 192.168.123.107:0/59976990 learned_addr learned my addr 192.168.123.107:0/59976990 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:29.538 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.400+0000 7fb76ad76640 1 -- 192.168.123.107:0/59976990 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb7600095d0 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.400+0000 7fb76ad76640 1 --2- 192.168.123.107:0/59976990 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c19e320 0x7fb76c19e740 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fb760009b00 tx=0x7fb760035710 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.400+0000 7fb74bfff640 1 -- 192.168.123.107:0/59976990 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb760045070 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.400+0000 7fb74bfff640 1 -- 192.168.123.107:0/59976990 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fb760035e90 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.400+0000 7fb77109e640 1 -- 192.168.123.107:0/59976990 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb76c19ee80 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.400+0000 7fb77109e640 1 -- 192.168.123.107:0/59976990 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb76c1a19f0 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.401+0000 7fb74bfff640 1 -- 
192.168.123.107:0/59976990 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb760035b50 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.401+0000 7fb77109e640 1 -- 192.168.123.107:0/59976990 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb730005350 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.401+0000 7fb74bfff640 1 -- 192.168.123.107:0/59976990 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 4) v1 ==== 49359+0+0 (secure 0 0 0) 0x7fb76003e030 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.401+0000 7fb74bfff640 1 --2- 192.168.123.107:0/59976990 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fb74003d140 0x7fb74003f600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.401+0000 7fb74bfff640 1 -- 192.168.123.107:0/59976990 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7fb760075de0 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.404+0000 7fb76a575640 1 --2- 192.168.123.107:0/59976990 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fb74003d140 0x7fb74003f600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.404+0000 7fb76a575640 1 --2- 192.168.123.107:0/59976990 >> 
[v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fb74003d140 0x7fb74003f600 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fb7540099c0 tx=0x7fb754006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.404+0000 7fb74bfff640 1 -- 192.168.123.107:0/59976990 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fb760036df0 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.486+0000 7fb77109e640 1 -- 192.168.123.107:0/59976990 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) v1 -- 0x7fb730005b80 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.488+0000 7fb74bfff640 1 -- 192.168.123.107:0/59976990 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v3) v1 ==== 70+0+409 (secure 0 0 0) 0x7fb76003c030 con 0x7fb76c19e320 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.490+0000 7fb77109e640 1 -- 192.168.123.107:0/59976990 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fb74003d140 msgr2=0x7fb74003f600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.490+0000 7fb77109e640 1 --2- 192.168.123.107:0/59976990 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fb74003d140 0x7fb74003f600 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fb7540099c0 tx=0x7fb754006eb0 comp rx=0 tx=0).stop 
2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.490+0000 7fb77109e640 1 -- 192.168.123.107:0/59976990 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c19e320 msgr2=0x7fb76c19e740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.490+0000 7fb77109e640 1 --2- 192.168.123.107:0/59976990 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c19e320 0x7fb76c19e740 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fb760009b00 tx=0x7fb760035710 comp rx=0 tx=0).stop 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.490+0000 7fb77109e640 1 -- 192.168.123.107:0/59976990 shutdown_connections 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.490+0000 7fb77109e640 1 --2- 192.168.123.107:0/59976990 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fb74003d140 0x7fb74003f600 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.490+0000 7fb77109e640 1 --2- 192.168.123.107:0/59976990 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb76c19e320 0x7fb76c19e740 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.490+0000 7fb77109e640 1 -- 192.168.123.107:0/59976990 >> 192.168.123.107:0/59976990 conn(0x7fb76c07b8c0 msgr2=0x7fb76c105db0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.491+0000 7fb77109e640 1 -- 192.168.123.107:0/59976990 
shutdown_connections 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.491+0000 7fb77109e640 1 -- 192.168.123.107:0/59976990 wait complete. 2026-03-09T20:42:29.538 INFO:teuthology.orchestra.run.vm07.stdout:Enabling cephadm module... 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.646+0000 7fa8f263f640 1 Processor -- start 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.646+0000 7fa8f263f640 1 -- start start 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.646+0000 7fa8f263f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec108240 0x7fa8ec108640 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.646+0000 7fa8f263f640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa8ec108c10 con 0x7fa8ec108240 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.647+0000 7fa8ebfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec108240 0x7fa8ec108640 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.647+0000 7fa8ebfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec108240 0x7fa8ec108640 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38600/0 (socket says 192.168.123.107:38600) 2026-03-09T20:42:30.307 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.647+0000 7fa8ebfff640 1 -- 192.168.123.107:0/2030509078 learned_addr learned my addr 192.168.123.107:0/2030509078 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.647+0000 7fa8ebfff640 1 -- 192.168.123.107:0/2030509078 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa8ec1093f0 con 0x7fa8ec108240 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.648+0000 7fa8ebfff640 1 --2- 192.168.123.107:0/2030509078 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec108240 0x7fa8ec108640 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fa8dc009920 tx=0x7fa8dc02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=49f673847e5b8444 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.648+0000 7fa8eaffd640 1 -- 192.168.123.107:0/2030509078 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa8dc02f9b0 con 0x7fa8ec108240 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.648+0000 7fa8eaffd640 1 -- 192.168.123.107:0/2030509078 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fa8dc037440 con 0x7fa8ec108240 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.648+0000 7fa8f263f640 1 -- 192.168.123.107:0/2030509078 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec108240 msgr2=0x7fa8ec108640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.648+0000 
7fa8f263f640 1 --2- 192.168.123.107:0/2030509078 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec108240 0x7fa8ec108640 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fa8dc009920 tx=0x7fa8dc02ef20 comp rx=0 tx=0).stop 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.649+0000 7fa8f263f640 1 -- 192.168.123.107:0/2030509078 shutdown_connections 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.649+0000 7fa8f263f640 1 --2- 192.168.123.107:0/2030509078 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec108240 0x7fa8ec108640 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.649+0000 7fa8f263f640 1 -- 192.168.123.107:0/2030509078 >> 192.168.123.107:0/2030509078 conn(0x7fa8ec07b8b0 msgr2=0x7fa8ec1066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.649+0000 7fa8f263f640 1 -- 192.168.123.107:0/2030509078 shutdown_connections 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.649+0000 7fa8f263f640 1 -- 192.168.123.107:0/2030509078 wait complete. 
2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.649+0000 7fa8f263f640 1 Processor -- start 2026-03-09T20:42:30.307 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.649+0000 7fa8f263f640 1 -- start start 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.650+0000 7fa8f263f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec19e0b0 0x7fa8ec19e4d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.650+0000 7fa8f263f640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa8dc035340 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.650+0000 7fa8ebfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec19e0b0 0x7fa8ec19e4d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.650+0000 7fa8ebfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec19e0b0 0x7fa8ec19e4d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38602/0 (socket says 192.168.123.107:38602) 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.650+0000 7fa8ebfff640 1 -- 192.168.123.107:0/1105024482 learned_addr learned my addr 192.168.123.107:0/1105024482 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:30.308 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.650+0000 7fa8ebfff640 1 -- 192.168.123.107:0/1105024482 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa8dc0095d0 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.650+0000 7fa8ebfff640 1 --2- 192.168.123.107:0/1105024482 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec19e0b0 0x7fa8ec19e4d0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fa8dc02f450 tx=0x7fa8dc0377d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.651+0000 7fa8e97fa640 1 -- 192.168.123.107:0/1105024482 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa8dc037b60 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.651+0000 7fa8e97fa640 1 -- 192.168.123.107:0/1105024482 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fa8dc02fe30 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.651+0000 7fa8e97fa640 1 -- 192.168.123.107:0/1105024482 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa8dc042de0 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.651+0000 7fa8f263f640 1 -- 192.168.123.107:0/1105024482 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa8ec19ea10 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.651+0000 7fa8f263f640 1 
-- 192.168.123.107:0/1105024482 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa8ec07c3c0 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.652+0000 7fa8f263f640 1 -- 192.168.123.107:0/1105024482 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa8b0005350 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.652+0000 7fa8e97fa640 1 -- 192.168.123.107:0/1105024482 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 4) v1 ==== 49359+0+0 (secure 0 0 0) 0x7fa8dc02f9b0 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.652+0000 7fa8e97fa640 1 --2- 192.168.123.107:0/1105024482 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fa8c003d140 0x7fa8c003f600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.652+0000 7fa8e97fa640 1 -- 192.168.123.107:0/1105024482 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7fa8dc0756d0 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.654+0000 7fa8eb7fe640 1 --2- 192.168.123.107:0/1105024482 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fa8c003d140 0x7fa8c003f600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.654+0000 7fa8eb7fe640 1 --2- 
192.168.123.107:0/1105024482 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fa8c003d140 0x7fa8c003f600 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fa8d8009a10 tx=0x7fa8d8006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.654+0000 7fa8e97fa640 1 -- 192.168.123.107:0/1105024482 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fa8dc02fc90 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:29.758+0000 7fa8f263f640 1 -- 192.168.123.107:0/1105024482 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1 -- 0x7fa8b00051c0 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.270+0000 7fa8e97fa640 1 -- 192.168.123.107:0/1105024482 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "cephadm"}]=0 v5) v1 ==== 86+0+0 (secure 0 0 0) 0x7fa8dc047070 con 0x7fa8ec19e0b0 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.275+0000 7fa8f263f640 1 -- 192.168.123.107:0/1105024482 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fa8c003d140 msgr2=0x7fa8c003f600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.275+0000 7fa8f263f640 1 --2- 192.168.123.107:0/1105024482 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fa8c003d140 0x7fa8c003f600 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto 
rx=0x7fa8d8009a10 tx=0x7fa8d8006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.275+0000 7fa8f263f640 1 -- 192.168.123.107:0/1105024482 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec19e0b0 msgr2=0x7fa8ec19e4d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.275+0000 7fa8f263f640 1 --2- 192.168.123.107:0/1105024482 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec19e0b0 0x7fa8ec19e4d0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fa8dc02f450 tx=0x7fa8dc0377d0 comp rx=0 tx=0).stop 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.275+0000 7fa8f263f640 1 -- 192.168.123.107:0/1105024482 shutdown_connections 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.275+0000 7fa8f263f640 1 --2- 192.168.123.107:0/1105024482 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7fa8c003d140 0x7fa8c003f600 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.275+0000 7fa8f263f640 1 --2- 192.168.123.107:0/1105024482 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8ec19e0b0 0x7fa8ec19e4d0 secure :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fa8dc02f450 tx=0x7fa8dc0377d0 comp rx=0 tx=0).stop 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.275+0000 7fa8f263f640 1 -- 192.168.123.107:0/1105024482 >> 192.168.123.107:0/1105024482 conn(0x7fa8ec07b8b0 msgr2=0x7fa8ec105b00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:30.308 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.276+0000 7fa8f263f640 1 -- 192.168.123.107:0/1105024482 shutdown_connections 2026-03-09T20:42:30.308 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.276+0000 7fa8f263f640 1 -- 192.168.123.107:0/1105024482 wait complete. 2026-03-09T20:42:30.543 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:30 vm07 ceph-mon[49120]: mgrmap e4: vm07.xjrvch(active, since 2s) 2026-03-09T20:42:30.543 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:30 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/59976990' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T20:42:30.543 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:30 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/1105024482' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "active_name": "vm07.xjrvch", 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.422+0000 7f9b63fff640 1 Processor -- start 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.423+0000 7f9b63fff640 1 -- start start 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.423+0000 7f9b63fff640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b64071820 0x7f9b64071c20 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.423+0000 7f9b63fff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9b640721f0 con 0x7f9b64071820 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.423+0000 7f9b62ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b64071820 0x7f9b64071c20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.423+0000 7f9b62ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b64071820 0x7f9b64071c20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38640/0 (socket says 192.168.123.107:38640) 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.423+0000 7f9b62ffd640 1 -- 192.168.123.107:0/3237620851 learned_addr learned my addr 192.168.123.107:0/3237620851 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.424+0000 7f9b62ffd640 1 -- 192.168.123.107:0/3237620851 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9b64072330 con 0x7f9b64071820 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.424+0000 7f9b62ffd640 1 --2- 192.168.123.107:0/3237620851 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b64071820 0x7f9b64071c20 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f9b540089a0 tx=0x7f9b54031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=f9291f0810c7e206 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.426+0000 7f9b61ffb640 1 -- 192.168.123.107:0/3237620851 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9b54031e50 con 0x7f9b64071820 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.426+0000 7f9b61ffb640 1 -- 192.168.123.107:0/3237620851 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f9b54035070 con 0x7f9b64071820 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.426+0000 7f9b63fff640 1 -- 192.168.123.107:0/3237620851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b64071820 msgr2=0x7f9b64071c20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.426+0000 7f9b63fff640 1 --2- 192.168.123.107:0/3237620851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b64071820 0x7f9b64071c20 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f9b540089a0 tx=0x7f9b54031440 comp rx=0 tx=0).stop 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.426+0000 7f9b63fff640 1 -- 192.168.123.107:0/3237620851 shutdown_connections 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.426+0000 7f9b63fff640 1 --2- 192.168.123.107:0/3237620851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b64071820 0x7f9b64071c20 unknown :-1 s=CLOSED 
pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.426+0000 7f9b63fff640 1 -- 192.168.123.107:0/3237620851 >> 192.168.123.107:0/3237620851 conn(0x7f9b6406d060 msgr2=0x7f9b6406f480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.426+0000 7f9b63fff640 1 -- 192.168.123.107:0/3237620851 shutdown_connections 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.426+0000 7f9b63fff640 1 -- 192.168.123.107:0/3237620851 wait complete. 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.427+0000 7f9b63fff640 1 Processor -- start 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.427+0000 7f9b63fff640 1 -- start start 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.427+0000 7f9b63fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b641a2440 0x7f9b641a2860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.427+0000 7f9b63fff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9b5403b720 con 0x7f9b641a2440 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.428+0000 7f9b62ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b641a2440 0x7f9b641a2860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:30.603 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.428+0000 7f9b62ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b641a2440 0x7f9b641a2860 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38656/0 (socket says 192.168.123.107:38656) 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.428+0000 7f9b62ffd640 1 -- 192.168.123.107:0/3766690660 learned_addr learned my addr 192.168.123.107:0/3766690660 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:30.603 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.428+0000 7f9b62ffd640 1 -- 192.168.123.107:0/3766690660 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9b54008650 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.428+0000 7f9b62ffd640 1 --2- 192.168.123.107:0/3766690660 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b641a2440 0x7f9b641a2860 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f9b54008970 tx=0x7f9b54008c70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.430+0000 7f9b689da640 1 -- 192.168.123.107:0/3766690660 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9b5403c480 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.430+0000 7f9b63fff640 1 -- 192.168.123.107:0/3766690660 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9b641a2da0 con 0x7f9b641a2440 
2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.430+0000 7f9b63fff640 1 -- 192.168.123.107:0/3766690660 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9b641a3930 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.430+0000 7f9b689da640 1 -- 192.168.123.107:0/3766690660 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f9b54035040 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.430+0000 7f9b689da640 1 -- 192.168.123.107:0/3766690660 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9b5400b5c0 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.431+0000 7f9b63fff640 1 -- 192.168.123.107:0/3766690660 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9b30005350 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.431+0000 7f9b689da640 1 -- 192.168.123.107:0/3766690660 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 5) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f9b5400b720 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.431+0000 7f9b689da640 1 --2- 192.168.123.107:0/3766690660 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f9b4403d260 0x7f9b4403f720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.432+0000 7f9b627fc640 1 -- 192.168.123.107:0/3766690660 
>> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f9b4403d260 msgr2=0x7f9b4403f720 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/1859043218 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.432+0000 7f9b627fc640 1 --2- 192.168.123.107:0/3766690660 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f9b4403d260 0x7f9b4403f720 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.432+0000 7f9b689da640 1 -- 192.168.123.107:0/3766690660 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f9b54037070 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.434+0000 7f9b689da640 1 -- 192.168.123.107:0/3766690660 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f9b5403cc60 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.540+0000 7f9b63fff640 1 -- 192.168.123.107:0/3766690660 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) v1 -- 0x7f9b300058d0 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.543+0000 7f9b689da640 1 -- 192.168.123.107:0/3766690660 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v5) v1 ==== 56+0+98 (secure 0 0 0) 0x7f9b54042070 con 0x7f9b641a2440 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.550+0000 7f9b427fc640 1 -- 
192.168.123.107:0/3766690660 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f9b4403d260 msgr2=0x7f9b4403f720 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.550+0000 7f9b427fc640 1 --2- 192.168.123.107:0/3766690660 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f9b4403d260 0x7f9b4403f720 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.550+0000 7f9b427fc640 1 -- 192.168.123.107:0/3766690660 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b641a2440 msgr2=0x7f9b641a2860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.550+0000 7f9b427fc640 1 --2- 192.168.123.107:0/3766690660 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b641a2440 0x7f9b641a2860 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f9b54008970 tx=0x7f9b54008c70 comp rx=0 tx=0).stop 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.550+0000 7f9b427fc640 1 -- 192.168.123.107:0/3766690660 shutdown_connections 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.550+0000 7f9b427fc640 1 --2- 192.168.123.107:0/3766690660 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f9b4403d260 0x7f9b4403f720 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.550+0000 7f9b427fc640 1 --2- 192.168.123.107:0/3766690660 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9b641a2440 0x7f9b641a2860 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.550+0000 7f9b427fc640 1 -- 192.168.123.107:0/3766690660 >> 192.168.123.107:0/3766690660 conn(0x7f9b6406d060 msgr2=0x7f9b6406e8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.551+0000 7f9b427fc640 1 -- 192.168.123.107:0/3766690660 shutdown_connections 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.551+0000 7f9b427fc640 1 -- 192.168.123.107:0/3766690660 wait complete. 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for the mgr to restart... 2026-03-09T20:42:30.604 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr epoch 5... 2026-03-09T20:42:31.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:31 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/1105024482' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T20:42:31.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:31 vm07 ceph-mon[49120]: mgrmap e5: vm07.xjrvch(active, since 3s) 2026-03-09T20:42:31.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:31 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/3766690660' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:42:33.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: Active manager daemon vm07.xjrvch restarted 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: Activating manager daemon vm07.xjrvch 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: osdmap e2: 0 total, 0 up, 0 in 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: mgrmap e6: vm07.xjrvch(active, starting, since 0.00553565s) 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm07.xjrvch", "id": "vm07.xjrvch"}]: dispatch 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: Manager daemon vm07.xjrvch is now available 2026-03-09T20:42:33.384 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:42:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:33 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.719+0000 7f476ffff640 1 Processor -- start 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.719+0000 7f476ffff640 1 -- start start 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.719+0000 7f476ffff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f47680a48d0 
0x7f47680a4cd0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.719+0000 7f476ffff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f47680a52a0 con 0x7f47680a48d0 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.719+0000 7f476effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f47680a48d0 0x7f47680a4cd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.719+0000 7f476effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f47680a48d0 0x7f47680a4cd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38666/0 (socket says 192.168.123.107:38666) 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.719+0000 7f476effd640 1 -- 192.168.123.107:0/1991379993 learned_addr learned my addr 192.168.123.107:0/1991379993 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.720+0000 7f476effd640 1 -- 192.168.123.107:0/1991379993 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f47680a5ad0 con 0x7f47680a48d0 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.720+0000 7f476effd640 1 --2- 192.168.123.107:0/1991379993 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f47680a48d0 0x7f47680a4cd0 secure :-1 s=READY 
pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f47600089a0 tx=0x7f4760031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=4113b7e6e6cf657a server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.721+0000 7f476dffb640 1 -- 192.168.123.107:0/1991379993 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f4760031e50 con 0x7f47680a48d0 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.721+0000 7f476dffb640 1 -- 192.168.123.107:0/1991379993 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f4760035070 con 0x7f47680a48d0 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.721+0000 7f476ffff640 1 -- 192.168.123.107:0/1991379993 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f47680a48d0 msgr2=0x7f47680a4cd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.721+0000 7f476ffff640 1 --2- 192.168.123.107:0/1991379993 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f47680a48d0 0x7f47680a4cd0 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f47600089a0 tx=0x7f4760031440 comp rx=0 tx=0).stop 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.721+0000 7f476ffff640 1 -- 192.168.123.107:0/1991379993 shutdown_connections 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.721+0000 7f476ffff640 1 --2- 192.168.123.107:0/1991379993 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f47680a48d0 0x7f47680a4cd0 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.224 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.721+0000 7f476ffff640 1 -- 192.168.123.107:0/1991379993 >> 192.168.123.107:0/1991379993 conn(0x7f476809fbe0 msgr2=0x7f47680a2040 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.721+0000 7f476ffff640 1 -- 192.168.123.107:0/1991379993 shutdown_connections 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.721+0000 7f476ffff640 1 -- 192.168.123.107:0/1991379993 wait complete. 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.722+0000 7f476ffff640 1 Processor -- start 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.722+0000 7f476ffff640 1 -- start start 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.722+0000 7f476ffff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4768142d90 0x7f47681431b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.722+0000 7f476ffff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f476003b720 con 0x7f4768142d90 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.723+0000 7f476effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4768142d90 0x7f47681431b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.723+0000 7f476effd640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4768142d90 0x7f47681431b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38678/0 (socket says 192.168.123.107:38678) 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.723+0000 7f476effd640 1 -- 192.168.123.107:0/3580831911 learned_addr learned my addr 192.168.123.107:0/3580831911 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.723+0000 7f476effd640 1 -- 192.168.123.107:0/3580831911 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4760008650 con 0x7f4768142d90 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.723+0000 7f476effd640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4768142d90 0x7f47681431b0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f4760008970 tx=0x7f4760008db0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.723+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f476003c480 con 0x7f4768142d90 2026-03-09T20:42:34.224 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.723+0000 7f476ffff640 1 -- 192.168.123.107:0/3580831911 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f47681436f0 con 0x7f4768142d90 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.724+0000 7f476ffff640 1 
-- 192.168.123.107:0/3580831911 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4768146290 con 0x7f4768142d90 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.724+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f4760035040 con 0x7f4768142d90 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.724+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f476000b580 con 0x7f4768142d90 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.725+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 5) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f476000b6e0 con 0x7f4768142d90 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.725+0000 7f474ffff640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 0x7f473803f6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.725+0000 7f476e7fc640 1 -- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 msgr2=0x7f473803f6f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/1859043218 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.725+0000 7f476e7fc640 1 --2- 192.168.123.107:0/3580831911 >> 
[v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 0x7f473803f6f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.725+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 --> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7f473803fe00 con 0x7f473803d230 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.725+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f4760037070 con 0x7f4768142d90 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.925+0000 7f476e7fc640 1 -- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 msgr2=0x7f473803f6f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/1859043218 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:30.926+0000 7f476e7fc640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 0x7f473803f6f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:31.326+0000 7f476e7fc640 1 -- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 msgr2=0x7f473803f6f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/1859043218 
2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:31.326+0000 7f476e7fc640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 0x7f473803f6f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:32.127+0000 7f476e7fc640 1 -- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 msgr2=0x7f473803f6f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/1859043218 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:32.127+0000 7f476e7fc640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 0x7f473803f6f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:33.185+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mgrmap(e 6) v1 ==== 49137+0+0 (secure 0 0 0) 0x7f4760073860 con 0x7f4768142d90 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:33.185+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 msgr2=0x7f473803f6f0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:33.185+0000 7f474ffff640 1 --2- 192.168.123.107:0/3580831911 >> 
[v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 0x7f473803f6f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.225 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.187+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mgrmap(e 7) v1 ==== 49264+0+0 (secure 0 0 0) 0x7f4760043d30 con 0x7f4768142d90 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.187+0000 7f474ffff640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f4738040b10 0x7f4738042f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.187+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7f473803fe00 con 0x7f4738040b10 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.190+0000 7f476e7fc640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f4738040b10 0x7f4738042f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.190+0000 7f476e7fc640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f4738040b10 0x7f4738042f00 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7f4770066070 tx=0x7f4770069800 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 
server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.191+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== command_reply(tid 0: 0 ) v1 ==== 8+0+7759 (secure 0 0 0) 0x7f473803fe00 con 0x7f4738040b10 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 -- 192.168.123.107:0/3580831911 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- command(tid 1: {"prefix": "mgr_status"}) v1 -- 0x7f47680a48d0 con 0x7f4738040b10 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f474ffff640 1 -- 192.168.123.107:0/3580831911 <== mgr.14118 v2:192.168.123.107:6800/810998986 2 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+51 (secure 0 0 0) 0x7f47680a48d0 con 0x7f4738040b10 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 -- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f4738040b10 msgr2=0x7f4738042f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f4738040b10 0x7f4738042f00 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7f4770066070 tx=0x7f4770069800 comp rx=0 tx=0).stop 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 -- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4768142d90 msgr2=0x7f47681431b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4768142d90 0x7f47681431b0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f4760008970 tx=0x7f4760008db0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 -- 192.168.123.107:0/3580831911 shutdown_connections 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f4738040b10 0x7f4738042f00 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:6800/1859043218,v1:192.168.123.107:6801/1859043218] conn(0x7f473803d230 0x7f473803f6f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 --2- 192.168.123.107:0/3580831911 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4768142d90 0x7f47681431b0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 -- 192.168.123.107:0/3580831911 >> 192.168.123.107:0/3580831911 conn(0x7f476809fbe0 msgr2=0x7f47680a04e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:34.194+0000 7f476ffff640 1 -- 192.168.123.107:0/3580831911 shutdown_connections 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.194+0000 7f476ffff640 1 -- 192.168.123.107:0/3580831911 wait complete. 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:mgr epoch 5 is available 2026-03-09T20:42:34.226 INFO:teuthology.orchestra.run.vm07.stdout:Setting orchestrator backend to cephadm... 2026-03-09T20:42:34.461 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:34 vm07 ceph-mon[49120]: Found migration_current of "None". Setting to last migration. 2026-03-09T20:42:34.461 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:34 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/trash_purge_schedule"}]: dispatch 2026-03-09T20:42:34.461 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:34 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:34.462 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:34 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:34.462 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:34 vm07 ceph-mon[49120]: mgrmap e7: vm07.xjrvch(active, since 1.00785s) 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.361+0000 7fcbde497640 1 Processor -- start 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.361+0000 7fcbde497640 1 -- start start 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.361+0000 7fcbde497640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd80716c0 0x7fcbd8071ac0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.361+0000 7fcbde497640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcbd8072ee0 con 0x7fcbd80716c0 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.362+0000 7fcbd7fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd80716c0 0x7fcbd8071ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.362+0000 7fcbd7fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd80716c0 0x7fcbd8071ac0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38732/0 (socket says 192.168.123.107:38732) 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.362+0000 7fcbd7fff640 1 -- 192.168.123.107:0/3120780602 learned_addr learned my addr 192.168.123.107:0/3120780602 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.362+0000 7fcbd7fff640 1 -- 192.168.123.107:0/3120780602 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcbd8072000 con 0x7fcbd80716c0 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.363+0000 7fcbd7fff640 1 --2- 192.168.123.107:0/3120780602 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd80716c0 0x7fcbd8071ac0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fcbcc009920 
tx=0x7fcbcc02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=160fbb94ed89d05a server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.364+0000 7fcbd6ffd640 1 -- 192.168.123.107:0/3120780602 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcbcc02f9b0 con 0x7fcbd80716c0 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.364+0000 7fcbd6ffd640 1 -- 192.168.123.107:0/3120780602 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fcbcc037440 con 0x7fcbd80716c0 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.364+0000 7fcbde497640 1 -- 192.168.123.107:0/3120780602 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd80716c0 msgr2=0x7fcbd8071ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.364+0000 7fcbde497640 1 --2- 192.168.123.107:0/3120780602 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd80716c0 0x7fcbd8071ac0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fcbcc009920 tx=0x7fcbcc02ef20 comp rx=0 tx=0).stop 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.364+0000 7fcbde497640 1 -- 192.168.123.107:0/3120780602 shutdown_connections 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.364+0000 7fcbde497640 1 --2- 192.168.123.107:0/3120780602 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd80716c0 0x7fcbd8071ac0 secure :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fcbcc009920 tx=0x7fcbcc02ef20 comp rx=0 tx=0).stop 2026-03-09T20:42:34.500 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.364+0000 7fcbde497640 1 -- 192.168.123.107:0/3120780602 >> 192.168.123.107:0/3120780602 conn(0x7fcbd806d2a0 msgr2=0x7fcbd806f6e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.364+0000 7fcbde497640 1 -- 192.168.123.107:0/3120780602 shutdown_connections 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.364+0000 7fcbde497640 1 -- 192.168.123.107:0/3120780602 wait complete. 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.365+0000 7fcbde497640 1 Processor -- start 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.365+0000 7fcbde497640 1 -- start start 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.365+0000 7fcbde497640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd81ab020 0x7fcbd81ab440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.500 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.365+0000 7fcbde497640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcbcc035340 con 0x7fcbd81ab020 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.365+0000 7fcbd7fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd81ab020 0x7fcbd81ab440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.365+0000 7fcbd7fff640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd81ab020 0x7fcbd81ab440 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38748/0 (socket says 192.168.123.107:38748) 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.365+0000 7fcbd7fff640 1 -- 192.168.123.107:0/2543768688 learned_addr learned my addr 192.168.123.107:0/2543768688 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.365+0000 7fcbd7fff640 1 -- 192.168.123.107:0/2543768688 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcbcc0095d0 con 0x7fcbd81ab020 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.366+0000 7fcbd7fff640 1 --2- 192.168.123.107:0/2543768688 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd81ab020 0x7fcbd81ab440 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fcbcc02f450 tx=0x7fcbcc0377d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.367+0000 7fcbd57fa640 1 -- 192.168.123.107:0/2543768688 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcbcc037b60 con 0x7fcbd81ab020 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.367+0000 7fcbde497640 1 -- 192.168.123.107:0/2543768688 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcbd81ab980 con 0x7fcbd81ab020 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.367+0000 7fcbde497640 1 
-- 192.168.123.107:0/2543768688 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcbd807aff0 con 0x7fcbd81ab020 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.367+0000 7fcbde497640 1 -- 192.168.123.107:0/2543768688 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcba4005350 con 0x7fcbd81ab020 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.368+0000 7fcbd57fa640 1 -- 192.168.123.107:0/2543768688 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fcbcc02fe30 con 0x7fcbd81ab020 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.368+0000 7fcbd57fa640 1 -- 192.168.123.107:0/2543768688 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcbcc042de0 con 0x7fcbd81ab020 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.370+0000 7fcbd57fa640 1 -- 192.168.123.107:0/2543768688 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 7) v1 ==== 49264+0+0 (secure 0 0 0) 0x7fcbcc04c430 con 0x7fcbd81ab020 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.370+0000 7fcbd57fa640 1 --2- 192.168.123.107:0/2543768688 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fcba803d110 0x7fcba803f5d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.370+0000 7fcbd57fa640 1 -- 192.168.123.107:0/2543768688 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fcbcc075980 con 0x7fcbd81ab020 
2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.371+0000 7fcbd57fa640 1 -- 192.168.123.107:0/2543768688 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fcbcc0363a0 con 0x7fcbd81ab020 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.371+0000 7fcbd77fe640 1 --2- 192.168.123.107:0/2543768688 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fcba803d110 0x7fcba803f5d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.383+0000 7fcbd77fe640 1 --2- 192.168.123.107:0/2543768688 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fcba803d110 0x7fcba803f5d0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fcbc00099c0 tx=0x7fcbc0006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.461+0000 7fcbde497640 1 -- 192.168.123.107:0/2543768688 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}) v1 -- 0x7fcba4002bf0 con 0x7fcba803d110 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.469+0000 7fcbd57fa640 1 -- 192.168.123.107:0/2543768688 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7fcba4002bf0 con 0x7fcba803d110 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:34.471+0000 7fcbde497640 1 -- 192.168.123.107:0/2543768688 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fcba803d110 msgr2=0x7fcba803f5d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.471+0000 7fcbde497640 1 --2- 192.168.123.107:0/2543768688 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fcba803d110 0x7fcba803f5d0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fcbc00099c0 tx=0x7fcbc0006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.471+0000 7fcbde497640 1 -- 192.168.123.107:0/2543768688 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd81ab020 msgr2=0x7fcbd81ab440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.471+0000 7fcbde497640 1 --2- 192.168.123.107:0/2543768688 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd81ab020 0x7fcbd81ab440 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fcbcc02f450 tx=0x7fcbcc0377d0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.472+0000 7fcbde497640 1 -- 192.168.123.107:0/2543768688 shutdown_connections 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.472+0000 7fcbde497640 1 --2- 192.168.123.107:0/2543768688 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fcba803d110 0x7fcba803f5d0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.472+0000 7fcbde497640 1 --2- 
192.168.123.107:0/2543768688 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbd81ab020 0x7fcbd81ab440 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.472+0000 7fcbde497640 1 -- 192.168.123.107:0/2543768688 >> 192.168.123.107:0/2543768688 conn(0x7fcbd806d2a0 msgr2=0x7fcbd806db00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.473+0000 7fcbde497640 1 -- 192.168.123.107:0/2543768688 shutdown_connections 2026-03-09T20:42:34.501 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.473+0000 7fcbde497640 1 -- 192.168.123.107:0/2543768688 wait complete. 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.610+0000 7fe17c764640 1 Processor -- start 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.611+0000 7fe17c764640 1 -- start start 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.611+0000 7fe17c764640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe1741086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.611+0000 7fe17c764640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe174108cc0 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.611+0000 7fe17a4d9640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe1741086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.611+0000 7fe17a4d9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe1741086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38750/0 (socket says 192.168.123.107:38750) 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.611+0000 7fe17a4d9640 1 -- 192.168.123.107:0/2471153383 learned_addr learned my addr 192.168.123.107:0/2471153383 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.612+0000 7fe17a4d9640 1 -- 192.168.123.107:0/2471153383 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe174109490 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.612+0000 7fe17a4d9640 1 --2- 192.168.123.107:0/2471153383 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe1741086f0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fe164009b80 tx=0x7fe16402f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=46c5a9599acd0f90 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.612+0000 7fe1794d7640 1 -- 192.168.123.107:0/2471153383 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe16402fa10 con 0x7fe1741082f0 2026-03-09T20:42:34.750 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.612+0000 7fe1794d7640 1 -- 192.168.123.107:0/2471153383 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fe164037440 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.612+0000 7fe1794d7640 1 -- 192.168.123.107:0/2471153383 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe164035540 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.613+0000 7fe17c764640 1 -- 192.168.123.107:0/2471153383 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 msgr2=0x7fe1741086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.613+0000 7fe17c764640 1 --2- 192.168.123.107:0/2471153383 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe1741086f0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fe164009b80 tx=0x7fe16402f190 comp rx=0 tx=0).stop 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.613+0000 7fe17c764640 1 -- 192.168.123.107:0/2471153383 shutdown_connections 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.613+0000 7fe17c764640 1 --2- 192.168.123.107:0/2471153383 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe1741086f0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.613+0000 7fe17c764640 1 -- 192.168.123.107:0/2471153383 >> 192.168.123.107:0/2471153383 conn(0x7fe17407b8f0 msgr2=0x7fe1741066a0 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.613+0000 7fe17c764640 1 -- 192.168.123.107:0/2471153383 shutdown_connections 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.613+0000 7fe17c764640 1 -- 192.168.123.107:0/2471153383 wait complete. 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.613+0000 7fe17c764640 1 Processor -- start 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.613+0000 7fe17c764640 1 -- start start 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.614+0000 7fe17c764640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe17419e130 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.614+0000 7fe17c764640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe17419e670 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.614+0000 7fe17a4d9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe17419e130 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.614+0000 7fe17a4d9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe17419e130 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says 
I am v2:192.168.123.107:38762/0 (socket says 192.168.123.107:38762) 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.614+0000 7fe17a4d9640 1 -- 192.168.123.107:0/3481993158 learned_addr learned my addr 192.168.123.107:0/3481993158 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.614+0000 7fe17a4d9640 1 -- 192.168.123.107:0/3481993158 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe1640095d0 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.614+0000 7fe17a4d9640 1 --2- 192.168.123.107:0/3481993158 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe17419e130 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fe164009cb0 tx=0x7fe164037870 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.614+0000 7fe1637fe640 1 -- 192.168.123.107:0/3481993158 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe164037b20 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.614+0000 7fe1637fe640 1 -- 192.168.123.107:0/3481993158 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fe164035c80 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.615+0000 7fe1637fe640 1 -- 192.168.123.107:0/3481993158 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe164041da0 con 0x7fe1741082f0 2026-03-09T20:42:34.750 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.615+0000 7fe17c764640 1 -- 192.168.123.107:0/3481993158 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe17419e870 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.615+0000 7fe17c764640 1 -- 192.168.123.107:0/3481993158 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe17419ed10 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.615+0000 7fe1617fa640 1 -- 192.168.123.107:0/3481993158 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe140005350 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.616+0000 7fe1637fe640 1 -- 192.168.123.107:0/3481993158 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 7) v1 ==== 49264+0+0 (secure 0 0 0) 0x7fe16403e030 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.616+0000 7fe1637fe640 1 --2- 192.168.123.107:0/3481993158 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe15003cd00 0x7fe15003f1c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.616+0000 7fe1637fe640 1 -- 192.168.123.107:0/3481993158 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fe164075580 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.619+0000 7fe1637fe640 1 -- 192.168.123.107:0/3481993158 <== mon.0 
v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fe164035df0 con 0x7fe1741082f0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.619+0000 7fe179cd8640 1 --2- 192.168.123.107:0/3481993158 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe15003cd00 0x7fe15003f1c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.621+0000 7fe179cd8640 1 --2- 192.168.123.107:0/3481993158 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe15003cd00 0x7fe15003f1c0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fe1680099c0 tx=0x7fe168006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.750 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.703+0000 7fe1617fa640 1 -- 192.168.123.107:0/3481993158 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}) v1 -- 0x7fe140002bf0 con 0x7fe15003cd00 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.703+0000 7fe1637fe640 1 -- 192.168.123.107:0/3481993158 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+16 (secure 0 0 0) 0x7fe140002bf0 con 0x7fe15003cd00 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.704+0000 7fe1617fa640 1 -- 192.168.123.107:0/3481993158 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe15003cd00 msgr2=0x7fe15003f1c0 secure 
:-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.705+0000 7fe1617fa640 1 --2- 192.168.123.107:0/3481993158 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe15003cd00 0x7fe15003f1c0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fe1680099c0 tx=0x7fe168006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.705+0000 7fe1617fa640 1 -- 192.168.123.107:0/3481993158 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 msgr2=0x7fe17419e130 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.705+0000 7fe1617fa640 1 --2- 192.168.123.107:0/3481993158 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe17419e130 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fe164009cb0 tx=0x7fe164037870 comp rx=0 tx=0).stop 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.705+0000 7fe1617fa640 1 -- 192.168.123.107:0/3481993158 shutdown_connections 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.705+0000 7fe1617fa640 1 --2- 192.168.123.107:0/3481993158 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe15003cd00 0x7fe15003f1c0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.705+0000 7fe1617fa640 1 --2- 192.168.123.107:0/3481993158 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe1741082f0 0x7fe17419e130 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.705+0000 7fe1617fa640 1 -- 192.168.123.107:0/3481993158 >> 192.168.123.107:0/3481993158 conn(0x7fe17407b8f0 msgr2=0x7fe174105d20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.705+0000 7fe1617fa640 1 -- 192.168.123.107:0/3481993158 shutdown_connections 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.705+0000 7fe1617fa640 1 -- 192.168.123.107:0/3481993158 wait complete. 2026-03-09T20:42:34.751 INFO:teuthology.orchestra.run.vm07.stdout:Generating ssh key... 2026-03-09T20:42:34.983 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.842+0000 7fd5b950b640 1 Processor -- start 2026-03-09T20:42:34.983 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.842+0000 7fd5b950b640 1 -- start start 2026-03-09T20:42:34.983 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.843+0000 7fd5b950b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b4106460 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.983 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.843+0000 7fd5b950b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd5b41069a0 con 0x7fd5b4106060 2026-03-09T20:42:34.983 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.843+0000 7fd5b2ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b4106460 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.983 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.843+0000 7fd5b2ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b4106460 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38770/0 (socket says 192.168.123.107:38770) 2026-03-09T20:42:34.983 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.843+0000 7fd5b2ffd640 1 -- 192.168.123.107:0/3496077808 learned_addr learned my addr 192.168.123.107:0/3496077808 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:34.983 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.843+0000 7fd5b2ffd640 1 -- 192.168.123.107:0/3496077808 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd5b4106ae0 con 0x7fd5b4106060 2026-03-09T20:42:34.983 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.844+0000 7fd5b2ffd640 1 --2- 192.168.123.107:0/3496077808 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b4106460 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fd59c009920 tx=0x7fd59c02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=750a31ae0c602698 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.983 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.844+0000 7fd5b1ffb640 1 -- 192.168.123.107:0/3496077808 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd59c02f9b0 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.844+0000 7fd5b1ffb640 1 -- 192.168.123.107:0/3496077808 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fd59c037440 con 0x7fd5b4106060 
2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.844+0000 7fd5b1ffb640 1 -- 192.168.123.107:0/3496077808 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd59c0354e0 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.844+0000 7fd5b950b640 1 -- 192.168.123.107:0/3496077808 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 msgr2=0x7fd5b4106460 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.844+0000 7fd5b950b640 1 --2- 192.168.123.107:0/3496077808 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b4106460 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fd59c009920 tx=0x7fd59c02ef20 comp rx=0 tx=0).stop 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.845+0000 7fd5b950b640 1 -- 192.168.123.107:0/3496077808 shutdown_connections 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.845+0000 7fd5b950b640 1 --2- 192.168.123.107:0/3496077808 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b4106460 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.845+0000 7fd5b950b640 1 -- 192.168.123.107:0/3496077808 >> 192.168.123.107:0/3496077808 conn(0x7fd5b4101870 msgr2=0x7fd5b4103cd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.845+0000 7fd5b950b640 1 -- 192.168.123.107:0/3496077808 shutdown_connections 2026-03-09T20:42:34.984 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.845+0000 7fd5b950b640 1 -- 192.168.123.107:0/3496077808 wait complete. 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.845+0000 7fd5b950b640 1 Processor -- start 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.845+0000 7fd5b950b640 1 -- start start 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.845+0000 7fd5b950b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b419e160 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.845+0000 7fd5b950b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd5b419e6a0 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.846+0000 7fd5b2ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b419e160 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.846+0000 7fd5b2ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b419e160 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38784/0 (socket says 192.168.123.107:38784) 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.846+0000 7fd5b2ffd640 1 -- 192.168.123.107:0/2718563011 learned_addr 
learned my addr 192.168.123.107:0/2718563011 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.846+0000 7fd5b2ffd640 1 -- 192.168.123.107:0/2718563011 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd59c0095d0 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.846+0000 7fd5b2ffd640 1 --2- 192.168.123.107:0/2718563011 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b419e160 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fd59c0098f0 tx=0x7fd59c0359b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.846+0000 7fd58ffff640 1 -- 192.168.123.107:0/2718563011 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd59c035d00 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.846+0000 7fd58ffff640 1 -- 192.168.123.107:0/2718563011 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fd59c035e60 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.846+0000 7fd58ffff640 1 -- 192.168.123.107:0/2718563011 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd59c040d70 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.846+0000 7fd5b950b640 1 -- 192.168.123.107:0/2718563011 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd5b419e8a0 con 0x7fd5b4106060 
2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.846+0000 7fd5b950b640 1 -- 192.168.123.107:0/2718563011 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd5b419ed40 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.847+0000 7fd58ffff640 1 -- 192.168.123.107:0/2718563011 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 7) v1 ==== 49264+0+0 (secure 0 0 0) 0x7fd59c04a460 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.847+0000 7fd5b950b640 1 -- 192.168.123.107:0/2718563011 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd57c005350 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.850+0000 7fd58ffff640 1 --2- 192.168.123.107:0/2718563011 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd58803d110 0x7fd58803f5d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.850+0000 7fd58ffff640 1 -- 192.168.123.107:0/2718563011 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fd59c0768f0 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.850+0000 7fd58ffff640 1 -- 192.168.123.107:0/2718563011 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fd59c036960 con 0x7fd5b4106060 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:34.850+0000 7fd5b27fc640 1 --2- 192.168.123.107:0/2718563011 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd58803d110 0x7fd58803f5d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.850+0000 7fd5b27fc640 1 --2- 192.168.123.107:0/2718563011 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd58803d110 0x7fd58803f5d0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7fd5a8009a10 tx=0x7fd5a8006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.933+0000 7fd5b950b640 1 -- 192.168.123.107:0/2718563011 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}) v1 -- 0x7fd57c002bf0 con 0x7fd58803d110 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.955+0000 7fd58ffff640 1 -- 192.168.123.107:0/2718563011 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7fd57c002bf0 con 0x7fd58803d110 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.957+0000 7fd5b950b640 1 -- 192.168.123.107:0/2718563011 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd58803d110 msgr2=0x7fd58803f5d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.957+0000 7fd5b950b640 1 --2- 192.168.123.107:0/2718563011 >> 
[v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd58803d110 0x7fd58803f5d0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7fd5a8009a10 tx=0x7fd5a8006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.957+0000 7fd5b950b640 1 -- 192.168.123.107:0/2718563011 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 msgr2=0x7fd5b419e160 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.957+0000 7fd5b950b640 1 --2- 192.168.123.107:0/2718563011 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b419e160 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fd59c0098f0 tx=0x7fd59c0359b0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.957+0000 7fd5b950b640 1 -- 192.168.123.107:0/2718563011 shutdown_connections 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.957+0000 7fd5b950b640 1 --2- 192.168.123.107:0/2718563011 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd58803d110 0x7fd58803f5d0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.958+0000 7fd5b950b640 1 --2- 192.168.123.107:0/2718563011 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd5b4106060 0x7fd5b419e160 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.958+0000 7fd5b950b640 1 -- 192.168.123.107:0/2718563011 >> 192.168.123.107:0/2718563011 conn(0x7fd5b4101870 
msgr2=0x7fd5b41022e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.958+0000 7fd5b950b640 1 -- 192.168.123.107:0/2718563011 shutdown_connections 2026-03-09T20:42:34.984 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:34.958+0000 7fd5b950b640 1 -- 192.168.123.107:0/2718563011 wait complete. 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGfYmEKNabqXZiYSz8H+ttKMu4TpiT6S79bXNujHfYYB ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.078+0000 7f20ddc1c640 1 Processor -- start 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.079+0000 7f20ddc1c640 1 -- start start 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.079+0000 7f20ddc1c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d81082f0 0x7f20d81086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.079+0000 7f20ddc1c640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f20d8108cc0 con 0x7f20d81082f0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.079+0000 7f20d77fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d81082f0 0x7f20d81086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.079+0000 7f20d77fe640 1 
--2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d81082f0 0x7f20d81086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38786/0 (socket says 192.168.123.107:38786) 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.079+0000 7f20d77fe640 1 -- 192.168.123.107:0/3266642496 learned_addr learned my addr 192.168.123.107:0/3266642496 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.080+0000 7f20d77fe640 1 -- 192.168.123.107:0/3266642496 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f20d8109490 con 0x7f20d81082f0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.080+0000 7f20d77fe640 1 --2- 192.168.123.107:0/3266642496 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d81082f0 0x7f20d81086f0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f20c4009b30 tx=0x7f20c402f140 comp rx=0 tx=0).ready entity=mon.0 client_cookie=4382e78c80551ff6 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.081+0000 7f20d67fc640 1 -- 192.168.123.107:0/3266642496 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f20c402fbd0 con 0x7f20d81082f0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.081+0000 7f20d67fc640 1 -- 192.168.123.107:0/3266642496 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f20c402fd30 con 0x7f20d81082f0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:35.081+0000 7f20ddc1c640 1 -- 192.168.123.107:0/3266642496 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d81082f0 msgr2=0x7f20d81086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.081+0000 7f20ddc1c640 1 --2- 192.168.123.107:0/3266642496 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d81082f0 0x7f20d81086f0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f20c4009b30 tx=0x7f20c402f140 comp rx=0 tx=0).stop 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.081+0000 7f20ddc1c640 1 -- 192.168.123.107:0/3266642496 shutdown_connections 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.081+0000 7f20ddc1c640 1 --2- 192.168.123.107:0/3266642496 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d81082f0 0x7f20d81086f0 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.081+0000 7f20ddc1c640 1 -- 192.168.123.107:0/3266642496 >> 192.168.123.107:0/3266642496 conn(0x7f20d807b8c0 msgr2=0x7f20d81066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.082+0000 7f20ddc1c640 1 -- 192.168.123.107:0/3266642496 shutdown_connections 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.082+0000 7f20ddc1c640 1 -- 192.168.123.107:0/3266642496 wait complete. 
2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.082+0000 7f20ddc1c640 1 Processor -- start 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.082+0000 7f20ddc1c640 1 -- start start 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.082+0000 7f20ddc1c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d819e100 0x7f20d819e520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.082+0000 7f20ddc1c640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f20c4035560 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.083+0000 7f20d77fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d819e100 0x7f20d819e520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.083+0000 7f20d77fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d819e100 0x7f20d819e520 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38794/0 (socket says 192.168.123.107:38794) 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.083+0000 7f20d77fe640 1 -- 192.168.123.107:0/852186552 learned_addr learned my addr 192.168.123.107:0/852186552 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:35.202 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.083+0000 7f20d77fe640 1 -- 192.168.123.107:0/852186552 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f20c40095d0 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.083+0000 7f20d77fe640 1 --2- 192.168.123.107:0/852186552 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d819e100 0x7f20d819e520 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f20c402f670 tx=0x7f20c4035f50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.083+0000 7f20d4ff9640 1 -- 192.168.123.107:0/852186552 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f20c4037650 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.083+0000 7f20d4ff9640 1 -- 192.168.123.107:0/852186552 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f20c4037c30 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.084+0000 7f20d4ff9640 1 -- 192.168.123.107:0/852186552 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f20c4042e10 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.084+0000 7f20ddc1c640 1 -- 192.168.123.107:0/852186552 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f20d819ea60 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.084+0000 7f20ddc1c640 1 -- 
192.168.123.107:0/852186552 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f20d81a1600 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.085+0000 7f20d4ff9640 1 -- 192.168.123.107:0/852186552 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 7) v1 ==== 49264+0+0 (secure 0 0 0) 0x7f20c40377b0 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.085+0000 7f20ddc1c640 1 -- 192.168.123.107:0/852186552 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f20d81082f0 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.085+0000 7f20d4ff9640 1 --2- 192.168.123.107:0/852186552 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f20ac03ccb0 0x7f20ac03f170 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.085+0000 7f20d4ff9640 1 -- 192.168.123.107:0/852186552 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f20c4075480 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.085+0000 7f20d6ffd640 1 --2- 192.168.123.107:0/852186552 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f20ac03ccb0 0x7f20ac03f170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.086+0000 7f20d6ffd640 1 --2- 192.168.123.107:0/852186552 >> 
[v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f20ac03ccb0 0x7f20ac03f170 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f20c8009a10 tx=0x7f20c8006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.088+0000 7f20d4ff9640 1 -- 192.168.123.107:0/852186552 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f20c4040b20 con 0x7f20d819e100 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.174+0000 7f20ddc1c640 1 -- 192.168.123.107:0/852186552 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}) v1 -- 0x7f20d81061d0 con 0x7f20ac03ccb0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.175+0000 7f20d4ff9640 1 -- 192.168.123.107:0/852186552 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+123 (secure 0 0 0) 0x7f20d81061d0 con 0x7f20ac03ccb0 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.177+0000 7f20ddc1c640 1 -- 192.168.123.107:0/852186552 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f20ac03ccb0 msgr2=0x7f20ac03f170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.177+0000 7f20ddc1c640 1 --2- 192.168.123.107:0/852186552 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f20ac03ccb0 0x7f20ac03f170 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f20c8009a10 tx=0x7f20c8006eb0 comp rx=0 
tx=0).stop 2026-03-09T20:42:35.202 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.177+0000 7f20ddc1c640 1 -- 192.168.123.107:0/852186552 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d819e100 msgr2=0x7f20d819e520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:35.203 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.177+0000 7f20ddc1c640 1 --2- 192.168.123.107:0/852186552 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d819e100 0x7f20d819e520 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f20c402f670 tx=0x7f20c4035f50 comp rx=0 tx=0).stop 2026-03-09T20:42:35.203 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.177+0000 7f20ddc1c640 1 -- 192.168.123.107:0/852186552 shutdown_connections 2026-03-09T20:42:35.203 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.177+0000 7f20ddc1c640 1 --2- 192.168.123.107:0/852186552 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f20ac03ccb0 0x7f20ac03f170 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:35.203 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.177+0000 7f20ddc1c640 1 --2- 192.168.123.107:0/852186552 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f20d819e100 0x7f20d819e520 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:35.203 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.177+0000 7f20ddc1c640 1 -- 192.168.123.107:0/852186552 >> 192.168.123.107:0/852186552 conn(0x7f20d807b8c0 msgr2=0x7f20d8105ac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:35.203 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.178+0000 7f20ddc1c640 1 -- 
192.168.123.107:0/852186552 shutdown_connections 2026-03-09T20:42:35.203 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.178+0000 7f20ddc1c640 1 -- 192.168.123.107:0/852186552 wait complete. 2026-03-09T20:42:35.203 INFO:teuthology.orchestra.run.vm07.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-09T20:42:35.203 INFO:teuthology.orchestra.run.vm07.stdout:Adding key to root@localhost authorized_keys... 2026-03-09T20:42:35.203 INFO:teuthology.orchestra.run.vm07.stdout:Adding host vm07... 2026-03-09T20:42:35.237 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:35 vm07 ceph-mon[49120]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:42:35.237 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:35 vm07 ceph-mon[49120]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:42:35.237 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:35 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:35.512 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:35 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:42:35.512 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:35 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:42:35.512 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:35 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:35.512 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:35 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:36.381 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: [09/Mar/2026:20:42:34] ENGINE Bus STARTING 2026-03-09T20:42:36.381 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:36.381 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: [09/Mar/2026:20:42:34] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T20:42:36.381 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: [09/Mar/2026:20:42:34] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T20:42:36.381 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: [09/Mar/2026:20:42:34] ENGINE Bus STARTED 2026-03-09T20:42:36.381 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: [09/Mar/2026:20:42:34] ENGINE Client ('192.168.123.107', 45858) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:42:36.381 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:36.382 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:36.382 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: Generating ssh key... 
2026-03-09T20:42:36.382 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:36.382 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:36 vm07 ceph-mon[49120]: mgrmap e8: vm07.xjrvch(active, since 2s) 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Added host 'vm07' with addr '192.168.123.107' 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.312+0000 7feb62d13640 1 Processor -- start 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.313+0000 7feb62d13640 1 -- start start 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.313+0000 7feb62d13640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c107920 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.313+0000 7feb62d13640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7feb5c107ef0 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.314+0000 7feb61d11640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c107920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.314+0000 7feb61d11640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c107920 unknown :-1 s=HELLO_CONNECTING 
pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38808/0 (socket says 192.168.123.107:38808) 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.314+0000 7feb61d11640 1 -- 192.168.123.107:0/1838782977 learned_addr learned my addr 192.168.123.107:0/1838782977 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.314+0000 7feb61d11640 1 -- 192.168.123.107:0/1838782977 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7feb5c108030 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.314+0000 7feb61d11640 1 --2- 192.168.123.107:0/1838782977 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c107920 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7feb5000bc40 tx=0x7feb50031760 comp rx=0 tx=0).ready entity=mon.0 client_cookie=5e879e61b94155e4 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.315+0000 7feb60d0f640 1 -- 192.168.123.107:0/1838782977 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7feb50036510 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.315+0000 7feb60d0f640 1 -- 192.168.123.107:0/1838782977 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7feb50034050 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.315+0000 7feb60d0f640 1 -- 192.168.123.107:0/1838782977 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 
(secure 0 0 0) 0x7feb50038a00 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.315+0000 7feb62d13640 1 -- 192.168.123.107:0/1838782977 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 msgr2=0x7feb5c107920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.315+0000 7feb62d13640 1 --2- 192.168.123.107:0/1838782977 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c107920 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7feb5000bc40 tx=0x7feb50031760 comp rx=0 tx=0).stop 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.315+0000 7feb62d13640 1 -- 192.168.123.107:0/1838782977 shutdown_connections 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.315+0000 7feb62d13640 1 --2- 192.168.123.107:0/1838782977 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c107920 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.315+0000 7feb62d13640 1 -- 192.168.123.107:0/1838782977 >> 192.168.123.107:0/1838782977 conn(0x7feb5c0ff7c0 msgr2=0x7feb5c101c00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.316+0000 7feb62d13640 1 -- 192.168.123.107:0/1838782977 shutdown_connections 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.316+0000 7feb62d13640 1 -- 192.168.123.107:0/1838782977 wait complete. 
2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.316+0000 7feb62d13640 1 Processor -- start 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.316+0000 7feb62d13640 1 -- start start 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.316+0000 7feb62d13640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c199ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.316+0000 7feb62d13640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7feb5c19a000 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.317+0000 7feb61d11640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c199ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.317+0000 7feb61d11640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c199ac0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38816/0 (socket says 192.168.123.107:38816) 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.317+0000 7feb61d11640 1 -- 192.168.123.107:0/2634477429 learned_addr learned my addr 192.168.123.107:0/2634477429 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:36.755 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.317+0000 7feb61d11640 1 -- 192.168.123.107:0/2634477429 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7feb5000b8f0 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.317+0000 7feb61d11640 1 --2- 192.168.123.107:0/2634477429 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c199ac0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7feb5000bc10 tx=0x7feb50031f20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.318+0000 7feb4affd640 1 -- 192.168.123.107:0/2634477429 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7feb50038a30 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.318+0000 7feb4affd640 1 -- 192.168.123.107:0/2634477429 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7feb50041ab0 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.318+0000 7feb4affd640 1 -- 192.168.123.107:0/2634477429 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7feb50040910 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.318+0000 7feb62d13640 1 -- 192.168.123.107:0/2634477429 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7feb5c19a200 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.318+0000 7feb62d13640 1 
-- 192.168.123.107:0/2634477429 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7feb5c19a640 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.319+0000 7feb4affd640 1 -- 192.168.123.107:0/2634477429 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 7) v1 ==== 49264+0+0 (secure 0 0 0) 0x7feb5003f040 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.319+0000 7feb62d13640 1 -- 192.168.123.107:0/2634477429 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7feb5c1086c0 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.319+0000 7feb4affd640 1 --2- 192.168.123.107:0/2634477429 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7feb3803cd00 0x7feb3803f1c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.319+0000 7feb4affd640 1 -- 192.168.123.107:0/2634477429 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7feb50076b50 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.319+0000 7feb61510640 1 --2- 192.168.123.107:0/2634477429 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7feb3803cd00 0x7feb3803f1c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.319+0000 7feb61510640 1 --2- 192.168.123.107:0/2634477429 
>> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7feb3803cd00 0x7feb3803f1c0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7feb4c0099c0 tx=0x7feb4c006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.322+0000 7feb4affd640 1 -- 192.168.123.107:0/2634477429 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7feb5000a510 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.406+0000 7feb62d13640 1 -- 192.168.123.107:0/2634477429 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm07", "addr": "192.168.123.107", "target": ["mon-mgr", ""]}) v1 -- 0x7feb5c1009f0 con 0x7feb3803cd00 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:35.957+0000 7feb4affd640 1 -- 192.168.123.107:0/2634477429 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7feb5000ace0 con 0x7feb5c105530 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.719+0000 7feb4affd640 1 -- 192.168.123.107:0/2634477429 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+46 (secure 0 0 0) 0x7feb5c1009f0 con 0x7feb3803cd00 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.721+0000 7feb62d13640 1 -- 192.168.123.107:0/2634477429 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7feb3803cd00 msgr2=0x7feb3803f1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:36.755 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.721+0000 7feb62d13640 1 --2- 192.168.123.107:0/2634477429 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7feb3803cd00 0x7feb3803f1c0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7feb4c0099c0 tx=0x7feb4c006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.721+0000 7feb62d13640 1 -- 192.168.123.107:0/2634477429 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 msgr2=0x7feb5c199ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.721+0000 7feb62d13640 1 --2- 192.168.123.107:0/2634477429 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c199ac0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7feb5000bc10 tx=0x7feb50031f20 comp rx=0 tx=0).stop 2026-03-09T20:42:36.755 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.722+0000 7feb62d13640 1 -- 192.168.123.107:0/2634477429 shutdown_connections 2026-03-09T20:42:36.756 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.722+0000 7feb62d13640 1 --2- 192.168.123.107:0/2634477429 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7feb3803cd00 0x7feb3803f1c0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:36.756 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.722+0000 7feb62d13640 1 --2- 192.168.123.107:0/2634477429 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feb5c105530 0x7feb5c199ac0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:36.756 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.722+0000 7feb62d13640 1 -- 192.168.123.107:0/2634477429 >> 192.168.123.107:0/2634477429 conn(0x7feb5c0ff7c0 msgr2=0x7feb5c1002e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:36.756 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.722+0000 7feb62d13640 1 -- 192.168.123.107:0/2634477429 shutdown_connections 2026-03-09T20:42:36.756 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.722+0000 7feb62d13640 1 -- 192.168.123.107:0/2634477429 wait complete. 2026-03-09T20:42:36.756 INFO:teuthology.orchestra.run.vm07.stdout:Deploying mon service with default placement... 2026-03-09T20:42:37.029 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-09T20:42:37.030 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.875+0000 7fe2671d8640 1 Processor -- start 2026-03-09T20:42:37.030 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.875+0000 7fe2671d8640 1 -- start start 2026-03-09T20:42:37.030 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.876+0000 7fe2671d8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26007a310 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.030 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.876+0000 7fe2671d8640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe26007a850 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.876+0000 7fe264f4d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26007a310 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto 
rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.876+0000 7fe264f4d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26007a310 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38820/0 (socket says 192.168.123.107:38820) 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.876+0000 7fe264f4d640 1 -- 192.168.123.107:0/3167926175 learned_addr learned my addr 192.168.123.107:0/3167926175 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.877+0000 7fe264f4d640 1 -- 192.168.123.107:0/3167926175 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe26007a990 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.877+0000 7fe264f4d640 1 --2- 192.168.123.107:0/3167926175 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26007a310 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fe24c009b80 tx=0x7fe24c02f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=a9d1ebec026ab19a server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.877+0000 7fe25f7fe640 1 -- 192.168.123.107:0/3167926175 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe24c02fa10 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.877+0000 7fe25f7fe640 1 -- 192.168.123.107:0/3167926175 <== mon.0 
v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fe24c037440 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.877+0000 7fe25f7fe640 1 -- 192.168.123.107:0/3167926175 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe24c035540 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.878+0000 7fe2671d8640 1 -- 192.168.123.107:0/3167926175 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 msgr2=0x7fe26007a310 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.878+0000 7fe2671d8640 1 --2- 192.168.123.107:0/3167926175 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26007a310 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fe24c009b80 tx=0x7fe24c02f190 comp rx=0 tx=0).stop 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.878+0000 7fe2671d8640 1 -- 192.168.123.107:0/3167926175 shutdown_connections 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.878+0000 7fe2671d8640 1 --2- 192.168.123.107:0/3167926175 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26007a310 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.878+0000 7fe2671d8640 1 -- 192.168.123.107:0/3167926175 >> 192.168.123.107:0/3167926175 conn(0x7fe2601018b0 msgr2=0x7fe260103cd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:36.878+0000 7fe2671d8640 1 -- 192.168.123.107:0/3167926175 shutdown_connections 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.878+0000 7fe2671d8640 1 -- 192.168.123.107:0/3167926175 wait complete. 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.879+0000 7fe2671d8640 1 Processor -- start 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.879+0000 7fe2671d8640 1 -- start start 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.879+0000 7fe2671d8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26019e0b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.879+0000 7fe2671d8640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe26019e5f0 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.879+0000 7fe264f4d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26019e0b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.879+0000 7fe264f4d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26019e0b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38832/0 (socket says 192.168.123.107:38832) 2026-03-09T20:42:37.031 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.879+0000 7fe264f4d640 1 -- 192.168.123.107:0/2423506016 learned_addr learned my addr 192.168.123.107:0/2423506016 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.880+0000 7fe264f4d640 1 -- 192.168.123.107:0/2423506016 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe24c0095d0 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.880+0000 7fe264f4d640 1 --2- 192.168.123.107:0/2423506016 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26019e0b0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fe24c037870 tx=0x7fe24c0378a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.880+0000 7fe25dffb640 1 -- 192.168.123.107:0/2423506016 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe24c037b50 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.880+0000 7fe2671d8640 1 -- 192.168.123.107:0/2423506016 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe26019e7f0 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.880+0000 7fe25dffb640 1 -- 192.168.123.107:0/2423506016 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7fe24c035c80 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.880+0000 7fe25dffb640 1 -- 
192.168.123.107:0/2423506016 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe24c041de0 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.880+0000 7fe2671d8640 1 -- 192.168.123.107:0/2423506016 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe26019ec90 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.881+0000 7fe2671d8640 1 -- 192.168.123.107:0/2423506016 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe228005350 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.882+0000 7fe25dffb640 1 -- 192.168.123.107:0/2423506016 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7fe24c03e030 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.882+0000 7fe25dffb640 1 --2- 192.168.123.107:0/2423506016 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe23803d190 0x7fe23803f650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.882+0000 7fe25dffb640 1 -- 192.168.123.107:0/2423506016 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fe24c075760 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.885+0000 7fe25ffff640 1 --2- 192.168.123.107:0/2423506016 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe23803d190 0x7fe23803f650 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.885+0000 7fe25ffff640 1 --2- 192.168.123.107:0/2423506016 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe23803d190 0x7fe23803f650 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fe2500099c0 tx=0x7fe250006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.885+0000 7fe25dffb640 1 -- 192.168.123.107:0/2423506016 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fe24c048ad0 con 0x7fe26007be10 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.980+0000 7fe2671d8640 1 -- 192.168.123.107:0/2423506016 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}) v1 -- 0x7fe228002bf0 con 0x7fe23803d190 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.988+0000 7fe25dffb640 1 -- 192.168.123.107:0/2423506016 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+24 (secure 0 0 0) 0x7fe228002bf0 con 0x7fe23803d190 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.990+0000 7fe2671d8640 1 -- 192.168.123.107:0/2423506016 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe23803d190 msgr2=0x7fe23803f650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.031 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.990+0000 7fe2671d8640 1 --2- 192.168.123.107:0/2423506016 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe23803d190 0x7fe23803f650 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fe2500099c0 tx=0x7fe250006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.990+0000 7fe2671d8640 1 -- 192.168.123.107:0/2423506016 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 msgr2=0x7fe26019e0b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.990+0000 7fe2671d8640 1 --2- 192.168.123.107:0/2423506016 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26019e0b0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fe24c037870 tx=0x7fe24c0378a0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.991+0000 7fe2671d8640 1 -- 192.168.123.107:0/2423506016 shutdown_connections 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.991+0000 7fe2671d8640 1 --2- 192.168.123.107:0/2423506016 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fe23803d190 0x7fe23803f650 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.991+0000 7fe2671d8640 1 --2- 192.168.123.107:0/2423506016 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe26007be10 0x7fe26019e0b0 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.031 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.991+0000 7fe2671d8640 1 -- 192.168.123.107:0/2423506016 >> 192.168.123.107:0/2423506016 conn(0x7fe2601018b0 msgr2=0x7fe260102340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:37.031 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.991+0000 7fe2671d8640 1 -- 192.168.123.107:0/2423506016 shutdown_connections 2026-03-09T20:42:37.032 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:36.991+0000 7fe2671d8640 1 -- 192.168.123.107:0/2423506016 wait complete. 2026-03-09T20:42:37.032 INFO:teuthology.orchestra.run.vm07.stdout:Deploying mgr service with default placement... 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.144+0000 7f0d92c3b640 1 Processor -- start 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.144+0000 7f0d92c3b640 1 -- start start 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.144+0000 7f0d92c3b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c071440 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.144+0000 7f0d92c3b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0d8c071a10 con 0x7f0d8c072f30 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.144+0000 7f0d91c39640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c071440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto 
rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.144+0000 7f0d91c39640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c071440 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38846/0 (socket says 192.168.123.107:38846) 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.144+0000 7f0d91c39640 1 -- 192.168.123.107:0/509416143 learned_addr learned my addr 192.168.123.107:0/509416143 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.144+0000 7f0d91c39640 1 -- 192.168.123.107:0/509416143 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0d8c071b50 con 0x7f0d8c072f30 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.146+0000 7f0d91c39640 1 --2- 192.168.123.107:0/509416143 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c071440 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f0d7c009b80 tx=0x7f0d7c02f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=3aa7530213d8c40a server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.146+0000 7f0d90c37640 1 -- 192.168.123.107:0/509416143 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0d7c02fa10 con 0x7f0d8c072f30 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.146+0000 7f0d90c37640 1 -- 192.168.123.107:0/509416143 <== mon.0 
v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f0d7c037440 con 0x7f0d8c072f30 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.146+0000 7f0d90c37640 1 -- 192.168.123.107:0/509416143 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0d7c035540 con 0x7f0d8c072f30 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.147+0000 7f0d92c3b640 1 -- 192.168.123.107:0/509416143 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 msgr2=0x7f0d8c071440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.147+0000 7f0d92c3b640 1 --2- 192.168.123.107:0/509416143 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c071440 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f0d7c009b80 tx=0x7f0d7c02f190 comp rx=0 tx=0).stop 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.147+0000 7f0d92c3b640 1 -- 192.168.123.107:0/509416143 shutdown_connections 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.147+0000 7f0d92c3b640 1 --2- 192.168.123.107:0/509416143 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c071440 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.147+0000 7f0d92c3b640 1 -- 192.168.123.107:0/509416143 >> 192.168.123.107:0/509416143 conn(0x7f0d8c06d060 msgr2=0x7f0d8c06f480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.147+0000 
7f0d92c3b640 1 -- 192.168.123.107:0/509416143 shutdown_connections 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.147+0000 7f0d92c3b640 1 -- 192.168.123.107:0/509416143 wait complete. 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.148+0000 7f0d92c3b640 1 Processor -- start 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.148+0000 7f0d92c3b640 1 -- start start 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.148+0000 7f0d92c3b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c11b610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.148+0000 7f0d92c3b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0d8c11ce50 con 0x7f0d8c072f30 2026-03-09T20:42:37.303 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.148+0000 7f0d91c39640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c11b610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.148+0000 7f0d91c39640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c11b610 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38850/0 (socket says 192.168.123.107:38850) 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:37.148+0000 7f0d91c39640 1 -- 192.168.123.107:0/2233376418 learned_addr learned my addr 192.168.123.107:0/2233376418 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.148+0000 7f0d91c39640 1 -- 192.168.123.107:0/2233376418 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0d7c0095d0 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.150+0000 7f0d91c39640 1 --2- 192.168.123.107:0/2233376418 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c11b610 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f0d7c002410 tx=0x7f0d7c035da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.151+0000 7f0d7affd640 1 -- 192.168.123.107:0/2233376418 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0d7c035e60 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.151+0000 7f0d92c3b640 1 -- 192.168.123.107:0/2233376418 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0d8c11d050 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.151+0000 7f0d92c3b640 1 -- 192.168.123.107:0/2233376418 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0d8c11bda0 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.151+0000 7f0d7affd640 1 -- 192.168.123.107:0/2233376418 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 
keys) v1 ==== 1041+0+0 (secure 0 0 0) 0x7f0d7c02fe90 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.151+0000 7f0d7affd640 1 -- 192.168.123.107:0/2233376418 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0d7c040e30 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.153+0000 7f0d7affd640 1 -- 192.168.123.107:0/2233376418 <== mon.0 v2:192.168.123.107:3300/0 4 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0d7c04a920 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.153+0000 7f0d7affd640 1 -- 192.168.123.107:0/2233376418 <== mon.0 v2:192.168.123.107:3300/0 5 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f0d7c049b00 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.153+0000 7f0d7affd640 1 --2- 192.168.123.107:0/2233376418 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f0d7403d2f0 0x7f0d7403f7b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.153+0000 7f0d7affd640 1 -- 192.168.123.107:0/2233376418 <== mon.0 v2:192.168.123.107:3300/0 6 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f0d7c078180 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.154+0000 7f0d91438640 1 --2- 192.168.123.107:0/2233376418 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f0d7403d2f0 0x7f0d7403f7b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.155+0000 7f0d92c3b640 1 -- 192.168.123.107:0/2233376418 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0d54005350 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.157+0000 7f0d91438640 1 --2- 192.168.123.107:0/2233376418 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f0d7403d2f0 0x7f0d7403f7b0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f0d80009a10 tx=0x7f0d80006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.160+0000 7f0d7affd640 1 -- 192.168.123.107:0/2233376418 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f0d7c03c030 con 0x7f0d8c072f30 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.262+0000 7f0d92c3b640 1 -- 192.168.123.107:0/2233376418 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}) v1 -- 0x7f0d54002bf0 con 0x7f0d7403d2f0 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.269+0000 7f0d7affd640 1 -- 192.168.123.107:0/2233376418 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+24 (secure 0 0 0) 0x7f0d54002bf0 con 0x7f0d7403d2f0 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.271+0000 7f0d92c3b640 1 -- 192.168.123.107:0/2233376418 >> 
[v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f0d7403d2f0 msgr2=0x7f0d7403f7b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.271+0000 7f0d92c3b640 1 --2- 192.168.123.107:0/2233376418 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f0d7403d2f0 0x7f0d7403f7b0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f0d80009a10 tx=0x7f0d80006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.271+0000 7f0d92c3b640 1 -- 192.168.123.107:0/2233376418 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 msgr2=0x7f0d8c11b610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.271+0000 7f0d92c3b640 1 --2- 192.168.123.107:0/2233376418 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0d8c072f30 0x7f0d8c11b610 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f0d7c002410 tx=0x7f0d7c035da0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.271+0000 7f0d92c3b640 1 -- 192.168.123.107:0/2233376418 shutdown_connections 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.271+0000 7f0d92c3b640 1 --2- 192.168.123.107:0/2233376418 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f0d7403d2f0 0x7f0d7403f7b0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.271+0000 7f0d92c3b640 1 --2- 192.168.123.107:0/2233376418 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7f0d8c072f30 0x7f0d8c11b610 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.271+0000 7f0d92c3b640 1 -- 192.168.123.107:0/2233376418 >> 192.168.123.107:0/2233376418 conn(0x7f0d8c06d060 msgr2=0x7f0d8c112a50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.271+0000 7f0d92c3b640 1 -- 192.168.123.107:0/2233376418 shutdown_connections 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.271+0000 7f0d92c3b640 1 -- 192.168.123.107:0/2233376418 wait complete. 2026-03-09T20:42:37.304 INFO:teuthology.orchestra.run.vm07.stdout:Deploying crash service with default placement... 2026-03-09T20:42:37.530 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:37 vm07 ceph-mon[49120]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "addr": "192.168.123.107", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:37.530 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:37 vm07 ceph-mon[49120]: Deploying cephadm binary to vm07 2026-03-09T20:42:37.530 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:37 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:37.530 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:37 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:42:37.530 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:37 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:37.530 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:37 vm07 ceph-mon[49120]: from='mgr.14118 
192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:37.530 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:37 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled crash update... 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.418+0000 7f1f12b2d640 1 Processor -- start 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.418+0000 7f1f12b2d640 1 -- start start 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.419+0000 7f1f12b2d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c071c20 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.419+0000 7f1f12b2d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1f0c0721f0 con 0x7f1f0c071820 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.419+0000 7f1f11b2b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c071c20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.419+0000 7f1f11b2b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c071c20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38858/0 (socket says 
192.168.123.107:38858) 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.419+0000 7f1f11b2b640 1 -- 192.168.123.107:0/2070219715 learned_addr learned my addr 192.168.123.107:0/2070219715 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.419+0000 7f1f11b2b640 1 -- 192.168.123.107:0/2070219715 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1f0c072330 con 0x7f1f0c071820 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.420+0000 7f1f11b2b640 1 --2- 192.168.123.107:0/2070219715 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c071c20 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f1f080089a0 tx=0x7f1f08031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c934746f00c1d3f6 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.420+0000 7f1f10b29640 1 -- 192.168.123.107:0/2070219715 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1f08031e50 con 0x7f1f0c071820 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.420+0000 7f1f10b29640 1 -- 192.168.123.107:0/2070219715 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1f08035070 con 0x7f1f0c071820 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.420+0000 7f1f10b29640 1 -- 192.168.123.107:0/2070219715 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1f0803b940 con 0x7f1f0c071820 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:37.422+0000 7f1f12b2d640 1 -- 192.168.123.107:0/2070219715 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 msgr2=0x7f1f0c071c20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.574 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.422+0000 7f1f12b2d640 1 --2- 192.168.123.107:0/2070219715 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c071c20 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f1f080089a0 tx=0x7f1f08031440 comp rx=0 tx=0).stop 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.422+0000 7f1f12b2d640 1 -- 192.168.123.107:0/2070219715 shutdown_connections 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.422+0000 7f1f12b2d640 1 --2- 192.168.123.107:0/2070219715 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c071c20 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.422+0000 7f1f12b2d640 1 -- 192.168.123.107:0/2070219715 >> 192.168.123.107:0/2070219715 conn(0x7f1f0c06d060 msgr2=0x7f1f0c06f480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.422+0000 7f1f12b2d640 1 -- 192.168.123.107:0/2070219715 shutdown_connections 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.422+0000 7f1f12b2d640 1 -- 192.168.123.107:0/2070219715 wait complete. 
2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.423+0000 7f1f12b2d640 1 Processor -- start 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.424+0000 7f1f12b2d640 1 -- start start 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.424+0000 7f1f12b2d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c1a23e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.424+0000 7f1f12b2d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1f0c1a2920 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.424+0000 7f1f11b2b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c1a23e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.424+0000 7f1f11b2b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c1a23e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38874/0 (socket says 192.168.123.107:38874) 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.424+0000 7f1f11b2b640 1 -- 192.168.123.107:0/1026169931 learned_addr learned my addr 192.168.123.107:0/1026169931 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:37.575 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.424+0000 7f1f11b2b640 1 -- 192.168.123.107:0/1026169931 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1f08008650 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.425+0000 7f1f11b2b640 1 --2- 192.168.123.107:0/1026169931 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c1a23e0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f1f0803c6b0 tx=0x7f1f0803c6e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.425+0000 7f1efaffd640 1 -- 192.168.123.107:0/1026169931 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1f0803bec0 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.425+0000 7f1f12b2d640 1 -- 192.168.123.107:0/1026169931 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1f0c1a2b20 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.425+0000 7f1f12b2d640 1 -- 192.168.123.107:0/1026169931 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1f0c1a2fc0 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.425+0000 7f1f12b2d640 1 -- 192.168.123.107:0/1026169931 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1ed4005350 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:37.427+0000 7f1efaffd640 1 -- 192.168.123.107:0/1026169931 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1f08047470 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.427+0000 7f1efaffd640 1 -- 192.168.123.107:0/1026169931 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1f08046550 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.429+0000 7f1efaffd640 1 -- 192.168.123.107:0/1026169931 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f1f080466b0 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.429+0000 7f1efaffd640 1 --2- 192.168.123.107:0/1026169931 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1edc03d230 0x7f1edc03f6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.429+0000 7f1efaffd640 1 -- 192.168.123.107:0/1026169931 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f1f08037070 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.429+0000 7f1efaffd640 1 -- 192.168.123.107:0/1026169931 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f1f08079850 con 0x7f1f0c071820 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.430+0000 7f1f1132a640 1 --2- 192.168.123.107:0/1026169931 >> 
[v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1edc03d230 0x7f1edc03f6f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.432+0000 7f1f1132a640 1 --2- 192.168.123.107:0/1026169931 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1edc03d230 0x7f1edc03f6f0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f1f0400ad30 tx=0x7f1f040093f0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.529+0000 7f1f12b2d640 1 -- 192.168.123.107:0/1026169931 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}) v1 -- 0x7f1ed4002bf0 con 0x7f1edc03d230 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.537+0000 7f1efaffd640 1 -- 192.168.123.107:0/1026169931 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+26 (secure 0 0 0) 0x7f1ed4002bf0 con 0x7f1edc03d230 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.540+0000 7f1f12b2d640 1 -- 192.168.123.107:0/1026169931 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1edc03d230 msgr2=0x7f1edc03f6f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.540+0000 7f1f12b2d640 1 --2- 192.168.123.107:0/1026169931 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1edc03d230 0x7f1edc03f6f0 secure 
:-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f1f0400ad30 tx=0x7f1f040093f0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.540+0000 7f1f12b2d640 1 -- 192.168.123.107:0/1026169931 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 msgr2=0x7f1f0c1a23e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.540+0000 7f1f12b2d640 1 --2- 192.168.123.107:0/1026169931 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c1a23e0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f1f0803c6b0 tx=0x7f1f0803c6e0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.540+0000 7f1f12b2d640 1 -- 192.168.123.107:0/1026169931 shutdown_connections 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.540+0000 7f1f12b2d640 1 --2- 192.168.123.107:0/1026169931 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1edc03d230 0x7f1edc03f6f0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.575 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.540+0000 7f1f12b2d640 1 --2- 192.168.123.107:0/1026169931 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1f0c071820 0x7f1f0c1a23e0 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.576 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.540+0000 7f1f12b2d640 1 -- 192.168.123.107:0/1026169931 >> 192.168.123.107:0/1026169931 conn(0x7f1f0c06d060 msgr2=0x7f1f0c06eb90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:37.576 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.540+0000 7f1f12b2d640 1 -- 192.168.123.107:0/1026169931 shutdown_connections 2026-03-09T20:42:37.576 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.540+0000 7f1f12b2d640 1 -- 192.168.123.107:0/1026169931 wait complete. 2026-03-09T20:42:37.576 INFO:teuthology.orchestra.run.vm07.stdout:Deploying ceph-exporter service with default placement... 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled ceph-exporter update... 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.697+0000 7f2de9689640 1 Processor -- start 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.697+0000 7f2de9689640 1 -- start start 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.697+0000 7f2de9689640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4072f30 0x7f2de40713e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.697+0000 7f2de9689640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2de40719b0 con 0x7f2de4072f30 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.698+0000 7f2de3fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4072f30 0x7f2de40713e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.698+0000 7f2de3fff640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4072f30 0x7f2de40713e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35374/0 (socket says 192.168.123.107:35374) 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.698+0000 7f2de3fff640 1 -- 192.168.123.107:0/635306913 learned_addr learned my addr 192.168.123.107:0/635306913 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.698+0000 7f2de3fff640 1 -- 192.168.123.107:0/635306913 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2de4071af0 con 0x7f2de4072f30 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.698+0000 7f2de3fff640 1 --2- 192.168.123.107:0/635306913 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4072f30 0x7f2de40713e0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f2ddc00cbd0 tx=0x7f2ddc0317f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b57cfdaeebf4a8f0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de2ffd640 1 -- 192.168.123.107:0/635306913 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2ddc037480 con 0x7f2de4072f30 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de2ffd640 1 -- 192.168.123.107:0/635306913 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f2ddc034030 con 0x7f2de4072f30 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 
7f2de9689640 1 -- 192.168.123.107:0/635306913 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4072f30 msgr2=0x7f2de40713e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de9689640 1 --2- 192.168.123.107:0/635306913 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4072f30 0x7f2de40713e0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f2ddc00cbd0 tx=0x7f2ddc0317f0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de9689640 1 -- 192.168.123.107:0/635306913 shutdown_connections 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de9689640 1 --2- 192.168.123.107:0/635306913 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4072f30 0x7f2de40713e0 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de9689640 1 -- 192.168.123.107:0/635306913 >> 192.168.123.107:0/635306913 conn(0x7f2de406d080 msgr2=0x7f2de406f4a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de9689640 1 -- 192.168.123.107:0/635306913 shutdown_connections 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de9689640 1 -- 192.168.123.107:0/635306913 wait complete. 
2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de9689640 1 Processor -- start 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de9689640 1 -- start start 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de9689640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4085290 0x7f2de4088790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de9689640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2ddc03a970 con 0x7f2de4085290 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de3fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4085290 0x7f2de4088790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de3fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4085290 0x7f2de4088790 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35384/0 (socket says 192.168.123.107:35384) 2026-03-09T20:42:37.855 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.700+0000 7f2de3fff640 1 -- 192.168.123.107:0/4057990423 learned_addr learned my addr 192.168.123.107:0/4057990423 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:37.855 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.701+0000 7f2de3fff640 1 -- 192.168.123.107:0/4057990423 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2ddc00c880 con 0x7f2de4085290 2026-03-09T20:42:37.856 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.702+0000 7f2de3fff640 1 --2- 192.168.123.107:0/4057990423 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4085290 0x7f2de4088790 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f2ddc007270 tx=0x7f2ddc009c10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.856 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.702+0000 7f2de17fa640 1 -- 192.168.123.107:0/4057990423 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2ddc00c360 con 0x7f2de4085290 2026-03-09T20:42:37.856 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.702+0000 7f2de17fa640 1 -- 192.168.123.107:0/4057990423 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f2ddc034070 con 0x7f2de4085290 2026-03-09T20:42:37.856 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.702+0000 7f2de9689640 1 -- 192.168.123.107:0/4057990423 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2de4088cd0 con 0x7f2de4085290 2026-03-09T20:42:37.856 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.702+0000 7f2de17fa640 1 -- 192.168.123.107:0/4057990423 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2ddc043630 con 0x7f2de4085290 2026-03-09T20:42:37.856 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.702+0000 7f2de9689640 1 
-- 192.168.123.107:0/4057990423 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2de4085900 con 0x7f2de4085290 2026-03-09T20:42:37.856 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.703+0000 7f2dc2ffd640 1 -- 192.168.123.107:0/4057990423 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2de407bed0 con 0x7f2de4085290 2026-03-09T20:42:37.856 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.703+0000 7f2de17fa640 1 -- 192.168.123.107:0/4057990423 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f2ddc04a020 con 0x7f2de4085290 2026-03-09T20:42:37.856 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.703+0000 7f2de17fa640 1 --2- 192.168.123.107:0/4057990423 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2dc403d1e0 0x7f2dc403f6a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.703+0000 7f2de17fa640 1 -- 192.168.123.107:0/4057990423 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f2ddc076100 con 0x7f2de4085290 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.705+0000 7f2de37fe640 1 --2- 192.168.123.107:0/4057990423 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2dc403d1e0 0x7f2dc403f6a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.706+0000 7f2de17fa640 1 -- 192.168.123.107:0/4057990423 
<== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f2ddc0aa820 con 0x7f2de4085290 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.707+0000 7f2de37fe640 1 --2- 192.168.123.107:0/4057990423 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2dc403d1e0 0x7f2dc403f6a0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f2dd40099c0 tx=0x7f2dd4006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.799+0000 7f2dc2ffd640 1 -- 192.168.123.107:0/4057990423 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}) v1 -- 0x7f2de4072f30 con 0x7f2dc403d1e0 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.806+0000 7f2de17fa640 1 -- 192.168.123.107:0/4057990423 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+34 (secure 0 0 0) 0x7f2de4072f30 con 0x7f2dc403d1e0 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.808+0000 7f2dc2ffd640 1 -- 192.168.123.107:0/4057990423 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2dc403d1e0 msgr2=0x7f2dc403f6a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.808+0000 7f2dc2ffd640 1 --2- 192.168.123.107:0/4057990423 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2dc403d1e0 0x7f2dc403f6a0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f2dd40099c0 
tx=0x7f2dd4006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.808+0000 7f2dc2ffd640 1 -- 192.168.123.107:0/4057990423 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4085290 msgr2=0x7f2de4088790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.808+0000 7f2dc2ffd640 1 --2- 192.168.123.107:0/4057990423 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4085290 0x7f2de4088790 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f2ddc007270 tx=0x7f2ddc009c10 comp rx=0 tx=0).stop 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.808+0000 7f2dc2ffd640 1 -- 192.168.123.107:0/4057990423 shutdown_connections 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.808+0000 7f2dc2ffd640 1 --2- 192.168.123.107:0/4057990423 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2dc403d1e0 0x7f2dc403f6a0 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.808+0000 7f2dc2ffd640 1 --2- 192.168.123.107:0/4057990423 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2de4085290 0x7f2de4088790 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.809+0000 7f2dc2ffd640 1 -- 192.168.123.107:0/4057990423 >> 192.168.123.107:0/4057990423 conn(0x7f2de406d080 msgr2=0x7f2de407c0e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.809+0000 
7f2dc2ffd640 1 -- 192.168.123.107:0/4057990423 shutdown_connections 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.809+0000 7f2dc2ffd640 1 -- 192.168.123.107:0/4057990423 wait complete. 2026-03-09T20:42:37.857 INFO:teuthology.orchestra.run.vm07.stdout:Deploying prometheus service with default placement... 2026-03-09T20:42:38.154 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled prometheus update... 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.994+0000 7f2c2da01640 1 Processor -- start 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.994+0000 7f2c2da01640 1 -- start start 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.994+0000 7f2c2da01640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c28071c80 0x7f2c28072080 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.994+0000 7f2c2da01640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2c280725c0 con 0x7f2c28071c80 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.996+0000 7f2c26ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c28071c80 0x7f2c28072080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.996+0000 7f2c26ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c28071c80 0x7f2c28072080 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35398/0 (socket says 192.168.123.107:35398) 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.996+0000 7f2c26ffd640 1 -- 192.168.123.107:0/764979876 learned_addr learned my addr 192.168.123.107:0/764979876 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.996+0000 7f2c26ffd640 1 -- 192.168.123.107:0/764979876 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2c28072700 con 0x7f2c28071c80 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.997+0000 7f2c26ffd640 1 --2- 192.168.123.107:0/764979876 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c28071c80 0x7f2c28072080 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f2c180089a0 tx=0x7f2c18031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=f38e8d4f59885372 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.997+0000 7f2c25ffb640 1 -- 192.168.123.107:0/764979876 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2c18031e50 con 0x7f2c28071c80 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.997+0000 7f2c25ffb640 1 -- 192.168.123.107:0/764979876 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f2c18035070 con 0x7f2c28071c80 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.998+0000 7f2c2da01640 1 -- 192.168.123.107:0/764979876 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c28071c80 msgr2=0x7f2c28072080 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.998+0000 7f2c2da01640 1 --2- 192.168.123.107:0/764979876 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c28071c80 0x7f2c28072080 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f2c180089a0 tx=0x7f2c18031440 comp rx=0 tx=0).stop 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.998+0000 7f2c2da01640 1 -- 192.168.123.107:0/764979876 shutdown_connections 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.998+0000 7f2c2da01640 1 --2- 192.168.123.107:0/764979876 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c28071c80 0x7f2c28072080 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.998+0000 7f2c2da01640 1 -- 192.168.123.107:0/764979876 >> 192.168.123.107:0/764979876 conn(0x7f2c2806d2a0 msgr2=0x7f2c2806f6e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.998+0000 7f2c2da01640 1 -- 192.168.123.107:0/764979876 shutdown_connections 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.998+0000 7f2c2da01640 1 -- 192.168.123.107:0/764979876 wait complete. 
2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.999+0000 7f2c2da01640 1 Processor -- start 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.999+0000 7f2c2da01640 1 -- start start 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.999+0000 7f2c2da01640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c281a26d0 0x7f2c281a2af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.999+0000 7f2c2da01640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2c1803b7a0 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.999+0000 7f2c26ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c281a26d0 0x7f2c281a2af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.999+0000 7f2c26ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c281a26d0 0x7f2c281a2af0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35402/0 (socket says 192.168.123.107:35402) 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:37.999+0000 7f2c26ffd640 1 -- 192.168.123.107:0/2408710745 learned_addr learned my addr 192.168.123.107:0/2408710745 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:38.156 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.000+0000 7f2c26ffd640 1 -- 192.168.123.107:0/2408710745 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2c18008650 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.000+0000 7f2c26ffd640 1 --2- 192.168.123.107:0/2408710745 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c281a26d0 0x7f2c281a2af0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f2c180319f0 tx=0x7f2c18008c70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.001+0000 7f2c2c9ff640 1 -- 192.168.123.107:0/2408710745 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2c1803bed0 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.001+0000 7f2c2da01640 1 -- 192.168.123.107:0/2408710745 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2c281a3030 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.001+0000 7f2c2da01640 1 -- 192.168.123.107:0/2408710745 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2c281a5bd0 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.001+0000 7f2c2c9ff640 1 -- 192.168.123.107:0/2408710745 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f2c18035040 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.001+0000 7f2c2c9ff640 1 
-- 192.168.123.107:0/2408710745 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2c1800b9e0 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.001+0000 7f2c2da01640 1 -- 192.168.123.107:0/2408710745 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2c28071c80 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.002+0000 7f2c2c9ff640 1 -- 192.168.123.107:0/2408710745 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f2c18057400 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.002+0000 7f2c2c9ff640 1 --2- 192.168.123.107:0/2408710745 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2bfc03d230 0x7f2bfc03f6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.002+0000 7f2c2c9ff640 1 -- 192.168.123.107:0/2408710745 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f2c18037070 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.002+0000 7f2c267fc640 1 --2- 192.168.123.107:0/2408710745 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2bfc03d230 0x7f2bfc03f6f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.005+0000 7f2c2c9ff640 1 -- 192.168.123.107:0/2408710745 <== 
mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f2c180576e0 con 0x7f2c281a26d0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.005+0000 7f2c267fc640 1 --2- 192.168.123.107:0/2408710745 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2bfc03d230 0x7f2bfc03f6f0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f2c1c0099c0 tx=0x7f2c1c006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.108+0000 7f2c2da01640 1 -- 192.168.123.107:0/2408710745 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}) v1 -- 0x7f2c2806f270 con 0x7f2bfc03d230 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.115+0000 7f2c2c9ff640 1 -- 192.168.123.107:0/2408710745 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+31 (secure 0 0 0) 0x7f2c2806f270 con 0x7f2bfc03d230 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.117+0000 7f2c2da01640 1 -- 192.168.123.107:0/2408710745 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2bfc03d230 msgr2=0x7f2bfc03f6f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.117+0000 7f2c2da01640 1 --2- 192.168.123.107:0/2408710745 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2bfc03d230 0x7f2bfc03f6f0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f2c1c0099c0 
tx=0x7f2c1c006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.117+0000 7f2c2da01640 1 -- 192.168.123.107:0/2408710745 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c281a26d0 msgr2=0x7f2c281a2af0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.117+0000 7f2c2da01640 1 --2- 192.168.123.107:0/2408710745 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c281a26d0 0x7f2c281a2af0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f2c180319f0 tx=0x7f2c18008c70 comp rx=0 tx=0).stop 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.118+0000 7f2c2da01640 1 -- 192.168.123.107:0/2408710745 shutdown_connections 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.118+0000 7f2c2da01640 1 --2- 192.168.123.107:0/2408710745 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f2bfc03d230 0x7f2bfc03f6f0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.118+0000 7f2c2da01640 1 --2- 192.168.123.107:0/2408710745 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2c281a26d0 0x7f2c281a2af0 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.118+0000 7f2c2da01640 1 -- 192.168.123.107:0/2408710745 >> 192.168.123.107:0/2408710745 conn(0x7f2c2806d2a0 msgr2=0x7f2c2806eb60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:38.156 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.118+0000 
7f2c2da01640 1 -- 192.168.123.107:0/2408710745 shutdown_connections 2026-03-09T20:42:38.157 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.118+0000 7f2c2da01640 1 -- 192.168.123.107:0/2408710745 wait complete. 2026-03-09T20:42:38.157 INFO:teuthology.orchestra.run.vm07.stdout:Deploying grafana service with default placement... 2026-03-09T20:42:38.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:38 vm07 ceph-mon[49120]: Added host vm07 2026-03-09T20:42:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:38 vm07 ceph-mon[49120]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:38 vm07 ceph-mon[49120]: Saving service mon spec with placement count:5 2026-03-09T20:42:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:38 vm07 ceph-mon[49120]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:38 vm07 ceph-mon[49120]: Saving service mgr spec with placement count:2 2026-03-09T20:42:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:38 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:38 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:38 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:38 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:38.384 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:38 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled grafana update... 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.293+0000 7f371bfff640 1 Processor -- start 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.294+0000 7f371bfff640 1 -- start start 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.294+0000 7f371bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c071c20 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.294+0000 7f371bfff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f371c0721f0 con 0x7f371c071820 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.294+0000 7f371affd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c071c20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.294+0000 7f371affd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c071c20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35404/0 (socket says 192.168.123.107:35404) 2026-03-09T20:42:38.444 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.294+0000 7f371affd640 1 -- 192.168.123.107:0/1387173451 learned_addr learned my addr 192.168.123.107:0/1387173451 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.295+0000 7f371affd640 1 -- 192.168.123.107:0/1387173451 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f371c072330 con 0x7f371c071820 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.295+0000 7f371affd640 1 --2- 192.168.123.107:0/1387173451 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c071c20 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f370400d3e0 tx=0x7f37040316e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=2b720069652acadc server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.296+0000 7f371a7fc640 1 -- 192.168.123.107:0/1387173451 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f370403b440 con 0x7f371c071820 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.296+0000 7f371a7fc640 1 -- 192.168.123.107:0/1387173451 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f3704034030 con 0x7f371c071820 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.296+0000 7f371a7fc640 1 -- 192.168.123.107:0/1387173451 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3704039880 con 0x7f371c071820 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.297+0000 7f371bfff640 1 -- 
192.168.123.107:0/1387173451 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 msgr2=0x7f371c071c20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.297+0000 7f371bfff640 1 --2- 192.168.123.107:0/1387173451 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c071c20 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f370400d3e0 tx=0x7f37040316e0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.297+0000 7f371bfff640 1 -- 192.168.123.107:0/1387173451 shutdown_connections 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.297+0000 7f371bfff640 1 --2- 192.168.123.107:0/1387173451 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c071c20 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.297+0000 7f371bfff640 1 -- 192.168.123.107:0/1387173451 >> 192.168.123.107:0/1387173451 conn(0x7f371c06d060 msgr2=0x7f371c06f480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.297+0000 7f371bfff640 1 -- 192.168.123.107:0/1387173451 shutdown_connections 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.297+0000 7f371bfff640 1 -- 192.168.123.107:0/1387173451 wait complete. 
2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.297+0000 7f371bfff640 1 Processor -- start 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.298+0000 7f371bfff640 1 -- start start 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.298+0000 7f371bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c1aada0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.298+0000 7f371bfff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f371c1ab2e0 con 0x7f371c071820 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.298+0000 7f371affd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c1aada0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.298+0000 7f371affd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c1aada0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35410/0 (socket says 192.168.123.107:35410) 2026-03-09T20:42:38.444 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.298+0000 7f371affd640 1 -- 192.168.123.107:0/153743935 learned_addr learned my addr 192.168.123.107:0/153743935 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:38.444 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.298+0000 7f371affd640 1 -- 192.168.123.107:0/153743935 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3704008650 con 0x7f371c071820 2026-03-09T20:42:38.445 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.298+0000 7f371affd640 1 --2- 192.168.123.107:0/153743935 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c1aada0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f3704039f00 tx=0x7f3704039f30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.299+0000 7f3718ff9640 1 -- 192.168.123.107:0/153743935 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f370404f570 con 0x7f371c071820 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.299+0000 7f371bfff640 1 -- 192.168.123.107:0/153743935 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f371c1ab4e0 con 0x7f371c071820 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.299+0000 7f371bfff640 1 -- 192.168.123.107:0/153743935 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f371c1ab980 con 0x7f371c071820 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.299+0000 7f3718ff9640 1 -- 192.168.123.107:0/153743935 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f3704053070 con 0x7f371c071820 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.301+0000 7f3718ff9640 1 -- 
192.168.123.107:0/153743935 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f370403bca0 con 0x7f371c071820 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.301+0000 7f3718ff9640 1 -- 192.168.123.107:0/153743935 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f370405a480 con 0x7f371c071820 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.301+0000 7f3718ff9640 1 --2- 192.168.123.107:0/153743935 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f36f003d280 0x7f36f003f740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.301+0000 7f37125ff640 1 --2- 192.168.123.107:0/153743935 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f36f003d280 0x7f36f003f740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.301+0000 7f3718ff9640 1 -- 192.168.123.107:0/153743935 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f3704055070 con 0x7f371c071820 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.302+0000 7f37125ff640 1 --2- 192.168.123.107:0/153743935 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f36f003d280 0x7f36f003f740 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f37080099c0 tx=0x7f3708006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.447 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.302+0000 7f371bfff640 1 -- 192.168.123.107:0/153743935 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f36e8005350 con 0x7f371c071820 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.306+0000 7f3718ff9640 1 -- 192.168.123.107:0/153743935 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f3704058bb0 con 0x7f371c071820 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.406+0000 7f371bfff640 1 -- 192.168.123.107:0/153743935 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}) v1 -- 0x7f36e8002bf0 con 0x7f36f003d280 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.410+0000 7f3718ff9640 1 -- 192.168.123.107:0/153743935 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+28 (secure 0 0 0) 0x7f36e8002bf0 con 0x7f36f003d280 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.414+0000 7f371bfff640 1 -- 192.168.123.107:0/153743935 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f36f003d280 msgr2=0x7f36f003f740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.414+0000 7f371bfff640 1 --2- 192.168.123.107:0/153743935 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f36f003d280 0x7f36f003f740 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f37080099c0 
tx=0x7f3708006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.414+0000 7f371bfff640 1 -- 192.168.123.107:0/153743935 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 msgr2=0x7f371c1aada0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.414+0000 7f371bfff640 1 --2- 192.168.123.107:0/153743935 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c1aada0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f3704039f00 tx=0x7f3704039f30 comp rx=0 tx=0).stop 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.415+0000 7f371bfff640 1 -- 192.168.123.107:0/153743935 shutdown_connections 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.415+0000 7f371bfff640 1 --2- 192.168.123.107:0/153743935 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f36f003d280 0x7f36f003f740 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.415+0000 7f371bfff640 1 --2- 192.168.123.107:0/153743935 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f371c071820 0x7f371c1aada0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.415+0000 7f371bfff640 1 -- 192.168.123.107:0/153743935 >> 192.168.123.107:0/153743935 conn(0x7f371c06d060 msgr2=0x7f371c06eb00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.415+0000 
7f371bfff640 1 -- 192.168.123.107:0/153743935 shutdown_connections 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.415+0000 7f371bfff640 1 -- 192.168.123.107:0/153743935 wait complete. 2026-03-09T20:42:38.447 INFO:teuthology.orchestra.run.vm07.stdout:Deploying node-exporter service with default placement... 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled node-exporter update... 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.555+0000 7f7ce6d8a640 1 Processor -- start 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.555+0000 7f7ce6d8a640 1 -- start start 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.555+0000 7f7ce6d8a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8095710 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.555+0000 7f7ce5d88640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8095710 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.555+0000 7f7ce5d88640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8095710 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35414/0 (socket says 192.168.123.107:35414) 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:38.555+0000 7f7ce5d88640 1 -- 192.168.123.107:0/1280828808 learned_addr learned my addr 192.168.123.107:0/1280828808 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.555+0000 7f7ce6d8a640 1 -- 192.168.123.107:0/1280828808 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7cd8095ce0 con 0x7f7cd8095310 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.556+0000 7f7ce5d88640 1 -- 192.168.123.107:0/1280828808 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7cd8096510 con 0x7f7cd8095310 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.556+0000 7f7ce5d88640 1 --2- 192.168.123.107:0/1280828808 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8095710 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f7cd0009b80 tx=0x7f7cd002f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=9dcc8ceac93f6a56 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.556+0000 7f7ce4d86640 1 -- 192.168.123.107:0/1280828808 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7cd002fa10 con 0x7f7cd8095310 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.556+0000 7f7ce4d86640 1 -- 192.168.123.107:0/1280828808 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7cd002fb70 con 0x7f7cd8095310 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.556+0000 7f7ce4d86640 1 -- 192.168.123.107:0/1280828808 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map 
magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7cd00355b0 con 0x7f7cd8095310 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.557+0000 7f7ce6d8a640 1 -- 192.168.123.107:0/1280828808 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 msgr2=0x7f7cd8095710 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.557+0000 7f7ce6d8a640 1 --2- 192.168.123.107:0/1280828808 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8095710 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f7cd0009b80 tx=0x7f7cd002f190 comp rx=0 tx=0).stop 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.557+0000 7f7ce6d8a640 1 -- 192.168.123.107:0/1280828808 shutdown_connections 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.557+0000 7f7ce6d8a640 1 --2- 192.168.123.107:0/1280828808 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8095710 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.557+0000 7f7ce6d8a640 1 -- 192.168.123.107:0/1280828808 >> 192.168.123.107:0/1280828808 conn(0x7f7cd8090aa0 msgr2=0x7f7cd8092ee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.557+0000 7f7ce6d8a640 1 -- 192.168.123.107:0/1280828808 shutdown_connections 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.557+0000 7f7ce6d8a640 1 -- 192.168.123.107:0/1280828808 wait complete. 
2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.558+0000 7f7ce6d8a640 1 Processor -- start 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.558+0000 7f7ce6d8a640 1 -- start start 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.558+0000 7f7ce6d8a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8007ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.558+0000 7f7ce6d8a640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7cd80080e0 con 0x7f7cd8095310 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.558+0000 7f7ce5d88640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8007ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.558+0000 7f7ce5d88640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8007ba0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35418/0 (socket says 192.168.123.107:35418) 2026-03-09T20:42:38.708 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.558+0000 7f7ce5d88640 1 -- 192.168.123.107:0/1265871036 learned_addr learned my addr 192.168.123.107:0/1265871036 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:38.709 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.558+0000 7f7ce5d88640 1 -- 192.168.123.107:0/1265871036 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7cd00095d0 con 0x7f7cd8095310 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.558+0000 7f7ce5d88640 1 --2- 192.168.123.107:0/1265871036 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8007ba0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f7cd00357a0 tx=0x7f7cd0035e80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.559+0000 7f7cceffd640 1 -- 192.168.123.107:0/1265871036 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7cd0037500 con 0x7f7cd8095310 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.559+0000 7f7cceffd640 1 -- 192.168.123.107:0/1265871036 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7cd0037b20 con 0x7f7cd8095310 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.559+0000 7f7cceffd640 1 -- 192.168.123.107:0/1265871036 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7cd0041e30 con 0x7f7cd8095310 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.559+0000 7f7ce6d8a640 1 -- 192.168.123.107:0/1265871036 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7cd80062d0 con 0x7f7cd8095310 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.559+0000 7f7ce6d8a640 1 
-- 192.168.123.107:0/1265871036 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7cd8006770 con 0x7f7cd8095310 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.560+0000 7f7ce6d8a640 1 -- 192.168.123.107:0/1265871036 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7ca8005350 con 0x7f7cd8095310 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.562+0000 7f7cceffd640 1 -- 192.168.123.107:0/1265871036 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f7cd003e030 con 0x7f7cd8095310 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.562+0000 7f7cceffd640 1 --2- 192.168.123.107:0/1265871036 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f7cb403cec0 0x7f7cb403f380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.562+0000 7f7cceffd640 1 -- 192.168.123.107:0/1265871036 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f7cd00764c0 con 0x7f7cd8095310 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.565+0000 7f7ce5587640 1 --2- 192.168.123.107:0/1265871036 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f7cb403cec0 0x7f7cb403f380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.565+0000 7f7cceffd640 1 -- 192.168.123.107:0/1265871036 
<== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f7cd003fdd0 con 0x7f7cd8095310 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.565+0000 7f7ce5587640 1 --2- 192.168.123.107:0/1265871036 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f7cb403cec0 0x7f7cb403f380 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f7cd40099c0 tx=0x7f7cd4006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.666+0000 7f7ce6d8a640 1 -- 192.168.123.107:0/1265871036 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}) v1 -- 0x7f7ca8002bf0 con 0x7f7cb403cec0 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.673+0000 7f7cceffd640 1 -- 192.168.123.107:0/1265871036 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+34 (secure 0 0 0) 0x7f7ca8002bf0 con 0x7f7cb403cec0 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.676+0000 7f7cccff9640 1 -- 192.168.123.107:0/1265871036 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f7cb403cec0 msgr2=0x7f7cb403f380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.676+0000 7f7cccff9640 1 --2- 192.168.123.107:0/1265871036 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f7cb403cec0 0x7f7cb403f380 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f7cd40099c0 
tx=0x7f7cd4006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.676+0000 7f7cccff9640 1 -- 192.168.123.107:0/1265871036 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 msgr2=0x7f7cd8007ba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.676+0000 7f7cccff9640 1 --2- 192.168.123.107:0/1265871036 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8007ba0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f7cd00357a0 tx=0x7f7cd0035e80 comp rx=0 tx=0).stop 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.676+0000 7f7cccff9640 1 -- 192.168.123.107:0/1265871036 shutdown_connections 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.676+0000 7f7cccff9640 1 --2- 192.168.123.107:0/1265871036 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f7cb403cec0 0x7f7cb403f380 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.676+0000 7f7cccff9640 1 --2- 192.168.123.107:0/1265871036 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cd8095310 0x7f7cd8007ba0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.676+0000 7f7cccff9640 1 -- 192.168.123.107:0/1265871036 >> 192.168.123.107:0/1265871036 conn(0x7f7cd8090aa0 msgr2=0x7f7cd8091560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.676+0000 
7f7cccff9640 1 -- 192.168.123.107:0/1265871036 shutdown_connections 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.676+0000 7f7cccff9640 1 -- 192.168.123.107:0/1265871036 wait complete. 2026-03-09T20:42:38.709 INFO:teuthology.orchestra.run.vm07.stdout:Deploying alertmanager service with default placement... 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Scheduled alertmanager update... 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.830+0000 7f1e1012a640 1 Processor -- start 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.831+0000 7f1e1012a640 1 -- start start 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.831+0000 7f1e1012a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08104ac0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.831+0000 7f1e1012a640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1e08105090 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.831+0000 7f1e0de9f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08104ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.831+0000 7f1e0de9f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08104ac0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35420/0 (socket says 192.168.123.107:35420) 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.831+0000 7f1e0de9f640 1 -- 192.168.123.107:0/2682881284 learned_addr learned my addr 192.168.123.107:0/2682881284 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.832+0000 7f1e0de9f640 1 -- 192.168.123.107:0/2682881284 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1e08105850 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.832+0000 7f1e0de9f640 1 --2- 192.168.123.107:0/2682881284 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08104ac0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f1dfc009920 tx=0x7f1dfc02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=8f1a54d357071df8 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.833+0000 7f1e0ce9d640 1 -- 192.168.123.107:0/2682881284 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1dfc02f9b0 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.833+0000 7f1e0ce9d640 1 -- 192.168.123.107:0/2682881284 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1dfc037440 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.833+0000 7f1e0ce9d640 1 -- 192.168.123.107:0/2682881284 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 
0x7f1dfc035560 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.833+0000 7f1e1012a640 1 -- 192.168.123.107:0/2682881284 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 msgr2=0x7f1e08104ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.833+0000 7f1e1012a640 1 --2- 192.168.123.107:0/2682881284 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08104ac0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f1dfc009920 tx=0x7f1dfc02ef20 comp rx=0 tx=0).stop 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.833+0000 7f1e1012a640 1 -- 192.168.123.107:0/2682881284 shutdown_connections 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.833+0000 7f1e1012a640 1 --2- 192.168.123.107:0/2682881284 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08104ac0 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.833+0000 7f1e1012a640 1 -- 192.168.123.107:0/2682881284 >> 192.168.123.107:0/2682881284 conn(0x7f1e080ff950 msgr2=0x7f1e08101db0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.834+0000 7f1e1012a640 1 -- 192.168.123.107:0/2682881284 shutdown_connections 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.834+0000 7f1e1012a640 1 -- 192.168.123.107:0/2682881284 wait complete. 
2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.834+0000 7f1e1012a640 1 Processor -- start 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.834+0000 7f1e1012a640 1 -- start start 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.834+0000 7f1e1012a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08101bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.834+0000 7f1e1012a640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1e08102100 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.835+0000 7f1e0de9f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08101bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.835+0000 7f1e0de9f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08101bc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35424/0 (socket says 192.168.123.107:35424) 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.835+0000 7f1e0de9f640 1 -- 192.168.123.107:0/1591346550 learned_addr learned my addr 192.168.123.107:0/1591346550 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:38.967 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.835+0000 7f1e0de9f640 1 -- 192.168.123.107:0/1591346550 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1dfc0095d0 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.835+0000 7f1e0de9f640 1 --2- 192.168.123.107:0/1591346550 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08101bc0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f1dfc006fd0 tx=0x7f1dfc02fbc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.836+0000 7f1df6ffd640 1 -- 192.168.123.107:0/1591346550 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1dfc02fdd0 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.836+0000 7f1df6ffd640 1 -- 192.168.123.107:0/1591346550 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1dfc03f3f0 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.836+0000 7f1df6ffd640 1 -- 192.168.123.107:0/1591346550 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1dfc0363a0 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.836+0000 7f1e1012a640 1 -- 192.168.123.107:0/1591346550 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1e081002f0 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.836+0000 7f1e1012a640 1 
-- 192.168.123.107:0/1591346550 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1e08100790 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.837+0000 7f1df6ffd640 1 -- 192.168.123.107:0/1591346550 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f1dfc03e070 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.837+0000 7f1df6ffd640 1 --2- 192.168.123.107:0/1591346550 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1de403d1e0 0x7f1de403f6a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.837+0000 7f1df6ffd640 1 -- 192.168.123.107:0/1591346550 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f1dfc0766f0 con 0x7f1e081046c0 2026-03-09T20:42:38.967 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.837+0000 7f1e0d69e640 1 --2- 192.168.123.107:0/1591346550 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1de403d1e0 0x7f1de403f6a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.837+0000 7f1e1012a640 1 -- 192.168.123.107:0/1591346550 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1dd0005350 con 0x7f1e081046c0 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.838+0000 7f1e0d69e640 1 --2- 192.168.123.107:0/1591346550 
>> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1de403d1e0 0x7f1de403f6a0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f1df8009a10 tx=0x7f1df8006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.840+0000 7f1df6ffd640 1 -- 192.168.123.107:0/1591346550 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f1dfc03c030 con 0x7f1e081046c0 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.931+0000 7f1e1012a640 1 -- 192.168.123.107:0/1591346550 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}) v1 -- 0x7f1dd0002bf0 con 0x7f1de403d1e0 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.937+0000 7f1df6ffd640 1 -- 192.168.123.107:0/1591346550 <== mgr.14118 v2:192.168.123.107:6800/810998986 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+33 (secure 0 0 0) 0x7f1dd0002bf0 con 0x7f1de403d1e0 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.940+0000 7f1e1012a640 1 -- 192.168.123.107:0/1591346550 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1de403d1e0 msgr2=0x7f1de403f6a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.940+0000 7f1e1012a640 1 --2- 192.168.123.107:0/1591346550 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1de403d1e0 0x7f1de403f6a0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f1df8009a10 
tx=0x7f1df8006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.940+0000 7f1e1012a640 1 -- 192.168.123.107:0/1591346550 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 msgr2=0x7f1e08101bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.940+0000 7f1e1012a640 1 --2- 192.168.123.107:0/1591346550 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08101bc0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f1dfc006fd0 tx=0x7f1dfc02fbc0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.940+0000 7f1e1012a640 1 -- 192.168.123.107:0/1591346550 shutdown_connections 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.940+0000 7f1e1012a640 1 --2- 192.168.123.107:0/1591346550 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f1de403d1e0 0x7f1de403f6a0 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.940+0000 7f1e1012a640 1 --2- 192.168.123.107:0/1591346550 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1e081046c0 0x7f1e08101bc0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.940+0000 7f1e1012a640 1 -- 192.168.123.107:0/1591346550 >> 192.168.123.107:0/1591346550 conn(0x7f1e080ff950 msgr2=0x7f1e08192d50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.940+0000 
7f1e1012a640 1 -- 192.168.123.107:0/1591346550 shutdown_connections 2026-03-09T20:42:38.968 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:38.940+0000 7f1e1012a640 1 -- 192.168.123.107:0/1591346550 wait complete. 2026-03-09T20:42:39.220 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.072+0000 7fd1228f9640 1 Processor -- start 2026-03-09T20:42:39.220 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.072+0000 7fd1228f9640 1 -- start start 2026-03-09T20:42:39.220 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.072+0000 7fd1228f9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c1086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:39.220 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.072+0000 7fd1228f9640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd11c108cc0 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.072+0000 7fd11bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c1086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.072+0000 7fd11bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c1086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35436/0 (socket says 192.168.123.107:35436) 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:39.072+0000 7fd11bfff640 1 -- 192.168.123.107:0/2904725025 learned_addr learned my addr 192.168.123.107:0/2904725025 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.073+0000 7fd11bfff640 1 -- 192.168.123.107:0/2904725025 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd11c109490 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.073+0000 7fd11bfff640 1 --2- 192.168.123.107:0/2904725025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c1086f0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fd10c009b80 tx=0x7fd10c02f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=340544213b638ed8 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.073+0000 7fd11affd640 1 -- 192.168.123.107:0/2904725025 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd10c02fc20 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.073+0000 7fd11affd640 1 -- 192.168.123.107:0/2904725025 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fd10c02fd80 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.074+0000 7fd1228f9640 1 -- 192.168.123.107:0/2904725025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 msgr2=0x7fd11c1086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.074+0000 7fd1228f9640 1 --2- 192.168.123.107:0/2904725025 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c1086f0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fd10c009b80 tx=0x7fd10c02f190 comp rx=0 tx=0).stop 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.074+0000 7fd1228f9640 1 -- 192.168.123.107:0/2904725025 shutdown_connections 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.074+0000 7fd1228f9640 1 --2- 192.168.123.107:0/2904725025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c1086f0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.074+0000 7fd1228f9640 1 -- 192.168.123.107:0/2904725025 >> 192.168.123.107:0/2904725025 conn(0x7fd11c07b8c0 msgr2=0x7fd11c1066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.074+0000 7fd1228f9640 1 -- 192.168.123.107:0/2904725025 shutdown_connections 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.074+0000 7fd1228f9640 1 -- 192.168.123.107:0/2904725025 wait complete. 
2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.074+0000 7fd1228f9640 1 Processor -- start 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.075+0000 7fd1228f9640 1 -- start start 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.075+0000 7fd1228f9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c19e0e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.075+0000 7fd1228f9640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd10c035620 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.075+0000 7fd11bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c19e0e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.075+0000 7fd11bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c19e0e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35450/0 (socket says 192.168.123.107:35450) 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.075+0000 7fd11bfff640 1 -- 192.168.123.107:0/292669769 learned_addr learned my addr 192.168.123.107:0/292669769 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:39.221 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.075+0000 7fd11bfff640 1 -- 192.168.123.107:0/292669769 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd10c0095d0 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.075+0000 7fd11bfff640 1 --2- 192.168.123.107:0/292669769 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c19e0e0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fd10c02f6c0 tx=0x7fd10c003940 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.076+0000 7fd1197fa640 1 -- 192.168.123.107:0/292669769 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd10c035ec0 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.076+0000 7fd1228f9640 1 -- 192.168.123.107:0/292669769 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd11c19e620 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.076+0000 7fd1228f9640 1 -- 192.168.123.107:0/292669769 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd11c19eac0 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.076+0000 7fd1197fa640 1 -- 192.168.123.107:0/292669769 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fd10c037440 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.076+0000 7fd1197fa640 1 -- 
192.168.123.107:0/292669769 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd10c0405c0 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.076+0000 7fd1197fa640 1 -- 192.168.123.107:0/292669769 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7fd10c047020 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.077+0000 7fd1197fa640 1 --2- 192.168.123.107:0/292669769 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd0e003d280 0x7fd0e003f740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.077+0000 7fd11b7fe640 1 --2- 192.168.123.107:0/292669769 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd0e003d280 0x7fd0e003f740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.077+0000 7fd1197fa640 1 -- 192.168.123.107:0/292669769 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fd10c076bf0 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.077+0000 7fd11b7fe640 1 --2- 192.168.123.107:0/292669769 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd0e003d280 0x7fd0e003f740 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fd108009a10 tx=0x7fd108006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:39.221 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.077+0000 7fd1228f9640 1 -- 192.168.123.107:0/292669769 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd0e4005350 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.080+0000 7fd1197fa640 1 -- 192.168.123.107:0/292669769 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fd10c03c030 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.163+0000 7fd1228f9640 1 -- 192.168.123.107:0/292669769 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1 -- 0x7fd0e40051c0 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.165+0000 7fd1197fa640 1 -- 192.168.123.107:0/292669769 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/container_init}]=0 v7) v1 ==== 142+0+0 (secure 0 0 0) 0x7fd10c03eac0 con 0x7fd11c1082f0 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.172+0000 7fd1228f9640 1 -- 192.168.123.107:0/292669769 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd0e003d280 msgr2=0x7fd0e003f740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.172+0000 7fd1228f9640 1 --2- 192.168.123.107:0/292669769 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd0e003d280 0x7fd0e003f740 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fd108009a10 
tx=0x7fd108006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.172+0000 7fd1228f9640 1 -- 192.168.123.107:0/292669769 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 msgr2=0x7fd11c19e0e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.172+0000 7fd1228f9640 1 --2- 192.168.123.107:0/292669769 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c19e0e0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fd10c02f6c0 tx=0x7fd10c003940 comp rx=0 tx=0).stop 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.172+0000 7fd1228f9640 1 -- 192.168.123.107:0/292669769 shutdown_connections 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.172+0000 7fd1228f9640 1 --2- 192.168.123.107:0/292669769 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fd0e003d280 0x7fd0e003f740 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.172+0000 7fd1228f9640 1 --2- 192.168.123.107:0/292669769 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd11c1082f0 0x7fd11c19e0e0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.172+0000 7fd1228f9640 1 -- 192.168.123.107:0/292669769 >> 192.168.123.107:0/292669769 conn(0x7fd11c07b8c0 msgr2=0x7fd11c105c70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.173+0000 
7fd1228f9640 1 -- 192.168.123.107:0/292669769 shutdown_connections 2026-03-09T20:42:39.221 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.173+0000 7fd1228f9640 1 -- 192.168.123.107:0/292669769 wait complete. 2026-03-09T20:42:39.456 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:39.456 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: Saving service crash spec with placement * 2026-03-09T20:42:39.456 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:39.456 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: Saving service ceph-exporter spec with placement * 2026-03-09T20:42:39.456 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:39.456 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: Saving service prometheus spec with placement count:1 2026-03-09T20:42:39.456 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:39.456 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:39.456 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:39.456 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: from='mgr.14118 192.168.123.107:0/1039749675' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:39.456 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:39 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/292669769' entity='client.admin' 2026-03-09T20:42:39.474 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.327+0000 7f2615fa3640 1 Processor -- start 2026-03-09T20:42:39.474 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.327+0000 7f2615fa3640 1 -- start start 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.327+0000 7f2615fa3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.327+0000 7f2615fa3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2610108cc0 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.328+0000 7f260f7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.328+0000 7f260f7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35452/0 (socket says 192.168.123.107:35452) 
2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.328+0000 7f260f7fe640 1 -- 192.168.123.107:0/3364688411 learned_addr learned my addr 192.168.123.107:0/3364688411 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.328+0000 7f260f7fe640 1 -- 192.168.123.107:0/3364688411 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2610109490 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.329+0000 7f260f7fe640 1 --2- 192.168.123.107:0/3364688411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101086f0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f260001c080 tx=0x7f2600040520 comp rx=0 tx=0).ready entity=mon.0 client_cookie=6d38fb5f3b1bfebd server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.329+0000 7f260e7fc640 1 -- 192.168.123.107:0/3364688411 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f260001a2e0 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.329+0000 7f260e7fc640 1 -- 192.168.123.107:0/3364688411 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f2600043050 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.329+0000 7f260e7fc640 1 -- 192.168.123.107:0/3364688411 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2600047940 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.330+0000 
7f2615fa3640 1 -- 192.168.123.107:0/3364688411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 msgr2=0x7f26101086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.330+0000 7f2615fa3640 1 --2- 192.168.123.107:0/3364688411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101086f0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f260001c080 tx=0x7f2600040520 comp rx=0 tx=0).stop 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.330+0000 7f2615fa3640 1 -- 192.168.123.107:0/3364688411 shutdown_connections 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.330+0000 7f2615fa3640 1 --2- 192.168.123.107:0/3364688411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101086f0 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.330+0000 7f2615fa3640 1 -- 192.168.123.107:0/3364688411 >> 192.168.123.107:0/3364688411 conn(0x7f261007b8c0 msgr2=0x7f26101066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.330+0000 7f2615fa3640 1 -- 192.168.123.107:0/3364688411 shutdown_connections 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.330+0000 7f2615fa3640 1 -- 192.168.123.107:0/3364688411 wait complete. 
2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.330+0000 7f2615fa3640 1 Processor -- start 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.330+0000 7f2615fa3640 1 -- start start 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.331+0000 7f2615fa3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101956f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.331+0000 7f2615fa3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2610195c30 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.331+0000 7f260f7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101956f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.331+0000 7f260f7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101956f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35460/0 (socket says 192.168.123.107:35460) 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.331+0000 7f260f7fe640 1 -- 192.168.123.107:0/3133894580 learned_addr learned my addr 192.168.123.107:0/3133894580 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:39.475 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.331+0000 7f260f7fe640 1 -- 192.168.123.107:0/3133894580 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f260001aa70 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.331+0000 7f260f7fe640 1 --2- 192.168.123.107:0/3133894580 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101956f0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f2600002410 tx=0x7f26000189f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.332+0000 7f260cff9640 1 -- 192.168.123.107:0/3133894580 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f260001a2e0 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.332+0000 7f260cff9640 1 -- 192.168.123.107:0/3133894580 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f2600018c00 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.332+0000 7f260cff9640 1 -- 192.168.123.107:0/3133894580 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f260004f590 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.332+0000 7f2615fa3640 1 -- 192.168.123.107:0/3133894580 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2610195e30 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.332+0000 7f2615fa3640 1 
-- 192.168.123.107:0/3133894580 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2610196270 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.332+0000 7f260cff9640 1 -- 192.168.123.107:0/3133894580 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f260004f6f0 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.333+0000 7f260cff9640 1 --2- 192.168.123.107:0/3133894580 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f25f003d1e0 0x7f25f003f6a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.333+0000 7f260cff9640 1 -- 192.168.123.107:0/3133894580 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f26000889c0 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.333+0000 7f2615fa3640 1 -- 192.168.123.107:0/3133894580 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f26101086f0 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.336+0000 7f260effd640 1 --2- 192.168.123.107:0/3133894580 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f25f003d1e0 0x7f25f003f6a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.336+0000 7f260cff9640 1 -- 192.168.123.107:0/3133894580 
<== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f260004c020 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.336+0000 7f260effd640 1 --2- 192.168.123.107:0/3133894580 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f25f003d1e0 0x7f25f003f6a0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f25fc0099c0 tx=0x7f25fc006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.423+0000 7f2615fa3640 1 -- 192.168.123.107:0/3133894580 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0) v1 -- 0x7f261006a990 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.429+0000 7f260cff9640 1 -- 192.168.123.107:0/3133894580 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/dashboard/ssl_server_port}]=0 v8) v1 ==== 130+0+0 (secure 0 0 0) 0x7f2600057070 con 0x7f26101082f0 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.432+0000 7f2615fa3640 1 -- 192.168.123.107:0/3133894580 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f25f003d1e0 msgr2=0x7f25f003f6a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:39.475 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.432+0000 7f2615fa3640 1 --2- 192.168.123.107:0/3133894580 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f25f003d1e0 0x7f25f003f6a0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f25fc0099c0 
tx=0x7f25fc006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:39.476 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.432+0000 7f2615fa3640 1 -- 192.168.123.107:0/3133894580 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 msgr2=0x7f26101956f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:39.476 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.432+0000 7f2615fa3640 1 --2- 192.168.123.107:0/3133894580 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101956f0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f2600002410 tx=0x7f26000189f0 comp rx=0 tx=0).stop 2026-03-09T20:42:39.476 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.433+0000 7f2615fa3640 1 -- 192.168.123.107:0/3133894580 shutdown_connections 2026-03-09T20:42:39.476 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.433+0000 7f2615fa3640 1 --2- 192.168.123.107:0/3133894580 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f25f003d1e0 0x7f25f003f6a0 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:39.476 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.433+0000 7f2615fa3640 1 --2- 192.168.123.107:0/3133894580 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f26101082f0 0x7f26101956f0 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:39.476 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.433+0000 7f2615fa3640 1 -- 192.168.123.107:0/3133894580 >> 192.168.123.107:0/3133894580 conn(0x7f261007b8c0 msgr2=0x7f261006bca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:39.476 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.433+0000 
7f2615fa3640 1 -- 192.168.123.107:0/3133894580 shutdown_connections 2026-03-09T20:42:39.476 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.433+0000 7f2615fa3640 1 -- 192.168.123.107:0/3133894580 wait complete. 2026-03-09T20:42:39.476 INFO:teuthology.orchestra.run.vm07.stdout:Enabling the dashboard module... 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.577+0000 7f04efa15640 1 Processor -- start 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.577+0000 7f04efa15640 1 -- start start 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.577+0000 7f04efa15640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e81086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.577+0000 7f04efa15640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f04e8108cc0 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.578+0000 7f04ed78a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e81086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.578+0000 7f04ed78a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e81086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35476/0 (socket says 
192.168.123.107:35476) 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.578+0000 7f04ed78a640 1 -- 192.168.123.107:0/344573261 learned_addr learned my addr 192.168.123.107:0/344573261 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.578+0000 7f04ed78a640 1 -- 192.168.123.107:0/344573261 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f04e8109490 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.579+0000 7f04ed78a640 1 --2- 192.168.123.107:0/344573261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e81086f0 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f04d0009b30 tx=0x7f04d002f140 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7ddd27ce8d4e3830 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.579+0000 7f04dffff640 1 -- 192.168.123.107:0/344573261 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f04d002fbd0 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.579+0000 7f04dffff640 1 -- 192.168.123.107:0/344573261 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f04d002fd30 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.579+0000 7f04efa15640 1 -- 192.168.123.107:0/344573261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 msgr2=0x7f04e81086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:40.467 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.579+0000 7f04efa15640 1 --2- 192.168.123.107:0/344573261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e81086f0 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f04d0009b30 tx=0x7f04d002f140 comp rx=0 tx=0).stop 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.580+0000 7f04efa15640 1 -- 192.168.123.107:0/344573261 shutdown_connections 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.580+0000 7f04efa15640 1 --2- 192.168.123.107:0/344573261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e81086f0 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.580+0000 7f04efa15640 1 -- 192.168.123.107:0/344573261 >> 192.168.123.107:0/344573261 conn(0x7f04e807b8c0 msgr2=0x7f04e81066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.580+0000 7f04efa15640 1 -- 192.168.123.107:0/344573261 shutdown_connections 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.580+0000 7f04efa15640 1 -- 192.168.123.107:0/344573261 wait complete. 
2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.580+0000 7f04efa15640 1 Processor -- start 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.580+0000 7f04efa15640 1 -- start start 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.581+0000 7f04efa15640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e819e3b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.581+0000 7f04efa15640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f04d00355e0 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.581+0000 7f04ed78a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e819e3b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.581+0000 7f04ed78a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e819e3b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35490/0 (socket says 192.168.123.107:35490) 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.581+0000 7f04ed78a640 1 -- 192.168.123.107:0/2833011467 learned_addr learned my addr 192.168.123.107:0/2833011467 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:40.467 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.581+0000 7f04ed78a640 1 -- 192.168.123.107:0/2833011467 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f04d00095d0 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.582+0000 7f04ed78a640 1 --2- 192.168.123.107:0/2833011467 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e819e3b0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f04d002f6f0 tx=0x7f04d0037440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.582+0000 7f04de7fc640 1 -- 192.168.123.107:0/2833011467 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f04d0037900 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.582+0000 7f04de7fc640 1 -- 192.168.123.107:0/2833011467 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f04d0040450 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.582+0000 7f04de7fc640 1 -- 192.168.123.107:0/2833011467 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f04d003f4f0 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.582+0000 7f04efa15640 1 -- 192.168.123.107:0/2833011467 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f04e819e8f0 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.583+0000 7f04efa15640 1 
-- 192.168.123.107:0/2833011467 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f04e819ed90 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.583+0000 7f04c7fff640 1 -- 192.168.123.107:0/2833011467 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f04e810cde0 con 0x7f04e81082f0 2026-03-09T20:42:40.467 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.585+0000 7f04de7fc640 1 -- 192.168.123.107:0/2833011467 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 8) v1 ==== 49370+0+0 (secure 0 0 0) 0x7f04d003e030 con 0x7f04e81082f0 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.585+0000 7f04de7fc640 1 --2- 192.168.123.107:0/2833011467 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f04c003ce70 0x7f04c003f330 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.585+0000 7f04de7fc640 1 -- 192.168.123.107:0/2833011467 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f04d0075a00 con 0x7f04e81082f0 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.586+0000 7f04de7fc640 1 -- 192.168.123.107:0/2833011467 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f04d00474f0 con 0x7f04e81082f0 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.586+0000 7f04ecf89640 1 --2- 192.168.123.107:0/2833011467 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] 
conn(0x7f04c003ce70 0x7f04c003f330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.587+0000 7f04ecf89640 1 --2- 192.168.123.107:0/2833011467 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f04c003ce70 0x7f04c003f330 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f04d8009a10 tx=0x7f04d8006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:39.694+0000 7f04c7fff640 1 -- 192.168.123.107:0/2833011467 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0) v1 -- 0x7f04e810c6c0 con 0x7f04e81082f0 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.426+0000 7f04de7fc640 1 -- 192.168.123.107:0/2833011467 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "dashboard"}]=0 v9) v1 ==== 88+0+0 (secure 0 0 0) 0x7f04d003c020 con 0x7f04e81082f0 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.428+0000 7f04de7fc640 1 -- 192.168.123.107:0/2833011467 <== mon.0 v2:192.168.123.107:3300/0 8 ==== mgrmap(e 9) v1 ==== 49383+0+0 (secure 0 0 0) 0x7f04d0074e70 con 0x7f04e81082f0 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.430+0000 7f04c7fff640 1 -- 192.168.123.107:0/2833011467 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f04c003ce70 msgr2=0x7f04c003f330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:40.468 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.430+0000 7f04c7fff640 1 --2- 192.168.123.107:0/2833011467 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f04c003ce70 0x7f04c003f330 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f04d8009a10 tx=0x7f04d8006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.431+0000 7f04c7fff640 1 -- 192.168.123.107:0/2833011467 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 msgr2=0x7f04e819e3b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.431+0000 7f04c7fff640 1 --2- 192.168.123.107:0/2833011467 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e819e3b0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f04d002f6f0 tx=0x7f04d0037440 comp rx=0 tx=0).stop 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.431+0000 7f04c7fff640 1 -- 192.168.123.107:0/2833011467 shutdown_connections 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.431+0000 7f04c7fff640 1 --2- 192.168.123.107:0/2833011467 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f04c003ce70 0x7f04c003f330 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.431+0000 7f04c7fff640 1 --2- 192.168.123.107:0/2833011467 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04e81082f0 0x7f04e819e3b0 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:40.468 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.431+0000 7f04c7fff640 1 -- 192.168.123.107:0/2833011467 >> 192.168.123.107:0/2833011467 conn(0x7f04e807b8c0 msgr2=0x7f04e8105e40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.431+0000 7f04c7fff640 1 -- 192.168.123.107:0/2833011467 shutdown_connections 2026-03-09T20:42:40.468 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.431+0000 7f04c7fff640 1 -- 192.168.123.107:0/2833011467 wait complete. 2026-03-09T20:42:40.581 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:40 vm07 ceph-mon[49120]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:40.581 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:40 vm07 ceph-mon[49120]: Saving service grafana spec with placement count:1 2026-03-09T20:42:40.581 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:40 vm07 ceph-mon[49120]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:40.581 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:40 vm07 ceph-mon[49120]: Saving service node-exporter spec with placement * 2026-03-09T20:42:40.581 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:40 vm07 ceph-mon[49120]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:40.581 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:40 vm07 ceph-mon[49120]: Saving service alertmanager spec with placement count:1 2026-03-09T20:42:40.581 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:40 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/3133894580' entity='client.admin' 2026-03-09T20:42:40.581 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:40 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2833011467' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "active_name": "vm07.xjrvch", 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.594+0000 7f23ce215640 1 Processor -- start 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.594+0000 7f23ce215640 1 -- start start 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.595+0000 7f23ce215640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c00979e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.595+0000 7f23ce215640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f23c0097fb0 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.595+0000 7f23c77fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c00979e0 
unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.595+0000 7f23c77fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c00979e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35522/0 (socket says 192.168.123.107:35522) 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.595+0000 7f23c77fe640 1 -- 192.168.123.107:0/3577268469 learned_addr learned my addr 192.168.123.107:0/3577268469 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.596+0000 7f23c77fe640 1 -- 192.168.123.107:0/3577268469 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f23c00987e0 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.596+0000 7f23c77fe640 1 --2- 192.168.123.107:0/3577268469 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c00979e0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f23b4009920 tx=0x7f23b402ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b976113d06f5fe0f server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.597+0000 7f23c67fc640 1 -- 192.168.123.107:0/3577268469 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f23b402f9b0 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.597+0000 7f23c67fc640 
1 -- 192.168.123.107:0/3577268469 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f23b4037440 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.597+0000 7f23c67fc640 1 -- 192.168.123.107:0/3577268469 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f23b4035560 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.597+0000 7f23ce215640 1 -- 192.168.123.107:0/3577268469 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 msgr2=0x7f23c00979e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.597+0000 7f23ce215640 1 --2- 192.168.123.107:0/3577268469 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c00979e0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f23b4009920 tx=0x7f23b402ef20 comp rx=0 tx=0).stop 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.597+0000 7f23ce215640 1 -- 192.168.123.107:0/3577268469 shutdown_connections 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.597+0000 7f23ce215640 1 --2- 192.168.123.107:0/3577268469 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c00979e0 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.597+0000 7f23ce215640 1 -- 192.168.123.107:0/3577268469 >> 192.168.123.107:0/3577268469 conn(0x7f23c0092d70 msgr2=0x7f23c00951b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:40.773 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.597+0000 7f23ce215640 1 -- 192.168.123.107:0/3577268469 shutdown_connections 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.597+0000 7f23ce215640 1 -- 192.168.123.107:0/3577268469 wait complete. 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.598+0000 7f23ce215640 1 Processor -- start 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.603+0000 7f23ce215640 1 -- start start 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.603+0000 7f23ce215640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c012d480 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.603+0000 7f23ce215640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f23c012d9c0 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.603+0000 7f23c77fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c012d480 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.603+0000 7f23c77fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c012d480 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35538/0 (socket says 
192.168.123.107:35538) 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.603+0000 7f23c77fe640 1 -- 192.168.123.107:0/1734395797 learned_addr learned my addr 192.168.123.107:0/1734395797 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.604+0000 7f23c77fe640 1 -- 192.168.123.107:0/1734395797 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f23b40095d0 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.604+0000 7f23c77fe640 1 --2- 192.168.123.107:0/1734395797 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c012d480 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f23b4037d90 tx=0x7f23b4037990 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.604+0000 7f23c4ff9640 1 -- 192.168.123.107:0/1734395797 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f23b402fe40 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.604+0000 7f23ce215640 1 -- 192.168.123.107:0/1734395797 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f23c012dbc0 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.604+0000 7f23ce215640 1 -- 192.168.123.107:0/1734395797 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f23c012e060 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:40.605+0000 7f23c4ff9640 1 -- 192.168.123.107:0/1734395797 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f23b4035ce0 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.605+0000 7f23c4ff9640 1 -- 192.168.123.107:0/1734395797 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f23b4049d00 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.605+0000 7f23c4ff9640 1 -- 192.168.123.107:0/1734395797 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 9) v1 ==== 49383+0+0 (secure 0 0 0) 0x7f23b403e070 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.606+0000 7f23c4ff9640 1 --2- 192.168.123.107:0/1734395797 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f239403d230 0x7f239403f6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.606+0000 7f23c4ff9640 1 -- 192.168.123.107:0/1734395797 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f23b40765f0 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.606+0000 7f23c6ffd640 1 -- 192.168.123.107:0/1734395797 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f239403d230 msgr2=0x7f239403f6f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/810998986 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.606+0000 7f23c6ffd640 1 --2- 192.168.123.107:0/1734395797 >> 
[v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f239403d230 0x7f239403f6f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.606+0000 7f23ce215640 1 -- 192.168.123.107:0/1734395797 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f238c005350 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.610+0000 7f23c4ff9640 1 -- 192.168.123.107:0/1734395797 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f23b403c080 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.715+0000 7f23ce215640 1 -- 192.168.123.107:0/1734395797 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) v1 -- 0x7f238c005e10 con 0x7f23c00975e0 2026-03-09T20:42:40.773 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.716+0000 7f23c4ff9640 1 -- 192.168.123.107:0/1734395797 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v9) v1 ==== 56+0+98 (secure 0 0 0) 0x7f23b40483c0 con 0x7f23c00975e0 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.718+0000 7f23a27fc640 1 -- 192.168.123.107:0/1734395797 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f239403d230 msgr2=0x7f239403f6f0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.718+0000 7f23a27fc640 1 --2- 192.168.123.107:0/1734395797 >> 
[v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f239403d230 0x7f239403f6f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.718+0000 7f23a27fc640 1 -- 192.168.123.107:0/1734395797 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 msgr2=0x7f23c012d480 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.718+0000 7f23a27fc640 1 --2- 192.168.123.107:0/1734395797 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c012d480 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f23b4037d90 tx=0x7f23b4037990 comp rx=0 tx=0).stop 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.718+0000 7f23a27fc640 1 -- 192.168.123.107:0/1734395797 shutdown_connections 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.718+0000 7f23a27fc640 1 --2- 192.168.123.107:0/1734395797 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7f239403d230 0x7f239403f6f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.718+0000 7f23a27fc640 1 --2- 192.168.123.107:0/1734395797 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23c00975e0 0x7f23c012d480 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.718+0000 7f23a27fc640 1 -- 192.168.123.107:0/1734395797 >> 192.168.123.107:0/1734395797 conn(0x7f23c0092d70 msgr2=0x7f23c00937e0 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.718+0000 7f23a27fc640 1 -- 192.168.123.107:0/1734395797 shutdown_connections 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.718+0000 7f23a27fc640 1 -- 192.168.123.107:0/1734395797 wait complete. 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for the mgr to restart... 2026-03-09T20:42:40.774 INFO:teuthology.orchestra.run.vm07.stdout:Waiting for mgr epoch 9... 2026-03-09T20:42:41.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:41 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2833011467' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T20:42:41.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:41 vm07 ceph-mon[49120]: mgrmap e9: vm07.xjrvch(active, since 7s) 2026-03-09T20:42:41.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:41 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/1734395797' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: Active manager daemon vm07.xjrvch restarted 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: Activating manager daemon vm07.xjrvch 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: osdmap e3: 0 total, 0 up, 0 in 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: mgrmap e10: vm07.xjrvch(active, starting, since 0.00555562s) 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm07.xjrvch", "id": "vm07.xjrvch"}]: dispatch 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: Manager daemon vm07.xjrvch is now available 2026-03-09T20:42:43.634 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:42:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:43 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/trash_purge_schedule"}]: dispatch 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.882+0000 7fee87577640 1 Processor -- start 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.882+0000 7fee87577640 1 -- start start 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.882+0000 7fee87577640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee88071820 0x7fee88071c20 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.882+0000 7fee87577640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7fee880721f0 con 0x7fee88071820 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.882+0000 7fee86575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee88071820 0x7fee88071c20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.882+0000 7fee86575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee88071820 0x7fee88071c20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35540/0 (socket says 192.168.123.107:35540) 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.882+0000 7fee86575640 1 -- 192.168.123.107:0/3768266513 learned_addr learned my addr 192.168.123.107:0/3768266513 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.883+0000 7fee86575640 1 -- 192.168.123.107:0/3768266513 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fee88072330 con 0x7fee88071820 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.883+0000 7fee86575640 1 --2- 192.168.123.107:0/3768266513 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee88071820 0x7fee88071c20 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fee780089a0 tx=0x7fee78031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ca8f17044627136b server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee85573640 1 -- 
192.168.123.107:0/3768266513 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fee78031e50 con 0x7fee88071820 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee85573640 1 -- 192.168.123.107:0/3768266513 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fee78035070 con 0x7fee88071820 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee87577640 1 -- 192.168.123.107:0/3768266513 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee88071820 msgr2=0x7fee88071c20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee87577640 1 --2- 192.168.123.107:0/3768266513 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee88071820 0x7fee88071c20 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fee780089a0 tx=0x7fee78031440 comp rx=0 tx=0).stop 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee87577640 1 -- 192.168.123.107:0/3768266513 shutdown_connections 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee87577640 1 --2- 192.168.123.107:0/3768266513 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee88071820 0x7fee88071c20 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee87577640 1 -- 192.168.123.107:0/3768266513 >> 192.168.123.107:0/3768266513 conn(0x7fee8806d060 msgr2=0x7fee8806f480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:44.354 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee87577640 1 -- 192.168.123.107:0/3768266513 shutdown_connections 2026-03-09T20:42:44.354 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee87577640 1 -- 192.168.123.107:0/3768266513 wait complete. 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee87577640 1 Processor -- start 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee87577640 1 -- start start 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.884+0000 7fee87577640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee881a24a0 0x7fee881a28c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.885+0000 7fee87577640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fee7803b7a0 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.885+0000 7fee86575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee881a24a0 0x7fee881a28c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.885+0000 7fee86575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee881a24a0 0x7fee881a28c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35542/0 (socket says 
192.168.123.107:35542) 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.885+0000 7fee86575640 1 -- 192.168.123.107:0/2873418025 learned_addr learned my addr 192.168.123.107:0/2873418025 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.885+0000 7fee86575640 1 -- 192.168.123.107:0/2873418025 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fee78008650 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.885+0000 7fee86575640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee881a24a0 0x7fee881a28c0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fee780319f0 tx=0x7fee78008c70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.886+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fee7803bed0 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.886+0000 7fee87577640 1 -- 192.168.123.107:0/2873418025 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fee881a2e00 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.886+0000 7fee87577640 1 -- 192.168.123.107:0/2873418025 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fee881a3990 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:40.886+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fee78035040 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.886+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fee7800b630 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.887+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 9) v1 ==== 49383+0+0 (secure 0 0 0) 0x7fee7800b790 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.887+0000 7fee777fe640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 0x7fee6003f740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.887+0000 7fee85d74640 1 -- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 msgr2=0x7fee6003f740 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/810998986 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.887+0000 7fee85d74640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 0x7fee6003f740 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:42:40.887+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 --> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7fee6003fe50 con 0x7fee6003d280 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:40.887+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fee78037070 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:41.087+0000 7fee85d74640 1 -- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 msgr2=0x7fee6003f740 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/810998986 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:41.087+0000 7fee85d74640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 0x7fee6003f740 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:41.488+0000 7fee85d74640 1 -- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 msgr2=0x7fee6003f740 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/810998986 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:41.488+0000 7fee85d74640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 0x7fee6003f740 unknown :-1 s=START_CONNECT pgs=0 cs=0 
l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:42.289+0000 7fee85d74640 1 -- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 msgr2=0x7fee6003f740 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/810998986 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:42.289+0000 7fee85d74640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 0x7fee6003f740 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:43.302+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mgrmap(e 10) v1 ==== 49150+0+0 (secure 0 0 0) 0x7fee78073870 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:43.302+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 msgr2=0x7fee6003f740 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:43.302+0000 7fee777fe640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 0x7fee6003f740 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.306+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 <== mon.0 
v2:192.168.123.107:3300/0 7 ==== mgrmap(e 11) v1 ==== 49277+0+0 (secure 0 0 0) 0x7fee78039330 con 0x7fee881a24a0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.306+0000 7fee777fe640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fee60040d30 0x7fee60043120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.306+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 --> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7fee6003fe50 con 0x7fee60040d30 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.307+0000 7fee85d74640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fee60040d30 0x7fee60043120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.308+0000 7fee85d74640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fee60040d30 0x7fee60043120 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fee80003e00 tx=0x7fee800073c0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.308+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 <== mgr.14162 v2:192.168.123.107:6800/4166937886 1 ==== command_reply(tid 0: 0 ) v1 ==== 8+0+7759 (secure 0 0 0) 0x7fee6003fe50 con 0x7fee60040d30 
2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.311+0000 7fee87577640 1 -- 192.168.123.107:0/2873418025 --> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] -- command(tid 1: {"prefix": "mgr_status"}) v1 -- 0x7fee88071820 con 0x7fee60040d30 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.311+0000 7fee777fe640 1 -- 192.168.123.107:0/2873418025 <== mgr.14162 v2:192.168.123.107:6800/4166937886 2 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+52 (secure 0 0 0) 0x7fee88071820 con 0x7fee60040d30 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 -- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fee60040d30 msgr2=0x7fee60043120 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fee60040d30 0x7fee60043120 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fee80003e00 tx=0x7fee800073c0 comp rx=0 tx=0).stop 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 -- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee881a24a0 msgr2=0x7fee881a28c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee881a24a0 0x7fee881a28c0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fee780319f0 
tx=0x7fee78008c70 comp rx=0 tx=0).stop 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 -- 192.168.123.107:0/2873418025 shutdown_connections 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fee60040d30 0x7fee60043120 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:6800/810998986,v1:192.168.123.107:6801/810998986] conn(0x7fee6003d280 0x7fee6003f740 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 --2- 192.168.123.107:0/2873418025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee881a24a0 0x7fee881a28c0 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 -- 192.168.123.107:0/2873418025 >> 192.168.123.107:0/2873418025 conn(0x7fee8806d060 msgr2=0x7fee8806e900 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 -- 192.168.123.107:0/2873418025 shutdown_connections 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.312+0000 7fee87577640 1 -- 192.168.123.107:0/2873418025 wait complete. 
2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:mgr epoch 9 is available 2026-03-09T20:42:44.355 INFO:teuthology.orchestra.run.vm07.stdout:Generating a dashboard self-signed certificate... 2026-03-09T20:42:44.656 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-09T20:42:44.656 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.476+0000 7fcc0cdbd640 1 Processor -- start 2026-03-09T20:42:44.656 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.476+0000 7fcc0cdbd640 1 -- start start 2026-03-09T20:42:44.656 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.476+0000 7fcc0cdbd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc081086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:44.656 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.476+0000 7fcc0cdbd640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcc08108cc0 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.477+0000 7fcc06575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc081086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.477+0000 7fcc06575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc081086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35600/0 (socket says 192.168.123.107:35600) 
2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.477+0000 7fcc06575640 1 -- 192.168.123.107:0/1181255038 learned_addr learned my addr 192.168.123.107:0/1181255038 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.477+0000 7fcc06575640 1 -- 192.168.123.107:0/1181255038 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcc08109490 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.478+0000 7fcc06575640 1 --2- 192.168.123.107:0/1181255038 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc081086f0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fcbf0009b80 tx=0x7fcbf002f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=f97866be39650099 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.478+0000 7fcc05573640 1 -- 192.168.123.107:0/1181255038 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcbf002fc20 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.478+0000 7fcc05573640 1 -- 192.168.123.107:0/1181255038 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fcbf002fd80 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.478+0000 7fcc05573640 1 -- 192.168.123.107:0/1181255038 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcbf00357c0 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.478+0000 
7fcc0cdbd640 1 -- 192.168.123.107:0/1181255038 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 msgr2=0x7fcc081086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.478+0000 7fcc0cdbd640 1 --2- 192.168.123.107:0/1181255038 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc081086f0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fcbf0009b80 tx=0x7fcbf002f190 comp rx=0 tx=0).stop 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.478+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/1181255038 shutdown_connections 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.478+0000 7fcc0cdbd640 1 --2- 192.168.123.107:0/1181255038 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc081086f0 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.478+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/1181255038 >> 192.168.123.107:0/1181255038 conn(0x7fcc0807b8c0 msgr2=0x7fcc081066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.479+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/1181255038 shutdown_connections 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.479+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/1181255038 wait complete. 
2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.479+0000 7fcc0cdbd640 1 Processor -- start 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.479+0000 7fcc0cdbd640 1 -- start start 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.479+0000 7fcc0cdbd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc0819e170 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.479+0000 7fcc0cdbd640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcc0819e6b0 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.479+0000 7fcc06575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc0819e170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.480+0000 7fcc06575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc0819e170 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35616/0 (socket says 192.168.123.107:35616) 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.480+0000 7fcc06575640 1 -- 192.168.123.107:0/3923218339 learned_addr learned my addr 192.168.123.107:0/3923218339 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:44.657 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.480+0000 7fcc06575640 1 -- 192.168.123.107:0/3923218339 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcbf00095d0 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.480+0000 7fcc06575640 1 --2- 192.168.123.107:0/3923218339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc0819e170 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fcbf00379e0 tx=0x7fcbf0037a10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.480+0000 7fcbf77fe640 1 -- 192.168.123.107:0/3923218339 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcbf0037c00 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.480+0000 7fcbf77fe640 1 -- 192.168.123.107:0/3923218339 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fcbf0037d60 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.480+0000 7fcbf77fe640 1 -- 192.168.123.107:0/3923218339 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcbf003f5e0 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.480+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/3923218339 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcc0819e8b0 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.480+0000 7fcc0cdbd640 1 
-- 192.168.123.107:0/3923218339 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcc0819ed50 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.481+0000 7fcbf77fe640 1 -- 192.168.123.107:0/3923218339 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 11) v1 ==== 49277+0+0 (secure 0 0 0) 0x7fcbf003e030 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.481+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/3923218339 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcbcc005350 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.484+0000 7fcbf77fe640 1 --2- 192.168.123.107:0/3923218339 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fcbe003d0c0 0x7fcbe003f580 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.484+0000 7fcbf77fe640 1 -- 192.168.123.107:0/3923218339 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1155+0+0 (secure 0 0 0) 0x7fcbf0075ec0 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.484+0000 7fcbf77fe640 1 -- 192.168.123.107:0/3923218339 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fcbf003eaf0 con 0x7fcc081082f0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.484+0000 7fcc05d74640 1 --2- 192.168.123.107:0/3923218339 >> 
[v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fcbe003d0c0 0x7fcbe003f580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.484+0000 7fcc05d74640 1 --2- 192.168.123.107:0/3923218339 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fcbe003d0c0 0x7fcbe003f580 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fcbfc009a10 tx=0x7fcbfc006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.570+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/3923218339 --> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] -- mgr_command(tid 0: {"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}) v1 -- 0x7fcbcc002bf0 con 0x7fcbe003d0c0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.612+0000 7fcbf77fe640 1 -- 192.168.123.107:0/3923218339 <== mgr.14162 v2:192.168.123.107:6800/4166937886 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7fcbcc002bf0 con 0x7fcbe003d0c0 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.614+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/3923218339 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fcbe003d0c0 msgr2=0x7fcbe003f580 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.614+0000 7fcc0cdbd640 1 --2- 192.168.123.107:0/3923218339 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fcbe003d0c0 0x7fcbe003f580 
secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fcbfc009a10 tx=0x7fcbfc006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.614+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/3923218339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 msgr2=0x7fcc0819e170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.614+0000 7fcc0cdbd640 1 --2- 192.168.123.107:0/3923218339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc0819e170 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fcbf00379e0 tx=0x7fcbf0037a10 comp rx=0 tx=0).stop 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.615+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/3923218339 shutdown_connections 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.615+0000 7fcc0cdbd640 1 --2- 192.168.123.107:0/3923218339 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fcbe003d0c0 0x7fcbe003f580 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.615+0000 7fcc0cdbd640 1 --2- 192.168.123.107:0/3923218339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcc081082f0 0x7fcc0819e170 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.615+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/3923218339 >> 192.168.123.107:0/3923218339 conn(0x7fcc0807b8c0 msgr2=0x7fcc08105d70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:44.657 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.615+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/3923218339 shutdown_connections 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.615+0000 7fcc0cdbd640 1 -- 192.168.123.107:0/3923218339 wait complete. 2026-03-09T20:42:44.657 INFO:teuthology.orchestra.run.vm07.stdout:Creating initial admin user... 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: [09/Mar/2026:20:42:44] ENGINE Bus STARTING 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: [09/Mar/2026:20:42:44] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: [09/Mar/2026:20:42:44] ENGINE Client ('192.168.123.107', 45228) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: [09/Mar/2026:20:42:44] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: [09/Mar/2026:20:42:44] ENGINE Bus STARTED 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: mgrmap e11: vm07.xjrvch(active, since 1.00723s) 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: from='client.14166 
-' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:44.841 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:44 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:45.053 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$O52hHOw5Ax2s6QVc.4reD.pyBySX9HijoafTm5A..hiI6nT9G/MKe", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773088965, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.761+0000 7fde3ef75640 1 Processor -- start 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.762+0000 7fde3ef75640 1 -- start start 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.762+0000 7fde3ef75640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde381086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.762+0000 7fde3ef75640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fde38108cc0 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.762+0000 7fde3ccea640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde381086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.762+0000 7fde3ccea640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde381086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35630/0 (socket says 192.168.123.107:35630) 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.762+0000 7fde3ccea640 1 -- 192.168.123.107:0/165490995 learned_addr learned my addr 192.168.123.107:0/165490995 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.762+0000 7fde3ccea640 1 -- 192.168.123.107:0/165490995 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fde38109490 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.763+0000 7fde3ccea640 1 --2- 192.168.123.107:0/165490995 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde381086f0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7fde20009b30 tx=0x7fde2002f140 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c86f9f8d0eeca3d8 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.763+0000 7fde2f7fe640 1 -- 192.168.123.107:0/165490995 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fde2002fbd0 con 0x7fde381082f0 2026-03-09T20:42:45.054 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.763+0000 7fde2f7fe640 1 -- 192.168.123.107:0/165490995 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fde2002fd30 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.764+0000 7fde2f7fe640 1 -- 192.168.123.107:0/165490995 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fde20035780 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.764+0000 7fde3ef75640 1 -- 192.168.123.107:0/165490995 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 msgr2=0x7fde381086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.764+0000 7fde3ef75640 1 --2- 192.168.123.107:0/165490995 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde381086f0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7fde20009b30 tx=0x7fde2002f140 comp rx=0 tx=0).stop 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.764+0000 7fde3ef75640 1 -- 192.168.123.107:0/165490995 shutdown_connections 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.764+0000 7fde3ef75640 1 --2- 192.168.123.107:0/165490995 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde381086f0 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.764+0000 7fde3ef75640 1 -- 192.168.123.107:0/165490995 >> 192.168.123.107:0/165490995 conn(0x7fde3807b8c0 msgr2=0x7fde381066a0 unknown 
:-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.764+0000 7fde3ef75640 1 -- 192.168.123.107:0/165490995 shutdown_connections 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.764+0000 7fde3ef75640 1 -- 192.168.123.107:0/165490995 wait complete. 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.765+0000 7fde3ef75640 1 Processor -- start 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.765+0000 7fde3ef75640 1 -- start start 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.765+0000 7fde3ef75640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde3819e150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.765+0000 7fde3ef75640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fde3819e690 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.765+0000 7fde3ccea640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde3819e150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.766+0000 7fde3ccea640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde3819e150 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:35640/0 (socket says 192.168.123.107:35640) 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.766+0000 7fde3ccea640 1 -- 192.168.123.107:0/2236171312 learned_addr learned my addr 192.168.123.107:0/2236171312 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.766+0000 7fde3ccea640 1 -- 192.168.123.107:0/2236171312 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fde200095d0 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.766+0000 7fde3ccea640 1 --2- 192.168.123.107:0/2236171312 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde3819e150 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fde20009c60 tx=0x7fde20037670 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.766+0000 7fde2dffb640 1 -- 192.168.123.107:0/2236171312 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fde20037ad0 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.766+0000 7fde2dffb640 1 -- 192.168.123.107:0/2236171312 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fde20037c30 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.766+0000 7fde2dffb640 1 -- 192.168.123.107:0/2236171312 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fde20036680 con 0x7fde381082f0 2026-03-09T20:42:45.054 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.766+0000 7fde3ef75640 1 -- 192.168.123.107:0/2236171312 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fde3819e890 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.767+0000 7fde3ef75640 1 -- 192.168.123.107:0/2236171312 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fde3819ed30 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.768+0000 7fde2dffb640 1 -- 192.168.123.107:0/2236171312 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 11) v1 ==== 49277+0+0 (secure 0 0 0) 0x7fde2003e070 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.768+0000 7fde2dffb640 1 --2- 192.168.123.107:0/2236171312 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fde0803d160 0x7fde0803f620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.768+0000 7fde2dffb640 1 -- 192.168.123.107:0/2236171312 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1155+0+0 (secure 0 0 0) 0x7fde20076940 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.768+0000 7fde2ffff640 1 --2- 192.168.123.107:0/2236171312 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fde0803d160 0x7fde0803f620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: 
stderr 2026-03-09T20:42:44.768+0000 7fde2ffff640 1 --2- 192.168.123.107:0/2236171312 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fde0803d160 0x7fde0803f620 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fde280099c0 tx=0x7fde28006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.769+0000 7fde3ef75640 1 -- 192.168.123.107:0/2236171312 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fde04005350 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.771+0000 7fde2dffb640 1 -- 192.168.123.107:0/2236171312 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fde2003c030 con 0x7fde381082f0 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:44.867+0000 7fde3ef75640 1 -- 192.168.123.107:0/2236171312 --> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] -- mgr_command(tid 0: {"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}) v1 -- 0x7fde04003c00 con 0x7fde0803d160 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.020+0000 7fde2dffb640 1 -- 192.168.123.107:0/2236171312 <== mgr.14162 v2:192.168.123.107:6800/4166937886 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+252 (secure 0 0 0) 0x7fde04003c00 con 0x7fde0803d160 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.022+0000 7fde3ef75640 1 -- 192.168.123.107:0/2236171312 >> 
[v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fde0803d160 msgr2=0x7fde0803f620 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.022+0000 7fde3ef75640 1 --2- 192.168.123.107:0/2236171312 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fde0803d160 0x7fde0803f620 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fde280099c0 tx=0x7fde28006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.022+0000 7fde3ef75640 1 -- 192.168.123.107:0/2236171312 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 msgr2=0x7fde3819e150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.022+0000 7fde3ef75640 1 --2- 192.168.123.107:0/2236171312 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde381082f0 0x7fde3819e150 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fde20009c60 tx=0x7fde20037670 comp rx=0 tx=0).stop 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.022+0000 7fde3ef75640 1 -- 192.168.123.107:0/2236171312 shutdown_connections 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.022+0000 7fde3ef75640 1 --2- 192.168.123.107:0/2236171312 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fde0803d160 0x7fde0803f620 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.022+0000 7fde3ef75640 1 --2- 192.168.123.107:0/2236171312 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7fde381082f0 0x7fde3819e150 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.022+0000 7fde3ef75640 1 -- 192.168.123.107:0/2236171312 >> 192.168.123.107:0/2236171312 conn(0x7fde3807b8c0 msgr2=0x7fde38105ca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.022+0000 7fde3ef75640 1 -- 192.168.123.107:0/2236171312 shutdown_connections 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.022+0000 7fde3ef75640 1 -- 192.168.123.107:0/2236171312 wait complete. 2026-03-09T20:42:45.054 INFO:teuthology.orchestra.run.vm07.stdout:Fetching dashboard port number... 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.151+0000 7fedd72e3640 1 Processor -- start 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.151+0000 7fedd72e3640 1 -- start start 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.152+0000 7fedd72e3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd01086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.152+0000 7fedd72e3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fedd0108cc0 con 0x7fedd01082f0 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.152+0000 7fedd5058640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd01086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.152+0000 7fedd5058640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd01086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35654/0 (socket says 192.168.123.107:35654) 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.152+0000 7fedd5058640 1 -- 192.168.123.107:0/2314418633 learned_addr learned my addr 192.168.123.107:0/2314418633 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.152+0000 7fedd5058640 1 -- 192.168.123.107:0/2314418633 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fedd01094a0 con 0x7fedd01082f0 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.153+0000 7fedd5058640 1 --2- 192.168.123.107:0/2314418633 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd01086f0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fedb8009b80 tx=0x7fedb802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=adf23df9bdbf563c server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.153+0000 7fedc7fff640 1 -- 192.168.123.107:0/2314418633 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fedb802fa10 con 0x7fedd01082f0 2026-03-09T20:42:45.293 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.153+0000 7fedc7fff640 1 -- 192.168.123.107:0/2314418633 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fedb802fb70 con 0x7fedd01082f0 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.153+0000 7fedc7fff640 1 -- 192.168.123.107:0/2314418633 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fedb80355b0 con 0x7fedd01082f0 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.154+0000 7fedd72e3640 1 -- 192.168.123.107:0/2314418633 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 msgr2=0x7fedd01086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.154+0000 7fedd72e3640 1 --2- 192.168.123.107:0/2314418633 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd01086f0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fedb8009b80 tx=0x7fedb802f190 comp rx=0 tx=0).stop 2026-03-09T20:42:45.293 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.154+0000 7fedd72e3640 1 -- 192.168.123.107:0/2314418633 shutdown_connections 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.154+0000 7fedd72e3640 1 --2- 192.168.123.107:0/2314418633 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd01086f0 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.154+0000 7fedd72e3640 1 -- 192.168.123.107:0/2314418633 >> 192.168.123.107:0/2314418633 conn(0x7fedd007ba00 msgr2=0x7fedd01066a0 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.154+0000 7fedd72e3640 1 -- 192.168.123.107:0/2314418633 shutdown_connections 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.154+0000 7fedd72e3640 1 -- 192.168.123.107:0/2314418633 wait complete. 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.155+0000 7fedd72e3640 1 Processor -- start 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.155+0000 7fedd72e3640 1 -- start start 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.155+0000 7fedd72e3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd019e130 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.155+0000 7fedd72e3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fedd019e670 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.155+0000 7fedd5058640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd019e130 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.156+0000 7fedd5058640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd019e130 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says 
I am v2:192.168.123.107:35670/0 (socket says 192.168.123.107:35670) 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.156+0000 7fedd5058640 1 -- 192.168.123.107:0/3468910642 learned_addr learned my addr 192.168.123.107:0/3468910642 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.156+0000 7fedd5058640 1 -- 192.168.123.107:0/3468910642 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fedb80095d0 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.156+0000 7fedd5058640 1 --2- 192.168.123.107:0/3468910642 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd019e130 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7fedb8037670 tx=0x7fedb80376a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.156+0000 7fedc67fc640 1 -- 192.168.123.107:0/3468910642 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fedb802fe20 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.157+0000 7fedc67fc640 1 -- 192.168.123.107:0/3468910642 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fedb8041400 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.157+0000 7fedd72e3640 1 -- 192.168.123.107:0/3468910642 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fedd019e870 con 0x7fedd01082f0 2026-03-09T20:42:45.294 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.157+0000 7fedc67fc640 1 -- 192.168.123.107:0/3468910642 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fedb803f530 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.157+0000 7fedd72e3640 1 -- 192.168.123.107:0/3468910642 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fedd019ed10 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.158+0000 7fedc67fc640 1 -- 192.168.123.107:0/3468910642 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 11) v1 ==== 49277+0+0 (secure 0 0 0) 0x7fedb803e030 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.158+0000 7fedd72e3640 1 -- 192.168.123.107:0/3468910642 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fed98005350 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.161+0000 7fedc67fc640 1 --2- 192.168.123.107:0/3468910642 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feda403cd50 0x7feda403f210 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.161+0000 7fedc67fc640 1 -- 192.168.123.107:0/3468910642 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1155+0+0 (secure 0 0 0) 0x7fedb80756e0 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.161+0000 7fedd4857640 1 --2- 192.168.123.107:0/3468910642 >> 
[v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feda403cd50 0x7feda403f210 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.162+0000 7fedd4857640 1 --2- 192.168.123.107:0/3468910642 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feda403cd50 0x7feda403f210 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7fedc00099c0 tx=0x7fedc0006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.161+0000 7fedc67fc640 1 -- 192.168.123.107:0/3468910642 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fedb80363a0 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.245+0000 7fedd72e3640 1 -- 192.168.123.107:0/3468910642 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"} v 0) v1 -- 0x7fed98005b80 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.245+0000 7fedc67fc640 1 -- 192.168.123.107:0/3468910642 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]=0 v8) v1 ==== 112+0+5 (secure 0 0 0) 0x7fedb8036af0 con 0x7fedd01082f0 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.247+0000 7fedd72e3640 1 -- 192.168.123.107:0/3468910642 >> 
[v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feda403cd50 msgr2=0x7feda403f210 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.247+0000 7fedd72e3640 1 --2- 192.168.123.107:0/3468910642 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feda403cd50 0x7feda403f210 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7fedc00099c0 tx=0x7fedc0006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.247+0000 7fedd72e3640 1 -- 192.168.123.107:0/3468910642 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 msgr2=0x7fedd019e130 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.247+0000 7fedd72e3640 1 --2- 192.168.123.107:0/3468910642 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fedd01082f0 0x7fedd019e130 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7fedb8037670 tx=0x7fedb80376a0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.248+0000 7fedd72e3640 1 -- 192.168.123.107:0/3468910642 shutdown_connections 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.248+0000 7fedd72e3640 1 --2- 192.168.123.107:0/3468910642 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feda403cd50 0x7feda403f210 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.248+0000 7fedd72e3640 1 --2- 192.168.123.107:0/3468910642 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7fedd01082f0 0x7fedd019e130 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.248+0000 7fedd72e3640 1 -- 192.168.123.107:0/3468910642 >> 192.168.123.107:0/3468910642 conn(0x7fedd007ba00 msgr2=0x7fedd0105d80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.248+0000 7fedd72e3640 1 -- 192.168.123.107:0/3468910642 shutdown_connections 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.248+0000 7fedd72e3640 1 -- 192.168.123.107:0/3468910642 wait complete. 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:firewalld does not appear to be present 2026-03-09T20:42:45.294 INFO:teuthology.orchestra.run.vm07.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-09T20:42:45.295 INFO:teuthology.orchestra.run.vm07.stdout:Ceph Dashboard is now available at: 2026-03-09T20:42:45.295 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:45.295 INFO:teuthology.orchestra.run.vm07.stdout: URL: https://vm07.local:8443/ 2026-03-09T20:42:45.295 INFO:teuthology.orchestra.run.vm07.stdout: User: admin 2026-03-09T20:42:45.295 INFO:teuthology.orchestra.run.vm07.stdout: Password: 6lzh4f9bau 2026-03-09T20:42:45.295 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:45.295 INFO:teuthology.orchestra.run.vm07.stdout:Saving cluster configuration to /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config directory 2026-03-09T20:42:45.295 INFO:teuthology.orchestra.run.vm07.stdout:Enabling autotune for osd_memory_target 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.403+0000 7f794f065640 1 Processor -- start 2026-03-09T20:42:45.523 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.403+0000 7f794f065640 1 -- start start 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.403+0000 7f794f065640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7948103c60 0x7f7948104060 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.403+0000 7f794f065640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7948104630 con 0x7f7948103c60 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.403+0000 7f794e063640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7948103c60 0x7f7948104060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.403+0000 7f794e063640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7948103c60 0x7f7948104060 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35682/0 (socket says 192.168.123.107:35682) 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.403+0000 7f794e063640 1 -- 192.168.123.107:0/2491368901 learned_addr learned my addr 192.168.123.107:0/2491368901 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.404+0000 7f794e063640 1 -- 192.168.123.107:0/2491368901 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7948104df0 con 0x7f7948103c60 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.404+0000 7f794e063640 1 --2- 192.168.123.107:0/2491368901 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7948103c60 0x7f7948104060 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f7938009920 tx=0x7f793802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=db13b06566f85f0c server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.404+0000 7f794d061640 1 -- 192.168.123.107:0/2491368901 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f793802f9b0 con 0x7f7948103c60 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.405+0000 7f794d061640 1 -- 192.168.123.107:0/2491368901 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7938037440 con 0x7f7948103c60 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.405+0000 7f794f065640 1 -- 192.168.123.107:0/2491368901 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7948103c60 msgr2=0x7f7948104060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.405+0000 7f794f065640 1 --2- 192.168.123.107:0/2491368901 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7948103c60 0x7f7948104060 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f7938009920 tx=0x7f793802ef20 comp rx=0 tx=0).stop 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.405+0000 7f794f065640 1 -- 192.168.123.107:0/2491368901 shutdown_connections 2026-03-09T20:42:45.523 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.405+0000 7f794f065640 1 --2- 192.168.123.107:0/2491368901 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7948103c60 0x7f7948104060 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.405+0000 7f794f065640 1 -- 192.168.123.107:0/2491368901 >> 192.168.123.107:0/2491368901 conn(0x7f79480ff7c0 msgr2=0x7f7948101c00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.405+0000 7f794f065640 1 -- 192.168.123.107:0/2491368901 shutdown_connections 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.405+0000 7f794f065640 1 -- 192.168.123.107:0/2491368901 wait complete. 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.406+0000 7f794f065640 1 Processor -- start 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.406+0000 7f794f065640 1 -- start start 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.406+0000 7f794f065640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f79481a24e0 0x7f79481a2900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.406+0000 7f794f065640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f79380353c0 con 0x7f79481a24e0 2026-03-09T20:42:45.523 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.406+0000 7f794e063640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f79481a24e0 0x7f79481a2900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.406+0000 7f794e063640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f79481a24e0 0x7f79481a2900 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35696/0 (socket says 192.168.123.107:35696) 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.406+0000 7f794e063640 1 -- 192.168.123.107:0/2414565232 learned_addr learned my addr 192.168.123.107:0/2414565232 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.406+0000 7f794e063640 1 -- 192.168.123.107:0/2414565232 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f79380095d0 con 0x7f79481a24e0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.407+0000 7f794e063640 1 --2- 192.168.123.107:0/2414565232 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f79481a24e0 0x7f79481a2900 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f793802f450 tx=0x7f79380377b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.407+0000 7f79377fe640 1 -- 192.168.123.107:0/2414565232 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7938037ad0 con 0x7f79481a24e0 2026-03-09T20:42:45.524 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.407+0000 7f79377fe640 1 -- 192.168.123.107:0/2414565232 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f793802fe70 con 0x7f79481a24e0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.407+0000 7f79377fe640 1 -- 192.168.123.107:0/2414565232 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7938042da0 con 0x7f79481a24e0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.407+0000 7f794f065640 1 -- 192.168.123.107:0/2414565232 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f79481a2e40 con 0x7f79481a24e0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.407+0000 7f794f065640 1 -- 192.168.123.107:0/2414565232 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f79481a59e0 con 0x7f79481a24e0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.408+0000 7f794f065640 1 -- 192.168.123.107:0/2414565232 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7910005350 con 0x7f79481a24e0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.411+0000 7f79377fe640 1 -- 192.168.123.107:0/2414565232 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 11) v1 ==== 49277+0+0 (secure 0 0 0) 0x7f79380425d0 con 0x7f79481a24e0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.411+0000 7f79377fe640 1 --2- 192.168.123.107:0/2414565232 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] 
conn(0x7f792003d110 0x7f792003f5d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.411+0000 7f79377fe640 1 -- 192.168.123.107:0/2414565232 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f7938075ea0 con 0x7f79481a24e0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.411+0000 7f794d862640 1 --2- 192.168.123.107:0/2414565232 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f792003d110 0x7f792003f5d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.411+0000 7f794d862640 1 --2- 192.168.123.107:0/2414565232 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f792003d110 0x7f792003f5d0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f793c0099c0 tx=0x7f793c006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.412+0000 7f79377fe640 1 -- 192.168.123.107:0/2414565232 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f7938040b20 con 0x7f79481a24e0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.492+0000 7f794f065640 1 -- 192.168.123.107:0/2414565232 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1 -- 0x7f79100058d0 con 0x7f79481a24e0 2026-03-09T20:42:45.524 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.493+0000 7f79377fe640 1 -- 192.168.123.107:0/2414565232 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{prefix=config set, name=osd_memory_target_autotune}]=0 v8) v1 ==== 127+0+0 (secure 0 0 0) 0x7f7938047030 con 0x7f79481a24e0 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.494+0000 7f794f065640 1 -- 192.168.123.107:0/2414565232 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f792003d110 msgr2=0x7f792003f5d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.494+0000 7f794f065640 1 --2- 192.168.123.107:0/2414565232 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f792003d110 0x7f792003f5d0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f793c0099c0 tx=0x7f793c006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.494+0000 7f794f065640 1 -- 192.168.123.107:0/2414565232 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f79481a24e0 msgr2=0x7f79481a2900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.494+0000 7f794f065640 1 --2- 192.168.123.107:0/2414565232 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f79481a24e0 0x7f79481a2900 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f793802f450 tx=0x7f79380377b0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.494+0000 7f794f065640 1 -- 192.168.123.107:0/2414565232 shutdown_connections 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: 
stderr 2026-03-09T20:42:45.494+0000 7f794f065640 1 --2- 192.168.123.107:0/2414565232 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f792003d110 0x7f792003f5d0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.494+0000 7f794f065640 1 --2- 192.168.123.107:0/2414565232 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f79481a24e0 0x7f79481a2900 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.494+0000 7f794f065640 1 -- 192.168.123.107:0/2414565232 >> 192.168.123.107:0/2414565232 conn(0x7f79480ff7c0 msgr2=0x7f794810aa90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.494+0000 7f794f065640 1 -- 192.168.123.107:0/2414565232 shutdown_connections 2026-03-09T20:42:45.524 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.494+0000 7f794f065640 1 -- 192.168.123.107:0/2414565232 wait complete. 
2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.627+0000 7f9d4e043640 1 Processor -- start 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.628+0000 7f9d4e043640 1 -- start start 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.628+0000 7f9d4e043640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d481086f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.628+0000 7f9d4e043640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9d48108cc0 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.628+0000 7f9d477fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d481086f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.628+0000 7f9d477fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d481086f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35708/0 (socket says 192.168.123.107:35708) 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.628+0000 7f9d477fe640 1 -- 192.168.123.107:0/4010005622 learned_addr learned my addr 192.168.123.107:0/4010005622 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:45.812 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.628+0000 7f9d477fe640 1 -- 192.168.123.107:0/4010005622 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9d481094a0 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.629+0000 7f9d477fe640 1 --2- 192.168.123.107:0/4010005622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d481086f0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f9d3801c080 tx=0x7f9d38040520 comp rx=0 tx=0).ready entity=mon.0 client_cookie=6b01490616bee3bd server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.630+0000 7f9d467fc640 1 -- 192.168.123.107:0/4010005622 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9d3801a0d0 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.630+0000 7f9d467fc640 1 -- 192.168.123.107:0/4010005622 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9d38043050 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.630+0000 7f9d4e043640 1 -- 192.168.123.107:0/4010005622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 msgr2=0x7f9d481086f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.630+0000 7f9d4e043640 1 --2- 192.168.123.107:0/4010005622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d481086f0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f9d3801c080 tx=0x7f9d38040520 comp rx=0 tx=0).stop 
2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.631+0000 7f9d4e043640 1 -- 192.168.123.107:0/4010005622 shutdown_connections 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.631+0000 7f9d4e043640 1 --2- 192.168.123.107:0/4010005622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d481086f0 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.631+0000 7f9d4e043640 1 -- 192.168.123.107:0/4010005622 >> 192.168.123.107:0/4010005622 conn(0x7f9d4807ba00 msgr2=0x7f9d481066a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.631+0000 7f9d4e043640 1 -- 192.168.123.107:0/4010005622 shutdown_connections 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.631+0000 7f9d4e043640 1 -- 192.168.123.107:0/4010005622 wait complete. 
2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.631+0000 7f9d4e043640 1 Processor -- start 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.632+0000 7f9d4e043640 1 -- start start 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.632+0000 7f9d4e043640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d4807fc50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.632+0000 7f9d4e043640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9d380475a0 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.632+0000 7f9d477fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d4807fc50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.632+0000 7f9d477fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d4807fc50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35718/0 (socket says 192.168.123.107:35718) 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.632+0000 7f9d477fe640 1 -- 192.168.123.107:0/2131985982 learned_addr learned my addr 192.168.123.107:0/2131985982 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:45.812 
INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.633+0000 7f9d477fe640 1 -- 192.168.123.107:0/2131985982 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9d3801aa70 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.633+0000 7f9d477fe640 1 --2- 192.168.123.107:0/2131985982 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d4807fc50 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f9d38040a50 tx=0x7f9d38004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.634+0000 7f9d44ff9640 1 -- 192.168.123.107:0/2131985982 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9d38004450 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.634+0000 7f9d44ff9640 1 -- 192.168.123.107:0/2131985982 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9d380045b0 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.634+0000 7f9d44ff9640 1 -- 192.168.123.107:0/2131985982 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9d38018c20 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.634+0000 7f9d4e043640 1 -- 192.168.123.107:0/2131985982 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9d48080190 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.634+0000 7f9d4e043640 1 
-- 192.168.123.107:0/2131985982 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9d48080690 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.635+0000 7f9d44ff9640 1 -- 192.168.123.107:0/2131985982 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 11) v1 ==== 49277+0+0 (secure 0 0 0) 0x7f9d38003710 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.635+0000 7f9d44ff9640 1 --2- 192.168.123.107:0/2131985982 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f9d1c03cda0 0x7f9d1c03f260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.635+0000 7f9d46ffd640 1 --2- 192.168.123.107:0/2131985982 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f9d1c03cda0 0x7f9d1c03f260 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.635+0000 7f9d44ff9640 1 -- 192.168.123.107:0/2131985982 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f9d380457a0 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.635+0000 7f9d4e043640 1 -- 192.168.123.107:0/2131985982 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9d481086f0 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.637+0000 7f9d46ffd640 1 --2- 
192.168.123.107:0/2131985982 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f9d1c03cda0 0x7f9d1c03f260 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f9d340099c0 tx=0x7f9d34006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.638+0000 7f9d44ff9640 1 -- 192.168.123.107:0/2131985982 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f9d3804c080 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.763+0000 7f9d4e043640 1 -- 192.168.123.107:0/2131985982 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1 -- 0x7f9d4806a990 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.765+0000 7f9d44ff9640 1 -- 192.168.123.107:0/2131985982 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{prefix=config-key set, key=mgr/dashboard/cluster/status}]=0 set mgr/dashboard/cluster/status v34) v1 ==== 153+0+0 (secure 0 0 0) 0x7f9d38057030 con 0x7f9d481082f0 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.767+0000 7f9d4e043640 1 -- 192.168.123.107:0/2131985982 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f9d1c03cda0 msgr2=0x7f9d1c03f260 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.767+0000 
7f9d4e043640 1 --2- 192.168.123.107:0/2131985982 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f9d1c03cda0 0x7f9d1c03f260 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f9d340099c0 tx=0x7f9d34006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.767+0000 7f9d4e043640 1 -- 192.168.123.107:0/2131985982 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 msgr2=0x7f9d4807fc50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.767+0000 7f9d4e043640 1 --2- 192.168.123.107:0/2131985982 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d4807fc50 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f9d38040a50 tx=0x7f9d38004270 comp rx=0 tx=0).stop 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.768+0000 7f9d4e043640 1 -- 192.168.123.107:0/2131985982 shutdown_connections 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.768+0000 7f9d4e043640 1 --2- 192.168.123.107:0/2131985982 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f9d1c03cda0 0x7f9d1c03f260 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.768+0000 7f9d4e043640 1 --2- 192.168.123.107:0/2131985982 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9d481082f0 0x7f9d4807fc50 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.768+0000 7f9d4e043640 1 -- 192.168.123.107:0/2131985982 >> 
192.168.123.107:0/2131985982 conn(0x7f9d4807ba00 msgr2=0x7f9d4806bb90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:45.812 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.768+0000 7f9d4e043640 1 -- 192.168.123.107:0/2131985982 shutdown_connections 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout:/usr/bin/ceph: stderr 2026-03-09T20:42:45.768+0000 7f9d4e043640 1 -- 192.168.123.107:0/2131985982 wait complete. 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout:Or, if you are only running a single cluster on this host: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: ceph telemetry on 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout:For more information see: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T20:42:45.813 
INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:45.813 INFO:teuthology.orchestra.run.vm07.stdout:Bootstrap complete. 2026-03-09T20:42:45.835 INFO:tasks.cephadm:Fetching config... 2026-03-09T20:42:45.835 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:42:45.835 DEBUG:teuthology.orchestra.run.vm07:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T20:42:45.901 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T20:42:45.902 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:42:45.902 DEBUG:teuthology.orchestra.run.vm07:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T20:42:45.960 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T20:42:45.960 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:42:45.960 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/keyring of=/dev/stdout 2026-03-09T20:42:46.031 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-09T20:42:46.031 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:42:46.031 DEBUG:teuthology.orchestra.run.vm07:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T20:42:46.088 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-09T20:42:46.088 DEBUG:teuthology.orchestra.run.vm07:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGfYmEKNabqXZiYSz8H+ttKMu4TpiT6S79bXNujHfYYB ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T20:42:46.154 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:46 vm07 ceph-mon[49120]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:46.155 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:46 vm07 ceph-mon[49120]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:46.155 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:46 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:46.155 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:46 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3468910642' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T20:42:46.155 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:46 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/2131985982' entity='client.admin' 2026-03-09T20:42:46.166 INFO:teuthology.orchestra.run.vm07.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGfYmEKNabqXZiYSz8H+ttKMu4TpiT6S79bXNujHfYYB ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:42:46.181 DEBUG:teuthology.orchestra.run.vm10:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGfYmEKNabqXZiYSz8H+ttKMu4TpiT6S79bXNujHfYYB ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T20:42:46.220 INFO:teuthology.orchestra.run.vm10.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGfYmEKNabqXZiYSz8H+ttKMu4TpiT6S79bXNujHfYYB ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:42:46.231 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T20:42:46.435 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:42:46.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.886+0000 7faa990ee640 1 -- 192.168.123.107:0/2388308783 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faa9410ae60 msgr2=0x7faa9410b260 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:46.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.886+0000 7faa990ee640 1 --2- 192.168.123.107:0/2388308783 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faa9410ae60 0x7faa9410b260 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7faa80009a00 tx=0x7faa8002f310 comp rx=0 tx=0).stop 2026-03-09T20:42:46.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.887+0000 7faa990ee640 1 -- 192.168.123.107:0/2388308783 shutdown_connections 
2026-03-09T20:42:46.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.887+0000 7faa990ee640 1 --2- 192.168.123.107:0/2388308783 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faa9410ae60 0x7faa9410b260 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:46.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.887+0000 7faa990ee640 1 -- 192.168.123.107:0/2388308783 >> 192.168.123.107:0/2388308783 conn(0x7faa94069cd0 msgr2=0x7faa9406a0e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:46.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.888+0000 7faa990ee640 1 -- 192.168.123.107:0/2388308783 shutdown_connections 2026-03-09T20:42:46.888 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.888+0000 7faa990ee640 1 -- 192.168.123.107:0/2388308783 wait complete. 2026-03-09T20:42:46.888 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.888+0000 7faa990ee640 1 Processor -- start 2026-03-09T20:42:46.888 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.888+0000 7faa990ee640 1 -- start start 2026-03-09T20:42:46.888 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.888+0000 7faa990ee640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faa9410ae60 0x7faa941a66a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:46.888 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.888+0000 7faa990ee640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7faa941a6be0 con 0x7faa9410ae60 2026-03-09T20:42:46.888 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.888+0000 7faa93fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faa9410ae60 0x7faa941a66a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload 
supported=3 required=0 2026-03-09T20:42:46.889 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.889+0000 7faa93fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faa9410ae60 0x7faa941a66a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35738/0 (socket says 192.168.123.107:35738) 2026-03-09T20:42:46.889 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.889+0000 7faa93fff640 1 -- 192.168.123.107:0/457686174 learned_addr learned my addr 192.168.123.107:0/457686174 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:46.889 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.889+0000 7faa93fff640 1 -- 192.168.123.107:0/457686174 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7faa80009660 con 0x7faa9410ae60 2026-03-09T20:42:46.889 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.889+0000 7faa93fff640 1 --2- 192.168.123.107:0/457686174 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faa9410ae60 0x7faa941a66a0 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7faa80002cf0 tx=0x7faa80004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:46.889 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.890+0000 7faa917fa640 1 -- 192.168.123.107:0/457686174 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7faa80004330 con 0x7faa9410ae60 2026-03-09T20:42:46.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.890+0000 7faa917fa640 1 -- 192.168.123.107:0/457686174 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7faa80038bf0 con 0x7faa9410ae60 2026-03-09T20:42:46.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.890+0000 7faa990ee640 1 
-- 192.168.123.107:0/457686174 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7faa941a6de0 con 0x7faa9410ae60 2026-03-09T20:42:46.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.890+0000 7faa990ee640 1 -- 192.168.123.107:0/457686174 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7faa941a72e0 con 0x7faa9410ae60 2026-03-09T20:42:46.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.890+0000 7faa990ee640 1 -- 192.168.123.107:0/457686174 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7faa9410b2e0 con 0x7faa9410ae60 2026-03-09T20:42:46.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.891+0000 7faa917fa640 1 -- 192.168.123.107:0/457686174 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7faa80040a50 con 0x7faa9410ae60 2026-03-09T20:42:46.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.891+0000 7faa917fa640 1 -- 192.168.123.107:0/457686174 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 12) v1 ==== 49383+0+0 (secure 0 0 0) 0x7faa8003f070 con 0x7faa9410ae60 2026-03-09T20:42:46.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.891+0000 7faa917fa640 1 --2- 192.168.123.107:0/457686174 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7faa6003cf60 0x7faa6003f420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:46.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.891+0000 7faa917fa640 1 -- 192.168.123.107:0/457686174 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1155+0+0 (secure 0 0 0) 0x7faa80077660 con 0x7faa9410ae60 2026-03-09T20:42:46.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.891+0000 7faa937fe640 1 --2- 
192.168.123.107:0/457686174 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7faa6003cf60 0x7faa6003f420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:46.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.892+0000 7faa937fe640 1 --2- 192.168.123.107:0/457686174 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7faa6003cf60 0x7faa6003f420 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7faa840099c0 tx=0x7faa84006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:46.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.894+0000 7faa917fa640 1 -- 192.168.123.107:0/457686174 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7faa80048590 con 0x7faa9410ae60 2026-03-09T20:42:46.994 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.993+0000 7faa990ee640 1 -- 192.168.123.107:0/457686174 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/allow_ptrace}] v 0) v1 -- 0x7faa94110c50 con 0x7faa9410ae60 2026-03-09T20:42:46.996 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:46.996+0000 7faa917fa640 1 -- 192.168.123.107:0/457686174 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/allow_ptrace}]=0 v9) v1 ==== 125+0+0 (secure 0 0 0) 0x7faa8003d030 con 0x7faa9410ae60 2026-03-09T20:42:47.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.001+0000 7faa990ee640 1 -- 192.168.123.107:0/457686174 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7faa6003cf60 msgr2=0x7faa6003f420 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:47.001 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.001+0000 7faa990ee640 1 --2- 192.168.123.107:0/457686174 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7faa6003cf60 0x7faa6003f420 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7faa840099c0 tx=0x7faa84006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:47.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.001+0000 7faa990ee640 1 -- 192.168.123.107:0/457686174 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faa9410ae60 msgr2=0x7faa941a66a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:47.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.001+0000 7faa990ee640 1 --2- 192.168.123.107:0/457686174 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faa9410ae60 0x7faa941a66a0 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7faa80002cf0 tx=0x7faa80004290 comp rx=0 tx=0).stop 2026-03-09T20:42:47.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.001+0000 7faa990ee640 1 -- 192.168.123.107:0/457686174 shutdown_connections 2026-03-09T20:42:47.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.001+0000 7faa990ee640 1 --2- 192.168.123.107:0/457686174 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7faa6003cf60 0x7faa6003f420 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:47.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.001+0000 7faa990ee640 1 --2- 192.168.123.107:0/457686174 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faa9410ae60 0x7faa941a66a0 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:47.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.001+0000 7faa990ee640 1 -- 192.168.123.107:0/457686174 >> 192.168.123.107:0/457686174 conn(0x7faa94069cd0 msgr2=0x7faa9406bbb0 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:47.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.002+0000 7faa990ee640 1 -- 192.168.123.107:0/457686174 shutdown_connections 2026-03-09T20:42:47.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.002+0000 7faa990ee640 1 -- 192.168.123.107:0/457686174 wait complete. 2026-03-09T20:42:47.043 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:47 vm07 ceph-mon[49120]: mgrmap e12: vm07.xjrvch(active, since 2s) 2026-03-09T20:42:47.043 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:47 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/457686174' entity='client.admin' 2026-03-09T20:42:47.069 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T20:42:47.069 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T20:42:47.220 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:42:47.453 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.452+0000 7f750ffff640 1 -- 192.168.123.107:0/4200536883 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7510071650 msgr2=0x7f7510071a50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:47.453 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.452+0000 7f750ffff640 1 --2- 192.168.123.107:0/4200536883 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7510071650 0x7f7510071a50 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f7500008ae0 tx=0x7f7500033930 comp rx=0 tx=0).stop 2026-03-09T20:42:47.453 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.453+0000 7f750ffff640 1 -- 
192.168.123.107:0/4200536883 shutdown_connections 2026-03-09T20:42:47.453 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.453+0000 7f750ffff640 1 --2- 192.168.123.107:0/4200536883 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7510071650 0x7f7510071a50 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:47.453 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.453+0000 7f750ffff640 1 -- 192.168.123.107:0/4200536883 >> 192.168.123.107:0/4200536883 conn(0x7f751006cfb0 msgr2=0x7f751006f3f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:47.453 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.453+0000 7f750ffff640 1 -- 192.168.123.107:0/4200536883 shutdown_connections 2026-03-09T20:42:47.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.454+0000 7f750ffff640 1 -- 192.168.123.107:0/4200536883 wait complete. 2026-03-09T20:42:47.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.454+0000 7f750ffff640 1 Processor -- start 2026-03-09T20:42:47.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.454+0000 7f750ffff640 1 -- start start 2026-03-09T20:42:47.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.454+0000 7f750ffff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f75101b7c60 0x7f75101b8080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:47.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.454+0000 7f750ffff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f75101b85c0 con 0x7f75101b7c60 2026-03-09T20:42:47.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.455+0000 7f750effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f75101b7c60 0x7f75101b8080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 
comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:47.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.455+0000 7f750effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f75101b7c60 0x7f75101b8080 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35766/0 (socket says 192.168.123.107:35766) 2026-03-09T20:42:47.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.455+0000 7f750effd640 1 -- 192.168.123.107:0/3445679661 learned_addr learned my addr 192.168.123.107:0/3445679661 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:47.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.455+0000 7f750effd640 1 -- 192.168.123.107:0/3445679661 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7500008530 con 0x7f75101b7c60 2026-03-09T20:42:47.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.455+0000 7f750effd640 1 --2- 192.168.123.107:0/3445679661 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f75101b7c60 0x7f75101b8080 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f7500033ee0 tx=0x7f7500004660 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:47.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.456+0000 7f74effff640 1 -- 192.168.123.107:0/3445679661 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7500002850 con 0x7f75101b7c60 2026-03-09T20:42:47.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.456+0000 7f74effff640 1 -- 192.168.123.107:0/3445679661 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7500002e70 con 0x7f75101b7c60 2026-03-09T20:42:47.455 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.456+0000 7f74effff640 1 -- 192.168.123.107:0/3445679661 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f75000115d0 con 0x7f75101b7c60 2026-03-09T20:42:47.456 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.456+0000 7f750ffff640 1 -- 192.168.123.107:0/3445679661 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f75101b87c0 con 0x7f75101b7c60 2026-03-09T20:42:47.456 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.456+0000 7f750ffff640 1 -- 192.168.123.107:0/3445679661 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f75101bb330 con 0x7f75101b7c60 2026-03-09T20:42:47.456 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.456+0000 7f750ffff640 1 -- 192.168.123.107:0/3445679661 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f74d4005350 con 0x7f75101b7c60 2026-03-09T20:42:47.460 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.459+0000 7f74effff640 1 -- 192.168.123.107:0/3445679661 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 12) v1 ==== 49383+0+0 (secure 0 0 0) 0x7f7500013290 con 0x7f75101b7c60 2026-03-09T20:42:47.460 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.459+0000 7f74effff640 1 --2- 192.168.123.107:0/3445679661 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f74e403d230 0x7f74e403f6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:47.460 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.459+0000 7f74effff640 1 -- 192.168.123.107:0/3445679661 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f7500076a50 con 0x7f75101b7c60 2026-03-09T20:42:47.460 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.459+0000 7f74effff640 1 -- 192.168.123.107:0/3445679661 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f7500013090 con 0x7f75101b7c60 2026-03-09T20:42:47.460 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.460+0000 7f750e7fc640 1 --2- 192.168.123.107:0/3445679661 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f74e403d230 0x7f74e403f6f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:47.460 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.460+0000 7f750e7fc640 1 --2- 192.168.123.107:0/3445679661 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f74e403d230 0x7f74e403f6f0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f750800ad80 tx=0x7f75080093f0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:47.560 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.559+0000 7f750ffff640 1 -- 192.168.123.107:0/3445679661 --> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] -- mgr_command(tid 0: {"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}) v1 -- 0x7f74d4002bf0 con 0x7f74e403d230 2026-03-09T20:42:47.564 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.563+0000 7f74effff640 1 -- 192.168.123.107:0/3445679661 <== mgr.14162 v2:192.168.123.107:6800/4166937886 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7f74d4002bf0 con 0x7f74e403d230 2026-03-09T20:42:47.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.571+0000 7f750ffff640 1 -- 192.168.123.107:0/3445679661 >> 
[v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f74e403d230 msgr2=0x7f74e403f6f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:47.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.571+0000 7f750ffff640 1 --2- 192.168.123.107:0/3445679661 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f74e403d230 0x7f74e403f6f0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f750800ad80 tx=0x7f75080093f0 comp rx=0 tx=0).stop 2026-03-09T20:42:47.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.571+0000 7f750ffff640 1 -- 192.168.123.107:0/3445679661 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f75101b7c60 msgr2=0x7f75101b8080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:47.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.571+0000 7f750ffff640 1 --2- 192.168.123.107:0/3445679661 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f75101b7c60 0x7f75101b8080 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f7500033ee0 tx=0x7f7500004660 comp rx=0 tx=0).stop 2026-03-09T20:42:47.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.571+0000 7f750ffff640 1 -- 192.168.123.107:0/3445679661 shutdown_connections 2026-03-09T20:42:47.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.571+0000 7f750ffff640 1 --2- 192.168.123.107:0/3445679661 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f74e403d230 0x7f74e403f6f0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:47.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.571+0000 7f750ffff640 1 --2- 192.168.123.107:0/3445679661 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f75101b7c60 0x7f75101b8080 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:42:47.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.571+0000 7f750ffff640 1 -- 192.168.123.107:0/3445679661 >> 192.168.123.107:0/3445679661 conn(0x7f751006cfb0 msgr2=0x7f75100835a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:47.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.572+0000 7f750ffff640 1 -- 192.168.123.107:0/3445679661 shutdown_connections 2026-03-09T20:42:47.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:47.572+0000 7f750ffff640 1 -- 192.168.123.107:0/3445679661 wait complete. 2026-03-09T20:42:47.627 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm10 2026-03-09T20:42:47.627 DEBUG:teuthology.orchestra.run.vm10:> set -ex 2026-03-09T20:42:47.627 DEBUG:teuthology.orchestra.run.vm10:> dd of=/etc/ceph/ceph.conf 2026-03-09T20:42:47.641 DEBUG:teuthology.orchestra.run.vm10:> set -ex 2026-03-09T20:42:47.641 DEBUG:teuthology.orchestra.run.vm10:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:42:47.696 INFO:tasks.cephadm:Adding host vm10 to orchestrator... 
2026-03-09T20:42:47.696 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph orch host add vm10 2026-03-09T20:42:47.850 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:42:48.133 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.132+0000 7f668e77f640 1 -- 192.168.123.107:0/883840084 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f66880ffca0 msgr2=0x7f66881000a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:48.133 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.132+0000 7f668e77f640 1 --2- 192.168.123.107:0/883840084 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f66880ffca0 0x7f66881000a0 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f66740099b0 tx=0x7f667402f2b0 comp rx=0 tx=0).stop 2026-03-09T20:42:48.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.135+0000 7f668e77f640 1 -- 192.168.123.107:0/883840084 shutdown_connections 2026-03-09T20:42:48.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.135+0000 7f668e77f640 1 --2- 192.168.123.107:0/883840084 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f66880ffca0 0x7f66881000a0 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:48.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.135+0000 7f668e77f640 1 -- 192.168.123.107:0/883840084 >> 192.168.123.107:0/883840084 conn(0x7f66880f9b60 msgr2=0x7f66880fbfa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:48.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.135+0000 7f668e77f640 1 -- 192.168.123.107:0/883840084 shutdown_connections 2026-03-09T20:42:48.135 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.135+0000 7f668e77f640 1 -- 192.168.123.107:0/883840084 wait complete. 2026-03-09T20:42:48.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.135+0000 7f668e77f640 1 Processor -- start 2026-03-09T20:42:48.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.136+0000 7f668e77f640 1 -- start start 2026-03-09T20:42:48.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.136+0000 7f668e77f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f66880ffca0 0x7f668810e6c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:48.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.136+0000 7f6687fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f66880ffca0 0x7f668810e6c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:48.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.136+0000 7f6687fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f66880ffca0 0x7f668810e6c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:50730/0 (socket says 192.168.123.107:50730) 2026-03-09T20:42:48.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.136+0000 7f6687fff640 1 -- 192.168.123.107:0/3586761906 learned_addr learned my addr 192.168.123.107:0/3586761906 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:48.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.136+0000 7f668e77f640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f668810ec00 con 0x7f66880ffca0 2026-03-09T20:42:48.137 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.137+0000 7f6687fff640 1 -- 192.168.123.107:0/3586761906 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6674009660 con 0x7f66880ffca0 2026-03-09T20:42:48.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.137+0000 7f6687fff640 1 --2- 192.168.123.107:0/3586761906 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f66880ffca0 0x7f668810e6c0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f667402f860 tx=0x7f6674004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:48.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.137+0000 7f66857fa640 1 -- 192.168.123.107:0/3586761906 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f66740043d0 con 0x7f66880ffca0 2026-03-09T20:42:48.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.137+0000 7f66857fa640 1 -- 192.168.123.107:0/3586761906 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f6674038b40 con 0x7f66880ffca0 2026-03-09T20:42:48.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.137+0000 7f668e77f640 1 -- 192.168.123.107:0/3586761906 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f668810ee00 con 0x7f66880ffca0 2026-03-09T20:42:48.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.138+0000 7f66857fa640 1 -- 192.168.123.107:0/3586761906 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6674041860 con 0x7f66880ffca0 2026-03-09T20:42:48.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.138+0000 7f668e77f640 1 -- 192.168.123.107:0/3586761906 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f668810f2a0 con 
0x7f66880ffca0 2026-03-09T20:42:48.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.139+0000 7f66857fa640 1 -- 192.168.123.107:0/3586761906 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 12) v1 ==== 49383+0+0 (secure 0 0 0) 0x7f6674038cb0 con 0x7f66880ffca0 2026-03-09T20:42:48.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.139+0000 7f66857fa640 1 --2- 192.168.123.107:0/3586761906 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f665c03cf10 0x7f665c03f3d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:48.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.139+0000 7f66877fe640 1 --2- 192.168.123.107:0/3586761906 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f665c03cf10 0x7f665c03f3d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:48.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.139+0000 7f66857fa640 1 -- 192.168.123.107:0/3586761906 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f667407b660 con 0x7f66880ffca0 2026-03-09T20:42:48.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.139+0000 7f668e77f640 1 -- 192.168.123.107:0/3586761906 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f66881000a0 con 0x7f66880ffca0 2026-03-09T20:42:48.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.140+0000 7f66877fe640 1 --2- 192.168.123.107:0/3586761906 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f665c03cf10 0x7f665c03f3d0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f6678009a10 tx=0x7f6678006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 
in_seq=0 out_seq=0 2026-03-09T20:42:48.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.142+0000 7f66857fa640 1 -- 192.168.123.107:0/3586761906 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f667407c050 con 0x7f66880ffca0 2026-03-09T20:42:48.237 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:48.236+0000 7f668e77f640 1 -- 192.168.123.107:0/3586761906 --> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}) v1 -- 0x7f66880fbed0 con 0x7f665c03cf10 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: Updating vm07:/etc/ceph/ceph.conf 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: from='client.14188 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:42:49.128 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:48 vm07 ceph-mon[49120]: Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:42:49.852 INFO:teuthology.orchestra.run.vm07.stdout:Added host 'vm10' with addr '192.168.123.110' 2026-03-09T20:42:49.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.850+0000 7f66857fa640 1 -- 192.168.123.107:0/3586761906 <== mgr.14162 v2:192.168.123.107:6800/4166937886 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+46 (secure 0 0 0) 0x7f66880fbed0 con 0x7f665c03cf10 2026-03-09T20:42:49.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.852+0000 7f668e77f640 1 -- 192.168.123.107:0/3586761906 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f665c03cf10 msgr2=0x7f665c03f3d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:49.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.852+0000 7f668e77f640 1 --2- 192.168.123.107:0/3586761906 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f665c03cf10 0x7f665c03f3d0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto 
rx=0x7f6678009a10 tx=0x7f6678006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:49.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.852+0000 7f668e77f640 1 -- 192.168.123.107:0/3586761906 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f66880ffca0 msgr2=0x7f668810e6c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:49.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.852+0000 7f668e77f640 1 --2- 192.168.123.107:0/3586761906 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f66880ffca0 0x7f668810e6c0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f667402f860 tx=0x7f6674004290 comp rx=0 tx=0).stop 2026-03-09T20:42:49.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.852+0000 7f668e77f640 1 -- 192.168.123.107:0/3586761906 shutdown_connections 2026-03-09T20:42:49.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.852+0000 7f668e77f640 1 --2- 192.168.123.107:0/3586761906 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f665c03cf10 0x7f665c03f3d0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:49.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.852+0000 7f668e77f640 1 --2- 192.168.123.107:0/3586761906 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f66880ffca0 0x7f668810e6c0 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:49.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.852+0000 7f668e77f640 1 -- 192.168.123.107:0/3586761906 >> 192.168.123.107:0/3586761906 conn(0x7f66880f9b60 msgr2=0x7f66880fa7b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:49.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.852+0000 7f668e77f640 1 -- 192.168.123.107:0/3586761906 shutdown_connections 2026-03-09T20:42:49.853 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:49.853+0000 7f668e77f640 1 -- 192.168.123.107:0/3586761906 wait complete. 2026-03-09T20:42:49.899 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph orch host ls --format=json 2026-03-09T20:42:49.923 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:42:49.923 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 
192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: Deploying daemon ceph-exporter.vm07 on vm07 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: Deploying cephadm binary to vm10 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-09T20:42:49.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:42:49.924 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:49 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:50.115 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.574+0000 7f1cb99ba640 1 -- 192.168.123.107:0/2856326101 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1cb4071990 msgr2=0x7f1cb4071d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.574+0000 7f1cb99ba640 1 --2- 192.168.123.107:0/2856326101 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1cb4071990 0x7f1cb4071d70 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f1ca40099b0 tx=0x7f1ca402f2b0 comp rx=0 tx=0).stop 2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.574+0000 7f1cb99ba640 1 -- 192.168.123.107:0/2856326101 shutdown_connections 2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.574+0000 7f1cb99ba640 1 --2- 192.168.123.107:0/2856326101 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1cb4071990 0x7f1cb4071d70 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.574+0000 7f1cb99ba640 1 -- 192.168.123.107:0/2856326101 >> 192.168.123.107:0/2856326101 conn(0x7f1cb406b190 msgr2=0x7f1cb406b5a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.574+0000 7f1cb99ba640 1 -- 192.168.123.107:0/2856326101 shutdown_connections 2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.574+0000 7f1cb99ba640 1 -- 192.168.123.107:0/2856326101 wait complete. 
2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.574+0000 7f1cb99ba640 1 Processor -- start 2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.574+0000 7f1cb99ba640 1 -- start start 2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.575+0000 7f1cb99ba640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1cb4116f20 0x7f1cb4114000 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:50.575 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.575+0000 7f1cb99ba640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1ca4002dc0 con 0x7f1cb4116f20 2026-03-09T20:42:50.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.576+0000 7f1cb89b8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1cb4116f20 0x7f1cb4114000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:50.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.578+0000 7f1cb89b8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1cb4116f20 0x7f1cb4114000 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:50776/0 (socket says 192.168.123.107:50776) 2026-03-09T20:42:50.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.578+0000 7f1cb89b8640 1 -- 192.168.123.107:0/3330641417 learned_addr learned my addr 192.168.123.107:0/3330641417 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:50.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.579+0000 7f1cb89b8640 1 -- 192.168.123.107:0/3330641417 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1ca4009660 con 0x7f1cb4116f20 2026-03-09T20:42:50.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.579+0000 7f1cb89b8640 1 --2- 192.168.123.107:0/3330641417 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1cb4116f20 0x7f1cb4114000 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f1ca4009ae0 tx=0x7f1ca4004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:50.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.579+0000 7f1cb1ffb640 1 -- 192.168.123.107:0/3330641417 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1ca403d070 con 0x7f1cb4116f20 2026-03-09T20:42:50.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.579+0000 7f1cb99ba640 1 -- 192.168.123.107:0/3330641417 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1cb4117300 con 0x7f1cb4116f20 2026-03-09T20:42:50.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.579+0000 7f1cb99ba640 1 -- 192.168.123.107:0/3330641417 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1cb41145c0 con 0x7f1cb4116f20 2026-03-09T20:42:50.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.579+0000 7f1cb1ffb640 1 -- 192.168.123.107:0/3330641417 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1ca4004590 con 0x7f1cb4116f20 2026-03-09T20:42:50.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.580+0000 7f1cb1ffb640 1 -- 192.168.123.107:0/3330641417 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1ca40418a0 con 0x7f1cb4116f20 2026-03-09T20:42:50.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.580+0000 7f1cb99ba640 1 -- 192.168.123.107:0/3330641417 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1c7c005350 con 0x7f1cb4116f20 2026-03-09T20:42:50.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.583+0000 7f1cb1ffb640 1 -- 192.168.123.107:0/3330641417 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f1ca4049050 con 0x7f1cb4116f20 2026-03-09T20:42:50.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.583+0000 7f1cb1ffb640 1 --2- 192.168.123.107:0/3330641417 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f1c8403d2d0 0x7f1c8403f790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:50.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.584+0000 7f1cb3fff640 1 --2- 192.168.123.107:0/3330641417 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f1c8403d2d0 0x7f1c8403f790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:50.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.584+0000 7f1cb3fff640 1 --2- 192.168.123.107:0/3330641417 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f1c8403d2d0 0x7f1c8403f790 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f1cac00ad30 tx=0x7f1cac0093f0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:50.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.586+0000 7f1cb1ffb640 1 -- 192.168.123.107:0/3330641417 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f1ca40368f0 con 0x7f1cb4116f20 2026-03-09T20:42:50.592 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.588+0000 7f1cb1ffb640 1 -- 192.168.123.107:0/3330641417 <== 
mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f1ca4038690 con 0x7f1cb4116f20 2026-03-09T20:42:50.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.700+0000 7f1cb99ba640 1 -- 192.168.123.107:0/3330641417 --> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}) v1 -- 0x7f1c7c002bf0 con 0x7f1c8403d2d0 2026-03-09T20:42:50.702 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:42:50.702 INFO:teuthology.orchestra.run.vm07.stdout:[{"addr": "192.168.123.107", "hostname": "vm07", "labels": [], "status": ""}, {"addr": "192.168.123.110", "hostname": "vm10", "labels": [], "status": ""}] 2026-03-09T20:42:50.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.702+0000 7f1cb1ffb640 1 -- 192.168.123.107:0/3330641417 <== mgr.14162 v2:192.168.123.107:6800/4166937886 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+155 (secure 0 0 0) 0x7f1c7c002bf0 con 0x7f1c8403d2d0 2026-03-09T20:42:50.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.705+0000 7f1c837fe640 1 -- 192.168.123.107:0/3330641417 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f1c8403d2d0 msgr2=0x7f1c8403f790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:50.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.705+0000 7f1c837fe640 1 --2- 192.168.123.107:0/3330641417 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f1c8403d2d0 0x7f1c8403f790 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f1cac00ad30 tx=0x7f1cac0093f0 comp rx=0 tx=0).stop 2026-03-09T20:42:50.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.705+0000 7f1c837fe640 1 -- 192.168.123.107:0/3330641417 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1cb4116f20 
msgr2=0x7f1cb4114000 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:50.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.705+0000 7f1c837fe640 1 --2- 192.168.123.107:0/3330641417 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1cb4116f20 0x7f1cb4114000 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f1ca4009ae0 tx=0x7f1ca4004290 comp rx=0 tx=0).stop 2026-03-09T20:42:50.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.705+0000 7f1c837fe640 1 -- 192.168.123.107:0/3330641417 shutdown_connections 2026-03-09T20:42:50.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.705+0000 7f1c837fe640 1 --2- 192.168.123.107:0/3330641417 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f1c8403d2d0 0x7f1c8403f790 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:50.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.705+0000 7f1c837fe640 1 --2- 192.168.123.107:0/3330641417 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1cb4116f20 0x7f1cb4114000 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:50.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.705+0000 7f1c837fe640 1 -- 192.168.123.107:0/3330641417 >> 192.168.123.107:0/3330641417 conn(0x7f1cb406b190 msgr2=0x7f1cb406fe10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:50.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.705+0000 7f1c837fe640 1 -- 192.168.123.107:0/3330641417 shutdown_connections 2026-03-09T20:42:50.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:50.706+0000 7f1c837fe640 1 -- 192.168.123.107:0/3330641417 wait complete. 
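The `ceph orch host ls --format=json` call above returns the host inventory as plain JSON on stdout. A minimal sketch of pulling the hostnames out of that payload (using the exact JSON printed in this run; the one-liner is illustrative, not the teuthology task's actual parsing code):

```shell
# Host-list JSON copied verbatim from the log output above.
HOSTS_JSON='[{"addr": "192.168.123.107", "hostname": "vm07", "labels": [], "status": ""}, {"addr": "192.168.123.110", "hostname": "vm10", "labels": [], "status": ""}]'

# Extract the hostname field of each entry; python3 is used since jq may
# not be installed on the test VMs.
python3 -c 'import json,sys; print(" ".join(h["hostname"] for h in json.loads(sys.argv[1])))' "$HOSTS_JSON"
# → vm07 vm10
```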
2026-03-09T20:42:50.758 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T20:42:50.758 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd crush tunables default 2026-03-09T20:42:50.953 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:50 vm07 ceph-mon[49120]: Deploying daemon crash.vm07 on vm07 2026-03-09T20:42:50.953 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:50 vm07 ceph-mon[49120]: Added host vm10 2026-03-09T20:42:50.953 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:50 vm07 ceph-mon[49120]: mgrmap e13: vm07.xjrvch(active, since 6s) 2026-03-09T20:42:50.953 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:50 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:50.953 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:50 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:50.953 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:50 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:50.953 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:50 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:51.091 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:42:51.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.366+0000 7f495fbcc640 1 -- 192.168.123.107:0/2513234023 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49581005a0 msgr2=0x7f4958100980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:51.367 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.366+0000 7f495fbcc640 1 --2- 192.168.123.107:0/2513234023 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49581005a0 0x7f4958100980 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f494c0099b0 tx=0x7f494c02f2b0 comp rx=0 tx=0).stop 2026-03-09T20:42:51.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.367+0000 7f495fbcc640 1 -- 192.168.123.107:0/2513234023 shutdown_connections 2026-03-09T20:42:51.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.367+0000 7f495fbcc640 1 --2- 192.168.123.107:0/2513234023 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49581005a0 0x7f4958100980 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:51.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.367+0000 7f495fbcc640 1 -- 192.168.123.107:0/2513234023 >> 192.168.123.107:0/2513234023 conn(0x7f49580fbfc0 msgr2=0x7f49580fe3e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:51.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.367+0000 7f495fbcc640 1 -- 192.168.123.107:0/2513234023 shutdown_connections 2026-03-09T20:42:51.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.367+0000 7f495fbcc640 1 -- 192.168.123.107:0/2513234023 wait complete. 
2026-03-09T20:42:51.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.367+0000 7f495fbcc640 1 Processor -- start 2026-03-09T20:42:51.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f495fbcc640 1 -- start start 2026-03-09T20:42:51.368 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f495fbcc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49581005a0 0x7f495819b5a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:51.368 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f495fbcc640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f495819bae0 con 0x7f49581005a0 2026-03-09T20:42:51.368 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f495d941640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49581005a0 0x7f495819b5a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:51.368 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f495d941640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49581005a0 0x7f495819b5a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:50802/0 (socket says 192.168.123.107:50802) 2026-03-09T20:42:51.368 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f495d941640 1 -- 192.168.123.107:0/3078672462 learned_addr learned my addr 192.168.123.107:0/3078672462 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:42:51.368 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f495d941640 1 -- 192.168.123.107:0/3078672462 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f494c009660 con 0x7f49581005a0 2026-03-09T20:42:51.368 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f495d941640 1 --2- 192.168.123.107:0/3078672462 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49581005a0 0x7f495819b5a0 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f494c002410 tx=0x7f494c004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:51.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f4946ffd640 1 -- 192.168.123.107:0/3078672462 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f494c004450 con 0x7f49581005a0 2026-03-09T20:42:51.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f4946ffd640 1 -- 192.168.123.107:0/3078672462 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f494c038930 con 0x7f49581005a0 2026-03-09T20:42:51.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f4946ffd640 1 -- 192.168.123.107:0/3078672462 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f494c041900 con 0x7f49581005a0 2026-03-09T20:42:51.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f495fbcc640 1 -- 192.168.123.107:0/3078672462 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f495819bce0 con 0x7f49581005a0 2026-03-09T20:42:51.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.368+0000 7f495fbcc640 1 -- 192.168.123.107:0/3078672462 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f495819c040 con 0x7f49581005a0 2026-03-09T20:42:51.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.369+0000 7f495fbcc640 1 -- 192.168.123.107:0/3078672462 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4920005350 con 0x7f49581005a0 2026-03-09T20:42:51.372 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.372+0000 7f4946ffd640 1 -- 192.168.123.107:0/3078672462 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f494c038aa0 con 0x7f49581005a0 2026-03-09T20:42:51.373 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.372+0000 7f4946ffd640 1 --2- 192.168.123.107:0/3078672462 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f493403d320 0x7f493403f7e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:51.373 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.372+0000 7f4946ffd640 1 -- 192.168.123.107:0/3078672462 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f494c076310 con 0x7f49581005a0 2026-03-09T20:42:51.373 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.372+0000 7f4946ffd640 1 -- 192.168.123.107:0/3078672462 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f494c076740 con 0x7f49581005a0 2026-03-09T20:42:51.373 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.372+0000 7f495d140640 1 --2- 192.168.123.107:0/3078672462 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f493403d320 0x7f493403f7e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:51.373 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.373+0000 7f495d140640 1 --2- 192.168.123.107:0/3078672462 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f493403d320 0x7f493403f7e0 
secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f49480099c0 tx=0x7f4948006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:51.462 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.462+0000 7f495fbcc640 1 -- 192.168.123.107:0/3078672462 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd crush tunables", "profile": "default"} v 0) v1 -- 0x7f49200051c0 con 0x7f49581005a0 2026-03-09T20:42:51.936 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.935+0000 7f4946ffd640 1 -- 192.168.123.107:0/3078672462 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd crush tunables", "profile": "default"}]=0 adjusted tunables profile to default v4) v1 ==== 124+0+0 (secure 0 0 0) 0x7f494c038470 con 0x7f49581005a0 2026-03-09T20:42:51.936 INFO:teuthology.orchestra.run.vm07.stderr:adjusted tunables profile to default 2026-03-09T20:42:51.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.938+0000 7f495fbcc640 1 -- 192.168.123.107:0/3078672462 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f493403d320 msgr2=0x7f493403f7e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:51.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.938+0000 7f495fbcc640 1 --2- 192.168.123.107:0/3078672462 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f493403d320 0x7f493403f7e0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f49480099c0 tx=0x7f4948006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:51.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.938+0000 7f495fbcc640 1 -- 192.168.123.107:0/3078672462 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49581005a0 msgr2=0x7f495819b5a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:51.938 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.938+0000 7f495fbcc640 1 --2- 192.168.123.107:0/3078672462 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49581005a0 0x7f495819b5a0 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f494c002410 tx=0x7f494c004290 comp rx=0 tx=0).stop 2026-03-09T20:42:51.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.938+0000 7f495fbcc640 1 -- 192.168.123.107:0/3078672462 shutdown_connections 2026-03-09T20:42:51.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.938+0000 7f495fbcc640 1 --2- 192.168.123.107:0/3078672462 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f493403d320 0x7f493403f7e0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:51.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.938+0000 7f495fbcc640 1 --2- 192.168.123.107:0/3078672462 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49581005a0 0x7f495819b5a0 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:51.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.938+0000 7f495fbcc640 1 -- 192.168.123.107:0/3078672462 >> 192.168.123.107:0/3078672462 conn(0x7f49580fbfc0 msgr2=0x7f49580fe3b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:51.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.938+0000 7f495fbcc640 1 -- 192.168.123.107:0/3078672462 shutdown_connections 2026-03-09T20:42:51.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:42:51.938+0000 7f495fbcc640 1 -- 192.168.123.107:0/3078672462 wait complete. 
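Every orchestration command in this run is wrapped in a containerized `cephadm shell` invocation with the same image, config, keyring, and fsid (visible in the `DEBUG:teuthology.orchestra.run.vm07:>` lines). A sketch of that invocation pattern, using the image tag and fsid from this specific run (both are run-specific values; the helper only prints the command rather than executing it, since it needs a live cluster):

```shell
# Values taken from this run's log; they will differ for any other run.
IMAGE="quay.ceph.io/ceph-ci/ceph:reef"
FSID="589eab88-1bf8-11f1-9e50-71f3ab1833c4"

cephadm_shell() {
  # Prints the wrapped command instead of running it (no cluster here).
  echo sudo cephadm --image "$IMAGE" shell \
    -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
    --fsid "$FSID" -- "$@"
}

# The "Setting crush tunables to default" step above corresponds to:
cephadm_shell ceph osd crush tunables default
```

The mon acknowledges this with `adjusted tunables profile to default`, as seen in the stderr lines above.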
2026-03-09T20:42:51.979 INFO:tasks.cephadm:Adding mon.vm07 on vm07
2026-03-09T20:42:51.979 INFO:tasks.cephadm:Adding mon.vm10 on vm10
2026-03-09T20:42:51.979 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph orch apply mon '2;vm07:192.168.123.107=vm07;vm10:192.168.123.110=vm10'
2026-03-09T20:42:52.119 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T20:42:52.152 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T20:42:52.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:51 vm07 ceph-mon[49120]: Deploying daemon node-exporter.vm07 on vm07
2026-03-09T20:42:52.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:51 vm07 ceph-mon[49120]: from='client.14191 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-09T20:42:52.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:51 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3078672462' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-09T20:42:53.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:52 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3078672462' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-09T20:42:53.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:52 vm07 ceph-mon[49120]: osdmap e4: 0 total, 0 up, 0 in
2026-03-09T20:42:53.268 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.266+0000 7f6de3b8a640  1 -- 192.168.123.110:0/799237350 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ddc102620 msgr2=0x7f6ddc102a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:42:53.268 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.266+0000 7f6de3b8a640  1 --2- 192.168.123.110:0/799237350 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ddc102620 0x7f6ddc102a20 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f6dd00099b0 tx=0x7f6dd002f2b0 comp rx=0 tx=0).stop
2026-03-09T20:42:53.268 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.267+0000 7f6de3b8a640  1 -- 192.168.123.110:0/799237350 shutdown_connections
2026-03-09T20:42:53.268 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.267+0000 7f6de3b8a640  1 --2- 192.168.123.110:0/799237350 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ddc102620 0x7f6ddc102a20 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:42:53.268 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.267+0000 7f6de3b8a640  1 -- 192.168.123.110:0/799237350 >> 192.168.123.110:0/799237350 conn(0x7f6ddc0fde70 msgr2=0x7f6ddc100260 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:42:53.268 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.267+0000 7f6de3b8a640  1 -- 192.168.123.110:0/799237350 shutdown_connections
2026-03-09T20:42:53.268 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.267+0000 7f6de3b8a640  1 -- 192.168.123.110:0/799237350 wait complete.
2026-03-09T20:42:53.268 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.267+0000 7f6de3b8a640  1  Processor -- start
2026-03-09T20:42:53.269 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.267+0000 7f6de3b8a640  1 -- start start
2026-03-09T20:42:53.269 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.267+0000 7f6de3b8a640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ddc102620 0x7f6ddc078ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:42:53.269 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.267+0000 7f6de3b8a640  1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6ddc079400 con 0x7f6ddc102620
2026-03-09T20:42:53.269 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.268+0000 7f6de18ff640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ddc102620 0x7f6ddc078ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:42:53.269 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.268+0000 7f6de18ff640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ddc102620 0x7f6ddc078ec0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:45814/0 (socket says 192.168.123.110:45814)
2026-03-09T20:42:53.269 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.268+0000 7f6de18ff640  1 -- 192.168.123.110:0/3390936658 learned_addr learned my addr 192.168.123.110:0/3390936658 (peer_addr_for_me v2:192.168.123.110:0/0)
2026-03-09T20:42:53.269 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.268+0000 7f6de18ff640  1 -- 192.168.123.110:0/3390936658 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6dd0009660 con 0x7f6ddc102620
2026-03-09T20:42:53.269 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.268+0000 7f6de18ff640  1 --2- 192.168.123.110:0/3390936658 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ddc102620 0x7f6ddc078ec0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f6dd002f860 tx=0x7f6dd0004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:42:53.270 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.269+0000 7f6dcaffd640  1 -- 192.168.123.110:0/3390936658 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6dd00043b0 con 0x7f6ddc102620
2026-03-09T20:42:53.270 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.269+0000 7f6de3b8a640  1 -- 192.168.123.110:0/3390936658 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6ddc079600 con 0x7f6ddc102620
2026-03-09T20:42:53.270 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.269+0000 7f6de3b8a640  1 -- 192.168.123.110:0/3390936658 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6ddc075a00 con 0x7f6ddc102620
2026-03-09T20:42:53.270 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.269+0000 7f6dcaffd640  1 -- 192.168.123.110:0/3390936658 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f6dd0038b40 con 0x7f6ddc102620
2026-03-09T20:42:53.270 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.269+0000 7f6dcaffd640  1 -- 192.168.123.110:0/3390936658 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6dd00418f0 con 0x7f6ddc102620
2026-03-09T20:42:53.271 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.269+0000 7f6dc8ff9640  1 -- 192.168.123.110:0/3390936658 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6da8005350 con 0x7f6ddc102620
2026-03-09T20:42:53.271 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.270+0000 7f6dcaffd640  1 -- 192.168.123.110:0/3390936658 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f6dd0038cb0 con 0x7f6ddc102620
2026-03-09T20:42:53.271 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.270+0000 7f6dcaffd640  1 --2- 192.168.123.110:0/3390936658 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6db803d2d0 0x7f6db803f790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:42:53.271 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.270+0000 7f6dcaffd640  1 -- 192.168.123.110:0/3390936658 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f6dd0050020 con 0x7f6ddc102620
2026-03-09T20:42:53.273 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.272+0000 7f6de10fe640  1 --2- 192.168.123.110:0/3390936658 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6db803d2d0 0x7f6db803f790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:42:53.274 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.273+0000 7f6de10fe640  1 --2- 192.168.123.110:0/3390936658 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6db803d2d0 0x7f6db803f790 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f6dcc0099c0 tx=0x7f6dcc006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:42:53.274 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.273+0000 7f6dcaffd640  1 -- 192.168.123.110:0/3390936658 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f6dd0041400 con 0x7f6ddc102620
2026-03-09T20:42:53.370 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.367+0000 7f6dc8ff9640  1 -- 192.168.123.110:0/3390936658 --> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "placement": "2;vm07:192.168.123.107=vm07;vm10:192.168.123.110=vm10", "target": ["mon-mgr", ""]}) v1 -- 0x7f6da8002bf0 con 0x7f6db803d2d0
2026-03-09T20:42:53.375 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.374+0000 7f6dcaffd640  1 -- 192.168.123.110:0/3390936658 <== mgr.14162 v2:192.168.123.107:6800/4166937886 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+24 (secure 0 0 0) 0x7f6da8002bf0 con 0x7f6db803d2d0
2026-03-09T20:42:53.375 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled mon update...
2026-03-09T20:42:53.377 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.376+0000 7f6dc8ff9640  1 -- 192.168.123.110:0/3390936658 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6db803d2d0 msgr2=0x7f6db803f790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:42:53.377 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.376+0000 7f6dc8ff9640  1 --2- 192.168.123.110:0/3390936658 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6db803d2d0 0x7f6db803f790 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f6dcc0099c0 tx=0x7f6dcc006eb0 comp rx=0 tx=0).stop
2026-03-09T20:42:53.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.376+0000 7f6dc8ff9640  1 -- 192.168.123.110:0/3390936658 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ddc102620 msgr2=0x7f6ddc078ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:42:53.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.376+0000 7f6dc8ff9640  1 --2- 192.168.123.110:0/3390936658 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ddc102620 0x7f6ddc078ec0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f6dd002f860 tx=0x7f6dd0004270 comp rx=0 tx=0).stop
2026-03-09T20:42:53.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.376+0000 7f6dc8ff9640  1 -- 192.168.123.110:0/3390936658 shutdown_connections
2026-03-09T20:42:53.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.376+0000 7f6dc8ff9640  1 --2- 192.168.123.110:0/3390936658 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6db803d2d0 0x7f6db803f790 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:42:53.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.376+0000 7f6dc8ff9640  1 --2- 192.168.123.110:0/3390936658 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ddc102620 0x7f6ddc078ec0 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:42:53.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.376+0000 7f6dc8ff9640  1 -- 192.168.123.110:0/3390936658 >> 192.168.123.110:0/3390936658 conn(0x7f6ddc0fde70 msgr2=0x7f6ddc0fea70 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:42:53.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.377+0000 7f6dc8ff9640  1 -- 192.168.123.110:0/3390936658 shutdown_connections
2026-03-09T20:42:53.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.377+0000 7f6dc8ff9640  1 -- 192.168.123.110:0/3390936658 wait complete.
2026-03-09T20:42:53.442 DEBUG:teuthology.orchestra.run.vm10:mon.vm10> sudo journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm10.service
2026-03-09T20:42:53.444 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-09T20:42:53.444 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json
2026-03-09T20:42:53.614 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T20:42:53.655 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T20:42:53.929 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.927+0000 7fd920ba0640  1 -- 192.168.123.110:0/1425083196 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd91c100390 msgr2=0x7fd91c100790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:42:53.929 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.927+0000 7fd920ba0640  1 --2- 192.168.123.110:0/1425083196 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd91c100390 0x7fd91c100790 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fd9040099b0 tx=0x7fd90402f2b0 comp rx=0 tx=0).stop
2026-03-09T20:42:53.929 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.928+0000 7fd920ba0640  1 -- 192.168.123.110:0/1425083196 shutdown_connections
2026-03-09T20:42:53.929 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.928+0000 7fd920ba0640  1 --2- 192.168.123.110:0/1425083196 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd91c100390 0x7fd91c100790 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:42:53.929 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.928+0000 7fd920ba0640  1 -- 192.168.123.110:0/1425083196 >> 192.168.123.110:0/1425083196 conn(0x7fd91c0fbb20 msgr2=0x7fd91c0fdf60 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:42:53.929 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.928+0000 7fd920ba0640  1 -- 192.168.123.110:0/1425083196 shutdown_connections
2026-03-09T20:42:53.929 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.928+0000 7fd920ba0640  1 -- 192.168.123.110:0/1425083196 wait complete.
2026-03-09T20:42:53.930 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.928+0000 7fd920ba0640  1  Processor -- start
2026-03-09T20:42:53.930 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.929+0000 7fd920ba0640  1 -- start start
2026-03-09T20:42:53.931 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.929+0000 7fd920ba0640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd91c199be0 0x7fd91c19a000 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:42:53.931 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.929+0000 7fd920ba0640  1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd91c19a540 con 0x7fd91c199be0
2026-03-09T20:42:53.931 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.930+0000 7fd91a575640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd91c199be0 0x7fd91c19a000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:42:53.931 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.930+0000 7fd91a575640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd91c199be0 0x7fd91c19a000 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:45828/0 (socket says 192.168.123.110:45828)
2026-03-09T20:42:53.931 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.930+0000 7fd91a575640  1 -- 192.168.123.110:0/4106535196 learned_addr learned my addr 192.168.123.110:0/4106535196 (peer_addr_for_me v2:192.168.123.110:0/0)
2026-03-09T20:42:53.932 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.931+0000 7fd91a575640  1 -- 192.168.123.110:0/4106535196 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd904009660 con 0x7fd91c199be0
2026-03-09T20:42:53.932 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.931+0000 7fd91a575640  1 --2- 192.168.123.110:0/4106535196 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd91c199be0 0x7fd91c19a000 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7fd91c1013f0 tx=0x7fd904004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:42:53.932 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.931+0000 7fd9037fe640  1 -- 192.168.123.110:0/4106535196 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd90403d070 con 0x7fd91c199be0
2026-03-09T20:42:53.932 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.931+0000 7fd920ba0640  1 -- 192.168.123.110:0/4106535196 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd91c19a740 con 0x7fd91c199be0
2026-03-09T20:42:53.933 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.931+0000 7fd920ba0640  1 -- 192.168.123.110:0/4106535196 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd91c19d2b0 con 0x7fd91c199be0
2026-03-09T20:42:53.933 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.932+0000 7fd9037fe640  1 -- 192.168.123.110:0/4106535196 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fd904038930 con 0x7fd91c199be0
2026-03-09T20:42:53.933 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.932+0000 7fd9037fe640  1 -- 192.168.123.110:0/4106535196 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd904041750 con 0x7fd91c199be0
2026-03-09T20:42:53.934 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.932+0000 7fd9037fe640  1 -- 192.168.123.110:0/4106535196 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7fd90404b430 con 0x7fd91c199be0
2026-03-09T20:42:53.934 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.932+0000 7fd920ba0640  1 -- 192.168.123.110:0/4106535196 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd91c100790 con 0x7fd91c199be0
2026-03-09T20:42:53.934 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.933+0000 7fd9037fe640  1 --2- 192.168.123.110:0/4106535196 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fd8f403d280 0x7fd8f403f740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:42:53.934 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.933+0000 7fd9037fe640  1 -- 192.168.123.110:0/4106535196 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7fd904075840 con 0x7fd91c199be0
2026-03-09T20:42:53.934 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.933+0000 7fd919d74640  1 --2- 192.168.123.110:0/4106535196 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fd8f403d280 0x7fd8f403f740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:42:53.934 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.933+0000 7fd919d74640  1 --2- 192.168.123.110:0/4106535196 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fd8f403d280 0x7fd8f403f740 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fd9100099c0 tx=0x7fd910006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:42:53.936 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:53.935+0000 7fd9037fe640  1 -- 192.168.123.110:0/4106535196 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fd90403fd80 con 0x7fd91c199be0
2026-03-09T20:42:54.052 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.050+0000 7fd920ba0640  1 -- 192.168.123.110:0/4106535196 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fd91c1062a0 con 0x7fd91c199be0
2026-03-09T20:42:54.052 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.051+0000 7fd9037fe640  1 -- 192.168.123.110:0/4106535196 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7fd904046030 con 0x7fd91c199be0
2026-03-09T20:42:54.052 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:42:54.052 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-09T20:42:54.052 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1
2026-03-09T20:42:54.054 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.053+0000 7fd920ba0640  1 -- 192.168.123.110:0/4106535196 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fd8f403d280 msgr2=0x7fd8f403f740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:42:54.054 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.053+0000 7fd920ba0640  1 --2- 192.168.123.110:0/4106535196 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fd8f403d280 0x7fd8f403f740 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fd9100099c0 tx=0x7fd910006eb0 comp rx=0 tx=0).stop
2026-03-09T20:42:54.054 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.053+0000 7fd920ba0640  1 -- 192.168.123.110:0/4106535196 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd91c199be0 msgr2=0x7fd91c19a000 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:42:54.054 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.053+0000 7fd920ba0640  1 --2- 192.168.123.110:0/4106535196 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd91c199be0 0x7fd91c19a000 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7fd91c1013f0 tx=0x7fd904004290 comp rx=0 tx=0).stop
2026-03-09T20:42:54.054 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.053+0000 7fd920ba0640  1 -- 192.168.123.110:0/4106535196 shutdown_connections
2026-03-09T20:42:54.054 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.053+0000 7fd920ba0640  1 --2- 192.168.123.110:0/4106535196 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fd8f403d280 0x7fd8f403f740 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:42:54.054 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.053+0000 7fd920ba0640  1 --2- 192.168.123.110:0/4106535196 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd91c199be0 0x7fd91c19a000 unknown :-1 s=CLOSED pgs=104 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:42:54.054 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.053+0000 7fd920ba0640  1 -- 192.168.123.110:0/4106535196 >> 192.168.123.110:0/4106535196 conn(0x7fd91c0fbb20 msgr2=0x7fd91c0fdf60 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:42:54.055 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.054+0000 7fd920ba0640  1 -- 192.168.123.110:0/4106535196 shutdown_connections
2026-03-09T20:42:54.055 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:54.054+0000 7fd920ba0640  1 -- 192.168.123.110:0/4106535196 wait complete.
2026-03-09T20:42:54.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:54 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch'
2026-03-09T20:42:54.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:54 vm07 ceph-mon[49120]: from='client.14195 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm07:192.168.123.107=vm07;vm10:192.168.123.110=vm10", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T20:42:54.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:54 vm07 ceph-mon[49120]: Saving service mon spec with placement vm07:192.168.123.107=vm07;vm10:192.168.123.110=vm10;count:2
2026-03-09T20:42:54.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:54 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch'
2026-03-09T20:42:54.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:54 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch'
2026-03-09T20:42:54.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:54 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch'
2026-03-09T20:42:54.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:54 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch'
2026-03-09T20:42:54.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:54 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch'
2026-03-09T20:42:54.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:54 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/4106535196' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T20:42:55.115 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-09T20:42:55.115 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json
2026-03-09T20:42:55.242 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T20:42:55.274 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T20:42:55.500 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.498+0000 7eff3f771640  1 -- 192.168.123.110:0/2633792576 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff38075ba0 msgr2=0x7eff38075fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:42:55.500 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.498+0000 7eff3f771640  1 --2- 192.168.123.110:0/2633792576 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff38075ba0 0x7eff38075fa0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7eff2c0099b0 tx=0x7eff2c02f2b0 comp rx=0 tx=0).stop
2026-03-09T20:42:55.500 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.498+0000 7eff3f771640  1 -- 192.168.123.110:0/2633792576 shutdown_connections
2026-03-09T20:42:55.500 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.498+0000 7eff3f771640  1 --2- 192.168.123.110:0/2633792576 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff38075ba0 0x7eff38075fa0 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:42:55.500 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.498+0000 7eff3f771640  1 -- 192.168.123.110:0/2633792576 >> 192.168.123.110:0/2633792576 conn(0x7eff380fde70 msgr2=0x7eff38100260 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:42:55.500 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.499+0000 7eff3f771640  1 -- 192.168.123.110:0/2633792576 shutdown_connections
2026-03-09T20:42:55.500 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.499+0000 7eff3f771640  1 -- 192.168.123.110:0/2633792576 wait complete.
2026-03-09T20:42:55.500 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.499+0000 7eff3f771640  1  Processor -- start
2026-03-09T20:42:55.500 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.499+0000 7eff3f771640  1 -- start start
2026-03-09T20:42:55.501 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.500+0000 7eff3f771640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff3819e000 0x7eff3819e420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:42:55.501 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.500+0000 7eff3f771640  1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7eff3819e960 con 0x7eff3819e000
2026-03-09T20:42:55.501 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.500+0000 7eff3d4e6640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff3819e000 0x7eff3819e420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:42:55.501 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.500+0000 7eff3d4e6640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff3819e000 0x7eff3819e420 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:45840/0 (socket says 192.168.123.110:45840)
2026-03-09T20:42:55.501 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.500+0000 7eff3d4e6640  1 -- 192.168.123.110:0/1538780649 learned_addr learned my addr 192.168.123.110:0/1538780649 (peer_addr_for_me v2:192.168.123.110:0/0)
2026-03-09T20:42:55.501 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.500+0000 7eff3d4e6640  1 -- 192.168.123.110:0/1538780649 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7eff2c009660 con 0x7eff3819e000
2026-03-09T20:42:55.502 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.501+0000 7eff3d4e6640  1 --2- 192.168.123.110:0/1538780649 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff3819e000 0x7eff3819e420 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7eff2c002bf0 tx=0x7eff2c004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:42:55.502 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.501+0000 7eff267fc640  1 -- 192.168.123.110:0/1538780649 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7eff2c038470 con 0x7eff3819e000
2026-03-09T20:42:55.502 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.501+0000 7eff3f771640  1 -- 192.168.123.110:0/1538780649 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7eff3819eb60 con 0x7eff3819e000
2026-03-09T20:42:55.502 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.501+0000 7eff3f771640  1 -- 192.168.123.110:0/1538780649 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7eff381a16d0 con 0x7eff3819e000
2026-03-09T20:42:55.503 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.502+0000 7eff267fc640  1 -- 192.168.123.110:0/1538780649 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7eff2c038a90 con 0x7eff3819e000
2026-03-09T20:42:55.503 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.502+0000 7eff267fc640  1 -- 192.168.123.110:0/1538780649 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7eff2c041860 con 0x7eff3819e000
2026-03-09T20:42:55.503 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.502+0000 7eff3f771640  1 -- 192.168.123.110:0/1538780649 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7eff38075fa0 con 0x7eff3819e000
2026-03-09T20:42:55.503 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.502+0000 7eff267fc640  1 -- 192.168.123.110:0/1538780649 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7eff2c0419c0 con 0x7eff3819e000
2026-03-09T20:42:55.504 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.502+0000 7eff267fc640  1 --2- 192.168.123.110:0/1538780649 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7eff1403d2d0 0x7eff1403f790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:42:55.504 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.502+0000 7eff267fc640  1 -- 192.168.123.110:0/1538780649 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7eff2c076270 con 0x7eff3819e000
2026-03-09T20:42:55.504 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.503+0000 7eff3cce5640  1 --2- 192.168.123.110:0/1538780649 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7eff1403d2d0 0x7eff1403f790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:42:55.504 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.503+0000 7eff3cce5640  1 --2- 192.168.123.110:0/1538780649 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7eff1403d2d0 0x7eff1403f790 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7eff280099c0 tx=0x7eff28006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:42:55.506 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.505+0000 7eff267fc640  1 -- 192.168.123.110:0/1538780649 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7eff2c049cf0 con 0x7eff3819e000
2026-03-09T20:42:55.629 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.627+0000 7eff3f771640  1 -- 192.168.123.110:0/1538780649 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7eff3810ca10 con 0x7eff3819e000
2026-03-09T20:42:55.629 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.628+0000 7eff267fc640  1 -- 192.168.123.110:0/1538780649 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7eff3810ca10 con 0x7eff3819e000
2026-03-09T20:42:55.629 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T20:42:55.630 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-09T20:42:55.630 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1
2026-03-09T20:42:55.632 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.630+0000 7eff3f771640  1 -- 192.168.123.110:0/1538780649 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7eff1403d2d0 msgr2=0x7eff1403f790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:42:55.632 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.631+0000 7eff3f771640  1 --2- 192.168.123.110:0/1538780649 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7eff1403d2d0 0x7eff1403f790 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7eff280099c0 tx=0x7eff28006eb0 comp rx=0 tx=0).stop
2026-03-09T20:42:55.632 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.631+0000 7eff3f771640  1 -- 192.168.123.110:0/1538780649 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff3819e000 msgr2=0x7eff3819e420 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:42:55.632 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.631+0000 7eff3f771640  1 --2- 192.168.123.110:0/1538780649 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff3819e000 0x7eff3819e420 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7eff2c002bf0 tx=0x7eff2c004290 comp rx=0 tx=0).stop
2026-03-09T20:42:55.632 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.631+0000 7eff3f771640  1 -- 192.168.123.110:0/1538780649 shutdown_connections
2026-03-09T20:42:55.632 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.631+0000 7eff3f771640  1 --2- 192.168.123.110:0/1538780649 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7eff1403d2d0 0x7eff1403f790 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:42:55.632 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.631+0000 7eff3f771640  1 --2- 192.168.123.110:0/1538780649 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff3819e000 0x7eff3819e420 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:42:55.632 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.631+0000 7eff3f771640  1 -- 192.168.123.110:0/1538780649 >> 192.168.123.110:0/1538780649 conn(0x7eff380fde70 msgr2=0x7eff380ff120 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:42:55.633 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.632+0000 7eff3f771640  1 -- 192.168.123.110:0/1538780649 shutdown_connections
2026-03-09T20:42:55.633 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:55.632+0000 7eff3f771640  1 -- 192.168.123.110:0/1538780649 wait complete.
2026-03-09T20:42:55.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:55 vm07 ceph-mon[49120]: Deploying daemon alertmanager.vm07 on vm07
2026-03-09T20:42:56.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:56 vm07 ceph-mon[49120]: from='client.?
192.168.123.110:0/1538780649' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:42:56.674 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:42:56.674 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:42:56.811 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:42:56.845 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:42:57.077 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.075+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/3339846862 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3fac102470 msgr2=0x7f3fac102870 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:57.077 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.075+0000 7f3fb0ddb640 1 --2- 192.168.123.110:0/3339846862 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3fac102470 0x7f3fac102870 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f3f980099b0 tx=0x7f3f9802f2b0 comp rx=0 tx=0).stop 2026-03-09T20:42:57.077 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.075+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/3339846862 shutdown_connections 2026-03-09T20:42:57.077 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.075+0000 7f3fb0ddb640 1 --2- 192.168.123.110:0/3339846862 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3fac102470 0x7f3fac102870 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:57.077 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.075+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/3339846862 >> 192.168.123.110:0/3339846862 conn(0x7f3fac0fdca0 msgr2=0x7f3fac100090 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:57.077 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.076+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/3339846862 shutdown_connections 2026-03-09T20:42:57.077 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.076+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/3339846862 wait complete. 2026-03-09T20:42:57.077 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.076+0000 7f3fb0ddb640 1 Processor -- start 2026-03-09T20:42:57.078 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.077+0000 7f3fb0ddb640 1 -- start start 2026-03-09T20:42:57.078 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.077+0000 7f3fb0ddb640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3fac102470 0x7f3fac1997c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:57.078 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.077+0000 7f3fb0ddb640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3fac199d00 con 0x7f3fac102470 2026-03-09T20:42:57.078 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.077+0000 7f3fab7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3fac102470 0x7f3fac1997c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:57.078 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.077+0000 7f3fab7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3fac102470 0x7f3fac1997c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:45852/0 (socket says 192.168.123.110:45852) 2026-03-09T20:42:57.078 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.077+0000 7f3fab7fe640 
1 -- 192.168.123.110:0/2622419825 learned_addr learned my addr 192.168.123.110:0/2622419825 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:42:57.079 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.078+0000 7f3fab7fe640 1 -- 192.168.123.110:0/2622419825 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3f98009660 con 0x7f3fac102470 2026-03-09T20:42:57.079 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.078+0000 7f3fab7fe640 1 --2- 192.168.123.110:0/2622419825 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3fac102470 0x7f3fac1997c0 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f3f9802f860 tx=0x7f3f98004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:57.079 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.078+0000 7f3fa8ff9640 1 -- 192.168.123.110:0/2622419825 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3f980043b0 con 0x7f3fac102470 2026-03-09T20:42:57.079 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.078+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/2622419825 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3fac199f00 con 0x7f3fac102470 2026-03-09T20:42:57.079 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.078+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/2622419825 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3fac19a3a0 con 0x7f3fac102470 2026-03-09T20:42:57.080 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.079+0000 7f3fa8ff9640 1 -- 192.168.123.110:0/2622419825 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f3f98038b40 con 0x7f3fac102470 2026-03-09T20:42:57.080 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.079+0000 
7f3fa8ff9640 1 -- 192.168.123.110:0/2622419825 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3f98041810 con 0x7f3fac102470 2026-03-09T20:42:57.080 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.079+0000 7f3fa8ff9640 1 -- 192.168.123.110:0/2622419825 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f3f98041ac0 con 0x7f3fac102470 2026-03-09T20:42:57.080 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.079+0000 7f3fa8ff9640 1 --2- 192.168.123.110:0/2622419825 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f3f8403d2d0 0x7f3f8403f790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:57.081 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.079+0000 7f3fa8ff9640 1 -- 192.168.123.110:0/2622419825 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f3f980771a0 con 0x7f3fac102470 2026-03-09T20:42:57.081 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.080+0000 7f3faaffd640 1 --2- 192.168.123.110:0/2622419825 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f3f8403d2d0 0x7f3f8403f790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:57.081 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.080+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/2622419825 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3fac1028f0 con 0x7f3fac102470 2026-03-09T20:42:57.081 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.080+0000 7f3faaffd640 1 --2- 192.168.123.110:0/2622419825 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f3f8403d2d0 0x7f3f8403f790 
secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f3f900099c0 tx=0x7f3f90006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:57.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.083+0000 7f3fa8ff9640 1 -- 192.168.123.110:0/2622419825 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f3f98041df0 con 0x7f3fac102470 2026-03-09T20:42:57.210 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.209+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/2622419825 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f3fac108380 con 0x7f3fac102470 2026-03-09T20:42:57.210 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.209+0000 7f3fa8ff9640 1 -- 192.168.123.110:0/2622419825 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f3f980373d0 con 0x7f3fac102470 2026-03-09T20:42:57.211 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:42:57.211 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:42:57.211 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:42:57.212 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.211+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/2622419825 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f3f8403d2d0 msgr2=0x7f3f8403f790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:57.212 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.211+0000 7f3fb0ddb640 1 --2- 192.168.123.110:0/2622419825 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f3f8403d2d0 0x7f3f8403f790 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f3f900099c0 tx=0x7f3f90006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:57.212 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.211+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/2622419825 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3fac102470 msgr2=0x7f3fac1997c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:57.212 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.211+0000 7f3fb0ddb640 1 --2- 192.168.123.110:0/2622419825 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3fac102470 0x7f3fac1997c0 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f3f9802f860 tx=0x7f3f98004270 comp rx=0 tx=0).stop 2026-03-09T20:42:57.212 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.211+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/2622419825 shutdown_connections 2026-03-09T20:42:57.212 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.211+0000 7f3fb0ddb640 1 --2- 192.168.123.110:0/2622419825 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f3f8403d2d0 0x7f3f8403f790 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:57.212 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.211+0000 7f3fb0ddb640 1 --2- 192.168.123.110:0/2622419825 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3fac102470 0x7f3fac1997c0 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:57.212 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.211+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/2622419825 >> 192.168.123.110:0/2622419825 conn(0x7f3fac0fdca0 msgr2=0x7f3fac0fe8a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:57.212 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.211+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/2622419825 shutdown_connections 2026-03-09T20:42:57.212 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:57.211+0000 7f3fb0ddb640 1 -- 192.168.123.110:0/2622419825 wait complete. 2026-03-09T20:42:57.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:57 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/2622419825' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:42:58.278 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T20:42:58.278 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:42:58.418 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:42:58.463 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 
192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: Deploying daemon grafana.vm07 on vm07 2026-03-09T20:42:58.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:58 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:42:58.718 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.716+0000 7feae7d35640 1 -- 192.168.123.110:0/3797364362 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feae00ff250 msgr2=0x7feae00ff650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:58.718 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.716+0000 7feae7d35640 1 --2- 192.168.123.110:0/3797364362 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feae00ff250 0x7feae00ff650 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fead40099b0 tx=0x7fead402f2b0 comp rx=0 tx=0).stop 2026-03-09T20:42:58.718 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.717+0000 7feae7d35640 1 -- 192.168.123.110:0/3797364362 shutdown_connections 2026-03-09T20:42:58.719 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.717+0000 7feae7d35640 1 --2- 192.168.123.110:0/3797364362 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feae00ff250 0x7feae00ff650 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:58.719 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.717+0000 7feae7d35640 1 -- 192.168.123.110:0/3797364362 >> 192.168.123.110:0/3797364362 conn(0x7feae00faa00 msgr2=0x7feae00fce20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:58.719 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.718+0000 7feae7d35640 1 -- 192.168.123.110:0/3797364362 shutdown_connections 2026-03-09T20:42:58.719 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.718+0000 7feae7d35640 1 -- 192.168.123.110:0/3797364362 wait complete. 2026-03-09T20:42:58.720 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.719+0000 7feae7d35640 1 Processor -- start 2026-03-09T20:42:58.720 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.719+0000 7feae7d35640 1 -- start start 2026-03-09T20:42:58.720 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.719+0000 7feae7d35640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feae00ff250 0x7feae0195480 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:58.721 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.719+0000 7feae7d35640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7feae01959c0 con 0x7feae00ff250 2026-03-09T20:42:58.721 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.720+0000 7feae5aaa640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feae00ff250 0x7feae0195480 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:58.721 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.720+0000 7feae5aaa640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feae00ff250 0x7feae0195480 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:38160/0 (socket says 192.168.123.110:38160) 2026-03-09T20:42:58.721 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.720+0000 7feae5aaa640 1 -- 192.168.123.110:0/2851915430 learned_addr learned my addr 192.168.123.110:0/2851915430 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:42:58.722 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.720+0000 7feae5aaa640 1 -- 192.168.123.110:0/2851915430 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fead4009660 con 0x7feae00ff250 2026-03-09T20:42:58.722 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.721+0000 7feae5aaa640 1 --2- 192.168.123.110:0/2851915430 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feae00ff250 0x7feae0195480 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fead40042c0 tx=0x7fead40042f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:58.722 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.721+0000 7feaceffd640 1 -- 192.168.123.110:0/2851915430 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fead403d070 con 0x7feae00ff250 2026-03-09T20:42:58.722 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.721+0000 7feae7d35640 1 -- 192.168.123.110:0/2851915430 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7feae0195bc0 con 0x7feae00ff250 2026-03-09T20:42:58.724 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.721+0000 7feaceffd640 1 -- 192.168.123.110:0/2851915430 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 
0x7fead4038b40 con 0x7feae00ff250 2026-03-09T20:42:58.724 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.721+0000 7feaceffd640 1 -- 192.168.123.110:0/2851915430 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fead4041a40 con 0x7feae00ff250 2026-03-09T20:42:58.724 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.722+0000 7feae7d35640 1 -- 192.168.123.110:0/2851915430 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7feae0196060 con 0x7feae00ff250 2026-03-09T20:42:58.724 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.723+0000 7feaceffd640 1 -- 192.168.123.110:0/2851915430 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7fead4038cb0 con 0x7feae00ff250 2026-03-09T20:42:58.724 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.723+0000 7feaceffd640 1 --2- 192.168.123.110:0/2851915430 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feabc03d280 0x7feabc03f740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:42:58.725 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.723+0000 7feae7d35640 1 -- 192.168.123.110:0/2851915430 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7feaa8005350 con 0x7feae00ff250 2026-03-09T20:42:58.725 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.723+0000 7feaceffd640 1 -- 192.168.123.110:0/2851915430 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7fead4075af0 con 0x7feae00ff250 2026-03-09T20:42:58.725 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.723+0000 7feae52a9640 1 --2- 192.168.123.110:0/2851915430 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feabc03d280 
0x7feabc03f740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:42:58.725 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.723+0000 7feae52a9640 1 --2- 192.168.123.110:0/2851915430 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feabc03d280 0x7feabc03f740 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fead00099c0 tx=0x7fead0006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:42:58.728 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.727+0000 7feaceffd640 1 -- 192.168.123.110:0/2851915430 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fead4046070 con 0x7feae00ff250 2026-03-09T20:42:58.855 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.853+0000 7feae7d35640 1 -- 192.168.123.110:0/2851915430 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7feaa80051c0 con 0x7feae00ff250 2026-03-09T20:42:58.856 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.855+0000 7feaceffd640 1 -- 192.168.123.110:0/2851915430 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7fead40375b0 con 0x7feae00ff250 2026-03-09T20:42:58.857 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:42:58.857 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:42:58.857 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:42:58.860 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.858+0000 7feae7d35640 1 -- 192.168.123.110:0/2851915430 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feabc03d280 msgr2=0x7feabc03f740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:58.860 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.859+0000 7feae7d35640 1 --2- 192.168.123.110:0/2851915430 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feabc03d280 0x7feabc03f740 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fead00099c0 tx=0x7fead0006eb0 comp rx=0 tx=0).stop 2026-03-09T20:42:58.860 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.859+0000 7feae7d35640 1 -- 192.168.123.110:0/2851915430 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feae00ff250 msgr2=0x7feae0195480 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:42:58.860 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.859+0000 7feae7d35640 1 --2- 192.168.123.110:0/2851915430 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feae00ff250 0x7feae0195480 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fead40042c0 tx=0x7fead40042f0 comp rx=0 tx=0).stop 2026-03-09T20:42:58.860 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.859+0000 7feae7d35640 1 -- 192.168.123.110:0/2851915430 shutdown_connections 2026-03-09T20:42:58.860 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.859+0000 7feae7d35640 1 --2- 192.168.123.110:0/2851915430 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7feabc03d280 0x7feabc03f740 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:58.861 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.859+0000 7feae7d35640 1 --2- 192.168.123.110:0/2851915430 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feae00ff250 0x7feae0195480 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:42:58.861 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.860+0000 7feae7d35640 1 -- 192.168.123.110:0/2851915430 >> 192.168.123.110:0/2851915430 conn(0x7feae00faa00 msgr2=0x7feae00fb2c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:42:58.861 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.860+0000 7feae7d35640 1 -- 192.168.123.110:0/2851915430 shutdown_connections 2026-03-09T20:42:58.861 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:42:58.860+0000 7feae7d35640 1 -- 192.168.123.110:0/2851915430 wait complete. 2026-03-09T20:42:59.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:42:59 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/2851915430' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:42:59.938 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T20:42:59.938 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:00.086 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:00.125 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:00.371 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.369+0000 7fb4c5611640 1 -- 192.168.123.110:0/4289003534 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb4c00751a0 msgr2=0x7fb4c0073600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:00.371 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.369+0000 7fb4c5611640 1 --2- 192.168.123.110:0/4289003534 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb4c00751a0 0x7fb4c0073600 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7fb4ac0099b0 tx=0x7fb4ac02f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:00.371 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.370+0000 7fb4c5611640 1 -- 192.168.123.110:0/4289003534 shutdown_connections 2026-03-09T20:43:00.371 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.370+0000 7fb4c5611640 1 --2- 192.168.123.110:0/4289003534 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb4c00751a0 0x7fb4c0073600 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:00.371 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.370+0000 7fb4c5611640 1 -- 192.168.123.110:0/4289003534 >> 192.168.123.110:0/4289003534 conn(0x7fb4c00fbb20 msgr2=0x7fb4c00fdf60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:00.371 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.370+0000 7fb4c5611640 1 -- 192.168.123.110:0/4289003534 
shutdown_connections 2026-03-09T20:43:00.371 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.370+0000 7fb4c5611640 1 -- 192.168.123.110:0/4289003534 wait complete. 2026-03-09T20:43:00.372 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.370+0000 7fb4c5611640 1 Processor -- start 2026-03-09T20:43:00.372 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.371+0000 7fb4c5611640 1 -- start start 2026-03-09T20:43:00.376 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.371+0000 7fb4c5611640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb4c00751a0 0x7fb4c019de30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:00.377 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.371+0000 7fb4c5611640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb4c019e370 con 0x7fb4c00751a0 2026-03-09T20:43:00.377 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.372+0000 7fb4beffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb4c00751a0 0x7fb4c019de30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:00.377 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.372+0000 7fb4beffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb4c00751a0 0x7fb4c019de30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:38178/0 (socket says 192.168.123.110:38178) 2026-03-09T20:43:00.377 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.372+0000 7fb4beffd640 1 -- 192.168.123.110:0/1915657646 learned_addr learned my addr 192.168.123.110:0/1915657646 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:00.377 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.373+0000 7fb4beffd640 1 -- 192.168.123.110:0/1915657646 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb4ac009660 con 0x7fb4c00751a0 2026-03-09T20:43:00.377 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.373+0000 7fb4beffd640 1 --2- 192.168.123.110:0/1915657646 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb4c00751a0 0x7fb4c019de30 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7fb4ac009ae0 tx=0x7fb4ac004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:00.377 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.374+0000 7fb4a3fff640 1 -- 192.168.123.110:0/1915657646 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb4ac03d070 con 0x7fb4c00751a0 2026-03-09T20:43:00.377 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.374+0000 7fb4c5611640 1 -- 192.168.123.110:0/1915657646 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb4c019e570 con 0x7fb4c00751a0 2026-03-09T20:43:00.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.374+0000 7fb4c5611640 1 -- 192.168.123.110:0/1915657646 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb4c019ea10 con 0x7fb4c00751a0 2026-03-09T20:43:00.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.374+0000 7fb4a3fff640 1 -- 192.168.123.110:0/1915657646 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fb4ac038b40 con 0x7fb4c00751a0 2026-03-09T20:43:00.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.374+0000 7fb4a3fff640 1 -- 192.168.123.110:0/1915657646 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb4ac041950 con 
0x7fb4c00751a0 2026-03-09T20:43:00.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.375+0000 7fb4a3fff640 1 -- 192.168.123.110:0/1915657646 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7fb4ac04b430 con 0x7fb4c00751a0 2026-03-09T20:43:00.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.375+0000 7fb4a3fff640 1 --2- 192.168.123.110:0/1915657646 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fb49004e730 0x7fb490050bf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:00.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.375+0000 7fb4a3fff640 1 -- 192.168.123.110:0/1915657646 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7fb4ac076af0 con 0x7fb4c00751a0 2026-03-09T20:43:00.378 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.376+0000 7fb4be7fc640 1 --2- 192.168.123.110:0/1915657646 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fb49004e730 0x7fb490050bf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:00.379 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.377+0000 7fb4c5611640 1 -- 192.168.123.110:0/1915657646 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb4c010d320 con 0x7fb4c00751a0 2026-03-09T20:43:00.379 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.378+0000 7fb4be7fc640 1 --2- 192.168.123.110:0/1915657646 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fb49004e730 0x7fb490050bf0 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fb4b40099c0 tx=0x7fb4b4006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 
in_seq=0 out_seq=0 2026-03-09T20:43:00.381 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.380+0000 7fb4a3fff640 1 -- 192.168.123.110:0/1915657646 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fb4ac046070 con 0x7fb4c00751a0 2026-03-09T20:43:00.516 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.515+0000 7fb4c5611640 1 -- 192.168.123.110:0/1915657646 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fb4c0073680 con 0x7fb4c00751a0 2026-03-09T20:43:00.519 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.517+0000 7fb4a3fff640 1 -- 192.168.123.110:0/1915657646 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7fb4ac03f6d0 con 0x7fb4c00751a0 2026-03-09T20:43:00.519 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:00.519 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:00.519 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:00.522 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.520+0000 7fb4c5611640 1 -- 192.168.123.110:0/1915657646 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fb49004e730 msgr2=0x7fb490050bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:00.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.520+0000 7fb4c5611640 1 --2- 192.168.123.110:0/1915657646 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fb49004e730 0x7fb490050bf0 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fb4b40099c0 tx=0x7fb4b4006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:00.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.520+0000 7fb4c5611640 1 -- 192.168.123.110:0/1915657646 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb4c00751a0 msgr2=0x7fb4c019de30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:00.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.520+0000 7fb4c5611640 1 --2- 192.168.123.110:0/1915657646 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb4c00751a0 0x7fb4c019de30 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7fb4ac009ae0 tx=0x7fb4ac004290 comp rx=0 tx=0).stop 2026-03-09T20:43:00.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.521+0000 7fb4c5611640 1 -- 192.168.123.110:0/1915657646 shutdown_connections 2026-03-09T20:43:00.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.521+0000 7fb4c5611640 1 --2- 192.168.123.110:0/1915657646 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fb49004e730 0x7fb490050bf0 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:00.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.521+0000 7fb4c5611640 1 --2- 192.168.123.110:0/1915657646 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7fb4c00751a0 0x7fb4c019de30 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:00.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.521+0000 7fb4c5611640 1 -- 192.168.123.110:0/1915657646 >> 192.168.123.110:0/1915657646 conn(0x7fb4c00fbb20 msgr2=0x7fb4c01948b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:00.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.521+0000 7fb4c5611640 1 -- 192.168.123.110:0/1915657646 shutdown_connections 2026-03-09T20:43:00.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:00.521+0000 7fb4c5611640 1 -- 192.168.123.110:0/1915657646 wait complete. 2026-03-09T20:43:00.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:00 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/1915657646' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:01.569 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:01.569 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:01.719 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:01.757 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:02.005 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.002+0000 7fe9fa578640 1 -- 192.168.123.110:0/4122551351 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe9f41025b0 msgr2=0x7fe9f41029b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:02.005 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.002+0000 7fe9fa578640 1 --2- 192.168.123.110:0/4122551351 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe9f41025b0 
0x7fe9f41029b0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7fe9d80099b0 tx=0x7fe9d802f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:02.005 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.003+0000 7fe9fa578640 1 -- 192.168.123.110:0/4122551351 shutdown_connections 2026-03-09T20:43:02.005 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.003+0000 7fe9fa578640 1 --2- 192.168.123.110:0/4122551351 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe9f41025b0 0x7fe9f41029b0 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:02.005 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.003+0000 7fe9fa578640 1 -- 192.168.123.110:0/4122551351 >> 192.168.123.110:0/4122551351 conn(0x7fe9f40fde70 msgr2=0x7fe9f4100260 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:02.005 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.004+0000 7fe9fa578640 1 -- 192.168.123.110:0/4122551351 shutdown_connections 2026-03-09T20:43:02.005 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.004+0000 7fe9fa578640 1 -- 192.168.123.110:0/4122551351 wait complete. 
2026-03-09T20:43:02.006 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.004+0000 7fe9fa578640 1 Processor -- start 2026-03-09T20:43:02.006 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.005+0000 7fe9fa578640 1 -- start start 2026-03-09T20:43:02.006 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.005+0000 7fe9fa578640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe9f41025b0 0x7fe9f4199930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:02.006 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.005+0000 7fe9fa578640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe9f4199e70 con 0x7fe9f41025b0 2026-03-09T20:43:02.006 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.005+0000 7fe9f3fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe9f41025b0 0x7fe9f4199930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:02.006 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.005+0000 7fe9f3fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe9f41025b0 0x7fe9f4199930 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:38192/0 (socket says 192.168.123.110:38192) 2026-03-09T20:43:02.006 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.005+0000 7fe9f3fff640 1 -- 192.168.123.110:0/1703130496 learned_addr learned my addr 192.168.123.110:0/1703130496 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:02.007 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.005+0000 7fe9f3fff640 1 -- 192.168.123.110:0/1703130496 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe9d8009660 con 0x7fe9f41025b0 2026-03-09T20:43:02.007 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.006+0000 7fe9f3fff640 1 --2- 192.168.123.110:0/1703130496 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe9f41025b0 0x7fe9f4199930 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7fe9d80042c0 tx=0x7fe9d80042f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:02.007 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.006+0000 7fe9f17fa640 1 -- 192.168.123.110:0/1703130496 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe9d803d070 con 0x7fe9f41025b0 2026-03-09T20:43:02.007 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.006+0000 7fe9fa578640 1 -- 192.168.123.110:0/1703130496 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe9f419a070 con 0x7fe9f41025b0 2026-03-09T20:43:02.007 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.006+0000 7fe9fa578640 1 -- 192.168.123.110:0/1703130496 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe9f419a510 con 0x7fe9f41025b0 2026-03-09T20:43:02.008 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.006+0000 7fe9f17fa640 1 -- 192.168.123.110:0/1703130496 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fe9d8038b40 con 0x7fe9f41025b0 2026-03-09T20:43:02.008 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.006+0000 7fe9f17fa640 1 -- 192.168.123.110:0/1703130496 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe9d8041a40 con 0x7fe9f41025b0 2026-03-09T20:43:02.009 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.008+0000 7fe9f17fa640 1 -- 192.168.123.110:0/1703130496 <== mon.0 
v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7fe9d804b430 con 0x7fe9f41025b0 2026-03-09T20:43:02.009 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.008+0000 7fe9fa578640 1 -- 192.168.123.110:0/1703130496 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe9f4102a30 con 0x7fe9f41025b0 2026-03-09T20:43:02.009 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.008+0000 7fe9f17fa640 1 --2- 192.168.123.110:0/1703130496 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fe9c803d280 0x7fe9c803f740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:02.009 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.008+0000 7fe9f17fa640 1 -- 192.168.123.110:0/1703130496 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7fe9d8075a00 con 0x7fe9f41025b0 2026-03-09T20:43:02.009 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.008+0000 7fe9f37fe640 1 --2- 192.168.123.110:0/1703130496 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fe9c803d280 0x7fe9c803f740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:02.010 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.008+0000 7fe9f37fe640 1 --2- 192.168.123.110:0/1703130496 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fe9c803d280 0x7fe9c803f740 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fe9e00099c0 tx=0x7fe9e0006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:02.012 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.011+0000 7fe9f17fa640 1 -- 192.168.123.110:0/1703130496 <== mon.0 
v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fe9d803faf0 con 0x7fe9f41025b0 2026-03-09T20:43:02.139 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.136+0000 7fe9fa578640 1 -- 192.168.123.110:0/1703130496 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fe9f41084c0 con 0x7fe9f41025b0 2026-03-09T20:43:02.139 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.138+0000 7fe9f17fa640 1 -- 192.168.123.110:0/1703130496 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7fe9f41084c0 con 0x7fe9f41025b0 2026-03-09T20:43:02.140 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:02.140 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:02.140 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:02.142 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.141+0000 7fe9fa578640 1 -- 192.168.123.110:0/1703130496 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] 
conn(0x7fe9c803d280 msgr2=0x7fe9c803f740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:02.142 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.141+0000 7fe9fa578640 1 --2- 192.168.123.110:0/1703130496 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fe9c803d280 0x7fe9c803f740 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fe9e00099c0 tx=0x7fe9e0006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:02.142 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.141+0000 7fe9fa578640 1 -- 192.168.123.110:0/1703130496 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe9f41025b0 msgr2=0x7fe9f4199930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:02.142 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.141+0000 7fe9fa578640 1 --2- 192.168.123.110:0/1703130496 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe9f41025b0 0x7fe9f4199930 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7fe9d80042c0 tx=0x7fe9d80042f0 comp rx=0 tx=0).stop 2026-03-09T20:43:02.142 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.141+0000 7fe9fa578640 1 -- 192.168.123.110:0/1703130496 shutdown_connections 2026-03-09T20:43:02.142 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.141+0000 7fe9fa578640 1 --2- 192.168.123.110:0/1703130496 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fe9c803d280 0x7fe9c803f740 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:02.142 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.141+0000 7fe9fa578640 1 --2- 192.168.123.110:0/1703130496 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe9f41025b0 0x7fe9f4199930 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:02.142 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.141+0000 7fe9fa578640 1 -- 192.168.123.110:0/1703130496 >> 192.168.123.110:0/1703130496 conn(0x7fe9f40fde70 msgr2=0x7fe9f40ff320 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:02.142 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.141+0000 7fe9fa578640 1 -- 192.168.123.110:0/1703130496 shutdown_connections 2026-03-09T20:43:02.142 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:02.141+0000 7fe9fa578640 1 -- 192.168.123.110:0/1703130496 wait complete. 2026-03-09T20:43:02.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:02 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/1703130496' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:03.188 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:03.188 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:03.330 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:03.365 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:03.601 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.598+0000 7f927aee4640 1 -- 192.168.123.110:0/2777055816 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9274100390 msgr2=0x7f9274100790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:03.601 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.598+0000 7f927aee4640 1 --2- 192.168.123.110:0/2777055816 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9274100390 0x7f9274100790 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f92600099b0 tx=0x7f926002f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:03.601 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.598+0000 7f927aee4640 1 -- 192.168.123.110:0/2777055816 shutdown_connections 2026-03-09T20:43:03.601 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.598+0000 7f927aee4640 1 --2- 192.168.123.110:0/2777055816 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9274100390 0x7f9274100790 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:03.601 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.598+0000 7f927aee4640 1 -- 192.168.123.110:0/2777055816 >> 192.168.123.110:0/2777055816 conn(0x7f92740fbb60 msgr2=0x7f92740fdf80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:03.602 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.599+0000 7f927aee4640 1 -- 192.168.123.110:0/2777055816 shutdown_connections 2026-03-09T20:43:03.602 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.599+0000 7f927aee4640 1 -- 192.168.123.110:0/2777055816 wait complete. 
2026-03-09T20:43:03.602 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.600+0000 7f927aee4640 1 Processor -- start 2026-03-09T20:43:03.602 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.600+0000 7f927aee4640 1 -- start start 2026-03-09T20:43:03.603 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.601+0000 7f927aee4640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9274105e00 0x7f9274106220 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:03.603 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.601+0000 7f927aee4640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9260002dc0 con 0x7f9274105e00 2026-03-09T20:43:03.603 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.601+0000 7f9278c59640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9274105e00 0x7f9274106220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:03.606 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.601+0000 7f9278c59640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9274105e00 0x7f9274106220 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:38212/0 (socket says 192.168.123.110:38212) 2026-03-09T20:43:03.606 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.601+0000 7f9278c59640 1 -- 192.168.123.110:0/1074486287 learned_addr learned my addr 192.168.123.110:0/1074486287 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:03.606 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.601+0000 7f9278c59640 1 -- 192.168.123.110:0/1074486287 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9260009660 con 0x7f9274105e00 2026-03-09T20:43:03.606 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.602+0000 7f9278c59640 1 --2- 192.168.123.110:0/1074486287 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9274105e00 0x7f9274106220 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f926002f860 tx=0x7f9260004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:03.606 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.603+0000 7f9271ffb640 1 -- 192.168.123.110:0/1074486287 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f926003d070 con 0x7f9274105e00 2026-03-09T20:43:03.606 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.603+0000 7f927aee4640 1 -- 192.168.123.110:0/1074486287 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9274106760 con 0x7f9274105e00 2026-03-09T20:43:03.606 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.603+0000 7f927aee4640 1 -- 192.168.123.110:0/1074486287 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9274109300 con 0x7f9274105e00 2026-03-09T20:43:03.606 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.603+0000 7f9271ffb640 1 -- 192.168.123.110:0/1074486287 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9260002a50 con 0x7f9274105e00 2026-03-09T20:43:03.606 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.603+0000 7f9271ffb640 1 -- 192.168.123.110:0/1074486287 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9260041980 con 0x7f9274105e00 2026-03-09T20:43:03.606 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.605+0000 7f927aee4640 1 -- 192.168.123.110:0/1074486287 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9240005350 con 0x7f9274105e00 2026-03-09T20:43:03.607 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.606+0000 7f9271ffb640 1 -- 192.168.123.110:0/1074486287 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f9260041ae0 con 0x7f9274105e00 2026-03-09T20:43:03.608 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.606+0000 7f9271ffb640 1 --2- 192.168.123.110:0/1074486287 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f925803d350 0x7f925803f810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:03.608 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.606+0000 7f9271ffb640 1 -- 192.168.123.110:0/1074486287 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f9260075c10 con 0x7f9274105e00 2026-03-09T20:43:03.608 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.607+0000 7f9273fff640 1 --2- 192.168.123.110:0/1074486287 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f925803d350 0x7f925803f810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:03.608 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.607+0000 7f9273fff640 1 --2- 192.168.123.110:0/1074486287 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f925803d350 0x7f925803f810 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f926800ad30 tx=0x7f92680093f0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:03.610 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.609+0000 7f9271ffb640 1 -- 192.168.123.110:0/1074486287 <== 
mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f9260038690 con 0x7f9274105e00 2026-03-09T20:43:03.732 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.731+0000 7f927aee4640 1 -- 192.168.123.110:0/1074486287 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f9240005600 con 0x7f9274105e00 2026-03-09T20:43:03.733 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.731+0000 7f9271ffb640 1 -- 192.168.123.110:0/1074486287 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f9260046030 con 0x7f9274105e00 2026-03-09T20:43:03.733 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:03.733 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:03.733 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:03.735 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.734+0000 7f927aee4640 1 -- 192.168.123.110:0/1074486287 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] 
conn(0x7f925803d350 msgr2=0x7f925803f810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:03.735 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.734+0000 7f927aee4640 1 --2- 192.168.123.110:0/1074486287 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f925803d350 0x7f925803f810 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f926800ad30 tx=0x7f92680093f0 comp rx=0 tx=0).stop 2026-03-09T20:43:03.735 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.734+0000 7f927aee4640 1 -- 192.168.123.110:0/1074486287 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9274105e00 msgr2=0x7f9274106220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:03.735 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.734+0000 7f927aee4640 1 --2- 192.168.123.110:0/1074486287 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9274105e00 0x7f9274106220 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f926002f860 tx=0x7f9260004290 comp rx=0 tx=0).stop 2026-03-09T20:43:03.735 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.734+0000 7f927aee4640 1 -- 192.168.123.110:0/1074486287 shutdown_connections 2026-03-09T20:43:03.735 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.734+0000 7f927aee4640 1 --2- 192.168.123.110:0/1074486287 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f925803d350 0x7f925803f810 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:03.735 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.734+0000 7f927aee4640 1 --2- 192.168.123.110:0/1074486287 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9274105e00 0x7f9274106220 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:03.736 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.734+0000 7f927aee4640 1 -- 192.168.123.110:0/1074486287 >> 192.168.123.110:0/1074486287 conn(0x7f92740fbb60 msgr2=0x7f92740fc760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:03.736 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.735+0000 7f927aee4640 1 -- 192.168.123.110:0/1074486287 shutdown_connections 2026-03-09T20:43:03.736 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:03.735+0000 7f927aee4640 1 -- 192.168.123.110:0/1074486287 wait complete. 2026-03-09T20:43:04.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:04 vm07 ceph-mon[49120]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:04.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:04 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/1074486287' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:05.072 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T20:43:05.073 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:05.210 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:05.243 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:05.513 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.511+0000 7f7068dce640 1 -- 192.168.123.110:0/3222924165 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7064102620 msgr2=0x7f7064102a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:05.513 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.511+0000 7f7068dce640 1 --2- 192.168.123.110:0/3222924165 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7064102620 0x7f7064102a20 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f704c0099b0 tx=0x7f704c02f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:05.513 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.512+0000 7f7068dce640 1 -- 192.168.123.110:0/3222924165 shutdown_connections 2026-03-09T20:43:05.514 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.512+0000 7f7068dce640 1 --2- 192.168.123.110:0/3222924165 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7064102620 0x7f7064102a20 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:05.514 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.512+0000 7f7068dce640 1 -- 192.168.123.110:0/3222924165 >> 192.168.123.110:0/3222924165 conn(0x7f70640fde70 msgr2=0x7f7064100260 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:05.514 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.512+0000 7f7068dce640 1 -- 192.168.123.110:0/3222924165 
shutdown_connections 2026-03-09T20:43:05.514 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.512+0000 7f7068dce640 1 -- 192.168.123.110:0/3222924165 wait complete. 2026-03-09T20:43:05.514 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.513+0000 7f7068dce640 1 Processor -- start 2026-03-09T20:43:05.514 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.513+0000 7f7068dce640 1 -- start start 2026-03-09T20:43:05.514 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.513+0000 7f7068dce640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7064102620 0x7f7064199990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:05.515 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.513+0000 7f7068dce640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7064199ed0 con 0x7f7064102620 2026-03-09T20:43:05.515 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.514+0000 7f7062575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7064102620 0x7f7064199990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:05.515 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.514+0000 7f7062575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7064102620 0x7f7064199990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:38224/0 (socket says 192.168.123.110:38224) 2026-03-09T20:43:05.515 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.514+0000 7f7062575640 1 -- 192.168.123.110:0/1966091376 learned_addr learned my addr 192.168.123.110:0/1966091376 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:05.515 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.514+0000 7f7062575640 1 -- 192.168.123.110:0/1966091376 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f704c009660 con 0x7f7064102620 2026-03-09T20:43:05.515 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.514+0000 7f7062575640 1 --2- 192.168.123.110:0/1966091376 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7064102620 0x7f7064199990 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7f704c0042c0 tx=0x7f704c0042f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:05.516 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.514+0000 7f704b7fe640 1 -- 192.168.123.110:0/1966091376 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f704c038680 con 0x7f7064102620 2026-03-09T20:43:05.516 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.514+0000 7f704b7fe640 1 -- 192.168.123.110:0/1966091376 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f704c038ca0 con 0x7f7064102620 2026-03-09T20:43:05.516 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.515+0000 7f704b7fe640 1 -- 192.168.123.110:0/1966091376 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f704c0419c0 con 0x7f7064102620 2026-03-09T20:43:05.517 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.515+0000 7f7068dce640 1 -- 192.168.123.110:0/1966091376 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f706419a0d0 con 0x7f7064102620 2026-03-09T20:43:05.517 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.515+0000 7f7068dce640 1 -- 192.168.123.110:0/1966091376 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f706419a570 con 
0x7f7064102620 2026-03-09T20:43:05.517 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.516+0000 7f704b7fe640 1 -- 192.168.123.110:0/1966091376 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f704c0387e0 con 0x7f7064102620 2026-03-09T20:43:05.517 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.516+0000 7f7068dce640 1 -- 192.168.123.110:0/1966091376 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7064102aa0 con 0x7f7064102620 2026-03-09T20:43:05.517 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.516+0000 7f704b7fe640 1 --2- 192.168.123.110:0/1966091376 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f703c03cf10 0x7f703c03f3d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:05.517 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.516+0000 7f704b7fe640 1 -- 192.168.123.110:0/1966091376 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f704c075d70 con 0x7f7064102620 2026-03-09T20:43:05.517 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.516+0000 7f7061d74640 1 --2- 192.168.123.110:0/1966091376 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f703c03cf10 0x7f703c03f3d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:05.518 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.517+0000 7f7061d74640 1 --2- 192.168.123.110:0/1966091376 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f703c03cf10 0x7f703c03f3d0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f70580099c0 tx=0x7f7058006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 
in_seq=0 out_seq=0 2026-03-09T20:43:05.520 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.519+0000 7f704b7fe640 1 -- 192.168.123.110:0/1966091376 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f704c04acb0 con 0x7f7064102620 2026-03-09T20:43:05.641 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.640+0000 7f7068dce640 1 -- 192.168.123.110:0/1966091376 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f7064108530 con 0x7f7064102620 2026-03-09T20:43:05.642 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.640+0000 7f704b7fe640 1 -- 192.168.123.110:0/1966091376 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f704c048310 con 0x7f7064102620 2026-03-09T20:43:05.642 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:05.642 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:05.642 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:05.644 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.643+0000 7f7068dce640 1 -- 192.168.123.110:0/1966091376 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f703c03cf10 msgr2=0x7f703c03f3d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:05.644 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.643+0000 7f7068dce640 1 --2- 192.168.123.110:0/1966091376 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f703c03cf10 0x7f703c03f3d0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f70580099c0 tx=0x7f7058006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:05.644 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.643+0000 7f7068dce640 1 -- 192.168.123.110:0/1966091376 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7064102620 msgr2=0x7f7064199990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:05.644 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.643+0000 7f7068dce640 1 --2- 192.168.123.110:0/1966091376 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7064102620 0x7f7064199990 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7f704c0042c0 tx=0x7f704c0042f0 comp rx=0 tx=0).stop 2026-03-09T20:43:05.644 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.643+0000 7f7068dce640 1 -- 192.168.123.110:0/1966091376 shutdown_connections 2026-03-09T20:43:05.645 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.643+0000 7f7068dce640 1 --2- 192.168.123.110:0/1966091376 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f703c03cf10 0x7f703c03f3d0 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:05.645 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.643+0000 7f7068dce640 1 --2- 192.168.123.110:0/1966091376 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7f7064102620 0x7f7064199990 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:05.645 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.643+0000 7f7068dce640 1 -- 192.168.123.110:0/1966091376 >> 192.168.123.110:0/1966091376 conn(0x7f70640fde70 msgr2=0x7f70640fea70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:05.645 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.643+0000 7f7068dce640 1 -- 192.168.123.110:0/1966091376 shutdown_connections 2026-03-09T20:43:05.645 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:05.643+0000 7f7068dce640 1 -- 192.168.123.110:0/1966091376 wait complete. 2026-03-09T20:43:06.707 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:06.707 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:06.839 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:06.881 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:06.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:06 vm07 ceph-mon[49120]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:06.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:06 vm07 ceph-mon[49120]: from='client.? 
192.168.123.110:0/1966091376' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.118+0000 7f53ad83b640 1 -- 192.168.123.110:0/1459296032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f53a8102620 msgr2=0x7f53a8102a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.118+0000 7f53ad83b640 1 --2- 192.168.123.110:0/1459296032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f53a8102620 0x7f53a8102a20 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f53900099b0 tx=0x7f539002f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.119+0000 7f53ad83b640 1 -- 192.168.123.110:0/1459296032 shutdown_connections 2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.119+0000 7f53ad83b640 1 --2- 192.168.123.110:0/1459296032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f53a8102620 0x7f53a8102a20 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.119+0000 7f53ad83b640 1 -- 192.168.123.110:0/1459296032 >> 192.168.123.110:0/1459296032 conn(0x7f53a80fde70 msgr2=0x7f53a8100260 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.119+0000 7f53ad83b640 1 -- 192.168.123.110:0/1459296032 shutdown_connections 2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.119+0000 7f53ad83b640 1 -- 192.168.123.110:0/1459296032 wait complete. 
2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.120+0000 7f53ad83b640 1 Processor -- start 2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.120+0000 7f53ad83b640 1 -- start start 2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.120+0000 7f53ad83b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f53a8102620 0x7f53a8199990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:07.121 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.120+0000 7f53ad83b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f53a8199ed0 con 0x7f53a8102620 2026-03-09T20:43:07.122 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.120+0000 7f53a6ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f53a8102620 0x7f53a8199990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:07.122 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.120+0000 7f53a6ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f53a8102620 0x7f53a8199990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:38242/0 (socket says 192.168.123.110:38242) 2026-03-09T20:43:07.122 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.120+0000 7f53a6ffd640 1 -- 192.168.123.110:0/4159415514 learned_addr learned my addr 192.168.123.110:0/4159415514 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:07.122 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.121+0000 7f53a6ffd640 1 -- 192.168.123.110:0/4159415514 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5390009660 con 0x7f53a8102620 2026-03-09T20:43:07.122 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.121+0000 7f53a6ffd640 1 --2- 192.168.123.110:0/4159415514 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f53a8102620 0x7f53a8199990 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f539002f860 tx=0x7f5390004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:07.125 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.122+0000 7f53ac839640 1 -- 192.168.123.110:0/4159415514 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f53900043b0 con 0x7f53a8102620 2026-03-09T20:43:07.125 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.122+0000 7f53ac839640 1 -- 192.168.123.110:0/4159415514 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5390038b40 con 0x7f53a8102620 2026-03-09T20:43:07.125 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.122+0000 7f53ac839640 1 -- 192.168.123.110:0/4159415514 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f53900418f0 con 0x7f53a8102620 2026-03-09T20:43:07.125 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.122+0000 7f53ad83b640 1 -- 192.168.123.110:0/4159415514 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f53a819a0d0 con 0x7f53a8102620 2026-03-09T20:43:07.125 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.122+0000 7f53ad83b640 1 -- 192.168.123.110:0/4159415514 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f53a819a4b0 con 0x7f53a8102620 2026-03-09T20:43:07.125 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.124+0000 7f53ac839640 1 -- 192.168.123.110:0/4159415514 <== mon.0 
v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f5390038cb0 con 0x7f53a8102620 2026-03-09T20:43:07.125 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.124+0000 7f53ac839640 1 --2- 192.168.123.110:0/4159415514 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f538003d2d0 0x7f538003f790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:07.126 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.124+0000 7f53ac839640 1 -- 192.168.123.110:0/4159415514 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f5390076250 con 0x7f53a8102620 2026-03-09T20:43:07.126 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.124+0000 7f53ad83b640 1 -- 192.168.123.110:0/4159415514 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5374005350 con 0x7f53a8102620 2026-03-09T20:43:07.126 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.124+0000 7f53a67fc640 1 --2- 192.168.123.110:0/4159415514 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f538003d2d0 0x7f538003f790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:07.126 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.125+0000 7f53a67fc640 1 --2- 192.168.123.110:0/4159415514 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f538003d2d0 0x7f538003f790 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f539c0099c0 tx=0x7f539c006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:07.128 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.127+0000 7f53ac839640 1 -- 192.168.123.110:0/4159415514 <== mon.0 
v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f5390037a20 con 0x7f53a8102620 2026-03-09T20:43:07.248 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.247+0000 7f53ad83b640 1 -- 192.168.123.110:0/4159415514 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f53740051c0 con 0x7f53a8102620 2026-03-09T20:43:07.249 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.248+0000 7f53ac839640 1 -- 192.168.123.110:0/4159415514 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f53900373d0 con 0x7f53a8102620 2026-03-09T20:43:07.249 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:07.249 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:07.249 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:07.251 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.250+0000 7f53ad83b640 1 -- 192.168.123.110:0/4159415514 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] 
conn(0x7f538003d2d0 msgr2=0x7f538003f790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:07.251 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.250+0000 7f53ad83b640 1 --2- 192.168.123.110:0/4159415514 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f538003d2d0 0x7f538003f790 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f539c0099c0 tx=0x7f539c006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:07.251 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.250+0000 7f53ad83b640 1 -- 192.168.123.110:0/4159415514 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f53a8102620 msgr2=0x7f53a8199990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:07.251 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.250+0000 7f53ad83b640 1 --2- 192.168.123.110:0/4159415514 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f53a8102620 0x7f53a8199990 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f539002f860 tx=0x7f5390004270 comp rx=0 tx=0).stop 2026-03-09T20:43:07.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.250+0000 7f53ad83b640 1 -- 192.168.123.110:0/4159415514 shutdown_connections 2026-03-09T20:43:07.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.250+0000 7f53ad83b640 1 --2- 192.168.123.110:0/4159415514 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f538003d2d0 0x7f538003f790 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:07.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.250+0000 7f53ad83b640 1 --2- 192.168.123.110:0/4159415514 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f53a8102620 0x7f53a8199990 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:07.252 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.251+0000 7f53ad83b640 1 -- 192.168.123.110:0/4159415514 >> 192.168.123.110:0/4159415514 conn(0x7f53a80fde70 msgr2=0x7f53a80fea70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:07.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.251+0000 7f53ad83b640 1 -- 192.168.123.110:0/4159415514 shutdown_connections 2026-03-09T20:43:07.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:07.251+0000 7f53ad83b640 1 -- 192.168.123.110:0/4159415514 wait complete. 2026-03-09T20:43:08.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:07 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/4159415514' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:08.312 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:08.312 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:08.464 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:08.506 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:08.993 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.990+0000 7fef4d27b640 1 -- 192.168.123.110:0/355641450 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fef48102620 msgr2=0x7fef48102a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:08.993 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.990+0000 7fef4d27b640 1 --2- 192.168.123.110:0/355641450 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fef48102620 0x7fef48102a20 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7fef300099b0 tx=0x7fef3002f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:08.993 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.991+0000 7fef4d27b640 1 -- 192.168.123.110:0/355641450 shutdown_connections 2026-03-09T20:43:08.993 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.991+0000 7fef4d27b640 1 --2- 192.168.123.110:0/355641450 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fef48102620 0x7fef48102a20 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:08.993 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.991+0000 7fef4d27b640 1 -- 192.168.123.110:0/355641450 >> 192.168.123.110:0/355641450 conn(0x7fef480fde70 msgr2=0x7fef48100260 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:08.993 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.992+0000 7fef4d27b640 1 -- 192.168.123.110:0/355641450 shutdown_connections 2026-03-09T20:43:08.993 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.992+0000 7fef4d27b640 1 -- 192.168.123.110:0/355641450 wait complete. 
2026-03-09T20:43:08.994 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.992+0000 7fef4d27b640 1 Processor -- start 2026-03-09T20:43:08.994 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.993+0000 7fef4d27b640 1 -- start start 2026-03-09T20:43:08.994 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.993+0000 7fef4d27b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fef48199bb0 0x7fef48199fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:08.994 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.993+0000 7fef4d27b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fef4819a510 con 0x7fef48199bb0 2026-03-09T20:43:08.995 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.993+0000 7fef46d76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fef48199bb0 0x7fef48199fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:08.995 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.994+0000 7fef46d76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fef48199bb0 0x7fef48199fd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:52648/0 (socket says 192.168.123.110:52648) 2026-03-09T20:43:08.995 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.994+0000 7fef46d76640 1 -- 192.168.123.110:0/1351099411 learned_addr learned my addr 192.168.123.110:0/1351099411 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:08.995 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.994+0000 7fef46d76640 1 -- 192.168.123.110:0/1351099411 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fef30009660 con 0x7fef48199bb0 2026-03-09T20:43:08.995 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.994+0000 7fef46d76640 1 --2- 192.168.123.110:0/1351099411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fef48199bb0 0x7fef48199fd0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7fef48103680 tx=0x7fef30004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:08.996 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.995+0000 7fef27fff640 1 -- 192.168.123.110:0/1351099411 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fef30038470 con 0x7fef48199bb0 2026-03-09T20:43:08.996 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.995+0000 7fef27fff640 1 -- 192.168.123.110:0/1351099411 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fef30038a90 con 0x7fef48199bb0 2026-03-09T20:43:08.997 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.996+0000 7fef27fff640 1 -- 192.168.123.110:0/1351099411 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fef30041920 con 0x7fef48199bb0 2026-03-09T20:43:08.997 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.996+0000 7fef4d27b640 1 -- 192.168.123.110:0/1351099411 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fef4819a710 con 0x7fef48199bb0 2026-03-09T20:43:08.997 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.996+0000 7fef4d27b640 1 -- 192.168.123.110:0/1351099411 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fef4819d280 con 0x7fef48199bb0 2026-03-09T20:43:08.999 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.997+0000 7fef27fff640 1 -- 192.168.123.110:0/1351099411 <== mon.0 
v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7fef30038c00 con 0x7fef48199bb0 2026-03-09T20:43:08.999 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.997+0000 7fef4d27b640 1 -- 192.168.123.110:0/1351099411 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fef48102a20 con 0x7fef48199bb0 2026-03-09T20:43:08.999 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.997+0000 7fef27fff640 1 --2- 192.168.123.110:0/1351099411 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fef2003cf10 0x7fef2003f3d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:08.999 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.997+0000 7fef27fff640 1 -- 192.168.123.110:0/1351099411 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7fef30075b20 con 0x7fef48199bb0 2026-03-09T20:43:08.999 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.998+0000 7fef46575640 1 --2- 192.168.123.110:0/1351099411 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fef2003cf10 0x7fef2003f3d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:08.999 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:08.998+0000 7fef46575640 1 --2- 192.168.123.110:0/1351099411 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fef2003cf10 0x7fef2003f3d0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fef3c0099c0 tx=0x7fef3c006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:09.002 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.001+0000 7fef27fff640 1 -- 192.168.123.110:0/1351099411 <== mon.0 
v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fef30048ac0 con 0x7fef48199bb0 2026-03-09T20:43:09.136 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.134+0000 7fef4d27b640 1 -- 192.168.123.110:0/1351099411 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fef48108530 con 0x7fef48199bb0 2026-03-09T20:43:09.137 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.135+0000 7fef27fff640 1 -- 192.168.123.110:0/1351099411 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7fef30048ac0 con 0x7fef48199bb0 2026-03-09T20:43:09.137 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:09.137 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:09.137 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:09.139 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.138+0000 7fef4d27b640 1 -- 192.168.123.110:0/1351099411 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] 
conn(0x7fef2003cf10 msgr2=0x7fef2003f3d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:09.139 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.138+0000 7fef4d27b640 1 --2- 192.168.123.110:0/1351099411 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fef2003cf10 0x7fef2003f3d0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fef3c0099c0 tx=0x7fef3c006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:09.139 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.138+0000 7fef4d27b640 1 -- 192.168.123.110:0/1351099411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fef48199bb0 msgr2=0x7fef48199fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:09.139 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.138+0000 7fef4d27b640 1 --2- 192.168.123.110:0/1351099411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fef48199bb0 0x7fef48199fd0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7fef48103680 tx=0x7fef30004290 comp rx=0 tx=0).stop 2026-03-09T20:43:09.139 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.138+0000 7fef4d27b640 1 -- 192.168.123.110:0/1351099411 shutdown_connections 2026-03-09T20:43:09.139 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.138+0000 7fef4d27b640 1 --2- 192.168.123.110:0/1351099411 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7fef2003cf10 0x7fef2003f3d0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:09.140 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.138+0000 7fef4d27b640 1 --2- 192.168.123.110:0/1351099411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fef48199bb0 0x7fef48199fd0 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:09.140 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.139+0000 7fef4d27b640 1 -- 192.168.123.110:0/1351099411 >> 192.168.123.110:0/1351099411 conn(0x7fef480fde70 msgr2=0x7fef480fea70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:09.140 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.139+0000 7fef4d27b640 1 -- 192.168.123.110:0/1351099411 shutdown_connections 2026-03-09T20:43:09.140 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:09.139+0000 7fef4d27b640 1 -- 192.168.123.110:0/1351099411 wait complete. 2026-03-09T20:43:09.260 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:08 vm07 ceph-mon[49120]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:09 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/1351099411' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:09 vm07 ceph-mon[49120]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:09 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:09 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:09 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:09 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:09 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:10.133 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:09 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:09 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:09 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:10.208 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:10.209 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:10.346 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:10.385 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:10.666 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.663+0000 7f5691f82640 1 -- 192.168.123.110:0/367876510 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f568c1003a0 msgr2=0x7f568c1007a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:10.666 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.663+0000 7f5691f82640 1 --2- 192.168.123.110:0/367876510 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f568c1003a0 0x7f568c1007a0 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f56780099b0 tx=0x7f567802f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:10.666 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.664+0000 7f5691f82640 1 -- 192.168.123.110:0/367876510 shutdown_connections 2026-03-09T20:43:10.666 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.664+0000 7f5691f82640 1 --2- 192.168.123.110:0/367876510 
>> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f568c1003a0 0x7f568c1007a0 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:10.666 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.664+0000 7f5691f82640 1 -- 192.168.123.110:0/367876510 >> 192.168.123.110:0/367876510 conn(0x7f568c0fbb30 msgr2=0x7f568c0fdf70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:10.666 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.665+0000 7f5691f82640 1 -- 192.168.123.110:0/367876510 shutdown_connections 2026-03-09T20:43:10.666 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.665+0000 7f5691f82640 1 -- 192.168.123.110:0/367876510 wait complete. 2026-03-09T20:43:10.666 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.665+0000 7f5691f82640 1 Processor -- start 2026-03-09T20:43:10.667 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.666+0000 7f5691f82640 1 -- start start 2026-03-09T20:43:10.667 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.666+0000 7f5691f82640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f568c1003a0 0x7f568c1976f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:10.667 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.666+0000 7f5691f82640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f568c197c30 con 0x7f568c1003a0 2026-03-09T20:43:10.667 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.666+0000 7f568b7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f568c1003a0 0x7f568c1976f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:10.667 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.666+0000 7f568b7fe640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f568c1003a0 0x7f568c1976f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:52678/0 (socket says 192.168.123.110:52678) 2026-03-09T20:43:10.667 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.666+0000 7f568b7fe640 1 -- 192.168.123.110:0/2814236830 learned_addr learned my addr 192.168.123.110:0/2814236830 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:10.667 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.666+0000 7f568b7fe640 1 -- 192.168.123.110:0/2814236830 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5678009660 con 0x7f568c1003a0 2026-03-09T20:43:10.668 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.666+0000 7f568b7fe640 1 --2- 192.168.123.110:0/2814236830 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f568c1003a0 0x7f568c1976f0 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f567802f860 tx=0x7f5678004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:10.668 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.667+0000 7f5688ff9640 1 -- 192.168.123.110:0/2814236830 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f56780043b0 con 0x7f568c1003a0 2026-03-09T20:43:10.668 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.667+0000 7f5688ff9640 1 -- 192.168.123.110:0/2814236830 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5678038b40 con 0x7f568c1003a0 2026-03-09T20:43:10.668 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.667+0000 7f5688ff9640 1 -- 192.168.123.110:0/2814236830 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 
0x7f56780418f0 con 0x7f568c1003a0 2026-03-09T20:43:10.669 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.667+0000 7f5691f82640 1 -- 192.168.123.110:0/2814236830 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f568c197e30 con 0x7f568c1003a0 2026-03-09T20:43:10.669 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.667+0000 7f5691f82640 1 -- 192.168.123.110:0/2814236830 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f568c1982d0 con 0x7f568c1003a0 2026-03-09T20:43:10.669 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.668+0000 7f5691f82640 1 -- 192.168.123.110:0/2814236830 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5658005350 con 0x7f568c1003a0 2026-03-09T20:43:10.670 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.669+0000 7f5688ff9640 1 -- 192.168.123.110:0/2814236830 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f5678038cb0 con 0x7f568c1003a0 2026-03-09T20:43:10.670 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.669+0000 7f5688ff9640 1 --2- 192.168.123.110:0/2814236830 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f566003d280 0x7f566003f740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:10.670 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.669+0000 7f5688ff9640 1 -- 192.168.123.110:0/2814236830 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f5678076470 con 0x7f568c1003a0 2026-03-09T20:43:10.672 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.671+0000 7f568affd640 1 --2- 192.168.123.110:0/2814236830 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f566003d280 
0x7f566003f740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:10.672 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.671+0000 7f5688ff9640 1 -- 192.168.123.110:0/2814236830 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f5678048310 con 0x7f568c1003a0 2026-03-09T20:43:10.673 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.671+0000 7f568affd640 1 --2- 192.168.123.110:0/2814236830 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f566003d280 0x7f566003f740 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f567c0099c0 tx=0x7f567c006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:10.793 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.791+0000 7f5691f82640 1 -- 192.168.123.110:0/2814236830 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f56580051c0 con 0x7f568c1003a0 2026-03-09T20:43:10.793 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.792+0000 7f5688ff9640 1 -- 192.168.123.110:0/2814236830 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f5678051310 con 0x7f568c1003a0 2026-03-09T20:43:10.793 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:10.793 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:10.793 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:10.795 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.794+0000 7f5691f82640 1 -- 192.168.123.110:0/2814236830 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f566003d280 msgr2=0x7f566003f740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:10.795 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.794+0000 7f5691f82640 1 --2- 192.168.123.110:0/2814236830 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f566003d280 0x7f566003f740 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f567c0099c0 tx=0x7f567c006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:10.795 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.794+0000 7f5691f82640 1 -- 192.168.123.110:0/2814236830 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f568c1003a0 msgr2=0x7f568c1976f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:10.795 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.794+0000 7f5691f82640 1 --2- 192.168.123.110:0/2814236830 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f568c1003a0 0x7f568c1976f0 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f567802f860 tx=0x7f5678004270 comp rx=0 tx=0).stop 2026-03-09T20:43:10.796 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.794+0000 7f5691f82640 1 -- 192.168.123.110:0/2814236830 shutdown_connections 2026-03-09T20:43:10.796 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.794+0000 7f5691f82640 1 --2- 192.168.123.110:0/2814236830 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f566003d280 0x7f566003f740 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:10.796 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.794+0000 7f5691f82640 1 --2- 192.168.123.110:0/2814236830 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f568c1003a0 0x7f568c1976f0 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:10.796 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.795+0000 7f5691f82640 1 -- 192.168.123.110:0/2814236830 >> 192.168.123.110:0/2814236830 conn(0x7f568c0fbb30 msgr2=0x7f568c0fc780 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:10.796 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.795+0000 7f5691f82640 1 -- 192.168.123.110:0/2814236830 shutdown_connections 2026-03-09T20:43:10.796 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:10.795+0000 7f5691f82640 1 -- 192.168.123.110:0/2814236830 wait complete. 2026-03-09T20:43:11.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:10 vm07 ceph-mon[49120]: Deploying daemon prometheus.vm07 on vm07 2026-03-09T20:43:11.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:10 vm07 ceph-mon[49120]: from='client.? 
192.168.123.110:0/2814236830' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:11.840 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:11.840 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:11.979 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:12.015 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:12.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.250+0000 7f6f724fe640 1 -- 192.168.123.110:0/1531255018 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f6c0fe680 msgr2=0x7f6f6c0fea80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:12.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.250+0000 7f6f724fe640 1 --2- 192.168.123.110:0/1531255018 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f6c0fe680 0x7f6f6c0fea80 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f6f5c0099b0 tx=0x7f6f5c02f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:12.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.251+0000 7f6f724fe640 1 -- 192.168.123.110:0/1531255018 shutdown_connections 2026-03-09T20:43:12.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.251+0000 7f6f724fe640 1 --2- 192.168.123.110:0/1531255018 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f6c0fe680 0x7f6f6c0fea80 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:12.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.251+0000 7f6f724fe640 1 -- 192.168.123.110:0/1531255018 >> 192.168.123.110:0/1531255018 conn(0x7f6f6c0fa160 msgr2=0x7f6f6c0fc580 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:12.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.251+0000 7f6f724fe640 1 -- 192.168.123.110:0/1531255018 shutdown_connections 2026-03-09T20:43:12.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.251+0000 7f6f724fe640 1 -- 192.168.123.110:0/1531255018 wait complete. 2026-03-09T20:43:12.253 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.251+0000 7f6f724fe640 1 Processor -- start 2026-03-09T20:43:12.253 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.251+0000 7f6f724fe640 1 -- start start 2026-03-09T20:43:12.253 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.252+0000 7f6f724fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f6c0fe680 0x7f6f6c199950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:12.253 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.252+0000 7f6f724fe640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6f6c199e90 con 0x7f6f6c0fe680 2026-03-09T20:43:12.253 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.252+0000 7f6f6bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f6c0fe680 0x7f6f6c199950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:12.253 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.252+0000 7f6f6bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f6c0fe680 0x7f6f6c199950 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:52706/0 (socket says 192.168.123.110:52706) 2026-03-09T20:43:12.253 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.252+0000 7f6f6bfff640 
1 -- 192.168.123.110:0/3088518521 learned_addr learned my addr 192.168.123.110:0/3088518521 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:12.253 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.252+0000 7f6f6bfff640 1 -- 192.168.123.110:0/3088518521 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6f5c009660 con 0x7f6f6c0fe680 2026-03-09T20:43:12.254 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.253+0000 7f6f6bfff640 1 --2- 192.168.123.110:0/3088518521 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f6c0fe680 0x7f6f6c199950 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f6f5c02f860 tx=0x7f6f5c004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:12.254 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.253+0000 7f6f697fa640 1 -- 192.168.123.110:0/3088518521 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6f5c0043b0 con 0x7f6f6c0fe680 2026-03-09T20:43:12.254 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.253+0000 7f6f724fe640 1 -- 192.168.123.110:0/3088518521 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6f6c19a090 con 0x7f6f6c0fe680 2026-03-09T20:43:12.254 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.253+0000 7f6f697fa640 1 -- 192.168.123.110:0/3088518521 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f6f5c038b40 con 0x7f6f6c0fe680 2026-03-09T20:43:12.255 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.253+0000 7f6f724fe640 1 -- 192.168.123.110:0/3088518521 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6f6c19a530 con 0x7f6f6c0fe680 2026-03-09T20:43:12.255 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.254+0000 
7f6f697fa640 1 -- 192.168.123.110:0/3088518521 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6f5c0418f0 con 0x7f6f6c0fe680 2026-03-09T20:43:12.256 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.254+0000 7f6f724fe640 1 -- 192.168.123.110:0/3088518521 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6f6c0feb00 con 0x7f6f6c0fe680 2026-03-09T20:43:12.256 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.254+0000 7f6f697fa640 1 -- 192.168.123.110:0/3088518521 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f6f5c041b10 con 0x7f6f6c0fe680 2026-03-09T20:43:12.256 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.255+0000 7f6f697fa640 1 --2- 192.168.123.110:0/3088518521 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6f4403cfb0 0x7f6f4403f470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:12.256 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.255+0000 7f6f697fa640 1 -- 192.168.123.110:0/3088518521 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f6f5c077290 con 0x7f6f6c0fe680 2026-03-09T20:43:12.256 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.255+0000 7f6f6b7fe640 1 --2- 192.168.123.110:0/3088518521 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6f4403cfb0 0x7f6f4403f470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:12.257 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.256+0000 7f6f6b7fe640 1 --2- 192.168.123.110:0/3088518521 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6f4403cfb0 0x7f6f4403f470 
secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f6f580099c0 tx=0x7f6f58006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:12.259 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.258+0000 7f6f697fa640 1 -- 192.168.123.110:0/3088518521 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f6f5c037bb0 con 0x7f6f6c0fe680 2026-03-09T20:43:12.379 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.378+0000 7f6f724fe640 1 -- 192.168.123.110:0/3088518521 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f6f6c1085b0 con 0x7f6f6c0fe680 2026-03-09T20:43:12.380 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.379+0000 7f6f697fa640 1 -- 192.168.123.110:0/3088518521 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f6f5c0373d0 con 0x7f6f6c0fe680 2026-03-09T20:43:12.380 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:12.380 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:12.380 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:12.382 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.381+0000 7f6f724fe640 1 -- 192.168.123.110:0/3088518521 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6f4403cfb0 msgr2=0x7f6f4403f470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:12.382 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.381+0000 7f6f724fe640 1 --2- 192.168.123.110:0/3088518521 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6f4403cfb0 0x7f6f4403f470 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f6f580099c0 tx=0x7f6f58006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:12.382 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.381+0000 7f6f724fe640 1 -- 192.168.123.110:0/3088518521 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f6c0fe680 msgr2=0x7f6f6c199950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:12.382 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.381+0000 7f6f724fe640 1 --2- 192.168.123.110:0/3088518521 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f6c0fe680 0x7f6f6c199950 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f6f5c02f860 tx=0x7f6f5c004270 comp rx=0 tx=0).stop 2026-03-09T20:43:12.382 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.381+0000 7f6f724fe640 1 -- 192.168.123.110:0/3088518521 shutdown_connections 2026-03-09T20:43:12.382 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.381+0000 7f6f724fe640 1 --2- 192.168.123.110:0/3088518521 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f6f4403cfb0 0x7f6f4403f470 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:12.382 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.381+0000 7f6f724fe640 1 --2- 192.168.123.110:0/3088518521 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f6c0fe680 0x7f6f6c199950 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:12.382 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.381+0000 7f6f724fe640 1 -- 192.168.123.110:0/3088518521 >> 192.168.123.110:0/3088518521 conn(0x7f6f6c0fa160 msgr2=0x7f6f6c0fad60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:12.382 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.381+0000 7f6f724fe640 1 -- 192.168.123.110:0/3088518521 shutdown_connections 2026-03-09T20:43:12.383 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:12.381+0000 7f6f724fe640 1 -- 192.168.123.110:0/3088518521 wait complete. 2026-03-09T20:43:12.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:12 vm07 ceph-mon[49120]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:13.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:13 vm07 ceph-mon[49120]: from='client.? 
192.168.123.110:0/3088518521' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:13.537 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:13.537 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:13.676 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:13.710 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:13.950 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.949+0000 7f00a3480640 1 -- 192.168.123.110:0/3565019509 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f009c102620 msgr2=0x7f009c102a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:13.950 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.949+0000 7f00a3480640 1 --2- 192.168.123.110:0/3565019509 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f009c102620 0x7f009c102a20 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f00900099b0 tx=0x7f009002f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:13.951 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.949+0000 7f00a3480640 1 -- 192.168.123.110:0/3565019509 shutdown_connections 2026-03-09T20:43:13.951 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.949+0000 7f00a3480640 1 --2- 192.168.123.110:0/3565019509 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f009c102620 0x7f009c102a20 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:13.951 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.949+0000 7f00a3480640 1 -- 192.168.123.110:0/3565019509 >> 192.168.123.110:0/3565019509 conn(0x7f009c0fde70 msgr2=0x7f009c100260 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:13.951 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.950+0000 7f00a3480640 1 -- 192.168.123.110:0/3565019509 shutdown_connections 2026-03-09T20:43:13.951 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.950+0000 7f00a3480640 1 -- 192.168.123.110:0/3565019509 wait complete. 2026-03-09T20:43:13.951 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.950+0000 7f00a3480640 1 Processor -- start 2026-03-09T20:43:13.951 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.950+0000 7f00a3480640 1 -- start start 2026-03-09T20:43:13.952 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.950+0000 7f00a3480640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f009c102620 0x7f009c199990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:13.952 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.950+0000 7f00a3480640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f009c199ed0 con 0x7f009c102620 2026-03-09T20:43:13.952 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.951+0000 7f00a11f5640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f009c102620 0x7f009c199990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:13.952 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.951+0000 7f00a11f5640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f009c102620 0x7f009c199990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:52720/0 (socket says 192.168.123.110:52720) 2026-03-09T20:43:13.952 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.951+0000 7f00a11f5640 
1 -- 192.168.123.110:0/2996693875 learned_addr learned my addr 192.168.123.110:0/2996693875 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:13.952 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.951+0000 7f00a11f5640 1 -- 192.168.123.110:0/2996693875 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0090009660 con 0x7f009c102620 2026-03-09T20:43:13.952 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.951+0000 7f00a11f5640 1 --2- 192.168.123.110:0/2996693875 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f009c102620 0x7f009c199990 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f009002f860 tx=0x7f0090004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:13.953 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.952+0000 7f008a7fc640 1 -- 192.168.123.110:0/2996693875 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f00900043b0 con 0x7f009c102620 2026-03-09T20:43:13.953 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.952+0000 7f008a7fc640 1 -- 192.168.123.110:0/2996693875 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0090038b40 con 0x7f009c102620 2026-03-09T20:43:13.953 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.952+0000 7f00a3480640 1 -- 192.168.123.110:0/2996693875 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f009c19a0d0 con 0x7f009c102620 2026-03-09T20:43:13.954 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.952+0000 7f008a7fc640 1 -- 192.168.123.110:0/2996693875 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f00900418f0 con 0x7f009c102620 2026-03-09T20:43:13.954 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.952+0000 
7f00a3480640 1 -- 192.168.123.110:0/2996693875 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f009c19a570 con 0x7f009c102620 2026-03-09T20:43:13.954 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.953+0000 7f008a7fc640 1 -- 192.168.123.110:0/2996693875 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f0090038cb0 con 0x7f009c102620 2026-03-09T20:43:13.954 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.953+0000 7f00a3480640 1 -- 192.168.123.110:0/2996693875 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f009c102aa0 con 0x7f009c102620 2026-03-09T20:43:13.954 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.953+0000 7f008a7fc640 1 --2- 192.168.123.110:0/2996693875 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f007403d2d0 0x7f007403f790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:13.954 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.953+0000 7f008a7fc640 1 -- 192.168.123.110:0/2996693875 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f0090076250 con 0x7f009c102620 2026-03-09T20:43:13.954 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.953+0000 7f00a09f4640 1 --2- 192.168.123.110:0/2996693875 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f007403d2d0 0x7f007403f790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:13.955 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.954+0000 7f00a09f4640 1 --2- 192.168.123.110:0/2996693875 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f007403d2d0 0x7f007403f790 
secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f008c0099c0 tx=0x7f008c006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:13.958 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:13.956+0000 7f008a7fc640 1 -- 192.168.123.110:0/2996693875 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f0090035320 con 0x7f009c102620 2026-03-09T20:43:14.079 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.077+0000 7f00a3480640 1 -- 192.168.123.110:0/2996693875 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f009c108530 con 0x7f009c102620 2026-03-09T20:43:14.079 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.078+0000 7f008a7fc640 1 -- 192.168.123.110:0/2996693875 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f00900373d0 con 0x7f009c102620 2026-03-09T20:43:14.079 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:14.079 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:14.079 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:14.082 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.080+0000 7f00a3480640 1 -- 192.168.123.110:0/2996693875 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f007403d2d0 msgr2=0x7f007403f790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:14.082 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.080+0000 7f00a3480640 1 --2- 192.168.123.110:0/2996693875 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f007403d2d0 0x7f007403f790 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f008c0099c0 tx=0x7f008c006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:14.082 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.080+0000 7f00a3480640 1 -- 192.168.123.110:0/2996693875 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f009c102620 msgr2=0x7f009c199990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:14.082 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.080+0000 7f00a3480640 1 --2- 192.168.123.110:0/2996693875 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f009c102620 0x7f009c199990 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f009002f860 tx=0x7f0090004270 comp rx=0 tx=0).stop 2026-03-09T20:43:14.082 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.081+0000 7f00a3480640 1 -- 192.168.123.110:0/2996693875 shutdown_connections 2026-03-09T20:43:14.082 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.081+0000 7f00a3480640 1 --2- 192.168.123.110:0/2996693875 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f007403d2d0 0x7f007403f790 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:14.082 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.081+0000 7f00a3480640 1 --2- 192.168.123.110:0/2996693875 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f009c102620 0x7f009c199990 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:14.082 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.081+0000 7f00a3480640 1 -- 192.168.123.110:0/2996693875 >> 192.168.123.110:0/2996693875 conn(0x7f009c0fde70 msgr2=0x7f009c0fea70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:14.082 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.081+0000 7f00a3480640 1 -- 192.168.123.110:0/2996693875 shutdown_connections 2026-03-09T20:43:14.082 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:14.081+0000 7f00a3480640 1 -- 192.168.123.110:0/2996693875 wait complete. 
2026-03-09T20:43:14.600 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:14 vm07 ceph-mon[49120]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:14.601 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:14 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:14.601 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:14 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/2996693875' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:15.145 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:15.145 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:15.271 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:15.305 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:15.535 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.533+0000 7f61c077f640 1 -- 192.168.123.110:0/1616192946 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61b80751a0 msgr2=0x7f61b8073600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:15.535 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.533+0000 7f61c077f640 1 --2- 192.168.123.110:0/1616192946 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61b80751a0 0x7f61b8073600 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f61ac0099b0 tx=0x7f61ac02f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:15.535 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.533+0000 7f61bd4f2640 1 -- 192.168.123.110:0/1616192946 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 
0x7f61ac038470 con 0x7f61b80751a0 2026-03-09T20:43:15.535 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.533+0000 7f61c077f640 1 -- 192.168.123.110:0/1616192946 shutdown_connections 2026-03-09T20:43:15.535 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.533+0000 7f61c077f640 1 --2- 192.168.123.110:0/1616192946 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61b80751a0 0x7f61b8073600 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:15.535 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.533+0000 7f61c077f640 1 -- 192.168.123.110:0/1616192946 >> 192.168.123.110:0/1616192946 conn(0x7f61b80fbb20 msgr2=0x7f61b80fdf60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:15.535 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.534+0000 7f61c077f640 1 -- 192.168.123.110:0/1616192946 shutdown_connections 2026-03-09T20:43:15.535 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.534+0000 7f61c077f640 1 -- 192.168.123.110:0/1616192946 wait complete. 
2026-03-09T20:43:15.535 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.534+0000 7f61c077f640 1 Processor -- start 2026-03-09T20:43:15.535 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.534+0000 7f61c077f640 1 -- start start 2026-03-09T20:43:15.536 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.534+0000 7f61c077f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61b80751a0 0x7f61b8199960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:15.536 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.534+0000 7f61c077f640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f61b8199ea0 con 0x7f61b80751a0 2026-03-09T20:43:15.536 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.535+0000 7f61be4f4640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61b80751a0 0x7f61b8199960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:15.536 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.535+0000 7f61be4f4640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61b80751a0 0x7f61b8199960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:52742/0 (socket says 192.168.123.110:52742) 2026-03-09T20:43:15.536 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.535+0000 7f61be4f4640 1 -- 192.168.123.110:0/3011701255 learned_addr learned my addr 192.168.123.110:0/3011701255 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:15.536 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.535+0000 7f61be4f4640 1 -- 192.168.123.110:0/3011701255 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f61ac009660 con 0x7f61b80751a0 2026-03-09T20:43:15.536 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.535+0000 7f61be4f4640 1 --2- 192.168.123.110:0/3011701255 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61b80751a0 0x7f61b8199960 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f61ac02f860 tx=0x7f61ac004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:15.537 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.535+0000 7f619f7fe640 1 -- 192.168.123.110:0/3011701255 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f61ac03d070 con 0x7f61b80751a0 2026-03-09T20:43:15.537 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.535+0000 7f61c077f640 1 -- 192.168.123.110:0/3011701255 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f61b819a0a0 con 0x7f61b80751a0 2026-03-09T20:43:15.537 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.536+0000 7f61c077f640 1 -- 192.168.123.110:0/3011701255 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f61b819a540 con 0x7f61b80751a0 2026-03-09T20:43:15.537 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.536+0000 7f619f7fe640 1 -- 192.168.123.110:0/3011701255 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f61ac004400 con 0x7f61b80751a0 2026-03-09T20:43:15.537 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.536+0000 7f619f7fe640 1 -- 192.168.123.110:0/3011701255 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f61ac0419f0 con 0x7f61b80751a0 2026-03-09T20:43:15.538 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.536+0000 7f619f7fe640 1 -- 192.168.123.110:0/3011701255 <== mon.0 
v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) v1 ==== 49429+0+0 (secure 0 0 0) 0x7f61ac04b430 con 0x7f61b80751a0 2026-03-09T20:43:15.538 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.536+0000 7f61c077f640 1 -- 192.168.123.110:0/3011701255 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f61b8073680 con 0x7f61b80751a0 2026-03-09T20:43:15.538 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.537+0000 7f619f7fe640 1 --2- 192.168.123.110:0/3011701255 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f619403d2d0 0x7f619403f790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:15.538 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.537+0000 7f619f7fe640 1 -- 192.168.123.110:0/3011701255 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f61ac076c00 con 0x7f61b80751a0 2026-03-09T20:43:15.538 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.537+0000 7f61bdcf3640 1 --2- 192.168.123.110:0/3011701255 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f619403d2d0 0x7f619403f790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:15.538 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.537+0000 7f61bdcf3640 1 --2- 192.168.123.110:0/3011701255 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f619403d2d0 0x7f619403f790 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f61a80099c0 tx=0x7f61a8006eb0 comp rx=0 tx=0).ready entity=mgr.14162 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:15.541 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.540+0000 7f619f7fe640 1 -- 192.168.123.110:0/3011701255 <== mon.0 
v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f61ac03faf0 con 0x7f61b80751a0 2026-03-09T20:43:15.662 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.660+0000 7f61c077f640 1 -- 192.168.123.110:0/3011701255 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f61b8108500 con 0x7f61b80751a0 2026-03-09T20:43:15.662 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.661+0000 7f619f7fe640 1 -- 192.168.123.110:0/3011701255 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f61b8108500 con 0x7f61b80751a0 2026-03-09T20:43:15.662 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:15.662 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:15.663 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:15.665 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.663+0000 7f61c077f640 1 -- 192.168.123.110:0/3011701255 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] 
conn(0x7f619403d2d0 msgr2=0x7f619403f790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:15.665 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.663+0000 7f61c077f640 1 --2- 192.168.123.110:0/3011701255 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f619403d2d0 0x7f619403f790 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f61a80099c0 tx=0x7f61a8006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:15.665 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.663+0000 7f61c077f640 1 -- 192.168.123.110:0/3011701255 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61b80751a0 msgr2=0x7f61b8199960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:15.665 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.663+0000 7f61c077f640 1 --2- 192.168.123.110:0/3011701255 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61b80751a0 0x7f61b8199960 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f61ac02f860 tx=0x7f61ac004270 comp rx=0 tx=0).stop 2026-03-09T20:43:15.665 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.664+0000 7f61c077f640 1 -- 192.168.123.110:0/3011701255 shutdown_connections 2026-03-09T20:43:15.665 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.664+0000 7f61c077f640 1 --2- 192.168.123.110:0/3011701255 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f619403d2d0 0x7f619403f790 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:15.665 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.664+0000 7f61c077f640 1 --2- 192.168.123.110:0/3011701255 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61b80751a0 0x7f61b8199960 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:15.665 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.664+0000 7f61c077f640 1 -- 192.168.123.110:0/3011701255 >> 192.168.123.110:0/3011701255 conn(0x7f61b80fbb20 msgr2=0x7f61b80fc770 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:15.665 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.664+0000 7f61c077f640 1 -- 192.168.123.110:0/3011701255 shutdown_connections 2026-03-09T20:43:15.665 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:15.664+0000 7f61c077f640 1 -- 192.168.123.110:0/3011701255 wait complete. 2026-03-09T20:43:16.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:15 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:16.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:15 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:16.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:15 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:16.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:15 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T20:43:16.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:15 vm07 ceph-mon[49120]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:16.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:15 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/3011701255' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:16.724 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T20:43:16.724 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:16.871 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:16.920 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:17.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:16 vm07 ceph-mon[49120]: from='mgr.14162 192.168.123.107:0/3677725133' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T20:43:17.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:16 vm07 ceph-mon[49120]: mgrmap e14: vm07.xjrvch(active, since 32s) 2026-03-09T20:43:17.170 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.168+0000 7f0e7de67640 1 -- 192.168.123.110:0/3671213205 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0e78102620 msgr2=0x7f0e78102a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:17.170 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.168+0000 7f0e7de67640 1 --2- 192.168.123.110:0/3671213205 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0e78102620 0x7f0e78102a20 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7f0e5c0099b0 tx=0x7f0e5c02f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:17.171 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.169+0000 7f0e7de67640 1 -- 192.168.123.110:0/3671213205 shutdown_connections 2026-03-09T20:43:17.171 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.169+0000 7f0e7de67640 1 --2- 192.168.123.110:0/3671213205 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0e78102620 0x7f0e78102a20 unknown :-1 s=CLOSED pgs=133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:43:17.171 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.169+0000 7f0e7de67640 1 -- 192.168.123.110:0/3671213205 >> 192.168.123.110:0/3671213205 conn(0x7f0e780fde70 msgr2=0x7f0e78100260 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:17.171 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.170+0000 7f0e7de67640 1 -- 192.168.123.110:0/3671213205 shutdown_connections 2026-03-09T20:43:17.171 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.170+0000 7f0e7de67640 1 -- 192.168.123.110:0/3671213205 wait complete. 2026-03-09T20:43:17.171 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.170+0000 7f0e7de67640 1 Processor -- start 2026-03-09T20:43:17.172 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.170+0000 7f0e7de67640 1 -- start start 2026-03-09T20:43:17.172 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.171+0000 7f0e7de67640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0e78102620 0x7f0e78199950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:17.172 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.171+0000 7f0e7de67640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0e78199e90 con 0x7f0e78102620 2026-03-09T20:43:17.172 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.171+0000 7f0e777fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0e78102620 0x7f0e78199950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:17.172 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.171+0000 7f0e777fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0e78102620 0x7f0e78199950 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:52774/0 (socket says 192.168.123.110:52774) 2026-03-09T20:43:17.172 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.171+0000 7f0e777fe640 1 -- 192.168.123.110:0/402475183 learned_addr learned my addr 192.168.123.110:0/402475183 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:17.172 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.171+0000 7f0e777fe640 1 -- 192.168.123.110:0/402475183 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0e5c009660 con 0x7f0e78102620 2026-03-09T20:43:17.172 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.171+0000 7f0e777fe640 1 --2- 192.168.123.110:0/402475183 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0e78102620 0x7f0e78199950 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f0e5c02f860 tx=0x7f0e5c004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:17.174 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.172+0000 7f0e74ff9640 1 -- 192.168.123.110:0/402475183 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0e5c0043b0 con 0x7f0e78102620 2026-03-09T20:43:17.174 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.172+0000 7f0e74ff9640 1 -- 192.168.123.110:0/402475183 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0e5c038b40 con 0x7f0e78102620 2026-03-09T20:43:17.174 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.172+0000 7f0e74ff9640 1 -- 192.168.123.110:0/402475183 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0e5c0418f0 con 0x7f0e78102620 2026-03-09T20:43:17.174 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.172+0000 7f0e7de67640 1 -- 192.168.123.110:0/402475183 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0e7819a090 con 0x7f0e78102620 2026-03-09T20:43:17.174 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.172+0000 7f0e7de67640 1 -- 192.168.123.110:0/402475183 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0e7819a530 con 0x7f0e78102620 2026-03-09T20:43:17.174 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.173+0000 7f0e74ff9640 1 -- 192.168.123.110:0/402475183 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 14) v1 ==== 49443+0+0 (secure 0 0 0) 0x7f0e5c038680 con 0x7f0e78102620 2026-03-09T20:43:17.175 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.173+0000 7f0e74ff9640 1 --2- 192.168.123.110:0/402475183 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f0e5003d320 0x7f0e5003f7e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:17.175 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.173+0000 7f0e74ff9640 1 -- 192.168.123.110:0/402475183 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f0e5c0761f0 con 0x7f0e78102620 2026-03-09T20:43:17.175 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.174+0000 7f0e76ffd640 1 -- 192.168.123.110:0/402475183 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f0e5003d320 msgr2=0x7f0e5003f7e0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/4166937886 2026-03-09T20:43:17.175 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.174+0000 7f0e76ffd640 1 --2- 192.168.123.110:0/402475183 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f0e5003d320 0x7f0e5003f7e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 
2026-03-09T20:43:17.175 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.174+0000 7f0e7de67640 1 -- 192.168.123.110:0/402475183 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0e3c005350 con 0x7f0e78102620 2026-03-09T20:43:17.180 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.179+0000 7f0e74ff9640 1 -- 192.168.123.110:0/402475183 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f0e5c0379e0 con 0x7f0e78102620 2026-03-09T20:43:17.299 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.297+0000 7f0e7de67640 1 -- 192.168.123.110:0/402475183 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f0e3c005600 con 0x7f0e78102620 2026-03-09T20:43:17.299 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.298+0000 7f0e74ff9640 1 -- 192.168.123.110:0/402475183 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f0e5c05a090 con 0x7f0e78102620 2026-03-09T20:43:17.299 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:17.299 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:17.299 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:17.301 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.300+0000 7f0e7de67640 1 -- 192.168.123.110:0/402475183 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f0e5003d320 msgr2=0x7f0e5003f7e0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:43:17.301 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.300+0000 7f0e7de67640 1 --2- 192.168.123.110:0/402475183 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f0e5003d320 0x7f0e5003f7e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:17.302 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.300+0000 7f0e7de67640 1 -- 192.168.123.110:0/402475183 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0e78102620 msgr2=0x7f0e78199950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:17.302 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.300+0000 7f0e7de67640 1 --2- 192.168.123.110:0/402475183 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0e78102620 0x7f0e78199950 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f0e5c02f860 tx=0x7f0e5c004270 comp rx=0 tx=0).stop 2026-03-09T20:43:17.302 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.300+0000 7f0e7de67640 1 -- 192.168.123.110:0/402475183 shutdown_connections 2026-03-09T20:43:17.302 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.300+0000 7f0e7de67640 1 --2- 192.168.123.110:0/402475183 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f0e5003d320 0x7f0e5003f7e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:17.302 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.300+0000 7f0e7de67640 1 --2- 192.168.123.110:0/402475183 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0e78102620 0x7f0e78199950 unknown :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:17.302 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.300+0000 7f0e7de67640 1 -- 192.168.123.110:0/402475183 >> 192.168.123.110:0/402475183 conn(0x7f0e780fde70 msgr2=0x7f0e780fea70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:17.302 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.301+0000 7f0e7de67640 1 -- 192.168.123.110:0/402475183 shutdown_connections 2026-03-09T20:43:17.302 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:17.301+0000 7f0e7de67640 1 -- 192.168.123.110:0/402475183 wait complete. 2026-03-09T20:43:18.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:17 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/402475183' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:18.372 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T20:43:18.372 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:18.525 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:18.564 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:18.846 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.843+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2536604019 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cc0102620 msgr2=0x7f7cc0102a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:18.846 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.843+0000 7f7cc6dbf640 1 --2- 192.168.123.110:0/2536604019 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cc0102620 0x7f7cc0102a20 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f7ca80099b0 tx=0x7f7ca802f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:18.846 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.844+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2536604019 shutdown_connections 2026-03-09T20:43:18.846 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.844+0000 7f7cc6dbf640 1 --2- 192.168.123.110:0/2536604019 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cc0102620 0x7f7cc0102a20 unknown :-1 s=CLOSED pgs=135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:18.846 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.844+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2536604019 >> 192.168.123.110:0/2536604019 conn(0x7f7cc00fde70 msgr2=0x7f7cc0100260 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:18.846 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.845+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2536604019 
shutdown_connections 2026-03-09T20:43:18.846 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.845+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2536604019 wait complete. 2026-03-09T20:43:18.846 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.845+0000 7f7cc6dbf640 1 Processor -- start 2026-03-09T20:43:18.846 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.845+0000 7f7cc6dbf640 1 -- start start 2026-03-09T20:43:18.847 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.846+0000 7f7cc6dbf640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cc0102620 0x7f7cc01999a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:18.847 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.846+0000 7f7cc6dbf640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7cc0199ee0 con 0x7f7cc0102620 2026-03-09T20:43:18.847 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.846+0000 7f7cc4b34640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cc0102620 0x7f7cc01999a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:18.847 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.846+0000 7f7cc4b34640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cc0102620 0x7f7cc01999a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:47570/0 (socket says 192.168.123.110:47570) 2026-03-09T20:43:18.847 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.846+0000 7f7cc4b34640 1 -- 192.168.123.110:0/2078820082 learned_addr learned my addr 192.168.123.110:0/2078820082 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:18.848 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.847+0000 7f7cc4b34640 1 -- 192.168.123.110:0/2078820082 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7ca8009660 con 0x7f7cc0102620 2026-03-09T20:43:18.848 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.847+0000 7f7cc4b34640 1 --2- 192.168.123.110:0/2078820082 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cc0102620 0x7f7cc01999a0 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f7ca80042c0 tx=0x7f7ca80042f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:18.848 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.847+0000 7f7cb5ffb640 1 -- 192.168.123.110:0/2078820082 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7ca803d070 con 0x7f7cc0102620 2026-03-09T20:43:18.848 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.847+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2078820082 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7cc019a0e0 con 0x7f7cc0102620 2026-03-09T20:43:18.848 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.847+0000 7f7cb5ffb640 1 -- 192.168.123.110:0/2078820082 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7ca8038b40 con 0x7f7cc0102620 2026-03-09T20:43:18.849 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.847+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2078820082 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7cc019a580 con 0x7f7cc0102620 2026-03-09T20:43:18.849 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.847+0000 7f7cb5ffb640 1 -- 192.168.123.110:0/2078820082 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7ca8041a40 con 
0x7f7cc0102620 2026-03-09T20:43:18.850 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.849+0000 7f7cb5ffb640 1 -- 192.168.123.110:0/2078820082 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 14) v1 ==== 49443+0+0 (secure 0 0 0) 0x7f7ca8038680 con 0x7f7cc0102620 2026-03-09T20:43:18.850 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.849+0000 7f7cb5ffb640 1 --2- 192.168.123.110:0/2078820082 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f7c9003d320 0x7f7c9003f7e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:18.850 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.849+0000 7f7cb5ffb640 1 -- 192.168.123.110:0/2078820082 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1155+0+0 (secure 0 0 0) 0x7f7ca8075af0 con 0x7f7cc0102620 2026-03-09T20:43:18.850 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.849+0000 7f7cb7fff640 1 -- 192.168.123.110:0/2078820082 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f7c9003d320 msgr2=0x7f7c9003f7e0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.107:6800/4166937886 2026-03-09T20:43:18.850 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.849+0000 7f7cb7fff640 1 --2- 192.168.123.110:0/2078820082 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f7c9003d320 0x7f7c9003f7e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:43:18.850 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.849+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2078820082 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7cc0108e30 con 0x7f7cc0102620 2026-03-09T20:43:18.854 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.853+0000 7f7cb5ffb640 1 -- 192.168.123.110:0/2078820082 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f7ca8046070 con 0x7f7cc0102620 2026-03-09T20:43:18.977 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.975+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2078820082 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f7cc0102aa0 con 0x7f7cc0102620 2026-03-09T20:43:18.978 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.976+0000 7f7cb5ffb640 1 -- 192.168.123.110:0/2078820082 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f7ca804b430 con 0x7f7cc0102620 2026-03-09T20:43:18.978 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:18.978 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:18.978 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:18.980 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.979+0000 
7f7cc6dbf640 1 -- 192.168.123.110:0/2078820082 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f7c9003d320 msgr2=0x7f7c9003f7e0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:43:18.980 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.979+0000 7f7cc6dbf640 1 --2- 192.168.123.110:0/2078820082 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f7c9003d320 0x7f7c9003f7e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:18.980 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.979+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2078820082 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cc0102620 msgr2=0x7f7cc01999a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:18.981 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.979+0000 7f7cc6dbf640 1 --2- 192.168.123.110:0/2078820082 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cc0102620 0x7f7cc01999a0 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f7ca80042c0 tx=0x7f7ca80042f0 comp rx=0 tx=0).stop 2026-03-09T20:43:18.981 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.980+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2078820082 shutdown_connections 2026-03-09T20:43:18.981 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.980+0000 7f7cc6dbf640 1 --2- 192.168.123.110:0/2078820082 >> [v2:192.168.123.107:6800/4166937886,v1:192.168.123.107:6801/4166937886] conn(0x7f7c9003d320 0x7f7c9003f7e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:18.981 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.980+0000 7f7cc6dbf640 1 --2- 192.168.123.110:0/2078820082 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cc0102620 0x7f7cc01999a0 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T20:43:18.981 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.980+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2078820082 >> 192.168.123.110:0/2078820082 conn(0x7f7cc00fde70 msgr2=0x7f7cc00fea70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:18.981 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.980+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2078820082 shutdown_connections 2026-03-09T20:43:18.981 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:18.980+0000 7f7cc6dbf640 1 -- 192.168.123.110:0/2078820082 wait complete. 2026-03-09T20:43:19.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:19 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/2078820082' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:20.030 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:20.031 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:20.198 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:20.251 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: Active manager daemon vm07.xjrvch restarted 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: Activating manager daemon vm07.xjrvch 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: osdmap e5: 0 total, 0 up, 0 in 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: mgrmap e15: vm07.xjrvch(active, starting, since 0.00501533s) 2026-03-09T20:43:20.308 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm07.xjrvch", "id": "vm07.xjrvch"}]: dispatch 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: Manager daemon vm07.xjrvch is now available 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:43:20.308 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/trash_purge_schedule"}]: dispatch 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:20.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:20 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:20.528 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.526+0000 7fbc6ffff640 1 -- 192.168.123.110:0/3865829023 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbc700fdd70 msgr2=0x7fbc700fe170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:20.529 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.526+0000 7fbc6ffff640 1 --2- 192.168.123.110:0/3865829023 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbc700fdd70 0x7fbc700fe170 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7fbc640099b0 tx=0x7fbc6402f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:20.529 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.526+0000 7fbc6ffff640 1 -- 192.168.123.110:0/3865829023 shutdown_connections 2026-03-09T20:43:20.529 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.526+0000 7fbc6ffff640 1 --2- 192.168.123.110:0/3865829023 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbc700fdd70 0x7fbc700fe170 unknown :-1 s=CLOSED pgs=144 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:20.529 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.526+0000 7fbc6ffff640 1 -- 192.168.123.110:0/3865829023 >> 192.168.123.110:0/3865829023 conn(0x7fbc700f9b60 msgr2=0x7fbc700fbf80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:20.529 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.527+0000 7fbc6ffff640 1 -- 192.168.123.110:0/3865829023 shutdown_connections 2026-03-09T20:43:20.529 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.527+0000 7fbc6ffff640 1 -- 192.168.123.110:0/3865829023 wait complete. 2026-03-09T20:43:20.529 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.528+0000 7fbc6ffff640 1 Processor -- start 2026-03-09T20:43:20.529 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.528+0000 7fbc6ffff640 1 -- start start 2026-03-09T20:43:20.529 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.528+0000 7fbc6ffff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbc700fdd70 0x7fbc7019dde0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:20.529 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.528+0000 7fbc6ffff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbc7019e320 con 0x7fbc700fdd70 2026-03-09T20:43:20.529 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.528+0000 7fbc6effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbc700fdd70 0x7fbc7019dde0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:20.532 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.528+0000 7fbc6effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbc700fdd70 0x7fbc7019dde0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer 
v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:47594/0 (socket says 192.168.123.110:47594) 2026-03-09T20:43:20.532 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.528+0000 7fbc6effd640 1 -- 192.168.123.110:0/776315199 learned_addr learned my addr 192.168.123.110:0/776315199 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:20.533 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.531+0000 7fbc6effd640 1 -- 192.168.123.110:0/776315199 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbc64009660 con 0x7fbc700fdd70 2026-03-09T20:43:20.534 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.531+0000 7fbc6effd640 1 --2- 192.168.123.110:0/776315199 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbc700fdd70 0x7fbc7019dde0 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7fbc6402f860 tx=0x7fbc64004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:20.534 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.532+0000 7fbc4ffff640 1 -- 192.168.123.110:0/776315199 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbc640043d0 con 0x7fbc700fdd70 2026-03-09T20:43:20.534 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.532+0000 7fbc6ffff640 1 -- 192.168.123.110:0/776315199 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbc7019e520 con 0x7fbc700fdd70 2026-03-09T20:43:20.534 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.532+0000 7fbc6ffff640 1 -- 192.168.123.110:0/776315199 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbc7019e9c0 con 0x7fbc700fdd70 2026-03-09T20:43:20.534 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.532+0000 7fbc4ffff640 1 -- 192.168.123.110:0/776315199 <== mon.0 
v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fbc64038b40 con 0x7fbc700fdd70 2026-03-09T20:43:20.534 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.532+0000 7fbc4ffff640 1 -- 192.168.123.110:0/776315199 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbc64041890 con 0x7fbc700fdd70 2026-03-09T20:43:20.536 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.534+0000 7fbc6ffff640 1 -- 192.168.123.110:0/776315199 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbc3c005350 con 0x7fbc700fdd70 2026-03-09T20:43:20.537 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.535+0000 7fbc4ffff640 1 -- 192.168.123.110:0/776315199 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 16) v1 ==== 49291+0+0 (secure 0 0 0) 0x7fbc64038cb0 con 0x7fbc700fdd70 2026-03-09T20:43:20.539 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.537+0000 7fbc4ffff640 1 --2- 192.168.123.110:0/776315199 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc4803d1b0 0x7fbc4803f670 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:20.539 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.537+0000 7fbc4ffff640 1 -- 192.168.123.110:0/776315199 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1370+0+0 (secure 0 0 0) 0x7fbc64076350 con 0x7fbc700fdd70 2026-03-09T20:43:20.539 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.538+0000 7fbc6e7fc640 1 --2- 192.168.123.110:0/776315199 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc4803d1b0 0x7fbc4803f670 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:20.539 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.538+0000 7fbc6e7fc640 1 --2- 192.168.123.110:0/776315199 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc4803d1b0 0x7fbc4803f670 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fbc580099c0 tx=0x7fbc58006eb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:20.552 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.550+0000 7fbc4ffff640 1 -- 192.168.123.110:0/776315199 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fbc6405e540 con 0x7fbc700fdd70 2026-03-09T20:43:20.673 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.671+0000 7fbc6ffff640 1 -- 192.168.123.110:0/776315199 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fbc3c0051c0 con 0x7fbc700fdd70 2026-03-09T20:43:20.673 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.672+0000 7fbc4ffff640 1 -- 192.168.123.110:0/776315199 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7fbc6405e540 con 0x7fbc700fdd70 2026-03-09T20:43:20.674 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:20.674 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:20.674 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:20.676 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.674+0000 7fbc6ffff640 1 -- 192.168.123.110:0/776315199 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc4803d1b0 msgr2=0x7fbc4803f670 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:20.676 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.674+0000 7fbc6ffff640 1 --2- 192.168.123.110:0/776315199 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc4803d1b0 0x7fbc4803f670 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fbc580099c0 tx=0x7fbc58006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:20.676 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.674+0000 7fbc6ffff640 1 -- 192.168.123.110:0/776315199 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbc700fdd70 msgr2=0x7fbc7019dde0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:20.676 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.674+0000 7fbc6ffff640 1 --2- 192.168.123.110:0/776315199 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbc700fdd70 0x7fbc7019dde0 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7fbc6402f860 tx=0x7fbc64004290 comp rx=0 tx=0).stop 2026-03-09T20:43:20.676 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.675+0000 7fbc6ffff640 1 -- 192.168.123.110:0/776315199 shutdown_connections 2026-03-09T20:43:20.676 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.675+0000 7fbc6ffff640 1 --2- 192.168.123.110:0/776315199 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc4803d1b0 0x7fbc4803f670 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:20.676 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.675+0000 7fbc6ffff640 1 --2- 192.168.123.110:0/776315199 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbc700fdd70 0x7fbc7019dde0 unknown :-1 s=CLOSED pgs=145 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:20.676 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.675+0000 7fbc6ffff640 1 -- 192.168.123.110:0/776315199 >> 192.168.123.110:0/776315199 conn(0x7fbc700f9b60 msgr2=0x7fbc700fbf50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:20.676 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.675+0000 7fbc6ffff640 1 -- 192.168.123.110:0/776315199 shutdown_connections 2026-03-09T20:43:20.676 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:20.675+0000 7fbc6ffff640 1 -- 192.168.123.110:0/776315199 wait complete. 
2026-03-09T20:43:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:21 vm07 ceph-mon[49120]: mgrmap e16: vm07.xjrvch(active, since 1.00775s) 2026-03-09T20:43:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:21 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/776315199' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:21 vm07 ceph-mon[49120]: [09/Mar/2026:20:43:20] ENGINE Bus STARTING 2026-03-09T20:43:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:21 vm07 ceph-mon[49120]: [09/Mar/2026:20:43:20] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T20:43:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:21 vm07 ceph-mon[49120]: [09/Mar/2026:20:43:20] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T20:43:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:21 vm07 ceph-mon[49120]: [09/Mar/2026:20:43:20] ENGINE Bus STARTED 2026-03-09T20:43:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:21 vm07 ceph-mon[49120]: [09/Mar/2026:20:43:20] ENGINE Client ('192.168.123.107', 43368) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:43:21.749 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T20:43:21.749 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:21.931 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:21.980 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:43:22.248 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.246+0000 7f9722f7d640 1 -- 192.168.123.110:0/2345294921 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f971c0717c0 msgr2=0x7f971c071bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:22.249 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.246+0000 7f9722f7d640 1 --2- 192.168.123.110:0/2345294921 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f971c0717c0 0x7f971c071bc0 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7f970c007920 tx=0x7f970c030050 comp rx=0 tx=0).stop 2026-03-09T20:43:22.249 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.247+0000 7f9722f7d640 1 -- 192.168.123.110:0/2345294921 shutdown_connections 2026-03-09T20:43:22.249 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.247+0000 7f9722f7d640 1 --2- 192.168.123.110:0/2345294921 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f971c0717c0 0x7f971c071bc0 unknown :-1 s=CLOSED pgs=146 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:22.249 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.247+0000 7f9722f7d640 1 -- 192.168.123.110:0/2345294921 >> 192.168.123.110:0/2345294921 conn(0x7f971c06d1c0 msgr2=0x7f971c06f600 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:22.249 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.248+0000 7f9722f7d640 1 -- 192.168.123.110:0/2345294921 
shutdown_connections 2026-03-09T20:43:22.249 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.248+0000 7f9722f7d640 1 -- 192.168.123.110:0/2345294921 wait complete. 2026-03-09T20:43:22.249 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.248+0000 7f9722f7d640 1 Processor -- start 2026-03-09T20:43:22.250 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.248+0000 7f9722f7d640 1 -- start start 2026-03-09T20:43:22.250 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.248+0000 7f9722f7d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f971c0795c0 0x7f971c0799e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:22.250 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.248+0000 7f9722f7d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f970c002dc0 con 0x7f971c0795c0 2026-03-09T20:43:22.250 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.248+0000 7f9720cf2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f971c0795c0 0x7f971c0799e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:22.250 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.248+0000 7f9720cf2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f971c0795c0 0x7f971c0799e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:47606/0 (socket says 192.168.123.110:47606) 2026-03-09T20:43:22.250 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.248+0000 7f9720cf2640 1 -- 192.168.123.110:0/4106906101 learned_addr learned my addr 192.168.123.110:0/4106906101 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:22.250 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.249+0000 7f9720cf2640 1 -- 192.168.123.110:0/4106906101 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f970c0075d0 con 0x7f971c0795c0 2026-03-09T20:43:22.250 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.249+0000 7f9720cf2640 1 --2- 192.168.123.110:0/4106906101 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f971c0795c0 0x7f971c0799e0 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7f970c030600 tx=0x7f970c031b10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:22.250 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.249+0000 7f9719ffb640 1 -- 192.168.123.110:0/4106906101 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f970c0398a0 con 0x7f971c0795c0 2026-03-09T20:43:22.251 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.249+0000 7f9722f7d640 1 -- 192.168.123.110:0/4106906101 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f971c079f20 con 0x7f971c0795c0 2026-03-09T20:43:22.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.250+0000 7f9722f7d640 1 -- 192.168.123.110:0/4106906101 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f971c07cac0 con 0x7f971c0795c0 2026-03-09T20:43:22.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.251+0000 7f9719ffb640 1 -- 192.168.123.110:0/4106906101 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f970c0413e0 con 0x7f971c0795c0 2026-03-09T20:43:22.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.251+0000 7f9719ffb640 1 -- 192.168.123.110:0/4106906101 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f970c040390 con 
0x7f971c0795c0 2026-03-09T20:43:22.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.251+0000 7f9719ffb640 1 -- 192.168.123.110:0/4106906101 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 17) v1 ==== 49489+0+0 (secure 0 0 0) 0x7f970c049050 con 0x7f971c0795c0 2026-03-09T20:43:22.252 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.251+0000 7f9719ffb640 1 --2- 192.168.123.110:0/4106906101 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f970003d4e0 0x7f970003f9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:22.253 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.251+0000 7f9719ffb640 1 -- 192.168.123.110:0/4106906101 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1370+0+0 (secure 0 0 0) 0x7f970c076e10 con 0x7f971c0795c0 2026-03-09T20:43:22.253 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.252+0000 7f971bfff640 1 --2- 192.168.123.110:0/4106906101 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f970003d4e0 0x7f970003f9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:22.254 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.253+0000 7f971bfff640 1 --2- 192.168.123.110:0/4106906101 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f970003d4e0 0x7f970003f9a0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f971400ad30 tx=0x7f97140093f0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:22.254 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.253+0000 7f9722f7d640 1 -- 192.168.123.110:0/4106906101 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f96e8005350 con 
0x7f971c0795c0 2026-03-09T20:43:22.257 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.256+0000 7f9719ffb640 1 -- 192.168.123.110:0/4106906101 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f970c049d80 con 0x7f971c0795c0 2026-03-09T20:43:22.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 
vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: Updating vm07:/etc/ceph/ceph.conf 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: Updating vm10:/etc/ceph/ceph.conf 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:43:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:22 vm07 ceph-mon[49120]: mgrmap e17: vm07.xjrvch(active, since 2s) 2026-03-09T20:43:22.402 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.400+0000 7f9722f7d640 1 -- 192.168.123.110:0/4106906101 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f96e80051c0 con 0x7f971c0795c0 2026-03-09T20:43:22.402 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.401+0000 7f9719ffb640 1 -- 192.168.123.110:0/4106906101 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f970c068090 con 0x7f971c0795c0 2026-03-09T20:43:22.404 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:22.404 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.403+0000 7f9722f7d640 1 -- 192.168.123.110:0/4106906101 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f970003d4e0 msgr2=0x7f970003f9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.403+0000 7f9722f7d640 1 --2- 192.168.123.110:0/4106906101 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f970003d4e0 0x7f970003f9a0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f971400ad30 tx=0x7f97140093f0 comp rx=0 tx=0).stop 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.404+0000 7f9722f7d640 1 -- 192.168.123.110:0/4106906101 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f971c0795c0 msgr2=0x7f971c0799e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.404+0000 7f9722f7d640 1 --2- 192.168.123.110:0/4106906101 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f971c0795c0 0x7f971c0799e0 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7f970c030600 tx=0x7f970c031b10 comp rx=0 tx=0).stop 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.404+0000 7f9722f7d640 1 -- 192.168.123.110:0/4106906101 shutdown_connections 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.404+0000 7f9722f7d640 1 --2- 192.168.123.110:0/4106906101 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f970003d4e0 0x7f970003f9a0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.404+0000 7f9722f7d640 1 --2- 192.168.123.110:0/4106906101 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f971c0795c0 0x7f971c0799e0 unknown :-1 s=CLOSED pgs=147 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.404+0000 7f9722f7d640 1 -- 192.168.123.110:0/4106906101 >> 192.168.123.110:0/4106906101 conn(0x7f971c06d1c0 msgr2=0x7f971c06db80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.404+0000 7f9722f7d640 1 -- 192.168.123.110:0/4106906101 shutdown_connections 2026-03-09T20:43:22.405 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:22.404+0000 7f9722f7d640 1 -- 192.168.123.110:0/4106906101 wait complete. 
2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/4106906101' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 
cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:23.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:23 vm07 ceph-mon[49120]: Deploying daemon ceph-exporter.vm10 on vm10 2026-03-09T20:43:23.453 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T20:43:23.453 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:23.653 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.906+0000 7f32c4d53640 1 -- 192.168.123.110:0/3218970122 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32c0071990 msgr2=0x7f32c0071d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.906+0000 7f32be7fc640 1 -- 192.168.123.110:0/3218970122 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f32b002fb30 con 0x7f32c0071990 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.906+0000 7f32c4d53640 1 --2- 192.168.123.110:0/3218970122 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32c0071990 0x7f32c0071d70 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7f32b00099b0 tx=0x7f32b002f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.906+0000 7f32c4d53640 1 -- 192.168.123.110:0/3218970122 shutdown_connections 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.906+0000 7f32c4d53640 1 --2- 192.168.123.110:0/3218970122 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32c0071990 0x7f32c0071d70 unknown :-1 s=CLOSED pgs=149 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.906+0000 7f32c4d53640 1 -- 192.168.123.110:0/3218970122 >> 192.168.123.110:0/3218970122 
conn(0x7f32c006b190 msgr2=0x7f32c006b5a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.907+0000 7f32c4d53640 1 -- 192.168.123.110:0/3218970122 shutdown_connections 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.907+0000 7f32c4d53640 1 -- 192.168.123.110:0/3218970122 wait complete. 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.908+0000 7f32c4d53640 1 Processor -- start 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.908+0000 7f32c4d53640 1 -- start start 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.908+0000 7f32c4d53640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32c0071990 0x7f32c0115550 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.908+0000 7f32c4d53640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f32c0117460 con 0x7f32c0071990 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.908+0000 7f32bf7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32c0071990 0x7f32c0115550 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.908+0000 7f32bf7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32c0071990 0x7f32c0115550 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:47630/0 (socket says 192.168.123.110:47630) 2026-03-09T20:43:23.910 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.908+0000 7f32bf7fe640 1 -- 192.168.123.110:0/469750802 learned_addr learned my addr 192.168.123.110:0/469750802 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.908+0000 7f32bf7fe640 1 -- 192.168.123.110:0/469750802 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f32b0009660 con 0x7f32c0071990 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.908+0000 7f32bf7fe640 1 --2- 192.168.123.110:0/469750802 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32c0071990 0x7f32c0115550 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7f32b0005ec0 tx=0x7f32b0004060 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.909+0000 7f32bcff9640 1 -- 192.168.123.110:0/469750802 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f32b0004430 con 0x7f32c0071990 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.909+0000 7f32bcff9640 1 -- 192.168.123.110:0/469750802 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f32b0038930 con 0x7f32c0071990 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.909+0000 7f32c4d53640 1 -- 192.168.123.110:0/469750802 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f32c0115a90 con 0x7f32c0071990 2026-03-09T20:43:23.910 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.909+0000 7f32c4d53640 1 -- 192.168.123.110:0/469750802 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f32c0115f30 con 0x7f32c0071990 2026-03-09T20:43:23.910 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.909+0000 7f32bcff9640 1 -- 192.168.123.110:0/469750802 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f32b0041840 con 0x7f32c0071990 2026-03-09T20:43:23.912 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.910+0000 7f32bcff9640 1 -- 192.168.123.110:0/469750802 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 17) v1 ==== 49489+0+0 (secure 0 0 0) 0x7f32b0038aa0 con 0x7f32c0071990 2026-03-09T20:43:23.912 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.910+0000 7f32c4d53640 1 -- 192.168.123.110:0/469750802 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f328c005350 con 0x7f32c0071990 2026-03-09T20:43:23.912 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.910+0000 7f32bcff9640 1 --2- 192.168.123.110:0/469750802 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f32a403d410 0x7f32a403f8d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:23.912 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.910+0000 7f32bcff9640 1 -- 192.168.123.110:0/469750802 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1370+0+0 (secure 0 0 0) 0x7f32b0075af0 con 0x7f32c0071990 2026-03-09T20:43:23.912 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.910+0000 7f32beffd640 1 --2- 192.168.123.110:0/469750802 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f32a403d410 0x7f32a403f8d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:23.912 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.911+0000 7f32beffd640 1 --2- 192.168.123.110:0/469750802 >> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f32a403d410 0x7f32a403f8d0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f32b40099c0 tx=0x7f32b4006eb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:23.914 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:23.913+0000 7f32bcff9640 1 -- 192.168.123.110:0/469750802 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f32b0035320 con 0x7f32c0071990 2026-03-09T20:43:24.044 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.043+0000 7f32c4d53640 1 -- 192.168.123.110:0/469750802 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f328c0051c0 con 0x7f32c0071990 2026-03-09T20:43:24.045 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.043+0000 7f32bcff9640 1 -- 192.168.123.110:0/469750802 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f32b0046030 con 0x7f32c0071990 2026-03-09T20:43:24.045 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:24.045 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:24.045 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:24.048 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.046+0000 7f32c4d53640 1 -- 192.168.123.110:0/469750802 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f32a403d410 msgr2=0x7f32a403f8d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:24.048 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.046+0000 7f32c4d53640 1 --2- 192.168.123.110:0/469750802 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f32a403d410 0x7f32a403f8d0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f32b40099c0 tx=0x7f32b4006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:24.048 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.046+0000 7f32c4d53640 1 -- 192.168.123.110:0/469750802 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32c0071990 msgr2=0x7f32c0115550 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:24.048 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.046+0000 7f32c4d53640 1 --2- 192.168.123.110:0/469750802 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32c0071990 0x7f32c0115550 secure 
:-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7f32b0005ec0 tx=0x7f32b0004060 comp rx=0 tx=0).stop 2026-03-09T20:43:24.048 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.047+0000 7f32c4d53640 1 -- 192.168.123.110:0/469750802 shutdown_connections 2026-03-09T20:43:24.048 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.047+0000 7f32c4d53640 1 --2- 192.168.123.110:0/469750802 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f32a403d410 0x7f32a403f8d0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:24.048 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.047+0000 7f32c4d53640 1 --2- 192.168.123.110:0/469750802 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32c0071990 0x7f32c0115550 unknown :-1 s=CLOSED pgs=150 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:24.048 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.047+0000 7f32c4d53640 1 -- 192.168.123.110:0/469750802 >> 192.168.123.110:0/469750802 conn(0x7f32c006b190 msgr2=0x7f32c006eb50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:24.049 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.047+0000 7f32c4d53640 1 -- 192.168.123.110:0/469750802 shutdown_connections 2026-03-09T20:43:24.049 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:24.047+0000 7f32c4d53640 1 -- 192.168.123.110:0/469750802 wait complete. 
2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm10", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm10", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: Deploying daemon crash.vm10 on vm10 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='client.? 
192.168.123.110:0/469750802' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:25.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:24 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:25.098 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T20:43:25.098 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:25.237 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:43:25.462 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.460+0000 7f2eea681640 1 -- 192.168.123.110:0/2610679917 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ee4107580 msgr2=0x7f2ee4107960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:25.462 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.460+0000 7f2eea681640 1 --2- 192.168.123.110:0/2610679917 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ee4107580 0x7f2ee4107960 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7f2ed40099b0 tx=0x7f2ed402f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:25.462 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.460+0000 7f2eea681640 1 -- 192.168.123.110:0/2610679917 shutdown_connections 2026-03-09T20:43:25.462 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.460+0000 7f2eea681640 1 --2- 192.168.123.110:0/2610679917 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ee4107580 0x7f2ee4107960 unknown :-1 s=CLOSED pgs=151 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:25.462 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.460+0000 7f2eea681640 1 -- 192.168.123.110:0/2610679917 >> 192.168.123.110:0/2610679917 conn(0x7f2ee40739a0 msgr2=0x7f2ee4075dc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:25.462 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.460+0000 7f2eea681640 1 -- 192.168.123.110:0/2610679917 shutdown_connections 2026-03-09T20:43:25.462 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.461+0000 7f2eea681640 1 -- 192.168.123.110:0/2610679917 wait complete. 2026-03-09T20:43:25.462 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.461+0000 7f2eea681640 1 Processor -- start 2026-03-09T20:43:25.462 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.461+0000 7f2eea681640 1 -- start start 2026-03-09T20:43:25.462 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.461+0000 7f2eea681640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ee4107580 0x7f2ee419e2e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:25.462 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.461+0000 7f2eea681640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2ee419e820 con 0x7f2ee4107580 2026-03-09T20:43:25.463 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.461+0000 7f2ee3fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ee4107580 0x7f2ee419e2e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:25.463 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.461+0000 7f2ee3fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ee4107580 0x7f2ee419e2e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:47662/0 (socket says 192.168.123.110:47662) 2026-03-09T20:43:25.463 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.461+0000 7f2ee3fff640 1 -- 192.168.123.110:0/1071181536 learned_addr learned my addr 192.168.123.110:0/1071181536 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:25.463 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.462+0000 7f2ee3fff640 1 -- 192.168.123.110:0/1071181536 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2ed4009660 con 0x7f2ee4107580 2026-03-09T20:43:25.463 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.462+0000 7f2ee3fff640 1 --2- 192.168.123.110:0/1071181536 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ee4107580 0x7f2ee419e2e0 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7f2ed4005ec0 tx=0x7f2ed4004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:25.463 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.462+0000 7f2ee17fa640 1 -- 192.168.123.110:0/1071181536 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2ed403d070 con 0x7f2ee4107580 2026-03-09T20:43:25.464 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.462+0000 7f2eea681640 1 -- 192.168.123.110:0/1071181536 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2ee419ea20 con 0x7f2ee4107580 2026-03-09T20:43:25.464 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.462+0000 7f2ee17fa640 1 -- 192.168.123.110:0/1071181536 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f2ed4002a70 con 0x7f2ee4107580 2026-03-09T20:43:25.464 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.462+0000 7f2ee17fa640 1 -- 192.168.123.110:0/1071181536 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2ed4041900 con 0x7f2ee4107580 2026-03-09T20:43:25.464 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.462+0000 7f2eea681640 1 -- 192.168.123.110:0/1071181536 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2ee419ee40 con 
0x7f2ee4107580 2026-03-09T20:43:25.464 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.463+0000 7f2ee17fa640 1 -- 192.168.123.110:0/1071181536 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 17) v1 ==== 49489+0+0 (secure 0 0 0) 0x7f2ed4038680 con 0x7f2ee4107580 2026-03-09T20:43:25.464 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.463+0000 7f2eea681640 1 -- 192.168.123.110:0/1071181536 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2ea8005350 con 0x7f2ee4107580 2026-03-09T20:43:25.464 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.463+0000 7f2ee17fa640 1 --2- 192.168.123.110:0/1071181536 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2ec403d370 0x7f2ec403f830 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:25.465 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.463+0000 7f2ee17fa640 1 -- 192.168.123.110:0/1071181536 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1370+0+0 (secure 0 0 0) 0x7f2ed4075c60 con 0x7f2ee4107580 2026-03-09T20:43:25.465 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.463+0000 7f2ee37fe640 1 --2- 192.168.123.110:0/1071181536 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2ec403d370 0x7f2ec403f830 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:25.465 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.464+0000 7f2ee37fe640 1 --2- 192.168.123.110:0/1071181536 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2ec403d370 0x7f2ec403f830 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f2ed00099c0 tx=0x7f2ed0006eb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 
in_seq=0 out_seq=0 2026-03-09T20:43:25.467 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.466+0000 7f2ee17fa640 1 -- 192.168.123.110:0/1071181536 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f2ed4038c40 con 0x7f2ee4107580 2026-03-09T20:43:25.586 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.584+0000 7f2eea681640 1 -- 192.168.123.110:0/1071181536 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f2ea80051c0 con 0x7f2ee4107580 2026-03-09T20:43:25.586 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.585+0000 7f2ee17fa640 1 -- 192.168.123.110:0/1071181536 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f2ed4046030 con 0x7f2ee4107580 2026-03-09T20:43:25.586 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:25.586 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:25.586 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:25.588 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.587+0000 7f2eea681640 1 -- 192.168.123.110:0/1071181536 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2ec403d370 msgr2=0x7f2ec403f830 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:25.588 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.587+0000 7f2eea681640 1 --2- 192.168.123.110:0/1071181536 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2ec403d370 0x7f2ec403f830 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f2ed00099c0 tx=0x7f2ed0006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:25.588 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.587+0000 7f2eea681640 1 -- 192.168.123.110:0/1071181536 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ee4107580 msgr2=0x7f2ee419e2e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:25.588 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.587+0000 7f2eea681640 1 --2- 192.168.123.110:0/1071181536 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2ee4107580 0x7f2ee419e2e0 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7f2ed4005ec0 tx=0x7f2ed4004290 comp rx=0 tx=0).stop 2026-03-09T20:43:25.588 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.587+0000 7f2eea681640 1 -- 192.168.123.110:0/1071181536 shutdown_connections 2026-03-09T20:43:25.588 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.587+0000 7f2eea681640 1 --2- 192.168.123.110:0/1071181536 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2ec403d370 0x7f2ec403f830 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:25.588 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.587+0000 7f2eea681640 1 --2- 192.168.123.110:0/1071181536 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7f2ee4107580 0x7f2ee419e2e0 unknown :-1 s=CLOSED pgs=152 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:25.588 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.587+0000 7f2eea681640 1 -- 192.168.123.110:0/1071181536 >> 192.168.123.110:0/1071181536 conn(0x7f2ee40739a0 msgr2=0x7f2ee4075d90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:25.589 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.588+0000 7f2eea681640 1 -- 192.168.123.110:0/1071181536 shutdown_connections 2026-03-09T20:43:25.589 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:25.588+0000 7f2eea681640 1 -- 192.168.123.110:0/1071181536 wait complete. 2026-03-09T20:43:26.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:25 vm07 ceph-mon[49120]: Deploying daemon node-exporter.vm10 on vm10 2026-03-09T20:43:26.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:25 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/1071181536' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:26.872 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T20:43:26.872 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:27.034 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:43:27.311 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.309+0000 7f3ce7f35640 1 -- 192.168.123.110:0/2844764664 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3ce0071dc0 msgr2=0x7f3ce00721a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:27.311 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.309+0000 7f3ce7f35640 1 --2- 192.168.123.110:0/2844764664 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3ce0071dc0 0x7f3ce00721a0 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7f3cd8008030 tx=0x7f3cd8030de0 comp rx=0 tx=0).stop 2026-03-09T20:43:27.312 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.311+0000 7f3ce7f35640 1 -- 192.168.123.110:0/2844764664 shutdown_connections 2026-03-09T20:43:27.312 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.311+0000 7f3ce7f35640 1 --2- 192.168.123.110:0/2844764664 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3ce0071dc0 0x7f3ce00721a0 unknown :-1 s=CLOSED pgs=153 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:27.312 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.311+0000 7f3ce7f35640 1 -- 192.168.123.110:0/2844764664 >> 192.168.123.110:0/2844764664 conn(0x7f3ce006b380 msgr2=0x7f3ce006b790 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:27.313 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.312+0000 7f3ce7f35640 1 -- 192.168.123.110:0/2844764664 shutdown_connections 2026-03-09T20:43:27.313 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.312+0000 7f3ce7f35640 1 -- 192.168.123.110:0/2844764664 wait complete. 2026-03-09T20:43:27.314 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.312+0000 7f3ce7f35640 1 Processor -- start 2026-03-09T20:43:27.314 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.312+0000 7f3ce7f35640 1 -- start start 2026-03-09T20:43:27.314 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.312+0000 7f3ce7f35640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3ce0071dc0 0x7f3ce01b80e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:27.314 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.312+0000 7f3ce7f35640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3ce01b8620 con 0x7f3ce0071dc0 2026-03-09T20:43:27.316 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.313+0000 7f3ce5caa640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3ce0071dc0 0x7f3ce01b80e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:27.316 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.313+0000 7f3ce5caa640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3ce0071dc0 0x7f3ce01b80e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:47684/0 (socket says 192.168.123.110:47684) 2026-03-09T20:43:27.316 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.313+0000 7f3ce5caa640 1 -- 192.168.123.110:0/952412385 learned_addr learned my addr 192.168.123.110:0/952412385 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:27.316 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.313+0000 
7f3ce5caa640 1 -- 192.168.123.110:0/952412385 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3cd8007ce0 con 0x7f3ce0071dc0 2026-03-09T20:43:27.316 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.313+0000 7f3ce5caa640 1 --2- 192.168.123.110:0/952412385 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3ce0071dc0 0x7f3ce01b80e0 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7f3cd8002790 tx=0x7f3cd8033870 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:27.316 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.313+0000 7f3cd6ffd640 1 -- 192.168.123.110:0/952412385 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3cd803f070 con 0x7f3ce0071dc0 2026-03-09T20:43:27.316 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.313+0000 7f3cd6ffd640 1 -- 192.168.123.110:0/952412385 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f3cd8033ec0 con 0x7f3ce0071dc0 2026-03-09T20:43:27.316 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.314+0000 7f3cd6ffd640 1 -- 192.168.123.110:0/952412385 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3cd803f070 con 0x7f3ce0071dc0 2026-03-09T20:43:27.316 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.315+0000 7f3ce7f35640 1 -- 192.168.123.110:0/952412385 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3ce01b8820 con 0x7f3ce0071dc0 2026-03-09T20:43:27.316 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.315+0000 7f3ce7f35640 1 -- 192.168.123.110:0/952412385 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3ce01b8b80 con 0x7f3ce0071dc0 2026-03-09T20:43:27.317 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.316+0000 7f3cd4ff9640 1 -- 192.168.123.110:0/952412385 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3ce010d060 con 0x7f3ce0071dc0 2026-03-09T20:43:27.317 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.316+0000 7f3cd6ffd640 1 -- 192.168.123.110:0/952412385 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 17) v1 ==== 49489+0+0 (secure 0 0 0) 0x7f3cd80332e0 con 0x7f3ce0071dc0 2026-03-09T20:43:27.317 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.316+0000 7f3cd6ffd640 1 --2- 192.168.123.110:0/952412385 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3cb403d3c0 0x7f3cb403f880 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:27.317 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.316+0000 7f3cd6ffd640 1 -- 192.168.123.110:0/952412385 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1370+0+0 (secure 0 0 0) 0x7f3cd8031620 con 0x7f3ce0071dc0 2026-03-09T20:43:27.318 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.317+0000 7f3ce54a9640 1 --2- 192.168.123.110:0/952412385 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3cb403d3c0 0x7f3cb403f880 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:27.320 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.318+0000 7f3ce54a9640 1 --2- 192.168.123.110:0/952412385 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3cb403d3c0 0x7f3cb403f880 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f3cdc0099c0 tx=0x7f3cdc006eb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:27.320 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.319+0000 7f3cd6ffd640 1 -- 192.168.123.110:0/952412385 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f3cd803a610 con 0x7f3ce0071dc0 2026-03-09T20:43:27.449 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.448+0000 7f3cd4ff9640 1 -- 192.168.123.110:0/952412385 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f3ce00721a0 con 0x7f3ce0071dc0 2026-03-09T20:43:27.451 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.450+0000 7f3cd6ffd640 1 -- 192.168.123.110:0/952412385 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f3cd803a610 con 0x7f3ce0071dc0 2026-03-09T20:43:27.451 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:27.451 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:27.451 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:27.453 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.452+0000 
7f3cd4ff9640 1 -- 192.168.123.110:0/952412385 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3cb403d3c0 msgr2=0x7f3cb403f880 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:27.453 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.452+0000 7f3cd4ff9640 1 --2- 192.168.123.110:0/952412385 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3cb403d3c0 0x7f3cb403f880 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f3cdc0099c0 tx=0x7f3cdc006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:27.453 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.452+0000 7f3cd4ff9640 1 -- 192.168.123.110:0/952412385 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3ce0071dc0 msgr2=0x7f3ce01b80e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:27.453 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.452+0000 7f3cd4ff9640 1 --2- 192.168.123.110:0/952412385 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3ce0071dc0 0x7f3ce01b80e0 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7f3cd8002790 tx=0x7f3cd8033870 comp rx=0 tx=0).stop 2026-03-09T20:43:27.453 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.452+0000 7f3cd4ff9640 1 -- 192.168.123.110:0/952412385 shutdown_connections 2026-03-09T20:43:27.453 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.452+0000 7f3cd4ff9640 1 --2- 192.168.123.110:0/952412385 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3cb403d3c0 0x7f3cb403f880 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:27.453 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.452+0000 7f3cd4ff9640 1 --2- 192.168.123.110:0/952412385 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3ce0071dc0 0x7f3ce01b80e0 unknown :-1 s=CLOSED pgs=154 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:27.453 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.452+0000 7f3cd4ff9640 1 -- 192.168.123.110:0/952412385 >> 192.168.123.110:0/952412385 conn(0x7f3ce006b380 msgr2=0x7f3ce0074260 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:27.454 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.452+0000 7f3cd4ff9640 1 -- 192.168.123.110:0/952412385 shutdown_connections 2026-03-09T20:43:27.454 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:27.452+0000 7f3cd4ff9640 1 -- 192.168.123.110:0/952412385 wait complete. 2026-03-09T20:43:27.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:27 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/952412385' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:28.501 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:28.501 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:28.680 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:43:29.016 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.014+0000 7f8175c38640 1 -- 192.168.123.110:0/2010669352 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f817010ca80 msgr2=0x7f817010ce60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:29.016 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.014+0000 7f8175c38640 1 --2- 192.168.123.110:0/2010669352 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f817010ca80 0x7f817010ce60 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7f8168008030 tx=0x7f8168030de0 comp rx=0 tx=0).stop 2026-03-09T20:43:29.017 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.014+0000 7f8175c38640 1 -- 192.168.123.110:0/2010669352 shutdown_connections 2026-03-09T20:43:29.017 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.014+0000 7f8175c38640 1 --2- 192.168.123.110:0/2010669352 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f817010ca80 0x7f817010ce60 unknown :-1 s=CLOSED pgs=157 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:29.017 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.014+0000 7f8175c38640 1 -- 192.168.123.110:0/2010669352 >> 192.168.123.110:0/2010669352 conn(0x7f817006b190 msgr2=0x7f817006b5a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:29.017 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.014+0000 7f8175c38640 1 -- 192.168.123.110:0/2010669352 shutdown_connections 2026-03-09T20:43:29.017 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.014+0000 7f8175c38640 1 -- 192.168.123.110:0/2010669352 wait complete. 
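The `Waiting for 2 mons in monmap...` message above comes from the cephadm task polling `ceph mon dump -f json` until the monmap lists the expected monitor count. A minimal sketch of that check (the `enough_mons` helper is illustrative, not teuthology's actual code; the monmap literal is abbreviated from the epoch-1 dump earlier in this log):

```python
import json

# Abbreviated monmap as returned by `ceph mon dump -f json`
# (fields taken from the epoch-1 dump in this log: one mon, quorum [0]).
monmap = json.loads("""
{"epoch": 1,
 "min_mon_release_name": "reef",
 "mons": [{"rank": 0, "name": "vm07", "addr": "192.168.123.107:6789/0"}],
 "quorum": [0]}
""")

def enough_mons(monmap, want):
    """Return True once the monmap lists at least `want` monitors."""
    return len(monmap["mons"]) >= want

# With only mon.vm07 in the map, a wait-for-2-mons loop keeps polling.
print(enough_mons(monmap, 2))  # False until mon.vm10 joins the map
```

Each poll in the log re-runs the `cephadm ... shell -- ceph mon dump -f json` command, which is why the connect/mark_down messenger sequences repeat between dumps.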
2026-03-09T20:43:29.017 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.015+0000 7f8175c38640 1 Processor -- start 2026-03-09T20:43:29.017 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.015+0000 7f8175c38640 1 -- start start 2026-03-09T20:43:29.017 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.015+0000 7f8175c38640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81700839d0 0x7f8170083db0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:29.017 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.015+0000 7f8175c38640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8168004530 con 0x7f81700839d0 2026-03-09T20:43:29.020 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.016+0000 7f8174c36640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81700839d0 0x7f8170083db0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:29.020 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.016+0000 7f8174c36640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81700839d0 0x7f8170083db0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:54176/0 (socket says 192.168.123.110:54176) 2026-03-09T20:43:29.020 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.016+0000 7f8174c36640 1 -- 192.168.123.110:0/454319162 learned_addr learned my addr 192.168.123.110:0/454319162 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:29.020 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.017+0000 7f8174c36640 1 -- 192.168.123.110:0/454319162 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8168007ce0 con 0x7f81700839d0 2026-03-09T20:43:29.020 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.017+0000 7f8174c36640 1 --2- 192.168.123.110:0/454319162 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81700839d0 0x7f8170083db0 secure :-1 s=READY pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7f8168004290 tx=0x7f81680042c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:29.020 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.018+0000 7f816dffb640 1 -- 192.168.123.110:0/454319162 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f816803f070 con 0x7f81700839d0 2026-03-09T20:43:29.020 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.018+0000 7f816dffb640 1 -- 192.168.123.110:0/454319162 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f8168002e80 con 0x7f81700839d0 2026-03-09T20:43:29.020 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.018+0000 7f816dffb640 1 -- 192.168.123.110:0/454319162 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f816803a470 con 0x7f81700839d0 2026-03-09T20:43:29.020 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.018+0000 7f8175c38640 1 -- 192.168.123.110:0/454319162 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f817007ca60 con 0x7f81700839d0 2026-03-09T20:43:29.020 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.018+0000 7f8175c38640 1 -- 192.168.123.110:0/454319162 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f817007cee0 con 0x7f81700839d0 2026-03-09T20:43:29.022 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.020+0000 7f8175c38640 1 -- 192.168.123.110:0/454319162 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8138005350 con 0x7f81700839d0 2026-03-09T20:43:29.023 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.021+0000 7f816dffb640 1 -- 192.168.123.110:0/454319162 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 17) v1 ==== 49489+0+0 (secure 0 0 0) 0x7f8168031720 con 0x7f81700839d0 2026-03-09T20:43:29.023 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.021+0000 7f816dffb640 1 --2- 192.168.123.110:0/454319162 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f815003d410 0x7f815003f8d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:29.023 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.021+0000 7f816dffb640 1 -- 192.168.123.110:0/454319162 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1370+0+0 (secure 0 0 0) 0x7f8168076a10 con 0x7f81700839d0 2026-03-09T20:43:29.023 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.021+0000 7f816ffff640 1 --2- 192.168.123.110:0/454319162 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f815003d410 0x7f815003f8d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:29.024 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.022+0000 7f816ffff640 1 --2- 192.168.123.110:0/454319162 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f815003d410 0x7f815003f8d0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f81600099c0 tx=0x7f8160006eb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:29.027 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.026+0000 7f816dffb640 1 -- 192.168.123.110:0/454319162 <== mon.0 
v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f81680385a0 con 0x7f81700839d0 2026-03-09T20:43:29.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm10.byqahe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm10.byqahe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:29.134 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: Deploying daemon mgr.vm10.byqahe on vm10 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:43:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:29.202 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.200+0000 7f8175c38640 1 -- 192.168.123.110:0/454319162 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f81380051c0 con 0x7f81700839d0 2026-03-09T20:43:29.204 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.203+0000 7f816dffb640 1 -- 192.168.123.110:0/454319162 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+747 (secure 0 0 0) 0x7f8168048070 con 0x7f81700839d0 2026-03-09T20:43:29.206 INFO:teuthology.orchestra.run.vm10.stdout: 
2026-03-09T20:43:29.210 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":1,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:42:20.613735Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:43:29.210 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 1 2026-03-09T20:43:29.211 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.209+0000 7f813f7fe640 1 -- 192.168.123.110:0/454319162 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f815003d410 msgr2=0x7f815003f8d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:29.213 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.209+0000 7f813f7fe640 1 --2- 192.168.123.110:0/454319162 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f815003d410 0x7f815003f8d0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f81600099c0 tx=0x7f8160006eb0 comp rx=0 tx=0).stop 2026-03-09T20:43:29.213 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.209+0000 7f813f7fe640 1 -- 192.168.123.110:0/454319162 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81700839d0 msgr2=0x7f8170083db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:29.213 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.209+0000 7f813f7fe640 1 --2- 
192.168.123.110:0/454319162 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81700839d0 0x7f8170083db0 secure :-1 s=READY pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7f8168004290 tx=0x7f81680042c0 comp rx=0 tx=0).stop 2026-03-09T20:43:29.213 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.209+0000 7f813f7fe640 1 -- 192.168.123.110:0/454319162 shutdown_connections 2026-03-09T20:43:29.213 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.209+0000 7f813f7fe640 1 --2- 192.168.123.110:0/454319162 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f815003d410 0x7f815003f8d0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:29.213 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.209+0000 7f813f7fe640 1 --2- 192.168.123.110:0/454319162 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81700839d0 0x7f8170083db0 secure :-1 s=CLOSED pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7f8168004290 tx=0x7f81680042c0 comp rx=0 tx=0).stop 2026-03-09T20:43:29.213 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.209+0000 7f813f7fe640 1 -- 192.168.123.110:0/454319162 >> 192.168.123.110:0/454319162 conn(0x7f817006b190 msgr2=0x7f817006e970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:29.213 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.212+0000 7f813f7fe640 1 -- 192.168.123.110:0/454319162 shutdown_connections 2026-03-09T20:43:29.213 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:29.212+0000 7f813f7fe640 1 -- 192.168.123.110:0/454319162 wait complete. 
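Once `mon.vm10` is deployed, the next dump (below) reports monmap epoch 2 with two monitors and quorum `[0,1]`. A short sketch of reading that state out of the JSON (literal abbreviated from the epoch-2 dump that follows; field names are as emitted by `ceph mon dump -f json`):

```python
import json

# Abbreviated epoch-2 monmap: mon.vm10 has joined and both ranks are in quorum.
monmap = json.loads("""
{"epoch": 2,
 "mons": [{"rank": 0, "name": "vm07"}, {"rank": 1, "name": "vm10"}],
 "quorum": [0, 1]}
""")

# Map quorum ranks back to monitor names.
names_in_quorum = [m["name"] for m in monmap["mons"]
                   if m["rank"] in monmap["quorum"]]
print(names_in_quorum)  # ['vm07', 'vm10']
```

The epoch bump (1 to 2) and the two-element `quorum` list are what let the wait loop in the task finally succeed.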
2026-03-09T20:43:30.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:29 vm07 ceph-mon[49120]: Deploying daemon mon.vm10 on vm10 2026-03-09T20:43:30.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:30.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:29 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/454319162' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:43:30.271 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T20:43:30.272 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mon dump -f json 2026-03-09T20:43:30.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:30 vm10 ceph-mon[57011]: mon.vm10@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3 2026-03-09T20:43:30.414 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm10/config 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.077+0000 7f0ddfdf1640 1 -- 192.168.123.110:0/1501788534 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd8071af0 msgr2=0x7f0dbc005680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.077+0000 7f0ddfdf1640 1 --2- 192.168.123.110:0/1501788534 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd8071af0 0x7f0dbc005680 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7f0dc8002a00 tx=0x7f0dc8030c80 comp rx=0 tx=0).stop 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.079+0000 7f0ddfdf1640 1 -- 
192.168.123.110:0/1501788534 shutdown_connections 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.079+0000 7f0ddfdf1640 1 --2- 192.168.123.110:0/1501788534 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd8071af0 0x7f0dbc005680 unknown :-1 s=CLOSED pgs=161 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.079+0000 7f0ddfdf1640 1 -- 192.168.123.110:0/1501788534 >> 192.168.123.110:0/1501788534 conn(0x7f0dd806c650 msgr2=0x7f0dd806ca60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.079+0000 7f0ddfdf1640 1 -- 192.168.123.110:0/1501788534 shutdown_connections 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.079+0000 7f0ddfdf1640 1 -- 192.168.123.110:0/1501788534 wait complete. 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.079+0000 7f0ddfdf1640 1 Processor -- start 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.079+0000 7f0ddfdf1640 1 -- start start 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddfdf1640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0dd8071af0 0x7f0dd81ac8a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddfdf1640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd81acde0 0x7f0dd81a6990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:35.083 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddfdf1640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0dd81ad2f0 con 
0x7f0dd81acde0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddfdf1640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0dd81ad430 con 0x7f0dd8071af0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddd365640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd81acde0 0x7f0dd81a6990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddd365640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd81acde0 0x7f0dd81a6990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:54302/0 (socket says 192.168.123.110:54302) 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddd365640 1 -- 192.168.123.110:0/1799913114 learned_addr learned my addr 192.168.123.110:0/1799913114 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddd365640 1 -- 192.168.123.110:0/1799913114 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0dd8071af0 msgr2=0x7f0dd81ac8a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddd365640 1 --2- 192.168.123.110:0/1799913114 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0dd8071af0 0x7f0dd81ac8a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddd365640 1 -- 
192.168.123.110:0/1799913114 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0dc80026e0 con 0x7f0dd81acde0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0ddd365640 1 --2- 192.168.123.110:0/1799913114 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd81acde0 0x7f0dd81a6990 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f0dcc00b730 tx=0x7f0dcc00bc00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0dc6ffd640 1 -- 192.168.123.110:0/1799913114 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0dcc00c8e0 con 0x7f0dd81acde0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.080+0000 7f0dc6ffd640 1 -- 192.168.123.110:0/1799913114 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0dcc014440 con 0x7f0dd81acde0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.081+0000 7f0dc6ffd640 1 -- 192.168.123.110:0/1799913114 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0dcc012530 con 0x7f0dd81acde0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.081+0000 7f0ddfdf1640 1 -- 192.168.123.110:0/1799913114 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0dd81a6ff0 con 0x7f0dd81acde0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.081+0000 7f0ddfdf1640 1 -- 192.168.123.110:0/1799913114 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0dd81a7470 con 0x7f0dd81acde0 2026-03-09T20:43:35.084 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.081+0000 7f0ddfdf1640 1 -- 192.168.123.110:0/1799913114 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0da0005350 con 0x7f0dd81acde0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.082+0000 7f0dc6ffd640 1 -- 192.168.123.110:0/1799913114 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 17) v1 ==== 49489+0+0 (secure 0 0 0) 0x7f0dcc01b020 con 0x7f0dd81acde0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.082+0000 7f0dc6ffd640 1 --2- 192.168.123.110:0/1799913114 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0db403d160 0x7f0db403f620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.082+0000 7f0dc6ffd640 1 -- 192.168.123.110:0/1799913114 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1370+0+0 (secure 0 0 0) 0x7f0dcc050910 con 0x7f0dd81acde0 2026-03-09T20:43:35.084 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.083+0000 7f0dddb66640 1 --2- 192.168.123.110:0/1799913114 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0db403d160 0x7f0db403f620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:35.085 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.083+0000 7f0dddb66640 1 --2- 192.168.123.110:0/1799913114 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0db403d160 0x7f0db403f620 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f0dc8005d80 tx=0x7f0dc8005cd0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:35.090 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.085+0000 7f0dc6ffd640 1 -- 192.168.123.110:0/1799913114 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f0dcc0145b0 con 0x7f0dd81acde0 2026-03-09T20:43:35.116 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.113+0000 7f0dc6ffd640 1 -- 192.168.123.110:0/1799913114 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mgrmap(e 18) v1 ==== 98424+0+0 (secure 0 0 0) 0x7f0dcc04f4d0 con 0x7f0dd81acde0 2026-03-09T20:43:35.235 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.234+0000 7f0ddfdf1640 1 -- 192.168.123.110:0/1799913114 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f0da0005600 con 0x7f0dd81acde0 2026-03-09T20:43:35.235 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.234+0000 7f0dc6ffd640 1 -- 192.168.123.110:0/1799913114 <== mon.0 v2:192.168.123.107:3300/0 8 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 2 v2) v1 ==== 95+0+1028 (secure 0 0 0) 0x7f0dcc019030 con 0x7f0dd81acde0 2026-03-09T20:43:35.236 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:43:35.236 
INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":2,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","modified":"2026-03-09T20:43:30.011073Z","created":"2026-03-09T20:42:20.613735Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"vm10","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:3300","nonce":0},{"type":"v1","addr":"192.168.123.110:6789","nonce":0}]},"addr":"192.168.123.110:6789/0","public_addr":"192.168.123.110:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T20:43:35.236 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 2 2026-03-09T20:43:35.238 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.237+0000 7f0dc4ff9640 1 -- 192.168.123.110:0/1799913114 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0db403d160 msgr2=0x7f0db403f620 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:35.238 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.237+0000 7f0dc4ff9640 1 --2- 192.168.123.110:0/1799913114 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0db403d160 0x7f0db403f620 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f0dc8005d80 tx=0x7f0dc8005cd0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.238 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.237+0000 7f0dc4ff9640 1 -- 192.168.123.110:0/1799913114 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd81acde0 msgr2=0x7f0dd81a6990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:35.238 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.237+0000 7f0dc4ff9640 1 --2- 192.168.123.110:0/1799913114 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd81acde0 0x7f0dd81a6990 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f0dcc00b730 tx=0x7f0dcc00bc00 comp rx=0 tx=0).stop 2026-03-09T20:43:35.238 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.237+0000 7f0dc4ff9640 1 -- 192.168.123.110:0/1799913114 shutdown_connections 2026-03-09T20:43:35.239 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.237+0000 7f0dc4ff9640 1 --2- 192.168.123.110:0/1799913114 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0db403d160 0x7f0db403f620 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.239 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.237+0000 7f0dc4ff9640 1 --2- 192.168.123.110:0/1799913114 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd81acde0 0x7f0dd81a6990 unknown :-1 s=CLOSED pgs=162 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.239 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.237+0000 7f0dc4ff9640 1 --2- 192.168.123.110:0/1799913114 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0dd8071af0 0x7f0dd81ac8a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.239 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.237+0000 7f0dc4ff9640 1 -- 192.168.123.110:0/1799913114 >> 192.168.123.110:0/1799913114 conn(0x7f0dd806c650 msgr2=0x7f0dd810e0a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:35.241 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.237+0000 7f0dc4ff9640 1 -- 
192.168.123.110:0/1799913114 shutdown_connections 2026-03-09T20:43:35.241 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:43:35.240+0000 7f0dc4ff9640 1 -- 192.168.123.110:0/1799913114 wait complete. 2026-03-09T20:43:35.292 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-09T20:43:35.292 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph config generate-minimal-conf 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: mon.vm07 calling monitor election 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.? 
192.168.123.110:0/1995847192' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/crt"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: mon.vm10 calling monitor election 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: mon.vm07 is new leader, mons vm07,vm10 in quorum (ranks 0,1) 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: monmap e2: 2 mons at {vm07=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],vm10=[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0]} removed_ranks: {} disallowed_leaders: {} 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 
ceph-mon[49120]: fsmap 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: osdmap e5: 0 total, 0 up, 0 in 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: mgrmap e17: vm07.xjrvch(active, since 15s) 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: overall HEALTH_OK 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: Standby manager daemon vm10.byqahe started 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.? 192.168.123.110:0/1995847192' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.? 192.168.123.110:0/1995847192' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/key"}]: dispatch 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:35.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:35 vm07 ceph-mon[49120]: from='mgr.? 
192.168.123.110:0/1995847192' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: mon.vm07 calling monitor election 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.? 
192.168.123.110:0/1995847192' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/crt"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: mon.vm10 calling monitor election 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: mon.vm07 is new leader, mons vm07,vm10 in quorum (ranks 0,1) 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: monmap e2: 2 mons at {vm07=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],vm10=[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0]} removed_ranks: {} disallowed_leaders: {} 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 
ceph-mon[57011]: fsmap 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: osdmap e5: 0 total, 0 up, 0 in 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: mgrmap e17: vm07.xjrvch(active, since 15s) 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: overall HEALTH_OK 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: Standby manager daemon vm10.byqahe started 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.? 192.168.123.110:0/1995847192' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.? 192.168.123.110:0/1995847192' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/key"}]: dispatch 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:35.382 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:35 vm10 ceph-mon[57011]: from='mgr.? 
192.168.123.110:0/1995847192' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:43:35.448 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:43:35.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.706+0000 7fa017fff640 1 -- 192.168.123.107:0/4222688962 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0180ff390 msgr2=0x7fa0180ff770 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:35.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.706+0000 7fa017fff640 1 --2- 192.168.123.107:0/4222688962 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0180ff390 0x7fa0180ff770 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7fa00c0099b0 tx=0x7fa00c02f2b0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.706+0000 7fa017fff640 1 -- 192.168.123.107:0/4222688962 shutdown_connections 2026-03-09T20:43:35.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.706+0000 7fa017fff640 1 --2- 192.168.123.107:0/4222688962 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0180ff390 0x7fa0180ff770 unknown :-1 s=CLOSED pgs=163 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.706+0000 7fa017fff640 1 -- 192.168.123.107:0/4222688962 >> 192.168.123.107:0/4222688962 conn(0x7fa018076190 msgr2=0x7fa0180765a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:35.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.707+0000 7fa017fff640 1 -- 192.168.123.107:0/4222688962 shutdown_connections 2026-03-09T20:43:35.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.707+0000 7fa017fff640 1 -- 192.168.123.107:0/4222688962 wait complete. 
2026-03-09T20:43:35.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.708+0000 7fa017fff640 1 Processor -- start 2026-03-09T20:43:35.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.708+0000 7fa017fff640 1 -- start start 2026-03-09T20:43:35.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.708+0000 7fa017fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0180ff390 0x7fa0180700a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:35.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.708+0000 7fa017fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0180705e0 0x7fa018070a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:35.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.708+0000 7fa017fff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa0180730a0 con 0x7fa0180ff390 2026-03-09T20:43:35.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.708+0000 7fa017fff640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa018073210 con 0x7fa0180705e0 2026-03-09T20:43:35.709 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa0167fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0180705e0 0x7fa018070a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:35.709 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa0167fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0180705e0 0x7fa018070a40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:56476/0 (socket says 192.168.123.107:56476) 2026-03-09T20:43:35.709 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa0167fc640 1 -- 192.168.123.107:0/178124989 learned_addr learned my addr 192.168.123.107:0/178124989 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:43:35.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa016ffd640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0180ff390 0x7fa0180700a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:35.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa0167fc640 1 -- 192.168.123.107:0/178124989 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0180705e0 msgr2=0x7fa018070a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 12 2026-03-09T20:43:35.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa0167fc640 1 -- 192.168.123.107:0/178124989 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0180705e0 msgr2=0x7fa018070a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-09T20:43:35.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa0167fc640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0180705e0 0x7fa018070a40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T20:43:35.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa0167fc640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0180705e0 0x7fa018070a40 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:43:35.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa016ffd640 1 -- 192.168.123.107:0/178124989 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0180705e0 msgr2=0x7fa018070a40 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:43:35.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa016ffd640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0180705e0 0x7fa018070a40 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa016ffd640 1 -- 192.168.123.107:0/178124989 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa00c009660 con 0x7fa0180ff390 2026-03-09T20:43:35.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.709+0000 7fa016ffd640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0180ff390 0x7fa0180700a0 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7fa00c02f7e0 tx=0x7fa00c004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:35.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.710+0000 7f9ff7fff640 1 -- 192.168.123.107:0/178124989 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa00c03d070 con 0x7fa0180ff390 2026-03-09T20:43:35.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.710+0000 7fa017fff640 1 -- 192.168.123.107:0/178124989 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa0181b98d0 con 0x7fa0180ff390 2026-03-09T20:43:35.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.710+0000 7fa017fff640 1 -- 
192.168.123.107:0/178124989 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa0181b9e20 con 0x7fa0180ff390 2026-03-09T20:43:35.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.712+0000 7fa017fff640 1 -- 192.168.123.107:0/178124989 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa018100760 con 0x7fa0180ff390 2026-03-09T20:43:35.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.715+0000 7f9ff7fff640 1 -- 192.168.123.107:0/178124989 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fa00c031ec0 con 0x7fa0180ff390 2026-03-09T20:43:35.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.715+0000 7f9ff7fff640 1 -- 192.168.123.107:0/178124989 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa00c031070 con 0x7fa0180ff390 2026-03-09T20:43:35.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.715+0000 7f9ff7fff640 1 -- 192.168.123.107:0/178124989 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 18) v1 ==== 98424+0+0 (secure 0 0 0) 0x7fa00c049050 con 0x7fa0180ff390 2026-03-09T20:43:35.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.717+0000 7f9ff7fff640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9ff00762d0 0x7f9ff0078790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:35.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.717+0000 7fa0167fc640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9ff00762d0 0x7f9ff0078790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:35.717 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.718+0000 7fa0167fc640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9ff00762d0 0x7f9ff0078790 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fa000006fd0 tx=0x7fa000008040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:35.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.718+0000 7f9ff7fff640 1 -- 192.168.123.107:0/178124989 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1370+0+0 (secure 0 0 0) 0x7fa00c0bc230 con 0x7fa0180ff390 2026-03-09T20:43:35.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.718+0000 7f9ff7fff640 1 -- 192.168.123.107:0/178124989 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fa00c0be720 con 0x7fa0180ff390 2026-03-09T20:43:35.829 INFO:teuthology.orchestra.run.vm07.stdout:# minimal ceph.conf for 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:43:35.829 INFO:teuthology.orchestra.run.vm07.stdout:[global] 2026-03-09T20:43:35.829 INFO:teuthology.orchestra.run.vm07.stdout: fsid = 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:43:35.829 INFO:teuthology.orchestra.run.vm07.stdout: mon_host = [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 2026-03-09T20:43:35.829 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.824+0000 7fa017fff640 1 -- 192.168.123.107:0/178124989 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) v1 -- 0x7fa0181083c0 con 0x7fa0180ff390 2026-03-09T20:43:35.829 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.824+0000 7f9ff7fff640 1 -- 192.168.123.107:0/178124989 <== mon.0 v2:192.168.123.107:3300/0 7 ==== 
mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v10) v1 ==== 76+0+235 (secure 0 0 0) 0x7fa00c086910 con 0x7fa0180ff390 2026-03-09T20:43:35.829 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.828+0000 7f9ff5ffb640 1 -- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9ff00762d0 msgr2=0x7f9ff0078790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:35.830 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.828+0000 7f9ff5ffb640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9ff00762d0 0x7f9ff0078790 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fa000006fd0 tx=0x7fa000008040 comp rx=0 tx=0).stop 2026-03-09T20:43:35.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.829+0000 7f9ff5ffb640 1 -- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0180ff390 msgr2=0x7fa0180700a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:35.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.829+0000 7f9ff5ffb640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0180ff390 0x7fa0180700a0 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7fa00c02f7e0 tx=0x7fa00c004290 comp rx=0 tx=0).stop 2026-03-09T20:43:35.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.829+0000 7f9ff5ffb640 1 -- 192.168.123.107:0/178124989 shutdown_connections 2026-03-09T20:43:35.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.829+0000 7f9ff5ffb640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9ff00762d0 0x7f9ff0078790 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.831 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.829+0000 7f9ff5ffb640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0180705e0 0x7fa018070a40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.829+0000 7f9ff5ffb640 1 --2- 192.168.123.107:0/178124989 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0180ff390 0x7fa0180700a0 unknown :-1 s=CLOSED pgs=164 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:35.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.829+0000 7f9ff5ffb640 1 -- 192.168.123.107:0/178124989 >> 192.168.123.107:0/178124989 conn(0x7fa018076190 msgr2=0x7fa0181072c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:35.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.832+0000 7f9ff5ffb640 1 -- 192.168.123.107:0/178124989 shutdown_connections 2026-03-09T20:43:35.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:35.832+0000 7f9ff5ffb640 1 -- 192.168.123.107:0/178124989 wait complete. 2026-03-09T20:43:35.933 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
2026-03-09T20:43:35.933 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:43:35.933 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T20:43:36.005 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:43:36.005 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:43:36.076 DEBUG:teuthology.orchestra.run.vm10:> set -ex 2026-03-09T20:43:36.089 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T20:43:36.120 DEBUG:teuthology.orchestra.run.vm10:> set -ex 2026-03-09T20:43:36.120 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:43:36.184 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T20:43:36.185 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:43:36.185 DEBUG:teuthology.orchestra.run.vm07:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T20:43:36.205 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:43:36.212 DEBUG:teuthology.orchestra.run.vm07:> ls /dev/[sv]d? 
2026-03-09T20:43:36.267 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vda
2026-03-09T20:43:36.267 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdb
2026-03-09T20:43:36.267 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdc
2026-03-09T20:43:36.267 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdd
2026-03-09T20:43:36.267 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vde
2026-03-09T20:43:36.267 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-09T20:43:36.267 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-09T20:43:36.268 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdb
2026-03-09T20:43:36.324 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdb
2026-03-09T20:43:36.324 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T20:43:36.324 INFO:teuthology.orchestra.run.vm07.stdout:Device: 6h/6d Inode: 245 Links: 1 Device type: fc,10
2026-03-09T20:43:36.324 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T20:43:36.324 INFO:teuthology.orchestra.run.vm07.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T20:43:36.324 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 20:42:47.510118072 +0000
2026-03-09T20:43:36.324 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 20:32:28.307000000 +0000
2026-03-09T20:43:36.324 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 20:32:28.307000000 +0000
2026-03-09T20:43:36.324 INFO:teuthology.orchestra.run.vm07.stdout: Birth: 2026-03-09 20:32:26.354000000 +0000
2026-03-09T20:43:36.324 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: mgrmap e18: vm07.xjrvch(active, since 16s), standbys: vm10.byqahe
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm10.byqahe", "id": "vm10.byqahe"}]: dispatch
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/1799913114' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: Updating vm07:/etc/ceph/ceph.conf
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: Updating vm10:/etc/ceph/ceph.conf
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/178124989' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf
2026-03-09T20:43:36.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch
2026-03-09T20:43:36.387 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-09T20:43:36.388 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-09T20:43:36.388 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000117429 s, 4.4 MB/s
2026-03-09T20:43:36.389 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-09T20:43:36.448 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdc
2026-03-09T20:43:36.506 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdc
2026-03-09T20:43:36.521 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T20:43:36.521 INFO:teuthology.orchestra.run.vm07.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-09T20:43:36.521 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T20:43:36.521 INFO:teuthology.orchestra.run.vm07.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T20:43:36.521 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 20:42:47.558118127 +0000
2026-03-09T20:43:36.521 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 20:32:28.315000000 +0000
2026-03-09T20:43:36.521 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 20:32:28.315000000 +0000
2026-03-09T20:43:36.521 INFO:teuthology.orchestra.run.vm07.stdout: Birth: 2026-03-09 20:32:26.359000000 +0000
2026-03-09T20:43:36.521 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-09T20:43:36.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: mgrmap e18: vm07.xjrvch(active, since 16s), standbys: vm10.byqahe
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm10.byqahe", "id": "vm10.byqahe"}]: dispatch
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: from='client.? 192.168.123.110:0/1799913114' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: Updating vm07:/etc/ceph/ceph.conf
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: Updating vm10:/etc/ceph/ceph.conf
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/178124989' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf
2026-03-09T20:43:36.594 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch
2026-03-09T20:43:36.621 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-09T20:43:36.621 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-09T20:43:36.621 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000126266 s, 4.1 MB/s
2026-03-09T20:43:36.623 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-09T20:43:36.687 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdd
2026-03-09T20:43:36.749 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdd
2026-03-09T20:43:36.749 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T20:43:36.749 INFO:teuthology.orchestra.run.vm07.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-09T20:43:36.749 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T20:43:36.749 INFO:teuthology.orchestra.run.vm07.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T20:43:36.749 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 20:42:47.611118187 +0000
2026-03-09T20:43:36.749 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 20:32:28.313000000 +0000
2026-03-09T20:43:36.749 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 20:32:28.313000000 +0000
2026-03-09T20:43:36.749 INFO:teuthology.orchestra.run.vm07.stdout: Birth: 2026-03-09 20:32:26.364000000 +0000
2026-03-09T20:43:36.749 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-09T20:43:36.821 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-09T20:43:36.821 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-09T20:43:36.821 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000124934 s, 4.1 MB/s
2026-03-09T20:43:36.821 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-09T20:43:36.886 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vde
2026-03-09T20:43:36.956 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vde
2026-03-09T20:43:36.956 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T20:43:36.956 INFO:teuthology.orchestra.run.vm07.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-09T20:43:36.956 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T20:43:36.956 INFO:teuthology.orchestra.run.vm07.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T20:43:36.956 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 20:42:47.667118250 +0000
2026-03-09T20:43:36.956 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 20:32:28.327000000 +0000
2026-03-09T20:43:36.956 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 20:32:28.327000000 +0000
2026-03-09T20:43:36.957 INFO:teuthology.orchestra.run.vm07.stdout: Birth: 2026-03-09 20:32:26.400000000 +0000
2026-03-09T20:43:36.957 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-09T20:43:37.069 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-09T20:43:37.069 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-09T20:43:37.069 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000209342 s, 2.4 MB/s
2026-03-09T20:43:37.073 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-09T20:43:37.094 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-09T20:43:37.094 DEBUG:teuthology.orchestra.run.vm10:> dd if=/scratch_devs of=/dev/stdout
2026-03-09T20:43:37.116 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:43:37.116 DEBUG:teuthology.orchestra.run.vm10:> ls /dev/[sv]d?
2026-03-09T20:43:37.176 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vda
2026-03-09T20:43:37.176 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vdb
2026-03-09T20:43:37.176 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vdc
2026-03-09T20:43:37.176 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vdd
2026-03-09T20:43:37.176 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vde
2026-03-09T20:43:37.176 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-09T20:43:37.176 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-09T20:43:37.177 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vdb
2026-03-09T20:43:37.235 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vdb
2026-03-09T20:43:37.235 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T20:43:37.235 INFO:teuthology.orchestra.run.vm10.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10
2026-03-09T20:43:37.235 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T20:43:37.235 INFO:teuthology.orchestra.run.vm10.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T20:43:37.235 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-09 20:43:21.166804371 +0000
2026-03-09T20:43:37.235 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-09 20:32:59.342000000 +0000
2026-03-09T20:43:37.235 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-09 20:32:59.342000000 +0000
2026-03-09T20:43:37.235 INFO:teuthology.orchestra.run.vm10.stdout: Birth: 2026-03-09 20:32:57.336000000 +0000
2026-03-09T20:43:37.235 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-09T20:43:37.301 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in
2026-03-09T20:43:37.301 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out
2026-03-09T20:43:37.301 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.00024583 s, 2.1 MB/s
2026-03-09T20:43:37.302 DEBUG:teuthology.orchestra.run.vm10:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-09T20:43:37.362 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vdc
2026-03-09T20:43:37.421 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vdc
2026-03-09T20:43:37.422 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T20:43:37.422 INFO:teuthology.orchestra.run.vm10.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-09T20:43:37.422 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T20:43:37.422 INFO:teuthology.orchestra.run.vm10.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T20:43:37.422 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-09 20:43:21.219804436 +0000
2026-03-09T20:43:37.422 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-09 20:32:59.301000000 +0000
2026-03-09T20:43:37.422 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-09 20:32:59.301000000 +0000
2026-03-09T20:43:37.422 INFO:teuthology.orchestra.run.vm10.stdout: Birth: 2026-03-09 20:32:57.344000000 +0000
2026-03-09T20:43:37.422 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-09T20:43:37.493 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in
2026-03-09T20:43:37.493 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out
2026-03-09T20:43:37.493 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.000125796 s, 4.1 MB/s
2026-03-09T20:43:37.494 DEBUG:teuthology.orchestra.run.vm10:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-09T20:43:37.559 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vdd
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: Reconfiguring mon.vm07 (unknown last config time)...
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: Reconfiguring daemon mon.vm07 on vm07
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.xjrvch", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-09T20:43:37.617 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:37.621 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vdd
2026-03-09T20:43:37.621 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T20:43:37.621 INFO:teuthology.orchestra.run.vm10.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-09T20:43:37.621 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T20:43:37.621 INFO:teuthology.orchestra.run.vm10.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T20:43:37.621 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-09 20:43:21.283804513 +0000
2026-03-09T20:43:37.621 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-09 20:32:59.314000000 +0000
2026-03-09T20:43:37.621 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-09 20:32:59.314000000 +0000
2026-03-09T20:43:37.621 INFO:teuthology.orchestra.run.vm10.stdout: Birth: 2026-03-09 20:32:57.352000000 +0000
2026-03-09T20:43:37.621 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: Reconfiguring mon.vm07 (unknown last config time)...
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: Reconfiguring daemon mon.vm07 on vm07
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.xjrvch", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-09T20:43:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:37.699 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in
2026-03-09T20:43:37.699 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out
2026-03-09T20:43:37.699 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.00214853 s, 238 kB/s
2026-03-09T20:43:37.700 DEBUG:teuthology.orchestra.run.vm10:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-09T20:43:37.768 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vde
2026-03-09T20:43:37.829 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vde
2026-03-09T20:43:37.829 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-09T20:43:37.830 INFO:teuthology.orchestra.run.vm10.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-09T20:43:37.830 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T20:43:37.830 INFO:teuthology.orchestra.run.vm10.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-09T20:43:37.830 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-09 20:43:21.342804585 +0000
2026-03-09T20:43:37.830 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-09 20:32:59.353000000 +0000
2026-03-09T20:43:37.830 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-09 20:32:59.353000000 +0000
2026-03-09T20:43:37.830 INFO:teuthology.orchestra.run.vm10.stdout: Birth: 2026-03-09 20:32:57.359000000 +0000
2026-03-09T20:43:37.830 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-09T20:43:37.900 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in
2026-03-09T20:43:37.900 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out
2026-03-09T20:43:37.900 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.000124084 s, 4.1 MB/s
2026-03-09T20:43:37.901 DEBUG:teuthology.orchestra.run.vm10:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-09T20:43:37.963 INFO:tasks.cephadm:Deploying osd.0 on vm07 with /dev/vde...
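The scratch-device validation loop teuthology just ran on both hosts (list `/dev/[sv]d?`, drop the root device, then per candidate: `stat` it, read one sector with `dd`, and confirm it is not mounted) can be summarized as a short sketch. This is an illustrative reconstruction of the logic visible in the log, not teuthology's actual implementation; the `/dev/vda` root-device assumption and the device names are taken from this run, and the helper names are invented.

```python
import subprocess

def find_scratch_devs(devs, root_dev="/dev/vda"):
    """Drop the root device from the candidate list, matching the
    'Removing root device: /dev/vda from device list' step above."""
    return [d for d in devs if d != root_dev]

def dev_is_usable(dev):
    """Mirror the three per-device checks in the log: stat, a one-sector
    read (needs root), and a not-mounted test that ignores devtmpfs."""
    for cmd in (["stat", dev],
                ["sudo", "dd", f"if={dev}", "of=/dev/null", "count=1"]):
        if subprocess.run(cmd, capture_output=True).returncode != 0:
            return False
    mounts = subprocess.run(["mount"], capture_output=True, text=True).stdout
    return not any(dev in line for line in mounts.splitlines()
                   if "devtmpfs" not in line)

devs = find_scratch_devs(["/dev/vda", "/dev/vdb", "/dev/vdc",
                          "/dev/vdd", "/dev/vde"])
print(devs)  # → ['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
```

The filtered list matches the `devs=[...]` line teuthology.misc logs for each host before the per-device checks begin.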
2026-03-09T20:43:37.964 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- lvm zap /dev/vde
2026-03-09T20:43:38.171 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:43:39.680 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:43:39.700 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph orch daemon add osd vm07:/dev/vde
2026-03-09T20:43:39.897 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: Reconfiguring mgr.vm07.xjrvch (unknown last config time)...
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: Reconfiguring daemon mgr.vm07.xjrvch on vm07
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: Reconfiguring ceph-exporter.vm07 (monmap changed)...
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: Reconfiguring daemon ceph-exporter.vm07 on vm07
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: Reconfiguring crash.vm07 (monmap changed)...
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: Reconfiguring daemon crash.vm07 on vm07
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:39.908 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:39 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: Reconfiguring mgr.vm07.xjrvch (unknown last config time)...
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: Reconfiguring daemon mgr.vm07.xjrvch on vm07
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: Reconfiguring ceph-exporter.vm07 (monmap changed)...
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: Reconfiguring daemon ceph-exporter.vm07 on vm07
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: Reconfiguring crash.vm07 (monmap changed)...
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: Reconfiguring daemon crash.vm07 on vm07
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:43:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:39 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
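For each validated device, the cephadm task issues the two commands visible earlier in the log: a `ceph-volume ... -- lvm zap` of the device, then `ceph orch daemon add osd host:device` inside a `cephadm shell`. The sketch below shows how those command lines are composed; the cephadm path, conf/keyring paths, fsid, and image are copied from this run, while the helper name is invented for illustration and is not teuthology's actual code.

```python
def osd_deploy_cmds(host, dev, fsid, image,
                    cephadm="/home/ubuntu/cephtest/cephadm",
                    conf="/etc/ceph/ceph.conf",
                    keyring="/etc/ceph/ceph.client.admin.keyring"):
    """Build the zap + add-osd command pair seen in the log for osd.0."""
    base = f"sudo {cephadm} --image {image}"
    creds = f"-c {conf} -k {keyring} --fsid {fsid}"
    zap = f"{base} ceph-volume {creds} -- lvm zap {dev}"
    add = f"{base} shell {creds} -- ceph orch daemon add osd {host}:{dev}"
    return zap, add

zap, add = osd_deploy_cmds(
    "vm07", "/dev/vde",
    "589eab88-1bf8-11f1-9e50-71f3ab1833c4",
    "quay.ceph.io/ceph-ci/ceph:reef")
print(zap)
print(add)
```

Zapping first ensures any stale LVM metadata on the scratch device is destroyed before the orchestrator creates the new OSD on it.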
2026-03-09T20:43:40.425 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.424+0000 7fabfa522640 1 -- 192.168.123.107:0/4209082456 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fabf4073b40 msgr2=0x7fabf4073fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:43:40.425 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.424+0000 7fabfa522640 1 --2- 192.168.123.107:0/4209082456 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fabf4073b40 0x7fabf4073fa0 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fabe40099b0 tx=0x7fabe402f240 comp rx=0 tx=0).stop
2026-03-09T20:43:40.425 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.425+0000 7fabfa522640 1 -- 192.168.123.107:0/4209082456 shutdown_connections
2026-03-09T20:43:40.425 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.425+0000 7fabfa522640 1 --2- 192.168.123.107:0/4209082456 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fabf4073b40 0x7fabf4073fa0 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:43:40.425 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.425+0000 7fabfa522640 1 --2- 192.168.123.107:0/4209082456 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fabf40751a0 0x7fabf4073600 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:43:40.425 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.425+0000 7fabfa522640 1 -- 192.168.123.107:0/4209082456 >> 192.168.123.107:0/4209082456 conn(0x7fabf40fbdb0 msgr2=0x7fabf40fe1f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:43:40.426 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.425+0000 7fabfa522640 1 -- 192.168.123.107:0/4209082456 shutdown_connections
2026-03-09T20:43:40.426 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.425+0000 7fabfa522640 1 -- 192.168.123.107:0/4209082456 wait complete.
2026-03-09T20:43:40.426 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.426+0000 7fabfa522640 1 Processor -- start
2026-03-09T20:43:40.426 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.426+0000 7fabfa522640 1 -- start start
2026-03-09T20:43:40.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.426+0000 7fabfa522640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fabf4073b40 0x7fabf419a2f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:43:40.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.426+0000 7fabfa522640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fabf40751a0 0x7fabf419a830 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:43:40.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.426+0000 7fabfa522640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fabf419ad70 con 0x7fabf4073b40
2026-03-09T20:43:40.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.426+0000 7fabfa522640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fabf419aee0 con 0x7fabf40751a0
2026-03-09T20:43:40.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.426+0000 7fabf8d1f640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fabf40751a0 0x7fabf419a830 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:43:40.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.426+0000 7fabf8d1f640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fabf40751a0 0x7fabf419a830 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I
am v2:192.168.123.107:40634/0 (socket says 192.168.123.107:40634) 2026-03-09T20:43:40.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.426+0000 7fabf8d1f640 1 -- 192.168.123.107:0/2098654032 learned_addr learned my addr 192.168.123.107:0/2098654032 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:43:40.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.427+0000 7fabf8d1f640 1 -- 192.168.123.107:0/2098654032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fabf4073b40 msgr2=0x7fabf419a2f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:40.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.427+0000 7fabf8d1f640 1 --2- 192.168.123.107:0/2098654032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fabf4073b40 0x7fabf419a2f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:40.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.427+0000 7fabf8d1f640 1 -- 192.168.123.107:0/2098654032 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fabe4009660 con 0x7fabf40751a0 2026-03-09T20:43:40.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.427+0000 7fabf8d1f640 1 --2- 192.168.123.107:0/2098654032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fabf40751a0 0x7fabf419a830 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fabe402f750 tx=0x7fabe4031cf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:40.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.427+0000 7fabe27fc640 1 -- 192.168.123.107:0/2098654032 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fabe403d070 con 0x7fabf40751a0 2026-03-09T20:43:40.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.427+0000 7fabfa522640 1 -- 
192.168.123.107:0/2098654032 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fabf419f960 con 0x7fabf40751a0 2026-03-09T20:43:40.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.427+0000 7fabfa522640 1 -- 192.168.123.107:0/2098654032 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fabf4100290 con 0x7fabf40751a0 2026-03-09T20:43:40.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.428+0000 7fabe27fc640 1 -- 192.168.123.107:0/2098654032 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fabe4031070 con 0x7fabf40751a0 2026-03-09T20:43:40.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.428+0000 7fabe27fc640 1 -- 192.168.123.107:0/2098654032 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fabe4038660 con 0x7fabf40751a0 2026-03-09T20:43:40.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.429+0000 7fabe27fc640 1 -- 192.168.123.107:0/2098654032 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 18) v1 ==== 98424+0+0 (secure 0 0 0) 0x7fabe40388d0 con 0x7fabf40751a0 2026-03-09T20:43:40.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.429+0000 7fabe27fc640 1 --2- 192.168.123.107:0/2098654032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fabd4076130 0x7fabd40785f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:40.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.429+0000 7fabe27fc640 1 -- 192.168.123.107:0/2098654032 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1370+0+0 (secure 0 0 0) 0x7fabe40bc6c0 con 0x7fabf40751a0 2026-03-09T20:43:40.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.430+0000 7fabfa522640 1 -- 192.168.123.107:0/2098654032 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fabf410b4b0 con 0x7fabf40751a0 2026-03-09T20:43:40.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.431+0000 7fabf9520640 1 --2- 192.168.123.107:0/2098654032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fabd4076130 0x7fabd40785f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:40.434 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.433+0000 7fabe27fc640 1 -- 192.168.123.107:0/2098654032 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fabe4086da0 con 0x7fabf40751a0 2026-03-09T20:43:40.435 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.435+0000 7fabf9520640 1 --2- 192.168.123.107:0/2098654032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fabd4076130 0x7fabd40785f0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fabf000d0e0 tx=0x7fabf00085a0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:40.542 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:40.541+0000 7fabfa522640 1 -- 192.168.123.107:0/2098654032 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}) v1 -- 0x7fabf4107510 con 0x7fabd4076130 2026-03-09T20:43:40.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:40 vm07 ceph-mon[49120]: Reconfiguring alertmanager.vm07 (dependencies changed)... 
2026-03-09T20:43:40.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:40 vm07 ceph-mon[49120]: Reconfiguring daemon alertmanager.vm07 on vm07 2026-03-09T20:43:40.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:40 vm07 ceph-mon[49120]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:40.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:40 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:43:40.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:40 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:43:40.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:40 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:40.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:40 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:40.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:40 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:40 vm10 ceph-mon[57011]: Reconfiguring alertmanager.vm07 (dependencies changed)... 
2026-03-09T20:43:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:40 vm10 ceph-mon[57011]: Reconfiguring daemon alertmanager.vm07 on vm07 2026-03-09T20:43:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:40 vm10 ceph-mon[57011]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:40 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:43:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:40 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:43:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:40 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:40 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:40 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:41.797 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:41 vm07 ceph-mon[49120]: from='client.24101 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:43:41.797 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:41 vm07 ceph-mon[49120]: Reconfiguring grafana.vm07 (dependencies changed)... 
2026-03-09T20:43:41.797 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:41 vm07 ceph-mon[49120]: Reconfiguring daemon grafana.vm07 on vm07 2026-03-09T20:43:42.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:41 vm10 ceph-mon[57011]: from='client.24101 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:43:42.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:41 vm10 ceph-mon[57011]: Reconfiguring grafana.vm07 (dependencies changed)... 2026-03-09T20:43:42.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:41 vm10 ceph-mon[57011]: Reconfiguring daemon grafana.vm07 on vm07 2026-03-09T20:43:42.843 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:42 vm07 ceph-mon[49120]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:42.843 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:42 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:42.843 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:42 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:42.843 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:42 vm07 ceph-mon[49120]: Reconfiguring prometheus.vm07 (dependencies changed)... 2026-03-09T20:43:42.843 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:42 vm07 ceph-mon[49120]: Reconfiguring daemon prometheus.vm07 on vm07 2026-03-09T20:43:42.843 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:42 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/1343412194' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4ceba074-cc1e-460f-b8f1-b7d80b498d37"}]: dispatch 2026-03-09T20:43:42.843 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:42 vm07 ceph-mon[49120]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4ceba074-cc1e-460f-b8f1-b7d80b498d37"}]: dispatch 2026-03-09T20:43:42.844 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:42 vm07 ceph-mon[49120]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4ceba074-cc1e-460f-b8f1-b7d80b498d37"}]': finished 2026-03-09T20:43:42.844 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:42 vm07 ceph-mon[49120]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:43:42.844 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:42 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:43.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:42 vm10 ceph-mon[57011]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:43.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:42 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:43.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:42 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:43.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:42 vm10 ceph-mon[57011]: Reconfiguring prometheus.vm07 (dependencies changed)... 2026-03-09T20:43:43.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:42 vm10 ceph-mon[57011]: Reconfiguring daemon prometheus.vm07 on vm07 2026-03-09T20:43:43.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:42 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/1343412194' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4ceba074-cc1e-460f-b8f1-b7d80b498d37"}]: dispatch 2026-03-09T20:43:43.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:42 vm10 ceph-mon[57011]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4ceba074-cc1e-460f-b8f1-b7d80b498d37"}]: dispatch 2026-03-09T20:43:43.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:42 vm10 ceph-mon[57011]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4ceba074-cc1e-460f-b8f1-b7d80b498d37"}]': finished 2026-03-09T20:43:43.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:42 vm10 ceph-mon[57011]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:43:43.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:42 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:44.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:43 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/2455189126' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:43:44.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:43 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/2455189126' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:43:45.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:44 vm10 ceph-mon[57011]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:45.056 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:44 vm07 ceph-mon[49120]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:47.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:46 vm07 ceph-mon[49120]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:47.172 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:46 vm10 ceph-mon[57011]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:47.985 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: Reconfiguring ceph-exporter.vm10 (monmap changed)... 
2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: Reconfiguring daemon ceph-exporter.vm10 on vm10 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm10", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm10.byqahe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:43:47.986 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:47 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: Reconfiguring ceph-exporter.vm10 (monmap changed)... 
2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: Reconfiguring daemon ceph-exporter.vm10 on vm10 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm10", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm10.byqahe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:43:48.111 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:47 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: Deploying daemon osd.0 on vm07 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: Reconfiguring crash.vm10 (monmap changed)... 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: Reconfiguring daemon crash.vm10 on vm10 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: Reconfiguring mgr.vm10.byqahe (monmap changed)... 
2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: Reconfiguring daemon mgr.vm10.byqahe on vm10 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: Reconfiguring mon.vm10 (monmap changed)... 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: Reconfiguring daemon mon.vm10 on vm10 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:43:49.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-09T20:43:49.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:43:49.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-09T20:43:49.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.137 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:48 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: Deploying daemon osd.0 on vm07 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: Reconfiguring crash.vm10 (monmap changed)... 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: Reconfiguring daemon crash.vm10 on vm10 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: Reconfiguring mgr.vm10.byqahe (monmap changed)... 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: Reconfiguring daemon mgr.vm10.byqahe on vm10 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: Reconfiguring mon.vm10 (monmap changed)... 
2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: Reconfiguring daemon mon.vm10 on vm10 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 
ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:43:49.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-09T20:43:49.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:43:49.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-09T20:43:49.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:49.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:48 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:49 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: 
from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:43:50.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T20:43:50.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:43:50.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-09T20:43:50.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:43:50.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-09T20:43:50.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.317 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:50.317 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.317 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:49 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.489 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 0 on host 'vm07' 2026-03-09T20:43:50.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.486+0000 7fabe27fc640 1 -- 192.168.123.107:0/2098654032 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7fabf0018030 con 0x7fabd4076130 2026-03-09T20:43:50.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 -- 192.168.123.107:0/2098654032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fabd4076130 msgr2=0x7fabd40785f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:50.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 --2- 192.168.123.107:0/2098654032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fabd4076130 0x7fabd40785f0 
secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fabf000d0e0 tx=0x7fabf00085a0 comp rx=0 tx=0).stop 2026-03-09T20:43:50.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 -- 192.168.123.107:0/2098654032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fabf40751a0 msgr2=0x7fabf419a830 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:50.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 --2- 192.168.123.107:0/2098654032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fabf40751a0 0x7fabf419a830 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fabe402f750 tx=0x7fabe4031cf0 comp rx=0 tx=0).stop 2026-03-09T20:43:50.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 -- 192.168.123.107:0/2098654032 shutdown_connections 2026-03-09T20:43:50.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 --2- 192.168.123.107:0/2098654032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fabd4076130 0x7fabd40785f0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:50.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 --2- 192.168.123.107:0/2098654032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fabf40751a0 0x7fabf419a830 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:50.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 --2- 192.168.123.107:0/2098654032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fabf4073b40 0x7fabf419a2f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:50.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 -- 
192.168.123.107:0/2098654032 >> 192.168.123.107:0/2098654032 conn(0x7fabf40fbdb0 msgr2=0x7fabf40fd980 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:50.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 -- 192.168.123.107:0/2098654032 shutdown_connections 2026-03-09T20:43:50.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:50.489+0000 7fabc3fff640 1 -- 192.168.123.107:0/2098654032 wait complete. 2026-03-09T20:43:50.575 DEBUG:teuthology.orchestra.run.vm07:osd.0> sudo journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.0.service 2026-03-09T20:43:50.577 INFO:tasks.cephadm:Deploying osd.1 on vm07 with /dev/vdd... 2026-03-09T20:43:50.577 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- lvm zap /dev/vdd 2026-03-09T20:43:50.804 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:43:50.980 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:50 vm10 ceph-mon[57011]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:50.980 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:50 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.980 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:50 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.981 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:50 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:50.981 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:50 vm10 ceph-mon[57011]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:43:50.981 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:50 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:50.981 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:50 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:43:51.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:50 vm07 ceph-mon[49120]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:51.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:50 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:51.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:50 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:51.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:50 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:51.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:50 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:43:51.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:50 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:51.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:50 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:43:51.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:43:50 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:51.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:50 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:51.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:50 vm07 ceph-mon[49120]: from='osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:43:51.118 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:43:50 vm07 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[68598]: 2026-03-09T20:43:50.910+0000 7f9f34e27740 -1 osd.0 0 log_to_monitors true 2026-03-09T20:43:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:50 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:50 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:50 vm10 ceph-mon[57011]: from='osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:43:51.401 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:43:51.414 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph orch daemon add osd vm07:/dev/vdd 2026-03-09T20:43:51.594 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:43:51.920 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.918+0000 7f86b6751640 1 -- 192.168.123.107:0/1285247940 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86b0071d40 msgr2=0x7f86b0072140 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:43:51.920 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.918+0000 7f86b6751640 1 --2- 192.168.123.107:0/1285247940 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86b0071d40 0x7f86b0072140 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7f869c009a00 tx=0x7f869c02f290 comp rx=0 tx=0).stop 2026-03-09T20:43:51.921 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.921+0000 7f86b6751640 1 -- 192.168.123.107:0/1285247940 shutdown_connections 2026-03-09T20:43:51.921 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.921+0000 7f86b6751640 1 --2- 192.168.123.107:0/1285247940 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86b0072710 0x7f86b010c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:51.921 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.921+0000 7f86b6751640 1 --2- 192.168.123.107:0/1285247940 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86b0071d40 0x7f86b0072140 unknown :-1 s=CLOSED pgs=168 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:51.921 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.921+0000 7f86b6751640 1 -- 192.168.123.107:0/1285247940 >> 192.168.123.107:0/1285247940 conn(0x7f86b006d660 msgr2=0x7f86b006faa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:43:51.922 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.922+0000 7f86b6751640 1 -- 192.168.123.107:0/1285247940 shutdown_connections 2026-03-09T20:43:51.922 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.922+0000 7f86b6751640 1 -- 192.168.123.107:0/1285247940 wait complete. 
2026-03-09T20:43:51.922 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.922+0000 7f86b6751640 1 Processor -- start 2026-03-09T20:43:51.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.922+0000 7f86b6751640 1 -- start start 2026-03-09T20:43:51.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.922+0000 7f86b6751640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86b0071d40 0x7f86b01b28e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:51.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.922+0000 7f86b6751640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86b0072710 0x7f86b01b4e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:51.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.922+0000 7f86b6751640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f86b01184d0 con 0x7f86b0072710 2026-03-09T20:43:51.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.922+0000 7f86b6751640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f86b0118640 con 0x7f86b0071d40 2026-03-09T20:43:51.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.923+0000 7f86af7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86b0072710 0x7f86b01b4e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:51.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.923+0000 7f86af7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86b0072710 0x7f86b01b4e30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:41902/0 (socket says 192.168.123.107:41902) 2026-03-09T20:43:51.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.923+0000 7f86af7fe640 1 -- 192.168.123.107:0/1404692169 learned_addr learned my addr 192.168.123.107:0/1404692169 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:43:51.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.923+0000 7f86af7fe640 1 -- 192.168.123.107:0/1404692169 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86b0071d40 msgr2=0x7f86b01b28e0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:43:51.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.923+0000 7f86af7fe640 1 --2- 192.168.123.107:0/1404692169 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86b0071d40 0x7f86b01b28e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:43:51.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.923+0000 7f86af7fe640 1 -- 192.168.123.107:0/1404692169 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f869c009660 con 0x7f86b0072710 2026-03-09T20:43:51.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.923+0000 7f86af7fe640 1 --2- 192.168.123.107:0/1404692169 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86b0072710 0x7f86b01b4e30 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7f86a000b6d0 tx=0x7f86a000bba0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:51.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.924+0000 7f86ad7fa640 1 -- 192.168.123.107:0/1404692169 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f86a000be60 con 0x7f86b0072710 2026-03-09T20:43:51.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.924+0000 7f86b6751640 1 -- 
192.168.123.107:0/1404692169 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f86b01b5430 con 0x7f86b0072710 2026-03-09T20:43:51.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.924+0000 7f86b6751640 1 -- 192.168.123.107:0/1404692169 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f86b01b5980 con 0x7f86b0072710 2026-03-09T20:43:51.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.924+0000 7f86ad7fa640 1 -- 192.168.123.107:0/1404692169 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f86a00027a0 con 0x7f86b0072710 2026-03-09T20:43:51.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.924+0000 7f86ad7fa640 1 -- 192.168.123.107:0/1404692169 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f86a000ca30 con 0x7f86b0072710 2026-03-09T20:43:51.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.926+0000 7f86b6751640 1 -- 192.168.123.107:0/1404692169 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f867c005350 con 0x7f86b0072710 2026-03-09T20:43:51.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.926+0000 7f86ad7fa640 1 -- 192.168.123.107:0/1404692169 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 18) v1 ==== 98424+0+0 (secure 0 0 0) 0x7f86a0002910 con 0x7f86b0072710 2026-03-09T20:43:51.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.929+0000 7f86ad7fa640 1 --2- 192.168.123.107:0/1404692169 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8684076130 0x7f86840785f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:43:51.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.930+0000 7f86ad7fa640 1 -- 
192.168.123.107:0/1404692169 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(7..7 src has 1..7) v4 ==== 1576+0+0 (secure 0 0 0) 0x7f86a0095c70 con 0x7f86b0072710 2026-03-09T20:43:51.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.930+0000 7f86affff640 1 --2- 192.168.123.107:0/1404692169 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8684076130 0x7f86840785f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:43:51.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.930+0000 7f86ad7fa640 1 -- 192.168.123.107:0/1404692169 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f86a0060450 con 0x7f86b0072710 2026-03-09T20:43:51.936 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:51.934+0000 7f86affff640 1 --2- 192.168.123.107:0/1404692169 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8684076130 0x7f86840785f0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f869c02f7a0 tx=0x7f869c0023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:43:52.034 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:43:52.034+0000 7f86b6751640 1 -- 192.168.123.107:0/1404692169 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}) v1 -- 0x7f867c002bf0 con 0x7f8684076130 2026-03-09T20:43:52.174 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:51 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:52.174 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:51 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' 
entity='mgr.vm07.xjrvch' 2026-03-09T20:43:52.174 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:51 vm07 ceph-mon[49120]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:52.174 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:51 vm07 ceph-mon[49120]: from='osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:43:52.174 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:51 vm07 ceph-mon[49120]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:43:52.174 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:51 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:52.175 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:51 vm07 ceph-mon[49120]: from='osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T20:43:52.175 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:51 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:52.175 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:51 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:51 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:51 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:51 vm10 ceph-mon[57011]: pgmap v10: 0 pgs: ; 0 B data, 0 
B used, 0 B / 0 B avail 2026-03-09T20:43:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:51 vm10 ceph-mon[57011]: from='osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:43:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:51 vm10 ceph-mon[57011]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:43:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:51 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:51 vm10 ceph-mon[57011]: from='osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T20:43:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:51 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:51 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: osdmap e8: 1 total, 0 up, 1 in 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: Detected new or changed devices on vm07 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", 
"name": "osd_memory_target"}]: dispatch 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:43:53.085 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:53 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': 
finished 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: osdmap e8: 1 total, 0 up, 1 in 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: Detected new or changed devices on vm07 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:43:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:53 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' 
entity='mgr.vm07.xjrvch' 2026-03-09T20:43:54.224 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: purged_snaps scrub starts 2026-03-09T20:43:54.224 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: purged_snaps scrub ok 2026-03-09T20:43:54.224 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:54.224 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:43:54.224 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:54.224 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3580703114' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7431b664-9dad-4df6-ac1e-d480eeb7d102"}]: dispatch 2026-03-09T20:43:54.224 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7431b664-9dad-4df6-ac1e-d480eeb7d102"}]: dispatch 2026-03-09T20:43:54.224 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7431b664-9dad-4df6-ac1e-d480eeb7d102"}]': finished 2026-03-09T20:43:54.225 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: osdmap e9: 2 total, 0 up, 2 in 2026-03-09T20:43:54.225 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:54.225 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:43:54.225 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:54.225 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:54 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: purged_snaps scrub starts 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: purged_snaps scrub ok 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:54.287 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/3580703114' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7431b664-9dad-4df6-ac1e-d480eeb7d102"}]: dispatch 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7431b664-9dad-4df6-ac1e-d480eeb7d102"}]: dispatch 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7431b664-9dad-4df6-ac1e-d480eeb7d102"}]': finished 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: osdmap e9: 2 total, 0 up, 2 in 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:54 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:54.520 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:43:54 vm07 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[68598]: 2026-03-09T20:43:54.298+0000 7f9f31dbd640 -1 osd.0 0 waiting for initial osdmap 2026-03-09T20:43:54.520 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:43:54 vm07 
ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[68598]: 2026-03-09T20:43:54.312+0000 7f9f2c3be640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:43:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:55 vm10 ceph-mon[57011]: from='osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076]' entity='osd.0' 2026-03-09T20:43:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:55 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/2902555795' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:43:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:55 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:55 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:55 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:55 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:55 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:43:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:55 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:55 vm07 ceph-mon[49120]: from='osd.0 
[v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076]' entity='osd.0' 2026-03-09T20:43:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:55 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2902555795' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:43:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:55 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:55 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:55 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:55 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:55.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:55 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:43:55.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:55 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:43:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:56 vm10 ceph-mon[57011]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:56 vm10 ceph-mon[57011]: osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076] boot 2026-03-09T20:43:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:56 vm10 ceph-mon[57011]: osdmap e10: 2 
total, 1 up, 2 in 2026-03-09T20:43:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:56 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:56 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:43:56.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:56 vm07 ceph-mon[49120]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:43:56.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:56 vm07 ceph-mon[49120]: osd.0 [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076] boot 2026-03-09T20:43:56.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:56 vm07 ceph-mon[49120]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T20:43:56.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:56 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:43:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:56 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:43:57.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:57 vm07 ceph-mon[49120]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:43:57.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:57 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:43:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:57 vm10 ceph-mon[57011]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:43:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:57 vm10 
ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:43:59.033 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:58 vm07 ceph-mon[49120]: pgmap v18: 0 pgs: ; 0 B data, 430 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:43:59.033 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:58 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:43:59.033 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:58 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:59.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:58 vm10 ceph-mon[57011]: pgmap v18: 0 pgs: ; 0 B data, 430 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:43:59.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:58 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:43:59.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:58 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:00.035 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:59 vm07 ceph-mon[49120]: Deploying daemon osd.1 on vm07 2026-03-09T20:44:00.035 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:59 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:00.035 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:59 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:00.035 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:43:59 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:59 vm10 ceph-mon[57011]: Deploying daemon osd.1 on vm07 2026-03-09T20:44:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:59 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:59 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:43:59 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:00.444 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 1 on host 'vm07' 2026-03-09T20:44:00.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.442+0000 7f86ad7fa640 1 -- 192.168.123.107:0/1404692169 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f867c002bf0 con 0x7f8684076130 2026-03-09T20:44:00.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.444+0000 7f8692ffd640 1 -- 192.168.123.107:0/1404692169 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8684076130 msgr2=0x7f86840785f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:00.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.444+0000 7f8692ffd640 1 --2- 192.168.123.107:0/1404692169 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8684076130 0x7f86840785f0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f869c02f7a0 tx=0x7f869c0023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:00.446 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.445+0000 7f8692ffd640 1 -- 192.168.123.107:0/1404692169 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86b0072710 msgr2=0x7f86b01b4e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:00.446 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.445+0000 7f8692ffd640 1 --2- 192.168.123.107:0/1404692169 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86b0072710 0x7f86b01b4e30 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7f86a000b6d0 tx=0x7f86a000bba0 comp rx=0 tx=0).stop 2026-03-09T20:44:00.446 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.446+0000 7f8692ffd640 1 -- 192.168.123.107:0/1404692169 shutdown_connections 2026-03-09T20:44:00.446 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.446+0000 7f8692ffd640 1 --2- 192.168.123.107:0/1404692169 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8684076130 0x7f86840785f0 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:00.446 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.446+0000 7f8692ffd640 1 --2- 192.168.123.107:0/1404692169 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86b0072710 0x7f86b01b4e30 unknown :-1 s=CLOSED pgs=169 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:00.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.446+0000 7f8692ffd640 1 --2- 192.168.123.107:0/1404692169 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86b0071d40 0x7f86b01b28e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:00.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.446+0000 7f8692ffd640 1 -- 192.168.123.107:0/1404692169 >> 192.168.123.107:0/1404692169 conn(0x7f86b006d660 msgr2=0x7f86b0070830 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T20:44:00.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.447+0000 7f8692ffd640 1 -- 192.168.123.107:0/1404692169 shutdown_connections 2026-03-09T20:44:00.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:00.447+0000 7f8692ffd640 1 -- 192.168.123.107:0/1404692169 wait complete. 2026-03-09T20:44:00.518 DEBUG:teuthology.orchestra.run.vm07:osd.1> sudo journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.1.service 2026-03-09T20:44:00.519 INFO:tasks.cephadm:Deploying osd.2 on vm07 with /dev/vdc... 2026-03-09T20:44:00.519 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- lvm zap /dev/vdc 2026-03-09T20:44:00.790 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:00 vm07 ceph-mon[49120]: pgmap v19: 0 pgs: ; 0 B data, 430 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:44:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:00 vm10 ceph-mon[57011]: pgmap v19: 0 pgs: ; 0 B data, 
430 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:44:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:01.515 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:44:01 vm07 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[76030]: 2026-03-09T20:44:01.389+0000 7fbd6c24a740 -1 osd.1 0 log_to_monitors true 2026-03-09T20:44:01.528 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:01.539 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph orch daemon add osd vm07:/dev/vdc 2026-03-09T20:44:01.746 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:02.041 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:01 vm07 ceph-mon[49120]: from='osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:44:02.041 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:01 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:02.041 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:44:01 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:02.114 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.114+0000 7fc75b24e640 1 -- 192.168.123.107:0/3990360825 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc754071a50 msgr2=0x7fc754071e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:02.114 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.114+0000 7fc75b24e640 1 --2- 192.168.123.107:0/3990360825 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc754071a50 0x7fc754071e50 secure :-1 s=READY pgs=176 cs=0 l=1 rev1=1 crypto rx=0x7fc7500099b0 tx=0x7fc75002f240 comp rx=0 tx=0).stop 2026-03-09T20:44:02.114 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.114+0000 7fc75b24e640 1 -- 192.168.123.107:0/3990360825 shutdown_connections 2026-03-09T20:44:02.114 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.114+0000 7fc75b24e640 1 --2- 192.168.123.107:0/3990360825 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc754072420 0x7fc754077190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:02.115 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.114+0000 7fc75b24e640 1 --2- 192.168.123.107:0/3990360825 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc754071a50 0x7fc754071e50 unknown :-1 s=CLOSED pgs=176 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:02.115 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.114+0000 7fc75b24e640 1 -- 192.168.123.107:0/3990360825 >> 192.168.123.107:0/3990360825 conn(0x7fc75406d4f0 msgr2=0x7fc75406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.114+0000 7fc75b24e640 1 -- 192.168.123.107:0/3990360825 shutdown_connections 2026-03-09T20:44:02.116 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.114+0000 7fc75b24e640 1 -- 192.168.123.107:0/3990360825 wait complete. 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc75b24e640 1 Processor -- start 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc75b24e640 1 -- start start 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc75b24e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc754072420 0x7fc754084070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc75b24e640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc7540826c0 0x7fc754082b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc75b24e640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc7540845b0 con 0x7fc754072420 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc75b24e640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc754083080 con 0x7fc7540826c0 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc75a24c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc754072420 0x7fc754084070 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc75a24c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc754072420 0x7fc754084070 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38676/0 (socket says 192.168.123.107:38676) 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc75a24c640 1 -- 192.168.123.107:0/1725977752 learned_addr learned my addr 192.168.123.107:0/1725977752 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc759a4b640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc7540826c0 0x7fc754082b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc759a4b640 1 -- 192.168.123.107:0/1725977752 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc754072420 msgr2=0x7fc754084070 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc759a4b640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc754072420 0x7fc754084070 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc759a4b640 1 -- 192.168.123.107:0/1725977752 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc750009660 con 0x7fc7540826c0 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.115+0000 7fc759a4b640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc7540826c0 0x7fc754082b40 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto 
rx=0x7fc74c00eee0 tx=0x7fc74c00c490 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.116+0000 7fc74b7fe640 1 -- 192.168.123.107:0/1725977752 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc74c009280 con 0x7fc7540826c0 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.116+0000 7fc75b24e640 1 -- 192.168.123.107:0/1725977752 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc754083300 con 0x7fc7540826c0 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.116+0000 7fc75b24e640 1 -- 192.168.123.107:0/1725977752 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc7541b5bc0 con 0x7fc7540826c0 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.116+0000 7fc74b7fe640 1 -- 192.168.123.107:0/1725977752 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc74c00f040 con 0x7fc7540826c0 2026-03-09T20:44:02.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.116+0000 7fc74b7fe640 1 -- 192.168.123.107:0/1725977752 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc74c0040d0 con 0x7fc7540826c0 2026-03-09T20:44:02.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.117+0000 7fc74b7fe640 1 -- 192.168.123.107:0/1725977752 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 18) v1 ==== 98424+0+0 (secure 0 0 0) 0x7fc74c013750 con 0x7fc7540826c0 2026-03-09T20:44:02.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.117+0000 7fc75b24e640 1 -- 192.168.123.107:0/1725977752 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 
0x7fc724005350 con 0x7fc7540826c0 2026-03-09T20:44:02.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.117+0000 7fc74b7fe640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc738076230 0x7fc7380786f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:02.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.118+0000 7fc75a24c640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc738076230 0x7fc7380786f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:02.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.118+0000 7fc74b7fe640 1 -- 192.168.123.107:0/1725977752 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(12..12 src has 1..12) v4 ==== 2108+0+0 (secure 0 0 0) 0x7fc74c09aba0 con 0x7fc7540826c0 2026-03-09T20:44:02.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.119+0000 7fc75a24c640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc738076230 0x7fc7380786f0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fc750002fd0 tx=0x7fc75003a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:02.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.120+0000 7fc74b7fe640 1 -- 192.168.123.107:0/1725977752 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fc74c064f10 con 0x7fc7540826c0 2026-03-09T20:44:02.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:02.235+0000 7fc75b24e640 1 -- 192.168.123.107:0/1725977752 --> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}) v1 -- 0x7fc724002bf0 con 0x7fc738076230 2026-03-09T20:44:02.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:01 vm10 ceph-mon[57011]: from='osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:44:02.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:01 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:02.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:01 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:03.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: pgmap v20: 0 pgs: ; 0 B data, 430 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: osdmap e12: 2 total, 1 up, 2 in 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", 
"id": 1}]: dispatch 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:03.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:02 vm07 ceph-mon[49120]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: pgmap v20: 0 pgs: ; 0 B data, 430 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: osdmap e12: 2 total, 1 up, 2 in 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:03.287 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:03.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: from='client.24129 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: Detected new or changed devices on vm07 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: from='osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T20:44:04.115 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: osdmap e13: 2 total, 1 up, 2 in 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3147211909' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "91efe4fd-879b-433f-ab7e-d98ab2676ea3"}]: dispatch 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "91efe4fd-879b-433f-ab7e-d98ab2676ea3"}]: dispatch 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "91efe4fd-879b-433f-ab7e-d98ab2676ea3"}]': finished 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: osdmap e14: 3 total, 1 up, 3 in 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:04.115 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: from='client.24129 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: Detected new or changed devices on vm07 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: from='osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: osdmap e13: 2 total, 1 up, 2 in 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config 
dump", "format": "json"}]: dispatch 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/3147211909' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "91efe4fd-879b-433f-ab7e-d98ab2676ea3"}]: dispatch 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "91efe4fd-879b-433f-ab7e-d98ab2676ea3"}]: dispatch 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "91efe4fd-879b-433f-ab7e-d98ab2676ea3"}]': finished 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: osdmap e14: 3 total, 1 up, 3 in 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:44:05.133 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:44:04 vm07 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[76030]: 2026-03-09T20:44:04.912+0000 7fbd679a6640 -1 osd.1 0 waiting for initial osdmap 2026-03-09T20:44:05.134 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:44:04 vm07 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[76030]: 2026-03-09T20:44:04.976+0000 7fbd637e1640 -1 osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:44:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:04 vm07 ceph-mon[49120]: purged_snaps 
scrub starts 2026-03-09T20:44:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:04 vm07 ceph-mon[49120]: purged_snaps scrub ok 2026-03-09T20:44:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:04 vm07 ceph-mon[49120]: pgmap v23: 0 pgs: ; 0 B data, 430 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:44:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:04 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/1323171903' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:44:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:04 vm07 ceph-mon[49120]: from='osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749]' entity='osd.1' 2026-03-09T20:44:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:04 vm10 ceph-mon[57011]: purged_snaps scrub starts 2026-03-09T20:44:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:04 vm10 ceph-mon[57011]: purged_snaps scrub ok 2026-03-09T20:44:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:04 vm10 ceph-mon[57011]: pgmap v23: 0 pgs: ; 0 B data, 430 MiB 
used, 20 GiB / 20 GiB avail 2026-03-09T20:44:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:04 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:04 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:04 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/1323171903' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:44:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:04 vm10 ceph-mon[57011]: from='osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749]' entity='osd.1' 2026-03-09T20:44:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:04 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:04 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:06 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:06 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:06 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:06.287 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:06 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:06 vm10 ceph-mon[57011]: pgmap v25: 0 pgs: ; 0 B data, 430 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:44:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:06 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:06 vm10 ceph-mon[57011]: osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749] boot 2026-03-09T20:44:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:06 vm10 ceph-mon[57011]: osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:44:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:06 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:06 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:44:06.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:06 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:06.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:06 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:06.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:06 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:06.383 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:06 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:06.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:06 vm07 ceph-mon[49120]: pgmap v25: 0 pgs: ; 0 B data, 430 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:44:06.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:06 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:06.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:06 vm07 ceph-mon[49120]: osd.1 [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749] boot 2026-03-09T20:44:06.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:06 vm07 ceph-mon[49120]: osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:44:06.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:06 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:44:06.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:06 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:44:08.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:08 vm07 ceph-mon[49120]: osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:44:08.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:08 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:44:08.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:08 vm07 ceph-mon[49120]: pgmap v28: 0 pgs: ; 0 B data, 57 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:44:08.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:08 vm07 ceph-mon[49120]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:44:08.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:08 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:08.137 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:08 vm07 ceph-mon[49120]: Deploying daemon osd.2 on vm07 2026-03-09T20:44:08.377 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:08 vm10 ceph-mon[57011]: osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:44:08.377 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:08 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:44:08.377 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:08 vm10 ceph-mon[57011]: pgmap v28: 0 pgs: ; 0 B data, 57 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:44:08.377 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:08 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:44:08.377 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:08 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:08.377 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:08 vm10 ceph-mon[57011]: Deploying daemon osd.2 on vm07 2026-03-09T20:44:10.351 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:10 vm07 ceph-mon[49120]: pgmap v29: 0 pgs: ; 0 B data, 57 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:44:10.351 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:10 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T20:44:10.351 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:10 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:10.351 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:10 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:10.440 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 2 on host 'vm07' 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.437+0000 7fc74b7fe640 1 -- 192.168.123.107:0/1725977752 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7fc724002bf0 con 0x7fc738076230 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 -- 192.168.123.107:0/1725977752 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc738076230 msgr2=0x7fc7380786f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc738076230 0x7fc7380786f0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fc750002fd0 tx=0x7fc75003a040 comp rx=0 tx=0).stop 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 -- 192.168.123.107:0/1725977752 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc7540826c0 msgr2=0x7fc754082b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc7540826c0 0x7fc754082b40 secure :-1 s=READY 
pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7fc74c00eee0 tx=0x7fc74c00c490 comp rx=0 tx=0).stop 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 -- 192.168.123.107:0/1725977752 shutdown_connections 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc738076230 0x7fc7380786f0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc7540826c0 0x7fc754082b40 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 --2- 192.168.123.107:0/1725977752 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc754072420 0x7fc754084070 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 -- 192.168.123.107:0/1725977752 >> 192.168.123.107:0/1725977752 conn(0x7fc75406d4f0 msgr2=0x7fc7540753c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 -- 192.168.123.107:0/1725977752 shutdown_connections 2026-03-09T20:44:10.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:10.439+0000 7fc75b24e640 1 -- 192.168.123.107:0/1725977752 wait complete. 
2026-03-09T20:44:10.492 DEBUG:teuthology.orchestra.run.vm07:osd.2> sudo journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.2.service 2026-03-09T20:44:10.497 INFO:tasks.cephadm:Deploying osd.3 on vm10 with /dev/vde... 2026-03-09T20:44:10.497 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- lvm zap /dev/vde 2026-03-09T20:44:10.522 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:10 vm10 ceph-mon[57011]: pgmap v29: 0 pgs: ; 0 B data, 57 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:44:10.522 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:10 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:10.522 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:10 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:10.522 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:10 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:10.660 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm10/config 2026-03-09T20:44:11.164 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:44:11.179 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph orch daemon add osd vm10:/dev/vde 2026-03-09T20:44:11.374 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm10/config 2026-03-09T20:44:11.617 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.615+0000 7f8c1aa46640 1 -- 192.168.123.110:0/3460448424 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8c140751a0 msgr2=0x7f8c14073600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:44:11.617 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.615+0000 7f8c12ffd640 1 -- 192.168.123.110:0/3460448424 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8c0002fa80 con 0x7f8c140751a0
2026-03-09T20:44:11.617 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.615+0000 7f8c1aa46640 1 --2- 192.168.123.110:0/3460448424 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8c140751a0 0x7f8c14073600 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f8c000099b0 tx=0x7f8c0002f220 comp rx=0 tx=0).stop
2026-03-09T20:44:11.617 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.616+0000 7f8c1aa46640 1 -- 192.168.123.110:0/3460448424 shutdown_connections
2026-03-09T20:44:11.617 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.616+0000 7f8c1aa46640 1 --2- 192.168.123.110:0/3460448424 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8c14073b40 0x7f8c14073fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:11.617 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.616+0000 7f8c1aa46640 1 --2- 192.168.123.110:0/3460448424 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8c140751a0 0x7f8c14073600 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:11.617 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.616+0000 7f8c1aa46640 1 -- 192.168.123.110:0/3460448424 >> 192.168.123.110:0/3460448424 conn(0x7f8c140fbfb0 msgr2=0x7f8c140fe3d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:44:11.617 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.616+0000 7f8c1aa46640 1 -- 192.168.123.110:0/3460448424 shutdown_connections
2026-03-09T20:44:11.617 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.616+0000 7f8c1aa46640 1 -- 192.168.123.110:0/3460448424 wait complete.
2026-03-09T20:44:11.618 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.616+0000 7f8c1aa46640 1 Processor -- start
2026-03-09T20:44:11.618 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.617+0000 7f8c1aa46640 1 -- start start
2026-03-09T20:44:11.618 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.617+0000 7f8c1aa46640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8c14073b40 0x7f8c1419e820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:11.618 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.617+0000 7f8c1aa46640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8c140751a0 0x7f8c1419ed60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:11.618 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.617+0000 7f8c1aa46640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8c1419f330 con 0x7f8c14073b40
2026-03-09T20:44:11.618 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.617+0000 7f8c1aa46640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8c1419f4a0 con 0x7f8c140751a0
2026-03-09T20:44:11.618 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.617+0000 7f8c13fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8c14073b40 0x7f8c1419e820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:11.618 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.617+0000 7f8c13fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8c14073b40 0x7f8c1419e820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.110:51786/0 (socket says 192.168.123.110:51786)
2026-03-09T20:44:11.618 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.617+0000 7f8c13fff640 1 -- 192.168.123.110:0/94825177 learned_addr learned my addr 192.168.123.110:0/94825177 (peer_addr_for_me v2:192.168.123.110:0/0)
2026-03-09T20:44:11.619 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.617+0000 7f8c137fe640 1 --2- 192.168.123.110:0/94825177 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8c140751a0 0x7f8c1419ed60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:11.619 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.617+0000 7f8c13fff640 1 -- 192.168.123.110:0/94825177 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8c140751a0 msgr2=0x7f8c1419ed60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:44:11.619 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.618+0000 7f8c13fff640 1 --2- 192.168.123.110:0/94825177 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8c140751a0 0x7f8c1419ed60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:11.619 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.618+0000 7f8c13fff640 1 -- 192.168.123.110:0/94825177 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8c00009660 con 0x7f8c14073b40
2026-03-09T20:44:11.620 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.618+0000 7f8c13fff640 1 --2- 192.168.123.110:0/94825177 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8c14073b40 0x7f8c1419e820 secure :-1 s=READY pgs=178 cs=0 l=1 rev1=1 crypto rx=0x7f8c00009ae0 tx=0x7f8c00031cd0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:44:11.620 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.619+0000 7f8c117fa640 1 -- 192.168.123.110:0/94825177 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8c00031ec0 con 0x7f8c14073b40
2026-03-09T20:44:11.620 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.619+0000 7f8c1aa46640 1 -- 192.168.123.110:0/94825177 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8c141a3ee0 con 0x7f8c14073b40
2026-03-09T20:44:11.620 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.619+0000 7f8c117fa640 1 -- 192.168.123.110:0/94825177 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f8c00002c20 con 0x7f8c14073b40
2026-03-09T20:44:11.620 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.619+0000 7f8c1aa46640 1 -- 192.168.123.110:0/94825177 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8c141a4350 con 0x7f8c14073b40
2026-03-09T20:44:11.622 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.621+0000 7f8c117fa640 1 -- 192.168.123.110:0/94825177 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8c00038560 con 0x7f8c14073b40
2026-03-09T20:44:11.622 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.621+0000 7f8c1aa46640 1 -- 192.168.123.110:0/94825177 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8c1410fb90 con 0x7f8c14073b40
2026-03-09T20:44:11.622 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.621+0000 7f8c117fa640 1 -- 192.168.123.110:0/94825177 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 18) v1 ==== 98424+0+0 (secure 0 0 0) 0x7f8c00038700 con 0x7f8c14073b40
2026-03-09T20:44:11.623 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.622+0000 7f8c117fa640 1 --2- 192.168.123.110:0/94825177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8be4075f20 0x7f8be40783e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:11.623 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.622+0000 7f8c137fe640 1 --2- 192.168.123.110:0/94825177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8be4075f20 0x7f8be40783e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:11.623 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.622+0000 7f8c117fa640 1 -- 192.168.123.110:0/94825177 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(16..16 src has 1..16) v4 ==== 2519+0+0 (secure 0 0 0) 0x7f8c000bcae0 con 0x7f8c14073b40
2026-03-09T20:44:11.624 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.622+0000 7f8c137fe640 1 --2- 192.168.123.110:0/94825177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8be4075f20 0x7f8be40783e0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f8c1419fd40 tx=0x7f8c04005f50 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:44:11.626 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.625+0000 7f8c117fa640 1 -- 192.168.123.110:0/94825177 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f8c00086d40 con 0x7f8c14073b40
2026-03-09T20:44:11.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:11 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:11.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:11 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:11.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:11 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:11.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:11 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:11.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:11 vm07 ceph-mon[49120]: from='osd.2 [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-09T20:44:11.635 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:44:11 vm07 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[83454]: 2026-03-09T20:44:11.400+0000 7face6fcd740 -1 osd.2 0 log_to_monitors true
2026-03-09T20:44:11.674 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:11 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:11.674 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:11 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:11.674 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:11 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:11.674 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:11 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:11.674 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:11 vm10 ceph-mon[57011]: from='osd.2 [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-09T20:44:11.728 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:11.726+0000 7f8c1aa46640 1 -- 192.168.123.110:0/94825177 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}) v1 -- 0x7f8c14103310 con 0x7f8be4075f20
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: pgmap v30: 0 pgs: ; 0 B data, 57 MiB used, 40 GiB / 40 GiB avail
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='client.14306 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='osd.2 [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: osdmap e17: 3 total, 2 up, 3 in
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='osd.2 [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:44:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:44:12.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:44:12.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:12 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: pgmap v30: 0 pgs: ; 0 B data, 57 MiB used, 40 GiB / 40 GiB avail
2026-03-09T20:44:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='client.14306 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T20:44:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T20:44:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T20:44:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:44:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='osd.2 [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-09T20:44:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: osdmap e17: 3 total, 2 up, 3 in
2026-03-09T20:44:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='osd.2 [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-09T20:44:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T20:44:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T20:44:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:44:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:44:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:44:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:44:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:44:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:12 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:44:12.885 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:44:12 vm07 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[83454]: 2026-03-09T20:44:12.528+0000 7face2729640 -1 osd.2 0 waiting for initial osdmap
2026-03-09T20:44:12.885 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:44:12 vm07 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[83454]: 2026-03-09T20:44:12.533+0000 7facded65640 -1 osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T20:44:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:13 vm10 ceph-mon[57011]: Detected new or changed devices on vm07
2026-03-09T20:44:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:13 vm10 ceph-mon[57011]: from='client.? 192.168.123.110:0/2298528120' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5baaeff4-3fa0-43d6-81ca-ff28de0673a4"}]: dispatch
2026-03-09T20:44:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:13 vm10 ceph-mon[57011]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5baaeff4-3fa0-43d6-81ca-ff28de0673a4"}]: dispatch
2026-03-09T20:44:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:13 vm10 ceph-mon[57011]: from='osd.2 [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-09T20:44:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:13 vm10 ceph-mon[57011]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5baaeff4-3fa0-43d6-81ca-ff28de0673a4"}]': finished
2026-03-09T20:44:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:13 vm10 ceph-mon[57011]: osdmap e18: 4 total, 2 up, 4 in
2026-03-09T20:44:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:13 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T20:44:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:13 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T20:44:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:13 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T20:44:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:13 vm10 ceph-mon[57011]: from='client.? 192.168.123.110:0/62383273' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T20:44:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:13 vm07 ceph-mon[49120]: Detected new or changed devices on vm07
2026-03-09T20:44:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:13 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/2298528120' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5baaeff4-3fa0-43d6-81ca-ff28de0673a4"}]: dispatch
2026-03-09T20:44:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:13 vm07 ceph-mon[49120]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5baaeff4-3fa0-43d6-81ca-ff28de0673a4"}]: dispatch
2026-03-09T20:44:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:13 vm07 ceph-mon[49120]: from='osd.2 [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-09T20:44:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:13 vm07 ceph-mon[49120]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5baaeff4-3fa0-43d6-81ca-ff28de0673a4"}]': finished
2026-03-09T20:44:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:13 vm07 ceph-mon[49120]: osdmap e18: 4 total, 2 up, 4 in
2026-03-09T20:44:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:13 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T20:44:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:13 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T20:44:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:13 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T20:44:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:13 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/62383273' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T20:44:14.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:14 vm10 ceph-mon[57011]: purged_snaps scrub starts
2026-03-09T20:44:14.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:14 vm10 ceph-mon[57011]: purged_snaps scrub ok
2026-03-09T20:44:14.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:14 vm10 ceph-mon[57011]: pgmap v33: 0 pgs: ; 0 B data, 57 MiB used, 40 GiB / 40 GiB avail
2026-03-09T20:44:14.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:14 vm10 ceph-mon[57011]: osd.2 [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330] boot
2026-03-09T20:44:14.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:14 vm10 ceph-mon[57011]: osdmap e19: 4 total, 3 up, 4 in
2026-03-09T20:44:14.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:14 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T20:44:14.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:14 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T20:44:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:14 vm07 ceph-mon[49120]: purged_snaps scrub starts
2026-03-09T20:44:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:14 vm07 ceph-mon[49120]: purged_snaps scrub ok
2026-03-09T20:44:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:14 vm07 ceph-mon[49120]: pgmap v33: 0 pgs: ; 0 B data, 57 MiB used, 40 GiB / 40 GiB avail
2026-03-09T20:44:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:14 vm07 ceph-mon[49120]: osd.2 [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330] boot
2026-03-09T20:44:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:14 vm07 ceph-mon[49120]: osdmap e19: 4 total, 3 up, 4 in
2026-03-09T20:44:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:14 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T20:44:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:14 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T20:44:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:15 vm10 ceph-mon[57011]: osdmap e20: 4 total, 3 up, 4 in
2026-03-09T20:44:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:15 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T20:44:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:15 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-09T20:44:15.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:15 vm07 ceph-mon[49120]: osdmap e20: 4 total, 3 up, 4 in
2026-03-09T20:44:15.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:15 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T20:44:15.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:15 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-09T20:44:16.746 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:16 vm10 ceph-mon[57011]: pgmap v36: 0 pgs: ; 0 B data, 83 MiB used, 60 GiB / 60 GiB avail
2026-03-09T20:44:16.746 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:16 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-09T20:44:16.746 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:16 vm10 ceph-mon[57011]: osdmap e21: 4 total, 3 up, 4 in
2026-03-09T20:44:16.746 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:16 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T20:44:16.746 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:16 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T20:44:16.746 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:16 vm10 sudo[62402]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
2026-03-09T20:44:16.746 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:16 vm10 sudo[62402]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-09T20:44:16.746 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:16 vm10 sudo[62402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-09T20:44:16.883 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88526]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd
2026-03-09T20:44:16.883 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88526]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-09T20:44:16.883 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88526]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-09T20:44:16.883 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88526]: pam_unix(sudo:session): session closed for user root
2026-03-09T20:44:16.883 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88530]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc
2026-03-09T20:44:16.883 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88530]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-09T20:44:16.884 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-09T20:44:16.884 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88530]: pam_unix(sudo:session): session closed for user root
2026-03-09T20:44:16.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:16 vm07 ceph-mon[49120]: pgmap v36: 0 pgs: ; 0 B data, 83 MiB used, 60 GiB / 60 GiB avail
2026-03-09T20:44:16.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:16 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-09T20:44:16.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:16 vm07 ceph-mon[49120]: osdmap e21: 4 total, 3 up, 4 in
2026-03-09T20:44:16.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:16 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T20:44:16.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:16 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T20:44:16.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88534]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
2026-03-09T20:44:16.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88534]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-09T20:44:16.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-09T20:44:16.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88534]: pam_unix(sudo:session): session closed for user root
2026-03-09T20:44:16.884 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88522]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde
2026-03-09T20:44:16.884 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88522]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
2026-03-09T20:44:16.884 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
2026-03-09T20:44:16.884 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:44:16 vm07 sudo[88522]: pam_unix(sudo:session): session closed for user root
2026-03-09T20:44:17.027 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:16 vm10 sudo[62402]: pam_unix(sudo:session): session closed for user root
2026-03-09T20:44:17.570 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:17.567+0000 7f8c117fa640 1 -- 192.168.123.110:0/94825177 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f8c000832a0 con 0x7f8c14073b40
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: osdmap e22: 4 total, 3 up, 4 in
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:44:17.833 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:17 vm10 ceph-mon[57011]: Deploying daemon osd.3 on vm10
2026-03-09T20:44:17.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-09T20:44:17.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: osdmap e22: 4 total, 3 up, 4 in
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:44:17.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:17 vm07 ceph-mon[49120]: Deploying daemon osd.3 on vm10
2026-03-09T20:44:18.662 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:18 vm10 ceph-mon[57011]: pgmap v39: 1 pgs: 1
active+clean; 449 KiB data, 83 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:44:18.662 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:18 vm10 ceph-mon[57011]: osdmap e23: 4 total, 3 up, 4 in 2026-03-09T20:44:18.662 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:18 vm10 ceph-mon[57011]: mgrmap e19: vm07.xjrvch(active, since 58s), standbys: vm10.byqahe 2026-03-09T20:44:18.662 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:18 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:18.662 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:18 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:18.662 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:18 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:18.662 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:18 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:18.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:18 vm07 ceph-mon[49120]: pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 83 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:44:18.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:18 vm07 ceph-mon[49120]: osdmap e23: 4 total, 3 up, 4 in 2026-03-09T20:44:18.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:18 vm07 ceph-mon[49120]: mgrmap e19: vm07.xjrvch(active, since 58s), standbys: vm10.byqahe 2026-03-09T20:44:18.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:18 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:18.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:18 vm07 
ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:18.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:18 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:18.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:18 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:19.197 INFO:teuthology.orchestra.run.vm10.stdout:Created osd(s) 3 on host 'vm10' 2026-03-09T20:44:19.197 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.191+0000 7f8c117fa640 1 -- 192.168.123.110:0/94825177 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f8c14103310 con 0x7f8be4075f20 2026-03-09T20:44:19.197 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.193+0000 7f8c1aa46640 1 -- 192.168.123.110:0/94825177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8be4075f20 msgr2=0x7f8be40783e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:19.197 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.193+0000 7f8c1aa46640 1 --2- 192.168.123.110:0/94825177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8be4075f20 0x7f8be40783e0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f8c1419fd40 tx=0x7f8c04005f50 comp rx=0 tx=0).stop 2026-03-09T20:44:19.198 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.194+0000 7f8c1aa46640 1 -- 192.168.123.110:0/94825177 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8c14073b40 msgr2=0x7f8c1419e820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:19.198 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.194+0000 7f8c1aa46640 1 --2- 192.168.123.110:0/94825177 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8c14073b40 0x7f8c1419e820 secure :-1 s=READY pgs=178 cs=0 l=1 rev1=1 crypto rx=0x7f8c00009ae0 tx=0x7f8c00031cd0 comp rx=0 tx=0).stop 2026-03-09T20:44:19.198 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.194+0000 7f8c1aa46640 1 -- 192.168.123.110:0/94825177 shutdown_connections 2026-03-09T20:44:19.198 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.194+0000 7f8c1aa46640 1 --2- 192.168.123.110:0/94825177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8be4075f20 0x7f8be40783e0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:19.198 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.194+0000 7f8c1aa46640 1 --2- 192.168.123.110:0/94825177 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8c140751a0 0x7f8c1419ed60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:19.198 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.194+0000 7f8c1aa46640 1 --2- 192.168.123.110:0/94825177 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8c14073b40 0x7f8c1419e820 secure :-1 s=CLOSED pgs=178 cs=0 l=1 rev1=1 crypto rx=0x7f8c00009ae0 tx=0x7f8c00031cd0 comp rx=0 tx=0).stop 2026-03-09T20:44:19.198 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.194+0000 7f8c1aa46640 1 -- 192.168.123.110:0/94825177 >> 192.168.123.110:0/94825177 conn(0x7f8c140fbfb0 msgr2=0x7f8c140fd880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:19.198 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.195+0000 7f8c1aa46640 1 -- 192.168.123.110:0/94825177 shutdown_connections 2026-03-09T20:44:19.198 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:19.195+0000 7f8c1aa46640 1 -- 192.168.123.110:0/94825177 wait complete. 
2026-03-09T20:44:19.287 DEBUG:teuthology.orchestra.run.vm10:osd.3> sudo journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.3.service 2026-03-09T20:44:19.300 INFO:tasks.cephadm:Deploying osd.4 on vm10 with /dev/vdd... 2026-03-09T20:44:19.300 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- lvm zap /dev/vdd 2026-03-09T20:44:19.546 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm10/config 2026-03-09T20:44:19.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:19 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:19.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:19 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:19.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:19 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:19.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:19 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:19.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:19 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:20.014 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:19 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:20.014 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:19 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' 
entity='mgr.vm07.xjrvch' 2026-03-09T20:44:20.014 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:19 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:20.014 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:19 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:20.014 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:19 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:20.098 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:44:20.112 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph orch daemon add osd vm10:/dev/vdd 2026-03-09T20:44:20.282 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm10/config 2026-03-09T20:44:20.287 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:44:20 vm10 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[62912]: 2026-03-09T20:44:20.031+0000 7f124f7ca740 -1 osd.3 0 log_to_monitors true 2026-03-09T20:44:20.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.519+0000 7fa94b4b1640 1 -- 192.168.123.110:0/707476485 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa944101170 msgr2=0x7fa944103560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:20.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.519+0000 7fa94b4b1640 1 --2- 192.168.123.110:0/707476485 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa944101170 0x7fa944103560 secure :-1 s=READY pgs=182 cs=0 l=1 rev1=1 crypto rx=0x7fa92c0099b0 tx=0x7fa92c02f240 comp rx=0 tx=0).stop 2026-03-09T20:44:20.521 
INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.519+0000 7fa94b4b1640 1 -- 192.168.123.110:0/707476485 shutdown_connections 2026-03-09T20:44:20.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.519+0000 7fa94b4b1640 1 --2- 192.168.123.110:0/707476485 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa944103aa0 0x7fa944105e90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:20.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.519+0000 7fa94b4b1640 1 --2- 192.168.123.110:0/707476485 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa944101170 0x7fa944103560 unknown :-1 s=CLOSED pgs=182 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:20.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.519+0000 7fa94b4b1640 1 -- 192.168.123.110:0/707476485 >> 192.168.123.110:0/707476485 conn(0x7fa9440fac90 msgr2=0x7fa9440fd0b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:20.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.519+0000 7fa94b4b1640 1 -- 192.168.123.110:0/707476485 shutdown_connections 2026-03-09T20:44:20.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.519+0000 7fa94b4b1640 1 -- 192.168.123.110:0/707476485 wait complete. 
2026-03-09T20:44:20.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.520+0000 7fa94b4b1640 1 Processor -- start 2026-03-09T20:44:20.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.520+0000 7fa94b4b1640 1 -- start start 2026-03-09T20:44:20.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.520+0000 7fa94b4b1640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa944101170 0x7fa94419a300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.520+0000 7fa94b4b1640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa944103aa0 0x7fa94419a840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.520+0000 7fa94b4b1640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa94419ae10 con 0x7fa944101170 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.520+0000 7fa94b4b1640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa94419af80 con 0x7fa944103aa0 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.520+0000 7fa949cae640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa944103aa0 0x7fa94419a840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.520+0000 7fa949cae640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa944103aa0 0x7fa94419a840 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.110:56546/0 (socket says 192.168.123.110:56546) 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa949cae640 1 -- 192.168.123.110:0/1274288232 learned_addr learned my addr 192.168.123.110:0/1274288232 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa94a4af640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa944101170 0x7fa94419a300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa94a4af640 1 -- 192.168.123.110:0/1274288232 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa944103aa0 msgr2=0x7fa94419a840 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa94a4af640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa944103aa0 0x7fa94419a840 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa94a4af640 1 -- 192.168.123.110:0/1274288232 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa92c009660 con 0x7fa944101170 2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa949cae640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa944103aa0 0x7fa94419a840 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T20:44:20.522 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa94a4af640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa944101170 0x7fa94419a300 secure :-1 s=READY pgs=183 cs=0 l=1 rev1=1 crypto rx=0x7fa92c002410 tx=0x7fa92c0026e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:20.523 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa93b7fe640 1 -- 192.168.123.110:0/1274288232 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa92c03d070 con 0x7fa944101170 2026-03-09T20:44:20.523 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa93b7fe640 1 -- 192.168.123.110:0/1274288232 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fa92c002910 con 0x7fa944101170 2026-03-09T20:44:20.523 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa93b7fe640 1 -- 192.168.123.110:0/1274288232 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa92c0417a0 con 0x7fa944101170 2026-03-09T20:44:20.523 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa94b4b1640 1 -- 192.168.123.110:0/1274288232 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa94419f970 con 0x7fa944101170 2026-03-09T20:44:20.523 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.521+0000 7fa94b4b1640 1 -- 192.168.123.110:0/1274288232 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa94419fe60 con 0x7fa944101170 2026-03-09T20:44:20.523 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.522+0000 7fa94b4b1640 1 -- 192.168.123.110:0/1274288232 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7fa90c005350 con 0x7fa944101170 2026-03-09T20:44:20.525 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.523+0000 7fa93b7fe640 1 -- 192.168.123.110:0/1274288232 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fa92c02fcb0 con 0x7fa944101170 2026-03-09T20:44:20.525 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.523+0000 7fa93b7fe640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa91c076170 0x7fa91c078630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:20.525 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.523+0000 7fa93b7fe640 1 -- 192.168.123.110:0/1274288232 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(23..23 src has 1..23) v4 ==== 3337+0+0 (secure 0 0 0) 0x7fa92c0bbdc0 con 0x7fa944101170 2026-03-09T20:44:20.525 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.524+0000 7fa949cae640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa91c076170 0x7fa91c078630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:20.525 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.524+0000 7fa949cae640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa91c076170 0x7fa91c078630 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7fa94419b7d0 tx=0x7fa9340074e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:20.528 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.527+0000 7fa93b7fe640 1 -- 192.168.123.110:0/1274288232 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fa92c085c10 con 0x7fa944101170 2026-03-09T20:44:20.630 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:20.629+0000 7fa94b4b1640 1 -- 192.168.123.110:0/1274288232 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdd", "target": ["mon-mgr", ""]}) v1 -- 0x7fa90c002bf0 con 0x7fa91c076170 2026-03-09T20:44:20.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:20 vm07 ceph-mon[49120]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 84 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:44:20.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:20 vm07 ceph-mon[49120]: from='osd.3 [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:44:20.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:20 vm07 ceph-mon[49120]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:44:20.897 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:20 vm10 ceph-mon[57011]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 84 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:44:20.897 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:20 vm10 ceph-mon[57011]: from='osd.3 [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:44:20.897 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:20 vm10 ceph-mon[57011]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:44:21.848 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='osd.3 ' 
entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: osdmap e24: 4 total, 3 up, 4 in 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='osd.3 [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='client.14324 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: Detected new or changed devices on vm10 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 
20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='client.? 192.168.123.110:0/2240304227' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ced8315-7f95-41be-88c5-e29628c579a6"}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='client.? 192.168.123.110:0/2240304227' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5ced8315-7f95-41be-88c5-e29628c579a6"}]': finished 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: osdmap e25: 5 total, 3 up, 5 in 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:21.849 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:21 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T20:44:21.884 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: osdmap e24: 4 total, 3 up, 4 in 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='osd.3 [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='client.14324 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 
ceph-mon[49120]: Detected new or changed devices on vm10 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/2240304227' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ced8315-7f95-41be-88c5-e29628c579a6"}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/2240304227' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5ced8315-7f95-41be-88c5-e29628c579a6"}]': finished 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: osdmap e25: 5 total, 3 up, 5 in 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:21.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:21 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:23.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:22 vm10 ceph-mon[57011]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 84 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:44:23.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:22 vm10 ceph-mon[57011]: from='client.? 
192.168.123.110:0/689103724' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:44:23.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:22 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:23.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:22 vm10 ceph-mon[57011]: osdmap e26: 5 total, 3 up, 5 in 2026-03-09T20:44:23.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:22 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:23.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:22 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:23.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:22 vm10 ceph-mon[57011]: from='osd.3 ' entity='osd.3' 2026-03-09T20:44:23.037 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:44:22 vm10 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[62912]: 2026-03-09T20:44:22.759+0000 7f124af26640 -1 osd.3 0 waiting for initial osdmap 2026-03-09T20:44:23.037 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:44:22 vm10 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[62912]: 2026-03-09T20:44:22.768+0000 7f1246d61640 -1 osd.3 26 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:44:23.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:22 vm07 ceph-mon[49120]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 84 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:44:23.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:22 vm07 ceph-mon[49120]: from='client.? 
192.168.123.110:0/689103724' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:44:23.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:23.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:22 vm07 ceph-mon[49120]: osdmap e26: 5 total, 3 up, 5 in 2026-03-09T20:44:23.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:23.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:22 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:23.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:22 vm07 ceph-mon[49120]: from='osd.3 ' entity='osd.3' 2026-03-09T20:44:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:23 vm07 ceph-mon[49120]: purged_snaps scrub starts 2026-03-09T20:44:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:23 vm07 ceph-mon[49120]: purged_snaps scrub ok 2026-03-09T20:44:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:23 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:23 vm07 ceph-mon[49120]: osd.3 [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839] boot 2026-03-09T20:44:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:23 vm07 ceph-mon[49120]: osdmap e27: 5 total, 4 up, 5 in 2026-03-09T20:44:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:23 vm07 ceph-mon[49120]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:23 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:24.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:23 vm10 ceph-mon[57011]: purged_snaps scrub starts 2026-03-09T20:44:24.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:23 vm10 ceph-mon[57011]: purged_snaps scrub ok 2026-03-09T20:44:24.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:23 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:24.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:23 vm10 ceph-mon[57011]: osd.3 [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839] boot 2026-03-09T20:44:24.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:23 vm10 ceph-mon[57011]: osdmap e27: 5 total, 4 up, 5 in 2026-03-09T20:44:24.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:23 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:44:24.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:23 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:25.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:24 vm07 ceph-mon[49120]: pgmap v46: 1 pgs: 1 peering; 449 KiB data, 89 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:44:25.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:24 vm07 ceph-mon[49120]: osdmap e28: 5 total, 4 up, 5 in 2026-03-09T20:44:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:24 vm07 
ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:25.215 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:24 vm10 ceph-mon[57011]: pgmap v46: 1 pgs: 1 peering; 449 KiB data, 89 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:44:25.215 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:24 vm10 ceph-mon[57011]: osdmap e28: 5 total, 4 up, 5 in 2026-03-09T20:44:25.215 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:24 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:26.822 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:26 vm10 ceph-mon[57011]: pgmap v49: 1 pgs: 1 peering; 449 KiB data, 515 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:44:26.822 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:26 vm10 ceph-mon[57011]: osdmap e29: 5 total, 4 up, 5 in 2026-03-09T20:44:26.822 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:26 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:26.822 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:26 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T20:44:26.822 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:26 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:26.822 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:26 vm10 ceph-mon[57011]: Deploying daemon osd.4 on vm10 2026-03-09T20:44:26.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:26 vm07 ceph-mon[49120]: pgmap v49: 1 pgs: 1 peering; 449 KiB data, 515 MiB used, 79 
GiB / 80 GiB avail 2026-03-09T20:44:26.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:26 vm07 ceph-mon[49120]: osdmap e29: 5 total, 4 up, 5 in 2026-03-09T20:44:26.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:26 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:26.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:26 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T20:44:26.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:26 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:26.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:26 vm07 ceph-mon[49120]: Deploying daemon osd.4 on vm10 2026-03-09T20:44:27.808 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.807+0000 7fa93b7fe640 1 -- 192.168.123.110:0/1274288232 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7fa90c002bf0 con 0x7fa91c076170 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stdout:Created osd(s) 4 on host 'vm10' 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.809+0000 7fa94b4b1640 1 -- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa91c076170 msgr2=0x7fa91c078630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.809+0000 7fa94b4b1640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa91c076170 0x7fa91c078630 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto 
rx=0x7fa94419b7d0 tx=0x7fa9340074e0 comp rx=0 tx=0).stop 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.809+0000 7fa94b4b1640 1 -- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa944101170 msgr2=0x7fa94419a300 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.809+0000 7fa94b4b1640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa944101170 0x7fa94419a300 secure :-1 s=READY pgs=183 cs=0 l=1 rev1=1 crypto rx=0x7fa92c002410 tx=0x7fa92c0026e0 comp rx=0 tx=0).stop 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.809+0000 7fa94b4b1640 1 -- 192.168.123.110:0/1274288232 shutdown_connections 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.809+0000 7fa94b4b1640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa91c076170 0x7fa91c078630 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.809+0000 7fa94b4b1640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa944103aa0 0x7fa94419a840 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.809+0000 7fa94b4b1640 1 --2- 192.168.123.110:0/1274288232 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa944101170 0x7fa94419a300 secure :-1 s=CLOSED pgs=183 cs=0 l=1 rev1=1 crypto rx=0x7fa92c002410 tx=0x7fa92c0026e0 comp rx=0 tx=0).stop 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.809+0000 7fa94b4b1640 1 -- 192.168.123.110:0/1274288232 
>> 192.168.123.110:0/1274288232 conn(0x7fa9440fac90 msgr2=0x7fa9441046a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.809+0000 7fa94b4b1640 1 -- 192.168.123.110:0/1274288232 shutdown_connections 2026-03-09T20:44:27.811 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:27.810+0000 7fa94b4b1640 1 -- 192.168.123.110:0/1274288232 wait complete. 2026-03-09T20:44:27.812 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:27 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:27.812 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:27 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:27.812 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:27 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:27.880 DEBUG:teuthology.orchestra.run.vm10:osd.4> sudo journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.4.service 2026-03-09T20:44:27.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:27 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:27.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:27 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:27.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:27 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:27.884 INFO:tasks.cephadm:Deploying osd.5 on vm10 with /dev/vdc... 
2026-03-09T20:44:27.884 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- lvm zap /dev/vdc 2026-03-09T20:44:28.143 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm10/config 2026-03-09T20:44:28.715 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T20:44:28.731 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph orch daemon add osd vm10:/dev/vdc 2026-03-09T20:44:28.891 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:28 vm10 ceph-mon[57011]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 515 MiB used, 79 GiB / 80 GiB avail; 98 KiB/s, 0 objects/s recovering 2026-03-09T20:44:28.891 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:28 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:28.891 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:28 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:28.891 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:28 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:28.891 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:28 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:28.945 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm10/config 2026-03-09T20:44:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:28 vm07 ceph-mon[49120]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB 
data, 515 MiB used, 79 GiB / 80 GiB avail; 98 KiB/s, 0 objects/s recovering 2026-03-09T20:44:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:28 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:29.152 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:44:28 vm10 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[69362]: 2026-03-09T20:44:28.965+0000 7f675a431740 -1 osd.4 0 log_to_monitors true 2026-03-09T20:44:29.225 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.224+0000 7f6f23577640 1 -- 192.168.123.110:0/3755200858 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6f240719a0 msgr2=0x7f6f24071da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:29.225 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.224+0000 7f6f23577640 1 --2- 192.168.123.110:0/3755200858 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6f240719a0 0x7f6f24071da0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f6f180099b0 tx=0x7f6f1802f240 comp rx=0 tx=0).stop 2026-03-09T20:44:29.225 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.224+0000 7f6f23577640 1 -- 192.168.123.110:0/3755200858 shutdown_connections 2026-03-09T20:44:29.225 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.224+0000 7f6f23577640 1 --2- 192.168.123.110:0/3755200858 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7f6f24072370 0x7f6f2410c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:29.225 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.224+0000 7f6f23577640 1 --2- 192.168.123.110:0/3755200858 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6f240719a0 0x7f6f24071da0 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:29.225 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.224+0000 7f6f23577640 1 -- 192.168.123.110:0/3755200858 >> 192.168.123.110:0/3755200858 conn(0x7f6f2406d4f0 msgr2=0x7f6f2406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:29.225 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.224+0000 7f6f23577640 1 -- 192.168.123.110:0/3755200858 shutdown_connections 2026-03-09T20:44:29.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.224+0000 7f6f23577640 1 -- 192.168.123.110:0/3755200858 wait complete. 
2026-03-09T20:44:29.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.225+0000 7f6f23577640 1 Processor -- start 2026-03-09T20:44:29.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.225+0000 7f6f23577640 1 -- start start 2026-03-09T20:44:29.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.225+0000 7f6f23577640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6f240719a0 0x7f6f24115930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:29.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.225+0000 7f6f23577640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f24072370 0x7f6f24115e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:29.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.225+0000 7f6f23577640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6f24117370 con 0x7f6f24072370 2026-03-09T20:44:29.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.225+0000 7f6f23577640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6f241174e0 con 0x7f6f240719a0 2026-03-09T20:44:29.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.225+0000 7f6f21d74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f24072370 0x7f6f24115e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:29.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.225+0000 7f6f21d74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f24072370 0x7f6f24115e70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.110:50160/0 (socket says 192.168.123.110:50160) 2026-03-09T20:44:29.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.225+0000 7f6f21d74640 1 -- 192.168.123.110:0/319741757 learned_addr learned my addr 192.168.123.110:0/319741757 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:44:29.227 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.226+0000 7f6f21d74640 1 -- 192.168.123.110:0/319741757 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6f240719a0 msgr2=0x7f6f24115930 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:29.227 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.226+0000 7f6f21d74640 1 --2- 192.168.123.110:0/319741757 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6f240719a0 0x7f6f24115930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:29.227 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.226+0000 7f6f21d74640 1 -- 192.168.123.110:0/319741757 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6f18009660 con 0x7f6f24072370 2026-03-09T20:44:29.228 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.226+0000 7f6f21d74640 1 --2- 192.168.123.110:0/319741757 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f24072370 0x7f6f24115e70 secure :-1 s=READY pgs=187 cs=0 l=1 rev1=1 crypto rx=0x7f6f0c00b700 tx=0x7f6f0c00bbd0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:29.228 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.227+0000 7f6f137fe640 1 -- 192.168.123.110:0/319741757 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6f0c00be90 con 0x7f6f24072370 2026-03-09T20:44:29.228 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.227+0000 7f6f23577640 1 -- 
192.168.123.110:0/319741757 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6f24116470 con 0x7f6f24072370 2026-03-09T20:44:29.228 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.227+0000 7f6f23577640 1 -- 192.168.123.110:0/319741757 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6f241b5960 con 0x7f6f24072370 2026-03-09T20:44:29.228 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.227+0000 7f6f137fe640 1 -- 192.168.123.110:0/319741757 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f6f0c002ba0 con 0x7f6f24072370 2026-03-09T20:44:29.228 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.227+0000 7f6f137fe640 1 -- 192.168.123.110:0/319741757 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6f0c00ca40 con 0x7f6f24072370 2026-03-09T20:44:29.229 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.228+0000 7f6f23577640 1 -- 192.168.123.110:0/319741757 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6ee8005350 con 0x7f6f24072370 2026-03-09T20:44:29.230 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.229+0000 7f6f137fe640 1 -- 192.168.123.110:0/319741757 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f6f0c004380 con 0x7f6f24072370 2026-03-09T20:44:29.230 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.229+0000 7f6f137fe640 1 --2- 192.168.123.110:0/319741757 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f6efc0761c0 0x7f6efc078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:29.230 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.229+0000 7f6f137fe640 1 -- 192.168.123.110:0/319741757 <== 
mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(29..29 src has 1..29) v4 ==== 3869+0+0 (secure 0 0 0) 0x7f6f0c096ad0 con 0x7f6f24072370 2026-03-09T20:44:29.231 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.229+0000 7f6f22575640 1 --2- 192.168.123.110:0/319741757 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f6efc0761c0 0x7f6efc078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:29.231 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.230+0000 7f6f22575640 1 --2- 192.168.123.110:0/319741757 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f6efc0761c0 0x7f6efc078680 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f6f18002fd0 tx=0x7f6f18004820 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:29.232 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.231+0000 7f6f137fe640 1 -- 192.168.123.110:0/319741757 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f6f0c060820 con 0x7f6f24072370 2026-03-09T20:44:29.341 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:29.339+0000 7f6f23577640 1 -- 192.168.123.110:0/319741757 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}) v1 -- 0x7f6ee8002bf0 con 0x7f6efc0761c0 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='osd.4 [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 
09 20:44:29 vm10 ceph-mon[57011]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:30.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:29 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='osd.4 [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:30.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:29 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:30.359 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:44:30 vm10 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[69362]: 2026-03-09T20:44:30.239+0000 7f67573c7640 -1 osd.4 0 waiting for initial osdmap 2026-03-09T20:44:30.359 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:44:30 vm10 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[69362]: 2026-03-09T20:44:30.268+0000 7f67521c9640 -1 osd.4 31 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 515 MiB used, 79 GiB / 80 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='client.14338 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: Detected new or changed devices on vm10 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: osdmap e30: 5 total, 4 up, 5 in 
2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='osd.4 [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='client.? 192.168.123.110:0/1899731255' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bb0b085e-9ae4-46b4-9158-53e2edb3b952"}]: dispatch 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bb0b085e-9ae4-46b4-9158-53e2edb3b952"}]: dispatch 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bb0b085e-9ae4-46b4-9158-53e2edb3b952"}]': finished 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: osdmap e31: 6 total, 4 up, 6 in 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:31.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:30 vm10 ceph-mon[57011]: from='client.? 
192.168.123.110:0/2329957192' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 515 MiB used, 79 GiB / 80 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='client.14338 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: Detected new or changed devices on vm10 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: osdmap e30: 5 total, 4 up, 5 in 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='osd.4 [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: 
from='client.? 192.168.123.110:0/1899731255' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bb0b085e-9ae4-46b4-9158-53e2edb3b952"}]: dispatch 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bb0b085e-9ae4-46b4-9158-53e2edb3b952"}]: dispatch 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bb0b085e-9ae4-46b4-9158-53e2edb3b952"}]': finished 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: osdmap e31: 6 total, 4 up, 6 in 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:31.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:30 vm07 ceph-mon[49120]: from='client.? 
192.168.123.110:0/2329957192' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:44:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:32 vm10 ceph-mon[57011]: purged_snaps scrub starts 2026-03-09T20:44:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:32 vm10 ceph-mon[57011]: purged_snaps scrub ok 2026-03-09T20:44:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:32 vm10 ceph-mon[57011]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 115 MiB used, 80 GiB / 80 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T20:44:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:32 vm10 ceph-mon[57011]: osd.4 [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058] boot 2026-03-09T20:44:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:32 vm10 ceph-mon[57011]: osdmap e32: 6 total, 5 up, 6 in 2026-03-09T20:44:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:32 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:32 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:32.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:32 vm07 ceph-mon[49120]: purged_snaps scrub starts 2026-03-09T20:44:32.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:32 vm07 ceph-mon[49120]: purged_snaps scrub ok 2026-03-09T20:44:32.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:32 vm07 ceph-mon[49120]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 115 MiB used, 80 GiB / 80 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T20:44:32.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:32 vm07 ceph-mon[49120]: osd.4 
[v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058] boot 2026-03-09T20:44:32.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:32 vm07 ceph-mon[49120]: osdmap e32: 6 total, 5 up, 6 in 2026-03-09T20:44:32.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:32 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:44:32.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:32 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:33.923 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:33 vm10 ceph-mon[57011]: osdmap e33: 6 total, 5 up, 6 in 2026-03-09T20:44:33.923 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:33 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:34.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:33 vm07 ceph-mon[49120]: osdmap e33: 6 total, 5 up, 6 in 2026-03-09T20:44:34.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:33 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:35.099 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:34 vm10 ceph-mon[57011]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 142 MiB used, 100 GiB / 100 GiB avail 2026-03-09T20:44:35.100 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:34 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:35.100 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:34 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 
cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T20:44:35.100 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:34 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:35.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:34 vm07 ceph-mon[49120]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 142 MiB used, 100 GiB / 100 GiB avail 2026-03-09T20:44:35.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:34 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:35.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:34 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T20:44:35.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:34 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:36.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:35 vm10 ceph-mon[57011]: Deploying daemon osd.5 on vm10 2026-03-09T20:44:36.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:35 vm07 ceph-mon[49120]: Deploying daemon osd.5 on vm10 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stdout:Created osd(s) 5 on host 'vm10' 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.766+0000 7f6f137fe640 1 -- 192.168.123.110:0/319741757 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f6ee8002bf0 con 0x7f6efc0761c0 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.768+0000 7f6f23577640 1 -- 192.168.123.110:0/319741757 >> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f6efc0761c0 msgr2=0x7f6efc078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.768+0000 7f6f23577640 1 --2- 192.168.123.110:0/319741757 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f6efc0761c0 0x7f6efc078680 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f6f18002fd0 tx=0x7f6f18004820 comp rx=0 tx=0).stop 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.769+0000 7f6f23577640 1 -- 192.168.123.110:0/319741757 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f24072370 msgr2=0x7f6f24115e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.769+0000 7f6f23577640 1 --2- 192.168.123.110:0/319741757 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f24072370 0x7f6f24115e70 secure :-1 s=READY pgs=187 cs=0 l=1 rev1=1 crypto rx=0x7f6f0c00b700 tx=0x7f6f0c00bbd0 comp rx=0 tx=0).stop 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.769+0000 7f6f23577640 1 -- 192.168.123.110:0/319741757 shutdown_connections 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.769+0000 7f6f23577640 1 --2- 192.168.123.110:0/319741757 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f6efc0761c0 0x7f6efc078680 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.769+0000 7f6f23577640 1 --2- 192.168.123.110:0/319741757 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6f24072370 0x7f6f24115e70 unknown :-1 s=CLOSED pgs=187 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.769+0000 7f6f23577640 1 --2- 192.168.123.110:0/319741757 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6f240719a0 0x7f6f24115930 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.769+0000 7f6f23577640 1 -- 192.168.123.110:0/319741757 >> 192.168.123.110:0/319741757 conn(0x7f6f2406d4f0 msgr2=0x7f6f240706b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:36.770 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.769+0000 7f6f23577640 1 -- 192.168.123.110:0/319741757 shutdown_connections 2026-03-09T20:44:36.771 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:36.769+0000 7f6f23577640 1 -- 192.168.123.110:0/319741757 wait complete. 2026-03-09T20:44:36.823 DEBUG:teuthology.orchestra.run.vm10:osd.5> sudo journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.5.service 2026-03-09T20:44:36.827 INFO:tasks.cephadm:Waiting for 6 OSDs to come up... 
2026-03-09T20:44:36.827 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd stat -f json 2026-03-09T20:44:37.017 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:37.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:36 vm10 ceph-mon[57011]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 142 MiB used, 100 GiB / 100 GiB avail 2026-03-09T20:44:37.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:37.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:37.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:37.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:37.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:36 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:37.106 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:36 vm07 ceph-mon[49120]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 142 MiB used, 100 GiB / 100 GiB avail 2026-03-09T20:44:37.106 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:37.106 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:37.106 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:37.106 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:37.106 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:36 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:37.274 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.273+0000 7f5d5955d640 1 -- 192.168.123.107:0/3884873180 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d54102a60 msgr2=0x7f5d54102e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:37.274 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.273+0000 7f5d5955d640 1 --2- 192.168.123.107:0/3884873180 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d54102a60 0x7f5d54102e60 secure :-1 s=READY pgs=191 cs=0 l=1 rev1=1 crypto rx=0x7f5d3c0099b0 tx=0x7f5d3c02f220 comp rx=0 tx=0).stop 2026-03-09T20:44:37.274 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.274+0000 7f5d5955d640 1 -- 192.168.123.107:0/3884873180 shutdown_connections 2026-03-09T20:44:37.274 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.274+0000 7f5d5955d640 1 --2- 192.168.123.107:0/3884873180 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d54103c60 0x7f5d541040e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:37.274 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.274+0000 7f5d5955d640 1 --2- 192.168.123.107:0/3884873180 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7f5d54102a60 0x7f5d54102e60 unknown :-1 s=CLOSED pgs=191 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:37.274 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.274+0000 7f5d5955d640 1 -- 192.168.123.107:0/3884873180 >> 192.168.123.107:0/3884873180 conn(0x7f5d540fe250 msgr2=0x7f5d54100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:37.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.274+0000 7f5d5955d640 1 -- 192.168.123.107:0/3884873180 shutdown_connections 2026-03-09T20:44:37.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.274+0000 7f5d5955d640 1 -- 192.168.123.107:0/3884873180 wait complete. 2026-03-09T20:44:37.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.275+0000 7f5d5955d640 1 Processor -- start 2026-03-09T20:44:37.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.275+0000 7f5d5955d640 1 -- start start 2026-03-09T20:44:37.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.275+0000 7f5d5955d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d54103c60 0x7f5d5410f560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:37.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.275+0000 7f5d5955d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d54112ab0 0x7f5d5410faa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:37.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.275+0000 7f5d5955d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5d54110070 con 0x7f5d54112ab0 2026-03-09T20:44:37.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.275+0000 7f5d5955d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5d541101e0 con 0x7f5d54103c60 
2026-03-09T20:44:37.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.276+0000 7f5d527fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d54112ab0 0x7f5d5410faa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:37.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.276+0000 7f5d527fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d54112ab0 0x7f5d5410faa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:44432/0 (socket says 192.168.123.107:44432) 2026-03-09T20:44:37.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.276+0000 7f5d527fc640 1 -- 192.168.123.107:0/2753651291 learned_addr learned my addr 192.168.123.107:0/2753651291 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:37.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.276+0000 7f5d527fc640 1 -- 192.168.123.107:0/2753651291 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d54103c60 msgr2=0x7f5d5410f560 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:44:37.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.276+0000 7f5d527fc640 1 --2- 192.168.123.107:0/2753651291 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d54103c60 0x7f5d5410f560 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:37.277 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.276+0000 7f5d527fc640 1 -- 192.168.123.107:0/2753651291 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5d3c009660 con 0x7f5d54112ab0 2026-03-09T20:44:37.277 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.276+0000 7f5d527fc640 1 
--2- 192.168.123.107:0/2753651291 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d54112ab0 0x7f5d5410faa0 secure :-1 s=READY pgs=192 cs=0 l=1 rev1=1 crypto rx=0x7f5d4800e9b0 tx=0x7f5d4800ee80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:37.277 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.276+0000 7f5d33fff640 1 -- 192.168.123.107:0/2753651291 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5d4800cd90 con 0x7f5d54112ab0 2026-03-09T20:44:37.277 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.276+0000 7f5d33fff640 1 -- 192.168.123.107:0/2753651291 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5d48004590 con 0x7f5d54112ab0 2026-03-09T20:44:37.277 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.276+0000 7f5d33fff640 1 -- 192.168.123.107:0/2753651291 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5d48010640 con 0x7f5d54112ab0 2026-03-09T20:44:37.277 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.277+0000 7f5d5955d640 1 -- 192.168.123.107:0/2753651291 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5d541ad0a0 con 0x7f5d54112ab0 2026-03-09T20:44:37.277 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.277+0000 7f5d5955d640 1 -- 192.168.123.107:0/2753651291 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5d541ad5c0 con 0x7f5d54112ab0 2026-03-09T20:44:37.279 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.279+0000 7f5d33fff640 1 -- 192.168.123.107:0/2753651291 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f5d480040d0 con 0x7f5d54112ab0 2026-03-09T20:44:37.280 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.280+0000 
7f5d33fff640 1 --2- 192.168.123.107:0/2753651291 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5d280761c0 0x7f5d28078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:37.280 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.280+0000 7f5d33fff640 1 -- 192.168.123.107:0/2753651291 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4301+0+0 (secure 0 0 0) 0x7f5d48014070 con 0x7f5d54112ab0 2026-03-09T20:44:37.280 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.280+0000 7f5d52ffd640 1 --2- 192.168.123.107:0/2753651291 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5d280761c0 0x7f5d28078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:37.282 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.280+0000 7f5d5955d640 1 -- 192.168.123.107:0/2753651291 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5d18005350 con 0x7f5d54112ab0 2026-03-09T20:44:37.284 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.284+0000 7f5d52ffd640 1 --2- 192.168.123.107:0/2753651291 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5d280761c0 0x7f5d28078680 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f5d3c02f730 tx=0x7f5d3c0023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:37.284 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.284+0000 7f5d33fff640 1 -- 192.168.123.107:0/2753651291 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f5d480616d0 con 0x7f5d54112ab0 2026-03-09T20:44:37.383 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.382+0000 7f5d5955d640 1 -- 192.168.123.107:0/2753651291 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7f5d180051c0 con 0x7f5d54112ab0 2026-03-09T20:44:37.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.383+0000 7f5d33fff640 1 -- 192.168.123.107:0/2753651291 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v33) v1 ==== 74+0+130 (secure 0 0 0) 0x7f5d48061070 con 0x7f5d54112ab0 2026-03-09T20:44:37.384 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:37.386 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.386+0000 7f5d5955d640 1 -- 192.168.123.107:0/2753651291 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5d280761c0 msgr2=0x7f5d28078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:37.386 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.386+0000 7f5d5955d640 1 --2- 192.168.123.107:0/2753651291 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5d280761c0 0x7f5d28078680 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f5d3c02f730 tx=0x7f5d3c0023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:37.386 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.386+0000 7f5d5955d640 1 -- 192.168.123.107:0/2753651291 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d54112ab0 msgr2=0x7f5d5410faa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:37.386 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.387+0000 7f5d5955d640 1 --2- 192.168.123.107:0/2753651291 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d54112ab0 0x7f5d5410faa0 secure :-1 s=READY pgs=192 cs=0 l=1 rev1=1 crypto rx=0x7f5d4800e9b0 tx=0x7f5d4800ee80 comp rx=0 tx=0).stop 2026-03-09T20:44:37.387 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.387+0000 7f5d5955d640 1 -- 192.168.123.107:0/2753651291 shutdown_connections 2026-03-09T20:44:37.387 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.387+0000 7f5d5955d640 1 --2- 192.168.123.107:0/2753651291 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5d280761c0 0x7f5d28078680 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:37.387 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.387+0000 7f5d5955d640 1 --2- 192.168.123.107:0/2753651291 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d54112ab0 0x7f5d5410faa0 unknown :-1 s=CLOSED pgs=192 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:37.387 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.387+0000 7f5d5955d640 1 --2- 192.168.123.107:0/2753651291 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d54103c60 0x7f5d5410f560 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:37.387 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.387+0000 7f5d5955d640 1 -- 192.168.123.107:0/2753651291 >> 192.168.123.107:0/2753651291 conn(0x7f5d540fe250 msgr2=0x7f5d54100640 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:37.387 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.387+0000 7f5d5955d640 1 -- 192.168.123.107:0/2753651291 shutdown_connections 2026-03-09T20:44:37.387 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:37.388+0000 7f5d5955d640 1 -- 192.168.123.107:0/2753651291 wait complete. 
2026-03-09T20:44:37.454 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":33,"num_osds":6,"num_up_osds":5,"osd_up_since":1773089071,"num_in_osds":6,"osd_in_since":1773089070,"num_remapped_pgs":0} 2026-03-09T20:44:37.538 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:44:37 vm10 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[75905]: 2026-03-09T20:44:37.222+0000 7fcebf6f8740 -1 osd.5 0 log_to_monitors true 2026-03-09T20:44:38.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:38.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:37 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:38.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:37 vm10 ceph-mon[57011]: from='osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T20:44:38.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:37 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/2753651291' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:44:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:37 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:37 vm07 ceph-mon[49120]: from='osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T20:44:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:37 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2753651291' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:44:38.455 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd stat -f json 2026-03-09T20:44:38.608 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:38.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.851+0000 7ff77939a640 1 -- 192.168.123.107:0/2460230344 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff774075ba0 msgr2=0x7ff774075fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:38.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.851+0000 7ff77939a640 1 --2- 192.168.123.107:0/2460230344 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff774075ba0 0x7ff774075fa0 secure :-1 s=READY pgs=193 cs=0 l=1 rev1=1 crypto 
rx=0x7ff7680099b0 tx=0x7ff76802f220 comp rx=0 tx=0).stop 2026-03-09T20:44:38.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.851+0000 7ff77939a640 1 -- 192.168.123.107:0/2460230344 shutdown_connections 2026-03-09T20:44:38.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.852+0000 7ff77939a640 1 --2- 192.168.123.107:0/2460230344 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff774076df0 0x7ff774077250 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:38.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.852+0000 7ff77939a640 1 --2- 192.168.123.107:0/2460230344 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff774075ba0 0x7ff774075fa0 unknown :-1 s=CLOSED pgs=193 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:38.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.852+0000 7ff77939a640 1 -- 192.168.123.107:0/2460230344 >> 192.168.123.107:0/2460230344 conn(0x7ff7740fe250 msgr2=0x7ff774100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:38.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.852+0000 7ff77939a640 1 -- 192.168.123.107:0/2460230344 shutdown_connections 2026-03-09T20:44:38.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.852+0000 7ff77939a640 1 -- 192.168.123.107:0/2460230344 wait complete. 
2026-03-09T20:44:38.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.852+0000 7ff77939a640 1 Processor -- start 2026-03-09T20:44:38.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.852+0000 7ff77939a640 1 -- start start 2026-03-09T20:44:38.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.853+0000 7ff77939a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff774075ba0 0x7ff77419e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:38.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.853+0000 7ff77939a640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff774076df0 0x7ff77419eee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:38.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.853+0000 7ff772ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff774075ba0 0x7ff77419e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:38.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.853+0000 7ff772ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff774075ba0 0x7ff77419e9a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:36916/0 (socket says 192.168.123.107:36916) 2026-03-09T20:44:38.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.853+0000 7ff772ffd640 1 -- 192.168.123.107:0/2568528342 learned_addr learned my addr 192.168.123.107:0/2568528342 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:38.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.853+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff77419f4b0 con 0x7ff774075ba0 2026-03-09T20:44:38.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.853+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff77419f620 con 0x7ff774076df0 2026-03-09T20:44:38.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.853+0000 7ff7727fc640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff774076df0 0x7ff77419eee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:38.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.854+0000 7ff772ffd640 1 -- 192.168.123.107:0/2568528342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff774076df0 msgr2=0x7ff77419eee0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:38.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.854+0000 7ff772ffd640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff774076df0 0x7ff77419eee0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:38.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.854+0000 7ff772ffd640 1 -- 192.168.123.107:0/2568528342 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff768009660 con 0x7ff774075ba0 2026-03-09T20:44:38.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.854+0000 7ff772ffd640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff774075ba0 0x7ff77419e9a0 secure :-1 s=READY pgs=194 cs=0 l=1 rev1=1 crypto rx=0x7ff768009ae0 tx=0x7ff768002980 comp rx=0 
tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:38.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.854+0000 7ff74ffff640 1 -- 192.168.123.107:0/2568528342 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff76803d070 con 0x7ff774075ba0 2026-03-09T20:44:38.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.854+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff7741a4060 con 0x7ff774075ba0 2026-03-09T20:44:38.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.854+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff7741a45d0 con 0x7ff774075ba0 2026-03-09T20:44:38.855 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.854+0000 7ff74ffff640 1 -- 192.168.123.107:0/2568528342 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7ff768031df0 con 0x7ff774075ba0 2026-03-09T20:44:38.855 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.854+0000 7ff74ffff640 1 -- 192.168.123.107:0/2568528342 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff768031280 con 0x7ff774075ba0 2026-03-09T20:44:38.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.855+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff77410fc00 con 0x7ff774075ba0 2026-03-09T20:44:38.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.856+0000 7ff74ffff640 1 -- 192.168.123.107:0/2568528342 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7ff7680388c0 con 0x7ff774075ba0 
2026-03-09T20:44:38.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.857+0000 7ff74ffff640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff7380761c0 0x7ff738078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:38.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.857+0000 7ff74ffff640 1 -- 192.168.123.107:0/2568528342 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(34..34 src has 1..34) v4 ==== 4322+0+0 (secure 0 0 0) 0x7ff7680bc780 con 0x7ff774075ba0 2026-03-09T20:44:38.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.859+0000 7ff74ffff640 1 -- 192.168.123.107:0/2568528342 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7ff7680861f0 con 0x7ff774075ba0 2026-03-09T20:44:38.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.859+0000 7ff7727fc640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff7380761c0 0x7ff738078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:38.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.859+0000 7ff7727fc640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff7380761c0 0x7ff738078680 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7ff77419fec0 tx=0x7ff760009210 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:38.947 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.947+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd stat", 
"format": "json"} v 0) v1 -- 0x7ff77410fe40 con 0x7ff774075ba0 2026-03-09T20:44:38.948 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.948+0000 7ff74ffff640 1 -- 192.168.123.107:0/2568528342 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v34) v1 ==== 74+0+130 (secure 0 0 0) 0x7ff76808c090 con 0x7ff774075ba0 2026-03-09T20:44:38.948 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:38.950 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.950+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff7380761c0 msgr2=0x7ff738078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:38.950 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.950+0000 7ff77939a640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff7380761c0 0x7ff738078680 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7ff77419fec0 tx=0x7ff760009210 comp rx=0 tx=0).stop 2026-03-09T20:44:38.950 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.950+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff774075ba0 msgr2=0x7ff77419e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:38.950 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.950+0000 7ff77939a640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff774075ba0 0x7ff77419e9a0 secure :-1 s=READY pgs=194 cs=0 l=1 rev1=1 crypto rx=0x7ff768009ae0 tx=0x7ff768002980 comp rx=0 tx=0).stop 2026-03-09T20:44:38.950 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.950+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 shutdown_connections 2026-03-09T20:44:38.950 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.950+0000 7ff77939a640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff7380761c0 0x7ff738078680 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:38.950 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.950+0000 7ff77939a640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff774076df0 0x7ff77419eee0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:38.950 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.950+0000 7ff77939a640 1 --2- 192.168.123.107:0/2568528342 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff774075ba0 0x7ff77419e9a0 unknown :-1 s=CLOSED pgs=194 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:38.950 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.951+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 >> 192.168.123.107:0/2568528342 conn(0x7ff7740fe250 msgr2=0x7ff7740ffa10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:38.951 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.951+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 shutdown_connections 2026-03-09T20:44:38.951 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:38.951+0000 7ff77939a640 1 -- 192.168.123.107:0/2568528342 wait complete. 
2026-03-09T20:44:39.007 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":34,"num_osds":6,"num_up_osds":5,"osd_up_since":1773089071,"num_in_osds":6,"osd_in_since":1773089070,"num_remapped_pgs":0} 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 142 MiB used, 100 GiB / 100 GiB avail 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: Detected new or changed devices on vm10 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: osdmap e34: 6 total, 5 up, 6 in 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:39.007 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:39.008 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:39.008 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:39.008 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:38 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/2568528342' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:44:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 142 MiB used, 100 GiB / 100 GiB avail 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: Detected new or changed devices on vm10 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: osdmap e34: 6 total, 5 up, 6 in 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": 
"osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:38 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/2568528342' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:44:40.008 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd stat -f json 2026-03-09T20:44:40.201 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:40.226 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:39 vm07 ceph-mon[49120]: from='osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T20:44:40.226 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:39 vm07 ceph-mon[49120]: osdmap e35: 6 total, 5 up, 6 in 2026-03-09T20:44:40.226 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:39 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:40.226 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:39 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:39 vm10 ceph-mon[57011]: from='osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-09T20:44:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:39 vm10 ceph-mon[57011]: osdmap e35: 6 total, 5 up, 6 in 2026-03-09T20:44:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:39 vm10 
ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:39 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:40.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:44:40 vm10 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[75905]: 2026-03-09T20:44:40.138+0000 7fcebae54640 -1 osd.5 0 waiting for initial osdmap 2026-03-09T20:44:40.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:44:40 vm10 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[75905]: 2026-03-09T20:44:40.150+0000 7fceb6c8f640 -1 osd.5 35 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:44:40.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.446+0000 7f9a0a457640 1 -- 192.168.123.107:0/2248450970 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a041008e0 msgr2=0x7f9a04100d60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:40.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.446+0000 7f9a0a457640 1 --2- 192.168.123.107:0/2248450970 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a041008e0 0x7f9a04100d60 secure :-1 s=READY pgs=195 cs=0 l=1 rev1=1 crypto rx=0x7f99f00099b0 tx=0x7f99f002f220 comp rx=0 tx=0).stop 2026-03-09T20:44:40.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.447+0000 7f9a0a457640 1 -- 192.168.123.107:0/2248450970 shutdown_connections 2026-03-09T20:44:40.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.447+0000 7f9a0a457640 1 --2- 192.168.123.107:0/2248450970 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a041008e0 0x7f9a04100d60 unknown :-1 s=CLOSED pgs=195 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T20:44:40.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.447+0000 7f9a0a457640 1 --2- 192.168.123.107:0/2248450970 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a040ff6e0 0x7f9a040ffae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:40.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.447+0000 7f9a0a457640 1 -- 192.168.123.107:0/2248450970 >> 192.168.123.107:0/2248450970 conn(0x7f9a040fae50 msgr2=0x7f9a040fd2b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:40.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.447+0000 7f9a0a457640 1 -- 192.168.123.107:0/2248450970 shutdown_connections 2026-03-09T20:44:40.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.447+0000 7f9a0a457640 1 -- 192.168.123.107:0/2248450970 wait complete. 2026-03-09T20:44:40.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.448+0000 7f9a0a457640 1 Processor -- start 2026-03-09T20:44:40.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.448+0000 7f9a0a457640 1 -- start start 2026-03-09T20:44:40.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.448+0000 7f9a0a457640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a040ff6e0 0x7f9a04198150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:40.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.448+0000 7f9a0a457640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a041008e0 0x7f9a04198690 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:40.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.448+0000 7f9a0a457640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9a04198bd0 con 0x7f9a040ff6e0 2026-03-09T20:44:40.448 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.448+0000 7f9a0a457640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9a04198d40 con 0x7f9a041008e0 2026-03-09T20:44:40.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.448+0000 7f9a03fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a040ff6e0 0x7f9a04198150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:40.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.448+0000 7f9a03fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a040ff6e0 0x7f9a04198150 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:36942/0 (socket says 192.168.123.107:36942) 2026-03-09T20:44:40.449 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.448+0000 7f9a03fff640 1 -- 192.168.123.107:0/1456160784 learned_addr learned my addr 192.168.123.107:0/1456160784 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:40.449 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.449+0000 7f9a037fe640 1 --2- 192.168.123.107:0/1456160784 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a041008e0 0x7f9a04198690 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:40.449 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.449+0000 7f9a03fff640 1 -- 192.168.123.107:0/1456160784 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a041008e0 msgr2=0x7f9a04198690 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:40.449 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.449+0000 7f9a03fff640 1 --2- 
192.168.123.107:0/1456160784 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a041008e0 0x7f9a04198690 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:40.449 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.449+0000 7f9a03fff640 1 -- 192.168.123.107:0/1456160784 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f99f0009660 con 0x7f9a040ff6e0 2026-03-09T20:44:40.449 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.449+0000 7f9a03fff640 1 --2- 192.168.123.107:0/1456160784 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a040ff6e0 0x7f9a04198150 secure :-1 s=READY pgs=196 cs=0 l=1 rev1=1 crypto rx=0x7f99f400cbf0 tx=0x7f99f4007590 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:40.450 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.449+0000 7f9a017fa640 1 -- 192.168.123.107:0/1456160784 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f99f4007cb0 con 0x7f9a040ff6e0 2026-03-09T20:44:40.450 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.449+0000 7f9a017fa640 1 -- 192.168.123.107:0/1456160784 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f99f4007e10 con 0x7f9a040ff6e0 2026-03-09T20:44:40.450 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.449+0000 7f9a017fa640 1 -- 192.168.123.107:0/1456160784 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f99f400f4b0 con 0x7f9a040ff6e0 2026-03-09T20:44:40.450 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.449+0000 7f9a0a457640 1 -- 192.168.123.107:0/1456160784 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9a04105b80 con 0x7f9a040ff6e0 2026-03-09T20:44:40.450 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.449+0000 7f9a0a457640 1 -- 192.168.123.107:0/1456160784 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9a04106000 con 0x7f9a040ff6e0 2026-03-09T20:44:40.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.451+0000 7f9a017fa640 1 -- 192.168.123.107:0/1456160784 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f99f400f690 con 0x7f9a040ff6e0 2026-03-09T20:44:40.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.451+0000 7f9a0a457640 1 -- 192.168.123.107:0/1456160784 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f99c8005350 con 0x7f9a040ff6e0 2026-03-09T20:44:40.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.452+0000 7f9a017fa640 1 --2- 192.168.123.107:0/1456160784 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f99d8076290 0x7f99d8078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:40.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.452+0000 7f9a017fa640 1 -- 192.168.123.107:0/1456160784 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(35..35 src has 1..35) v4 ==== 4338+0+0 (secure 0 0 0) 0x7f99f4098490 con 0x7f9a040ff6e0 2026-03-09T20:44:40.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.454+0000 7f9a017fa640 1 -- 192.168.123.107:0/1456160784 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f99f4061ef0 con 0x7f9a040ff6e0 2026-03-09T20:44:40.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.454+0000 7f9a037fe640 1 --2- 192.168.123.107:0/1456160784 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f99d8076290 0x7f99d8078750 
unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:40.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.454+0000 7f9a037fe640 1 --2- 192.168.123.107:0/1456160784 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f99d8076290 0x7f99d8078750 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f99f0002c20 tx=0x7f99f00023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:40.546 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.544+0000 7f9a0a457640 1 -- 192.168.123.107:0/1456160784 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7f99c80051c0 con 0x7f9a040ff6e0 2026-03-09T20:44:40.546 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.545+0000 7f9a017fa640 1 -- 192.168.123.107:0/1456160784 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v35) v1 ==== 74+0+130 (secure 0 0 0) 0x7f99f4061890 con 0x7f9a040ff6e0 2026-03-09T20:44:40.547 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:40.549 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.549+0000 7f9a0a457640 1 -- 192.168.123.107:0/1456160784 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f99d8076290 msgr2=0x7f99d8078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:40.549 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.550+0000 7f9a0a457640 1 --2- 192.168.123.107:0/1456160784 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f99d8076290 0x7f99d8078750 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f99f0002c20 tx=0x7f99f00023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:40.550 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.550+0000 7f9a0a457640 1 -- 192.168.123.107:0/1456160784 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a040ff6e0 msgr2=0x7f9a04198150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:40.550 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.550+0000 7f9a0a457640 1 --2- 192.168.123.107:0/1456160784 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a040ff6e0 0x7f9a04198150 secure :-1 s=READY pgs=196 cs=0 l=1 rev1=1 crypto rx=0x7f99f400cbf0 tx=0x7f99f4007590 comp rx=0 tx=0).stop 2026-03-09T20:44:40.550 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.550+0000 7f9a0a457640 1 -- 192.168.123.107:0/1456160784 shutdown_connections 2026-03-09T20:44:40.550 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.550+0000 7f9a0a457640 1 --2- 192.168.123.107:0/1456160784 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f99d8076290 0x7f99d8078750 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:40.550 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.551+0000 7f9a0a457640 1 --2- 192.168.123.107:0/1456160784 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a041008e0 0x7f9a04198690 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:40.550 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.551+0000 7f9a0a457640 1 --2- 192.168.123.107:0/1456160784 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a040ff6e0 0x7f9a04198150 unknown :-1 s=CLOSED pgs=196 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:40.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.551+0000 7f9a0a457640 1 -- 192.168.123.107:0/1456160784 >> 192.168.123.107:0/1456160784 conn(0x7f9a040fae50 msgr2=0x7f9a040fc950 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T20:44:40.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.551+0000 7f9a0a457640 1 -- 192.168.123.107:0/1456160784 shutdown_connections 2026-03-09T20:44:40.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:40.551+0000 7f9a0a457640 1 -- 192.168.123.107:0/1456160784 wait complete. 2026-03-09T20:44:40.624 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":35,"num_osds":6,"num_up_osds":5,"osd_up_since":1773089071,"num_in_osds":6,"osd_in_since":1773089070,"num_remapped_pgs":0} 2026-03-09T20:44:41.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:40 vm10 ceph-mon[57011]: purged_snaps scrub starts 2026-03-09T20:44:41.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:40 vm10 ceph-mon[57011]: purged_snaps scrub ok 2026-03-09T20:44:41.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:40 vm10 ceph-mon[57011]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 142 MiB used, 100 GiB / 100 GiB avail 2026-03-09T20:44:41.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:40 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:41.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:40 vm10 ceph-mon[57011]: from='osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954]' entity='osd.5' 2026-03-09T20:44:41.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:40 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/1456160784' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:44:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:40 vm07 ceph-mon[49120]: purged_snaps scrub starts 2026-03-09T20:44:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:40 vm07 ceph-mon[49120]: purged_snaps scrub ok 2026-03-09T20:44:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:40 vm07 ceph-mon[49120]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 142 MiB used, 100 GiB / 100 GiB avail 2026-03-09T20:44:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:40 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:40 vm07 ceph-mon[49120]: from='osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954]' entity='osd.5' 2026-03-09T20:44:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:40 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/1456160784' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:44:41.624 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd stat -f json 2026-03-09T20:44:41.783 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:42.477 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:42 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:42.477 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:42 vm07 ceph-mon[49120]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 142 MiB used, 100 GiB / 100 GiB avail 2026-03-09T20:44:42.477 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:42 vm07 ceph-mon[49120]: osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954] boot 2026-03-09T20:44:42.477 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:42 vm07 ceph-mon[49120]: osdmap e36: 6 total, 6 up, 6 in 2026-03-09T20:44:42.477 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:42 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:42.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:42 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:42.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:42 vm10 ceph-mon[57011]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 142 MiB used, 100 GiB / 100 GiB avail 2026-03-09T20:44:42.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:42 
vm10 ceph-mon[57011]: osd.5 [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954] boot 2026-03-09T20:44:42.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:42 vm10 ceph-mon[57011]: osdmap e36: 6 total, 6 up, 6 in 2026-03-09T20:44:42.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:42 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:44:42.838 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.838+0000 7fccfa252640 1 -- 192.168.123.107:0/746774796 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fccf4103c80 msgr2=0x7fccf4104100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:42.839 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.838+0000 7fccfa252640 1 --2- 192.168.123.107:0/746774796 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fccf4103c80 0x7fccf4104100 secure :-1 s=READY pgs=197 cs=0 l=1 rev1=1 crypto rx=0x7fcce00099b0 tx=0x7fcce002f240 comp rx=0 tx=0).stop 2026-03-09T20:44:42.839 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.838+0000 7fccfa252640 1 -- 192.168.123.107:0/746774796 shutdown_connections 2026-03-09T20:44:42.839 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.838+0000 7fccfa252640 1 --2- 192.168.123.107:0/746774796 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fccf4103c80 0x7fccf4104100 unknown :-1 s=CLOSED pgs=197 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:42.839 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.838+0000 7fccfa252640 1 --2- 192.168.123.107:0/746774796 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fccf4102a80 0x7fccf4102e80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:42.839 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.838+0000 
7fccfa252640 1 -- 192.168.123.107:0/746774796 >> 192.168.123.107:0/746774796 conn(0x7fccf40fe250 msgr2=0x7fccf4100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:42.839 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.839+0000 7fccfa252640 1 -- 192.168.123.107:0/746774796 shutdown_connections 2026-03-09T20:44:42.839 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.839+0000 7fccfa252640 1 -- 192.168.123.107:0/746774796 wait complete. 2026-03-09T20:44:42.839 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.839+0000 7fccfa252640 1 Processor -- start 2026-03-09T20:44:42.839 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.839+0000 7fccfa252640 1 -- start start 2026-03-09T20:44:42.840 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.840+0000 7fccfa252640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fccf4102a80 0x7fccf419a4f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:42.840 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.840+0000 7fccfa252640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fccf4103c80 0x7fccf419aa30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:42.840 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.840+0000 7fccfa252640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fccf419b000 con 0x7fccf4103c80 2026-03-09T20:44:42.840 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.840+0000 7fccf37fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fccf4102a80 0x7fccf419a4f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:42.840 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.840+0000 7fccf37fe640 1 
--2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fccf4102a80 0x7fccf419a4f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:41864/0 (socket says 192.168.123.107:41864) 2026-03-09T20:44:42.840 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.840+0000 7fccf37fe640 1 -- 192.168.123.107:0/3945685132 learned_addr learned my addr 192.168.123.107:0/3945685132 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:42.840 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.840+0000 7fccf2ffd640 1 --2- 192.168.123.107:0/3945685132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fccf4103c80 0x7fccf419aa30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:42.840 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.841+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fccf419b170 con 0x7fccf4102a80 2026-03-09T20:44:42.841 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.841+0000 7fccf2ffd640 1 -- 192.168.123.107:0/3945685132 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fccf4102a80 msgr2=0x7fccf419a4f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:42.841 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.841+0000 7fccf2ffd640 1 --2- 192.168.123.107:0/3945685132 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fccf4102a80 0x7fccf419a4f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:42.841 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.841+0000 7fccf2ffd640 1 -- 192.168.123.107:0/3945685132 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcce0009660 con 0x7fccf4103c80 2026-03-09T20:44:42.841 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.841+0000 7fccf2ffd640 1 --2- 192.168.123.107:0/3945685132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fccf4103c80 0x7fccf419aa30 secure :-1 s=READY pgs=198 cs=0 l=1 rev1=1 crypto rx=0x7fcce0002ba0 tx=0x7fcce0031cf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:42.841 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.841+0000 7fccf0ff9640 1 -- 192.168.123.107:0/3945685132 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcce003d070 con 0x7fccf4103c80 2026-03-09T20:44:42.841 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.841+0000 7fccf0ff9640 1 -- 192.168.123.107:0/3945685132 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fcce0002e00 con 0x7fccf4103c80 2026-03-09T20:44:42.843 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.842+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fccf419fbb0 con 0x7fccf4103c80 2026-03-09T20:44:42.843 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.842+0000 7fccf0ff9640 1 -- 192.168.123.107:0/3945685132 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcce0031070 con 0x7fccf4103c80 2026-03-09T20:44:42.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.842+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fccf41a0120 con 0x7fccf4103c80 2026-03-09T20:44:42.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.843+0000 7fccf0ff9640 1 -- 
192.168.123.107:0/3945685132 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fcce0049050 con 0x7fccf4103c80 2026-03-09T20:44:42.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.843+0000 7fccf0ff9640 1 --2- 192.168.123.107:0/3945685132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fcccc076290 0x7fcccc078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:42.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.844+0000 7fccf37fe640 1 --2- 192.168.123.107:0/3945685132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fcccc076290 0x7fcccc078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:42.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.844+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fccc0005350 con 0x7fccf4103c80 2026-03-09T20:44:42.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.844+0000 7fccf0ff9640 1 -- 192.168.123.107:0/3945685132 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7fcce00bcea0 con 0x7fccf4103c80 2026-03-09T20:44:42.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.844+0000 7fccf37fe640 1 --2- 192.168.123.107:0/3945685132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fcccc076290 0x7fcccc078750 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7fcce4004640 tx=0x7fcce4009290 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:42.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.847+0000 7fccf0ff9640 1 
-- 192.168.123.107:0/3945685132 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fcce00867f0 con 0x7fccf4103c80 2026-03-09T20:44:42.947 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.946+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7fccc00051c0 con 0x7fccf4103c80 2026-03-09T20:44:42.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.947+0000 7fccf0ff9640 1 -- 192.168.123.107:0/3945685132 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v37) v1 ==== 74+0+130 (secure 0 0 0) 0x7fcce0086190 con 0x7fccf4103c80 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fcccc076290 msgr2=0x7fcccc078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 --2- 192.168.123.107:0/3945685132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fcccc076290 0x7fcccc078750 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7fcce4004640 tx=0x7fcce4009290 comp rx=0 tx=0).stop 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fccf4103c80 msgr2=0x7fccf419aa30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 --2- 
192.168.123.107:0/3945685132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fccf4103c80 0x7fccf419aa30 secure :-1 s=READY pgs=198 cs=0 l=1 rev1=1 crypto rx=0x7fcce0002ba0 tx=0x7fcce0031cf0 comp rx=0 tx=0).stop 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 shutdown_connections 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 --2- 192.168.123.107:0/3945685132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fcccc076290 0x7fcccc078750 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 --2- 192.168.123.107:0/3945685132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fccf4103c80 0x7fccf419aa30 unknown :-1 s=CLOSED pgs=198 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 --2- 192.168.123.107:0/3945685132 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fccf4102a80 0x7fccf419a4f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 >> 192.168.123.107:0/3945685132 conn(0x7fccf40fe250 msgr2=0x7fccf40ffd50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 shutdown_connections 2026-03-09T20:44:42.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:42.949+0000 7fccfa252640 1 -- 192.168.123.107:0/3945685132 wait complete. 
2026-03-09T20:44:43.244 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":37,"num_osds":6,"num_up_osds":6,"osd_up_since":1773089081,"num_in_osds":6,"osd_in_since":1773089070,"num_remapped_pgs":0} 2026-03-09T20:44:43.244 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd dump --format=json 2026-03-09T20:44:43.391 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:43.497 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:43 vm07 ceph-mon[49120]: osdmap e37: 6 total, 6 up, 6 in 2026-03-09T20:44:43.497 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:43 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3945685132' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:44:43.696 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.696+0000 7f634f093640 1 -- 192.168.123.107:0/3145266165 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6348105550 msgr2=0x7f6348105930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:43.696 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.696+0000 7f634f093640 1 --2- 192.168.123.107:0/3145266165 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6348105550 0x7f6348105930 secure :-1 s=READY pgs=199 cs=0 l=1 rev1=1 crypto rx=0x7f6338009a10 tx=0x7f6338031510 comp rx=0 tx=0).stop 2026-03-09T20:44:43.697 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.697+0000 7f634f093640 1 -- 192.168.123.107:0/3145266165 shutdown_connections 2026-03-09T20:44:43.697 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.697+0000 7f634f093640 1 --2- 192.168.123.107:0/3145266165 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63480ff4a0 0x7f63480ff900 unknown :-1 s=CLOSED pgs=0 cs=0 
l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:43.697 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.697+0000 7f634f093640 1 --2- 192.168.123.107:0/3145266165 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6348105550 0x7f6348105930 unknown :-1 s=CLOSED pgs=199 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:43.697 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.697+0000 7f634f093640 1 -- 192.168.123.107:0/3145266165 >> 192.168.123.107:0/3145266165 conn(0x7f63480fb180 msgr2=0x7f63480fd5a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:43.697 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.697+0000 7f634f093640 1 -- 192.168.123.107:0/3145266165 shutdown_connections 2026-03-09T20:44:43.697 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.697+0000 7f634f093640 1 -- 192.168.123.107:0/3145266165 wait complete. 2026-03-09T20:44:43.698 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.698+0000 7f634f093640 1 Processor -- start 2026-03-09T20:44:43.698 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.698+0000 7f634f093640 1 -- start start 2026-03-09T20:44:43.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.698+0000 7f634f093640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63480ff4a0 0x7f6348196160 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:43.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.698+0000 7f634f093640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6348105550 0x7f63481966a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:43.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.698+0000 7f634f093640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6348196d30 con 
0x7f6348105550 2026-03-09T20:44:43.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.698+0000 7f634f093640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f634819aaa0 con 0x7f63480ff4a0 2026-03-09T20:44:43.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.699+0000 7f634d890640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6348105550 0x7f63481966a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:43.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.699+0000 7f634d890640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6348105550 0x7f63481966a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:36970/0 (socket says 192.168.123.107:36970) 2026-03-09T20:44:43.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.699+0000 7f634d890640 1 -- 192.168.123.107:0/1561533231 learned_addr learned my addr 192.168.123.107:0/1561533231 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:43.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.699+0000 7f634e091640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63480ff4a0 0x7f6348196160 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:43.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.700+0000 7f634d890640 1 -- 192.168.123.107:0/1561533231 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63480ff4a0 msgr2=0x7f6348196160 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:43.700 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.700+0000 7f634d890640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63480ff4a0 0x7f6348196160 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:43.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.700+0000 7f634d890640 1 -- 192.168.123.107:0/1561533231 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6338009660 con 0x7f6348105550 2026-03-09T20:44:43.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.700+0000 7f634d890640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6348105550 0x7f63481966a0 secure :-1 s=READY pgs=200 cs=0 l=1 rev1=1 crypto rx=0x7f633000d8d0 tx=0x7f633000dda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:43.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.700+0000 7f633f7fe640 1 -- 192.168.123.107:0/1561533231 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6330004490 con 0x7f6348105550 2026-03-09T20:44:43.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.700+0000 7f634f093640 1 -- 192.168.123.107:0/1561533231 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f634819ad80 con 0x7f6348105550 2026-03-09T20:44:43.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.701+0000 7f634f093640 1 -- 192.168.123.107:0/1561533231 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f634819b2d0 con 0x7f6348105550 2026-03-09T20:44:43.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.701+0000 7f633f7fe640 1 -- 192.168.123.107:0/1561533231 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 
==== 1108+0+0 (secure 0 0 0) 0x7f633000bd00 con 0x7f6348105550 2026-03-09T20:44:43.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.701+0000 7f633f7fe640 1 -- 192.168.123.107:0/1561533231 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6330010460 con 0x7f6348105550 2026-03-09T20:44:43.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.702+0000 7f633f7fe640 1 -- 192.168.123.107:0/1561533231 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f63300105c0 con 0x7f6348105550 2026-03-09T20:44:43.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.703+0000 7f633f7fe640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f631c0761c0 0x7f631c078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:43.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.703+0000 7f634e091640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f631c0761c0 0x7f631c078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:43.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.703+0000 7f633f7fe640 1 -- 192.168.123.107:0/1561533231 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f6330098c60 con 0x7f6348105550 2026-03-09T20:44:43.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.703+0000 7f634f093640 1 -- 192.168.123.107:0/1561533231 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6348100bc0 con 0x7f6348105550 2026-03-09T20:44:43.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.706+0000 
7f633f7fe640 1 -- 192.168.123.107:0/1561533231 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f633009c050 con 0x7f6348105550 2026-03-09T20:44:43.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.707+0000 7f634e091640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f631c0761c0 0x7f631c078680 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f63380384f0 tx=0x7f6338005c90 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:43.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:43 vm10 ceph-mon[57011]: osdmap e37: 6 total, 6 up, 6 in 2026-03-09T20:44:43.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:43 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/3945685132' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:44:43.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.795+0000 7f634f093640 1 -- 192.168.123.107:0/1561533231 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) v1 -- 0x7f63480ff900 con 0x7f6348105550 2026-03-09T20:44:43.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.797+0000 7f633f7fe640 1 -- 192.168.123.107:0/1561533231 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v37) v1 ==== 74+0+11585 (secure 0 0 0) 0x7f6330014020 con 0x7f6348105550 2026-03-09T20:44:43.797 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:43.798 
INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":37,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","created":"2026-03-09T20:42:21.622782+0000","modified":"2026-03-09T20:44:42.146215+0000","last_up_change":"2026-03-09T20:44:41.142793+0000","last_in_change":"2026-03-09T20:44:30.230343+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":14,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"reef","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T20:44:15.118166+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"23","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"non
e"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_acting":6,"score_stable":6,"optimal_score":0.5,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"4ceba074-cc1e-460f-b8f1-b7d80b498d37","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":10,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6803","nonce":1974618076}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6805","nonce":1974618076}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6809","nonce":1974618076}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6807","nonce":1974618076}]},"public_addr":"192.168.123.107:6803/1974618076","cluster_addr":"192.168.123.107:6805/1974618076","heartbeat_back_addr":"192.168.123.107:6809/1974618076","heartbeat_front_addr":"192.168.123.107:6807/1974618076","state":["exists","up"]},{"osd":1,"uuid":"7431b664-9dad-4df6-ac1e-d480eeb7d102","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":15,"up_thru":28,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6811","nonce":1875790749}]},"cluster_addrs":{"addrvec":[{"ty
pe":"v2","addr":"192.168.123.107:6812","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6813","nonce":1875790749}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6817","nonce":1875790749}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6815","nonce":1875790749}]},"public_addr":"192.168.123.107:6811/1875790749","cluster_addr":"192.168.123.107:6813/1875790749","heartbeat_back_addr":"192.168.123.107:6817/1875790749","heartbeat_front_addr":"192.168.123.107:6815/1875790749","state":["exists","up"]},{"osd":2,"uuid":"91efe4fd-879b-433f-ab7e-d98ab2676ea3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6819","nonce":436827330}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6821","nonce":436827330}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6825","nonce":436827330}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6823","nonce":436827330}]},"public_addr":"192.168.123.107:6819/436827330","cluster_addr":"192.168.123.107:6821/436827330","heartbeat_back_addr":"192.168.123.107:6825/436827330","heartbeat_front_addr":"192.168.123.107:6823/436827330","state":["exists","up"]},{"osd":3,"uuid":"5baaeff4-3fa0-43d6-81ca-ff28de0673a4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":27,"up_thru":31,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6800","n
once":2906542839},{"type":"v1","addr":"192.168.123.110:6801","nonce":2906542839}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6802","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6803","nonce":2906542839}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6806","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6807","nonce":2906542839}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6804","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6805","nonce":2906542839}]},"public_addr":"192.168.123.110:6801/2906542839","cluster_addr":"192.168.123.110:6803/2906542839","heartbeat_back_addr":"192.168.123.110:6807/2906542839","heartbeat_front_addr":"192.168.123.110:6805/2906542839","state":["exists","up"]},{"osd":4,"uuid":"5ced8315-7f95-41be-88c5-e29628c579a6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6808","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6809","nonce":1620701058}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6810","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6811","nonce":1620701058}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6814","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6815","nonce":1620701058}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6812","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6813","nonce":1620701058}]},"public_addr":"192.168.123.110:6809/1620701058","cluster_addr":"192.168.123.110:6811/1620701058","heartbeat_back_addr":"192.168.123.110:6815/1620701058","heartbeat_front_addr":"192.168.123.110:6813/1620701058","state":["exists","up"]},{"osd":5,"uuid":"bb0b085e-9ae4-46b4-9158-53e2edb3b952","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_en
d":0,"up_from":36,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6816","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6817","nonce":1579901954}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6818","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6819","nonce":1579901954}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6822","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6823","nonce":1579901954}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6820","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6821","nonce":1579901954}]},"public_addr":"192.168.123.110:6817/1579901954","cluster_addr":"192.168.123.110:6819/1579901954","heartbeat_back_addr":"192.168.123.110:6823/1579901954","heartbeat_front_addr":"192.168.123.110:6821/1579901954","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:43:51.872016+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:02.436528+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:12.399428+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:21.060426+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:29.970569+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"feat
ures":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:38.192755+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/3147220177":"2026-03-10T20:43:19.070116+0000","192.168.123.107:0/1632839730":"2026-03-10T20:42:33.181797+0000","192.168.123.107:6800/4166937886":"2026-03-10T20:43:19.070116+0000","192.168.123.107:0/860262504":"2026-03-10T20:42:33.181797+0000","192.168.123.107:6800/1859043218":"2026-03-10T20:42:33.181797+0000","192.168.123.107:0/2967433424":"2026-03-10T20:42:43.298801+0000","192.168.123.107:0/216055426":"2026-03-10T20:42:33.181797+0000","192.168.123.107:6801/1859043218":"2026-03-10T20:42:33.181797+0000","192.168.123.107:0/4075654402":"2026-03-10T20:42:43.298801+0000","192.168.123.107:0/1007394955":"2026-03-10T20:42:43.298801+0000","192.168.123.107:0/1552256432":"2026-03-10T20:43:19.070116+0000","192.168.123.107:6801/810998986":"2026-03-10T20:42:43.298801+0000","192.168.123.107:6800/810998986":"2026-03-10T20:42:43.298801+0000","192.168.123.107:6801/4166937886":"2026-03-10T20:43:19.070116+0000","192.168.123.107:0/1594973564":"2026-03-10T20:43:19.070116+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T20:44:43.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.800+0000 7f634f093640 1 -- 192.168.123.107:0/1561533231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f631c0761c0 msgr2=0x7f631c078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:43.800 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.800+0000 7f634f093640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f631c0761c0 0x7f631c078680 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f63380384f0 tx=0x7f6338005c90 comp rx=0 tx=0).stop 2026-03-09T20:44:43.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.800+0000 7f634f093640 1 -- 192.168.123.107:0/1561533231 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6348105550 msgr2=0x7f63481966a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:43.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.800+0000 7f634f093640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6348105550 0x7f63481966a0 secure :-1 s=READY pgs=200 cs=0 l=1 rev1=1 crypto rx=0x7f633000d8d0 tx=0x7f633000dda0 comp rx=0 tx=0).stop 2026-03-09T20:44:43.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.800+0000 7f634f093640 1 -- 192.168.123.107:0/1561533231 shutdown_connections 2026-03-09T20:44:43.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.801+0000 7f634f093640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f631c0761c0 0x7f631c078680 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:43.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.801+0000 7f634f093640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6348105550 0x7f63481966a0 unknown :-1 s=CLOSED pgs=200 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:43.801 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.801+0000 7f634f093640 1 --2- 192.168.123.107:0/1561533231 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 
conn(0x7f63480ff4a0 0x7f6348196160 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:43.801 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.801+0000 7f634f093640 1 -- 192.168.123.107:0/1561533231 >> 192.168.123.107:0/1561533231 conn(0x7f63480fb180 msgr2=0x7f6348103050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:43.801 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.801+0000 7f634f093640 1 -- 192.168.123.107:0/1561533231 shutdown_connections 2026-03-09T20:44:43.801 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:43.801+0000 7f634f093640 1 -- 192.168.123.107:0/1561533231 wait complete. 2026-03-09T20:44:43.863 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T20:44:15.118166+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '23', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 
'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6, 'score_stable': 6, 'optimal_score': 0.5, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T20:44:43.863 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd pool get .mgr pg_num 2026-03-09T20:44:44.006 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:44.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.244+0000 7f3395961640 1 -- 192.168.123.107:0/2769647134 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3390075720 msgr2=0x7f3390075b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:44.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.244+0000 7f3395961640 1 --2- 192.168.123.107:0/2769647134 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3390075720 0x7f3390075b00 secure :-1 s=READY pgs=201 cs=0 l=1 rev1=1 crypto rx=0x7f3378009a00 tx=0x7f337802f280 comp rx=0 tx=0).stop 2026-03-09T20:44:44.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.245+0000 7f3395961640 1 -- 192.168.123.107:0/2769647134 shutdown_connections 2026-03-09T20:44:44.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.245+0000 7f3395961640 1 --2- 
192.168.123.107:0/2769647134 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3390076040 0x7f3390111330 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.245+0000 7f3395961640 1 --2- 192.168.123.107:0/2769647134 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3390075720 0x7f3390075b00 unknown :-1 s=CLOSED pgs=201 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.245+0000 7f3395961640 1 -- 192.168.123.107:0/2769647134 >> 192.168.123.107:0/2769647134 conn(0x7f33900fe710 msgr2=0x7f3390100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:44.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.245+0000 7f3395961640 1 -- 192.168.123.107:0/2769647134 shutdown_connections 2026-03-09T20:44:44.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.245+0000 7f3395961640 1 -- 192.168.123.107:0/2769647134 wait complete. 
2026-03-09T20:44:44.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.245+0000 7f3395961640 1 Processor -- start 2026-03-09T20:44:44.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.246+0000 7f3395961640 1 -- start start 2026-03-09T20:44:44.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.246+0000 7f3395961640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3390075720 0x7f339019ee70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:44.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.246+0000 7f3395961640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3390076040 0x7f339019f3b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:44.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.246+0000 7f3395961640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f339019fa40 con 0x7f3390076040 2026-03-09T20:44:44.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.246+0000 7f338e7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3390076040 0x7f339019f3b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:44.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.246+0000 7f338e7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3390076040 0x7f339019f3b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:36980/0 (socket says 192.168.123.107:36980) 2026-03-09T20:44:44.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.246+0000 7f338e7fc640 1 -- 192.168.123.107:0/1591203407 learned_addr learned my addr 
192.168.123.107:0/1591203407 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:44.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.246+0000 7f338effd640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3390075720 0x7f339019ee70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:44.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.246+0000 7f3395961640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f33901a37b0 con 0x7f3390075720 2026-03-09T20:44:44.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.247+0000 7f338e7fc640 1 -- 192.168.123.107:0/1591203407 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3390075720 msgr2=0x7f339019ee70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:44.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.247+0000 7f338e7fc640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3390075720 0x7f339019ee70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.247+0000 7f338e7fc640 1 -- 192.168.123.107:0/1591203407 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3378009660 con 0x7f3390076040 2026-03-09T20:44:44.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.247+0000 7f338effd640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3390075720 0x7f339019ee70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T20:44:44.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.247+0000 7f338e7fc640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3390076040 0x7f339019f3b0 secure :-1 s=READY pgs=202 cs=0 l=1 rev1=1 crypto rx=0x7f338400e990 tx=0x7f338400ee60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:44.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.247+0000 7f339495f640 1 -- 192.168.123.107:0/1591203407 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f338400cd30 con 0x7f3390076040 2026-03-09T20:44:44.248 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.247+0000 7f339495f640 1 -- 192.168.123.107:0/1591203407 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f338400ce90 con 0x7f3390076040 2026-03-09T20:44:44.248 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.248+0000 7f339495f640 1 -- 192.168.123.107:0/1591203407 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3384004280 con 0x7f3390076040 2026-03-09T20:44:44.248 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.248+0000 7f3395961640 1 -- 192.168.123.107:0/1591203407 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f33901a3a90 con 0x7f3390076040 2026-03-09T20:44:44.249 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.249+0000 7f3395961640 1 -- 192.168.123.107:0/1591203407 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f33901a3fb0 con 0x7f3390076040 2026-03-09T20:44:44.250 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.250+0000 7f339495f640 1 -- 192.168.123.107:0/1591203407 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f33840043e0 con 
0x7f3390076040 2026-03-09T20:44:44.250 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.250+0000 7f339495f640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3364076170 0x7f3364078630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:44.250 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.250+0000 7f338effd640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3364076170 0x7f3364078630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:44.250 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.251+0000 7f339495f640 1 -- 192.168.123.107:0/1591203407 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f3384014070 con 0x7f3390076040 2026-03-09T20:44:44.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.251+0000 7f338effd640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3364076170 0x7f3364078630 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f3378004870 tx=0x7f33780047c0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:44.253 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.251+0000 7f3395961640 1 -- 192.168.123.107:0/1591203407 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3354005350 con 0x7f3390076040 2026-03-09T20:44:44.254 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.254+0000 7f339495f640 1 -- 192.168.123.107:0/1591203407 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) 
v1 ==== 72+0+186382 (secure 0 0 0) 0x7f338409c050 con 0x7f3390076040 2026-03-09T20:44:44.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.341+0000 7f3395961640 1 -- 192.168.123.107:0/1591203407 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"} v 0) v1 -- 0x7f3354005b80 con 0x7f3390076040 2026-03-09T20:44:44.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.344+0000 7f339495f640 1 -- 192.168.123.107:0/1591203407 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]=0 v37) v1 ==== 93+0+10 (secure 0 0 0) 0x7f3384060350 con 0x7f3390076040 2026-03-09T20:44:44.344 INFO:teuthology.orchestra.run.vm07.stdout:pg_num: 1 2026-03-09T20:44:44.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.346+0000 7f3395961640 1 -- 192.168.123.107:0/1591203407 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3364076170 msgr2=0x7f3364078630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:44.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.346+0000 7f3395961640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3364076170 0x7f3364078630 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f3378004870 tx=0x7f33780047c0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.347+0000 7f3395961640 1 -- 192.168.123.107:0/1591203407 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3390076040 msgr2=0x7f339019f3b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:44.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.347+0000 7f3395961640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3390076040 
0x7f339019f3b0 secure :-1 s=READY pgs=202 cs=0 l=1 rev1=1 crypto rx=0x7f338400e990 tx=0x7f338400ee60 comp rx=0 tx=0).stop 2026-03-09T20:44:44.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.347+0000 7f3395961640 1 -- 192.168.123.107:0/1591203407 shutdown_connections 2026-03-09T20:44:44.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.347+0000 7f3395961640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3364076170 0x7f3364078630 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.347+0000 7f3395961640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3390076040 0x7f339019f3b0 unknown :-1 s=CLOSED pgs=202 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.347+0000 7f3395961640 1 --2- 192.168.123.107:0/1591203407 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3390075720 0x7f339019ee70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.347+0000 7f3395961640 1 -- 192.168.123.107:0/1591203407 >> 192.168.123.107:0/1591203407 conn(0x7f33900fe710 msgr2=0x7f33900ffe30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:44.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.348+0000 7f3395961640 1 -- 192.168.123.107:0/1591203407 shutdown_connections 2026-03-09T20:44:44.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.348+0000 7f3395961640 1 -- 192.168.123.107:0/1591203407 wait complete. 2026-03-09T20:44:44.411 INFO:tasks.cephadm:Setting up client nodes... 
2026-03-09T20:44:44.411 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T20:44:44.564 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:44.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:44 vm07 ceph-mon[49120]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:44.650 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:44 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/1561533231' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:44:44.650 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:44 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/1591203407' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:44:44.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:44 vm10 ceph-mon[57011]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:44.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:44 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/1561533231' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:44:44.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:44 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/1591203407' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:44:44.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.813+0000 7f19d4de6640 1 -- 192.168.123.107:0/3820647279 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19d0073f40 msgr2=0x7f19d010d0e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:44.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.813+0000 7f19d4de6640 1 --2- 192.168.123.107:0/3820647279 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19d0073f40 0x7f19d010d0e0 secure :-1 s=READY pgs=203 cs=0 l=1 rev1=1 crypto rx=0x7f19c40099b0 tx=0x7f19c402f220 comp rx=0 tx=0).stop 2026-03-09T20:44:44.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.814+0000 7f19d4de6640 1 -- 192.168.123.107:0/3820647279 shutdown_connections 2026-03-09T20:44:44.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.814+0000 7f19d4de6640 1 --2- 192.168.123.107:0/3820647279 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19d0073f40 0x7f19d010d0e0 unknown :-1 s=CLOSED pgs=203 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.814+0000 7f19d4de6640 1 --2- 192.168.123.107:0/3820647279 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19d0073600 0x7f19d0073a00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.814+0000 7f19d4de6640 1 -- 192.168.123.107:0/3820647279 >> 192.168.123.107:0/3820647279 conn(0x7f19d00fbf90 msgr2=0x7f19d00fe3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:44.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.814+0000 7f19d4de6640 1 -- 192.168.123.107:0/3820647279 shutdown_connections 
2026-03-09T20:44:44.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.814+0000 7f19d4de6640 1 -- 192.168.123.107:0/3820647279 wait complete. 2026-03-09T20:44:44.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.814+0000 7f19d4de6640 1 Processor -- start 2026-03-09T20:44:44.815 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.815+0000 7f19d4de6640 1 -- start start 2026-03-09T20:44:44.815 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.815+0000 7f19d4de6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19d0073600 0x7f19d010b3c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:44.815 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.815+0000 7f19d4de6640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19d0073f40 0x7f19d010b900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:44.815 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.815+0000 7f19d4de6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f19d010cd70 con 0x7f19d0073600 2026-03-09T20:44:44.815 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.815+0000 7f19d4de6640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f19d010cee0 con 0x7f19d0073f40 2026-03-09T20:44:44.815 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.815+0000 7f19ce575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19d0073600 0x7f19d010b3c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:44.815 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.815+0000 7f19ce575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19d0073600 
0x7f19d010b3c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:37000/0 (socket says 192.168.123.107:37000) 2026-03-09T20:44:44.815 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.815+0000 7f19ce575640 1 -- 192.168.123.107:0/3621342025 learned_addr learned my addr 192.168.123.107:0/3621342025 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:44.815 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.815+0000 7f19cdd74640 1 --2- 192.168.123.107:0/3621342025 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19d0073f40 0x7f19d010b900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:44.816 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.816+0000 7f19ce575640 1 -- 192.168.123.107:0/3621342025 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19d0073f40 msgr2=0x7f19d010b900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:44.816 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.816+0000 7f19ce575640 1 --2- 192.168.123.107:0/3621342025 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19d0073f40 0x7f19d010b900 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.816 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.816+0000 7f19ce575640 1 -- 192.168.123.107:0/3621342025 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f19c4009660 con 0x7f19d0073600 2026-03-09T20:44:44.816 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.816+0000 7f19ce575640 1 --2- 192.168.123.107:0/3621342025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19d0073600 0x7f19d010b3c0 secure :-1 s=READY 
pgs=204 cs=0 l=1 rev1=1 crypto rx=0x7f19b800cc60 tx=0x7f19b8007590 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:44.816 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.816+0000 7f19b77fe640 1 -- 192.168.123.107:0/3621342025 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f19b8007d80 con 0x7f19d0073600 2026-03-09T20:44:44.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.816+0000 7f19d4de6640 1 -- 192.168.123.107:0/3621342025 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f19d010bf60 con 0x7f19d0073600 2026-03-09T20:44:44.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.817+0000 7f19d4de6640 1 -- 192.168.123.107:0/3621342025 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f19d01ad180 con 0x7f19d0073600 2026-03-09T20:44:44.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.818+0000 7f19b77fe640 1 -- 192.168.123.107:0/3621342025 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f19b800ce80 con 0x7f19d0073600 2026-03-09T20:44:44.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.818+0000 7f19b77fe640 1 -- 192.168.123.107:0/3621342025 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f19b800f660 con 0x7f19d0073600 2026-03-09T20:44:44.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.818+0000 7f19d4de6640 1 -- 192.168.123.107:0/3621342025 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f19d0073a00 con 0x7f19d0073600 2026-03-09T20:44:44.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.819+0000 7f19b77fe640 1 -- 192.168.123.107:0/3621342025 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 
98480+0+0 (secure 0 0 0) 0x7f19b800f880 con 0x7f19d0073600 2026-03-09T20:44:44.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.820+0000 7f19b77fe640 1 --2- 192.168.123.107:0/3621342025 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f19a00761c0 0x7f19a0078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:44.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.820+0000 7f19b77fe640 1 -- 192.168.123.107:0/3621342025 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f19b8004070 con 0x7f19d0073600 2026-03-09T20:44:44.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.823+0000 7f19cdd74640 1 --2- 192.168.123.107:0/3621342025 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f19a00761c0 0x7f19a0078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:44.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.824+0000 7f19cdd74640 1 --2- 192.168.123.107:0/3621342025 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f19a00761c0 0x7f19a0078680 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f19d010caf0 tx=0x7f19c403a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:44.824 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.823+0000 7f19b77fe640 1 -- 192.168.123.107:0/3621342025 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f19b809c050 con 0x7f19d0073600 2026-03-09T20:44:44.950 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.949+0000 7f19d4de6640 1 -- 192.168.123.107:0/3621342025 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) v1 -- 0x7f19d010fee0 con 0x7f19d0073600 2026-03-09T20:44:44.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.953+0000 7f19b77fe640 1 -- 192.168.123.107:0/3621342025 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v16) v1 ==== 170+0+59 (secure 0 0 0) 0x7f19b8061490 con 0x7f19d0073600 2026-03-09T20:44:44.955 INFO:teuthology.orchestra.run.vm07.stdout:[client.0] 2026-03-09T20:44:44.955 INFO:teuthology.orchestra.run.vm07.stdout: key = AQA8Ma9pV2erOBAA/GorvAFip8N9SuDCOuL7Xw== 2026-03-09T20:44:44.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.957+0000 7f19d4de6640 1 -- 192.168.123.107:0/3621342025 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f19a00761c0 msgr2=0x7f19a0078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:44.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.958+0000 7f19d4de6640 1 --2- 192.168.123.107:0/3621342025 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f19a00761c0 0x7f19a0078680 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f19d010caf0 tx=0x7f19c403a040 comp rx=0 tx=0).stop 2026-03-09T20:44:44.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.958+0000 7f19d4de6640 1 -- 192.168.123.107:0/3621342025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19d0073600 msgr2=0x7f19d010b3c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:44.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.958+0000 7f19d4de6640 1 --2- 192.168.123.107:0/3621342025 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19d0073600 0x7f19d010b3c0 secure :-1 s=READY pgs=204 cs=0 l=1 rev1=1 crypto rx=0x7f19b800cc60 tx=0x7f19b8007590 comp rx=0 tx=0).stop 2026-03-09T20:44:44.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.958+0000 7f19d4de6640 1 -- 192.168.123.107:0/3621342025 shutdown_connections 2026-03-09T20:44:44.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.958+0000 7f19d4de6640 1 --2- 192.168.123.107:0/3621342025 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f19a00761c0 0x7f19a0078680 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.959+0000 7f19d4de6640 1 --2- 192.168.123.107:0/3621342025 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19d0073f40 0x7f19d010b900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.959+0000 7f19d4de6640 1 --2- 192.168.123.107:0/3621342025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19d0073600 0x7f19d010b3c0 unknown :-1 s=CLOSED pgs=204 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:44.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.959+0000 7f19d4de6640 1 -- 192.168.123.107:0/3621342025 >> 192.168.123.107:0/3621342025 conn(0x7f19d00fbf90 msgr2=0x7f19d00fdb30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:44.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.959+0000 7f19d4de6640 1 -- 192.168.123.107:0/3621342025 shutdown_connections 2026-03-09T20:44:44.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:44.959+0000 7f19d4de6640 1 -- 192.168.123.107:0/3621342025 wait complete. 
2026-03-09T20:44:45.022 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:44:45.022 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T20:44:45.022 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T20:44:45.060 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T20:44:45.214 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm10/config 2026-03-09T20:44:45.465 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.463+0000 7fa785b14640 1 -- 192.168.123.110:0/4070012760 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa780103c90 msgr2=0x7fa780104110 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:45.465 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.463+0000 7fa785b14640 1 --2- 192.168.123.110:0/4070012760 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa780103c90 0x7fa780104110 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fa7700099b0 tx=0x7fa77002f220 comp rx=0 tx=0).stop 2026-03-09T20:44:45.465 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.464+0000 7fa785b14640 1 -- 192.168.123.110:0/4070012760 shutdown_connections 2026-03-09T20:44:45.466 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.464+0000 7fa785b14640 1 --2- 192.168.123.110:0/4070012760 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa780103c90 0x7fa780104110 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:45.466 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.464+0000 
7fa785b14640 1 --2- 192.168.123.110:0/4070012760 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa780102a90 0x7fa780102e90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:45.466 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.464+0000 7fa785b14640 1 -- 192.168.123.110:0/4070012760 >> 192.168.123.110:0/4070012760 conn(0x7fa7800fe220 msgr2=0x7fa780100660 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:45.466 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.465+0000 7fa785b14640 1 -- 192.168.123.110:0/4070012760 shutdown_connections 2026-03-09T20:44:45.466 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.465+0000 7fa785b14640 1 -- 192.168.123.110:0/4070012760 wait complete. 2026-03-09T20:44:45.466 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.465+0000 7fa785b14640 1 Processor -- start 2026-03-09T20:44:45.467 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.465+0000 7fa785b14640 1 -- start start 2026-03-09T20:44:45.467 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.465+0000 7fa785b14640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa780102a90 0x7fa78019a3e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:45.467 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.465+0000 7fa785b14640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa780103c90 0x7fa78019a920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:45.467 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.465+0000 7fa785b14640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa78019ae60 con 0x7fa780103c90 2026-03-09T20:44:45.467 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.465+0000 7fa785b14640 1 -- --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa78019afd0 con 0x7fa780102a90 2026-03-09T20:44:45.467 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.466+0000 7fa77f7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa780102a90 0x7fa78019a3e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:45.467 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.466+0000 7fa77f7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa780102a90 0x7fa78019a3e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.110:38098/0 (socket says 192.168.123.110:38098) 2026-03-09T20:44:45.467 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.466+0000 7fa77f7fe640 1 -- 192.168.123.110:0/3368240472 learned_addr learned my addr 192.168.123.110:0/3368240472 (peer_addr_for_me v2:192.168.123.110:0/0) 2026-03-09T20:44:45.468 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.466+0000 7fa77effd640 1 --2- 192.168.123.110:0/3368240472 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa780103c90 0x7fa78019a920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:45.468 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.466+0000 7fa77f7fe640 1 -- 192.168.123.110:0/3368240472 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa780103c90 msgr2=0x7fa78019a920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:45.468 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.466+0000 7fa77f7fe640 1 --2- 192.168.123.110:0/3368240472 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7fa780103c90 0x7fa78019a920 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:45.468 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.466+0000 7fa77f7fe640 1 -- 192.168.123.110:0/3368240472 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa770009660 con 0x7fa780102a90 2026-03-09T20:44:45.468 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.466+0000 7fa77f7fe640 1 --2- 192.168.123.110:0/3368240472 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa780102a90 0x7fa78019a3e0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fa76c00cbf0 tx=0x7fa76c007590 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:45.468 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.467+0000 7fa77cff9640 1 -- 192.168.123.110:0/3368240472 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa76c007cb0 con 0x7fa780102a90 2026-03-09T20:44:45.468 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.467+0000 7fa77cff9640 1 -- 192.168.123.110:0/3368240472 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fa76c007e10 con 0x7fa780102a90 2026-03-09T20:44:45.469 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.467+0000 7fa785b14640 1 -- 192.168.123.110:0/3368240472 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa78019fab0 con 0x7fa780102a90 2026-03-09T20:44:45.469 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.467+0000 7fa785b14640 1 -- 192.168.123.110:0/3368240472 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa78019ffb0 con 0x7fa780102a90 2026-03-09T20:44:45.469 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.468+0000 7fa77cff9640 1 -- 
192.168.123.110:0/3368240472 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa76c00f4b0 con 0x7fa780102a90 2026-03-09T20:44:45.470 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.468+0000 7fa785b14640 1 -- 192.168.123.110:0/3368240472 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa780102e90 con 0x7fa780102a90 2026-03-09T20:44:45.470 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.469+0000 7fa77cff9640 1 -- 192.168.123.110:0/3368240472 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fa76c00f750 con 0x7fa780102a90 2026-03-09T20:44:45.470 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.469+0000 7fa77cff9640 1 --2- 192.168.123.110:0/3368240472 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa754075fb0 0x7fa754078470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:45.470 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.469+0000 7fa77cff9640 1 -- 192.168.123.110:0/3368240472 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7fa76c097610 con 0x7fa780102a90 2026-03-09T20:44:45.471 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.469+0000 7fa77effd640 1 --2- 192.168.123.110:0/3368240472 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa754075fb0 0x7fa754078470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:45.471 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.470+0000 7fa77effd640 1 --2- 192.168.123.110:0/3368240472 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa754075fb0 0x7fa754078470 secure :-1 s=READY 
pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fa78019b900 tx=0x7fa77003a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:45.473 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.472+0000 7fa77cff9640 1 -- 192.168.123.110:0/3368240472 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fa76c060f60 con 0x7fa780102a90 2026-03-09T20:44:45.589 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.588+0000 7fa785b14640 1 -- 192.168.123.110:0/3368240472 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) v1 -- 0x7fa78010b6c0 con 0x7fa780102a90 2026-03-09T20:44:45.596 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.594+0000 7fa77cff9640 1 -- 192.168.123.110:0/3368240472 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v17) v1 ==== 170+0+59 (secure 0 0 0) 0x7fa76c060900 con 0x7fa780102a90 2026-03-09T20:44:45.596 INFO:teuthology.orchestra.run.vm10.stdout:[client.1] 2026-03-09T20:44:45.596 INFO:teuthology.orchestra.run.vm10.stdout: key = AQA9Ma9pIYZEIxAAkq7ba7SPtR32gAwIDzqdPw== 2026-03-09T20:44:45.598 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.597+0000 7fa785b14640 1 -- 192.168.123.110:0/3368240472 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa754075fb0 msgr2=0x7fa754078470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:45.598 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.597+0000 7fa785b14640 1 --2- 192.168.123.110:0/3368240472 >> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa754075fb0 0x7fa754078470 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fa78019b900 tx=0x7fa77003a040 comp rx=0 tx=0).stop 2026-03-09T20:44:45.598 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.597+0000 7fa785b14640 1 -- 192.168.123.110:0/3368240472 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa780102a90 msgr2=0x7fa78019a3e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:45.598 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.597+0000 7fa785b14640 1 --2- 192.168.123.110:0/3368240472 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa780102a90 0x7fa78019a3e0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fa76c00cbf0 tx=0x7fa76c007590 comp rx=0 tx=0).stop 2026-03-09T20:44:45.598 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.597+0000 7fa785b14640 1 -- 192.168.123.110:0/3368240472 shutdown_connections 2026-03-09T20:44:45.599 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.597+0000 7fa785b14640 1 --2- 192.168.123.110:0/3368240472 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa754075fb0 0x7fa754078470 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:45.599 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.597+0000 7fa785b14640 1 --2- 192.168.123.110:0/3368240472 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa780103c90 0x7fa78019a920 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:45.599 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.597+0000 7fa785b14640 1 --2- 192.168.123.110:0/3368240472 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa780102a90 0x7fa78019a3e0 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:44:45.599 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.597+0000 7fa785b14640 1 -- 192.168.123.110:0/3368240472 >> 192.168.123.110:0/3368240472 conn(0x7fa7800fe220 msgr2=0x7fa7800ff960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:45.599 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.598+0000 7fa785b14640 1 -- 192.168.123.110:0/3368240472 shutdown_connections 2026-03-09T20:44:45.599 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:44:45.598+0000 7fa785b14640 1 -- 192.168.123.110:0/3368240472 wait complete. 2026-03-09T20:44:45.632 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:45 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/3621342025' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:44:45.632 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:45 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/3621342025' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T20:44:45.663 DEBUG:teuthology.orchestra.run.vm10:> set -ex 2026-03-09T20:44:45.663 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T20:44:45.664 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T20:44:45.697 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
2026-03-09T20:44:45.697 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T20:44:45.697 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mgr dump --format=json 2026-03-09T20:44:45.841 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:45.867 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:45 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3621342025' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:44:45.867 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:45 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3621342025' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T20:44:46.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.102+0000 7f77adde3640 1 -- 192.168.123.107:0/536412038 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f77a8076040 msgr2=0x7f77a8111330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:46.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.102+0000 7f77adde3640 1 --2- 192.168.123.107:0/536412038 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f77a8076040 0x7f77a8111330 secure :-1 s=READY pgs=205 cs=0 l=1 rev1=1 crypto rx=0x7f77940099b0 tx=0x7f779402f220 comp rx=0 tx=0).stop 2026-03-09T20:44:46.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.103+0000 7f77adde3640 1 -- 192.168.123.107:0/536412038 shutdown_connections 2026-03-09T20:44:46.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.103+0000 
7f77adde3640 1 --2- 192.168.123.107:0/536412038 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f77a8076040 0x7f77a8111330 unknown :-1 s=CLOSED pgs=205 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.103+0000 7f77adde3640 1 --2- 192.168.123.107:0/536412038 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f77a8075720 0x7f77a8075b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.103+0000 7f77adde3640 1 -- 192.168.123.107:0/536412038 >> 192.168.123.107:0/536412038 conn(0x7f77a80fe710 msgr2=0x7f77a8100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:46.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.103+0000 7f77adde3640 1 -- 192.168.123.107:0/536412038 shutdown_connections 2026-03-09T20:44:46.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.103+0000 7f77adde3640 1 -- 192.168.123.107:0/536412038 wait complete. 
2026-03-09T20:44:46.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.104+0000 7f77adde3640 1 Processor -- start 2026-03-09T20:44:46.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.104+0000 7f77adde3640 1 -- start start 2026-03-09T20:44:46.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.104+0000 7f77adde3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f77a8075720 0x7f77a819ee60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:46.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.104+0000 7f77adde3640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f77a8076040 0x7f77a819f3a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:46.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.104+0000 7f77adde3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f77a819fa30 con 0x7f77a8075720 2026-03-09T20:44:46.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.104+0000 7f77adde3640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f77a81a37a0 con 0x7f77a8076040 2026-03-09T20:44:46.105 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.104+0000 7f77a77fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f77a8075720 0x7f77a819ee60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:46.105 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.104+0000 7f77a77fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f77a8075720 0x7f77a819ee60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:37018/0 (socket says 192.168.123.107:37018) 2026-03-09T20:44:46.105 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.104+0000 7f77a77fe640 1 -- 192.168.123.107:0/2724410452 learned_addr learned my addr 192.168.123.107:0/2724410452 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:46.105 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.104+0000 7f77a6ffd640 1 --2- 192.168.123.107:0/2724410452 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f77a8076040 0x7f77a819f3a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:46.105 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.105+0000 7f77a77fe640 1 -- 192.168.123.107:0/2724410452 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f77a8076040 msgr2=0x7f77a819f3a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:46.105 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.105+0000 7f77a77fe640 1 --2- 192.168.123.107:0/2724410452 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f77a8076040 0x7f77a819f3a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.105 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.105+0000 7f77a77fe640 1 -- 192.168.123.107:0/2724410452 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7794009660 con 0x7f77a8075720 2026-03-09T20:44:46.105 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.105+0000 7f77a77fe640 1 --2- 192.168.123.107:0/2724410452 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f77a8075720 0x7f77a819ee60 secure :-1 s=READY pgs=206 cs=0 l=1 rev1=1 crypto rx=0x7f779800b700 tx=0x7f779800bbd0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:44:46.105 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.105+0000 7f77a4ff9640 1 -- 192.168.123.107:0/2724410452 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f779800be90 con 0x7f77a8075720 2026-03-09T20:44:46.106 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.105+0000 7f77a4ff9640 1 -- 192.168.123.107:0/2724410452 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7798002ba0 con 0x7f77a8075720 2026-03-09T20:44:46.106 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.105+0000 7f77adde3640 1 -- 192.168.123.107:0/2724410452 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f77a81a3a80 con 0x7f77a8075720 2026-03-09T20:44:46.106 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.105+0000 7f77adde3640 1 -- 192.168.123.107:0/2724410452 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f77a81a3fd0 con 0x7f77a8075720 2026-03-09T20:44:46.107 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.108+0000 7f77a4ff9640 1 -- 192.168.123.107:0/2724410452 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f779800ca50 con 0x7f77a8075720 2026-03-09T20:44:46.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.108+0000 7f77adde3640 1 -- 192.168.123.107:0/2724410452 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f77a8076e60 con 0x7f77a8075720 2026-03-09T20:44:46.112 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.112+0000 7f77a4ff9640 1 -- 192.168.123.107:0/2724410452 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f779800ccb0 con 0x7f77a8075720 2026-03-09T20:44:46.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.113+0000 
7f77a4ff9640 1 --2- 192.168.123.107:0/2724410452 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7778076080 0x7f7778078540 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:46.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.113+0000 7f77a6ffd640 1 --2- 192.168.123.107:0/2724410452 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7778076080 0x7f7778078540 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:46.114 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.114+0000 7f77a6ffd640 1 --2- 192.168.123.107:0/2724410452 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7778076080 0x7f7778078540 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f7794005ec0 tx=0x7f779403a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:46.114 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.114+0000 7f77a4ff9640 1 -- 192.168.123.107:0/2724410452 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f7798097a80 con 0x7f77a8075720 2026-03-09T20:44:46.114 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.114+0000 7f77a4ff9640 1 -- 192.168.123.107:0/2724410452 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f7798002d10 con 0x7f77a8075720 2026-03-09T20:44:46.229 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.229+0000 7f77adde3640 1 -- 192.168.123.107:0/2724410452 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "mgr dump", "format": "json"} v 0) v1 -- 0x7f77a8075b00 con 0x7f77a8075720 2026-03-09T20:44:46.231 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.231+0000 7f77a4ff9640 1 -- 192.168.123.107:0/2724410452 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mgr dump", "format": "json"}]=0 v19) v1 ==== 74+0+189855 (secure 0 0 0) 0x7f77980614e0 con 0x7f77a8075720 2026-03-09T20:44:46.231 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:46.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.236+0000 7f77adde3640 1 -- 192.168.123.107:0/2724410452 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7778076080 msgr2=0x7f7778078540 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:46.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.236+0000 7f77adde3640 1 --2- 192.168.123.107:0/2724410452 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7778076080 0x7f7778078540 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f7794005ec0 tx=0x7f779403a040 comp rx=0 tx=0).stop 2026-03-09T20:44:46.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.236+0000 7f77adde3640 1 -- 192.168.123.107:0/2724410452 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f77a8075720 msgr2=0x7f77a819ee60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:46.237 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.236+0000 7f77adde3640 1 --2- 192.168.123.107:0/2724410452 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f77a8075720 0x7f77a819ee60 secure :-1 s=READY pgs=206 cs=0 l=1 rev1=1 crypto rx=0x7f779800b700 tx=0x7f779800bbd0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.237 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.237+0000 7f77adde3640 1 -- 192.168.123.107:0/2724410452 shutdown_connections 2026-03-09T20:44:46.237 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.237+0000 7f77adde3640 1 --2- 192.168.123.107:0/2724410452 >> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7778076080 0x7f7778078540 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.237 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.237+0000 7f77adde3640 1 --2- 192.168.123.107:0/2724410452 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f77a8076040 0x7f77a819f3a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.237 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.237+0000 7f77adde3640 1 --2- 192.168.123.107:0/2724410452 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f77a8075720 0x7f77a819ee60 unknown :-1 s=CLOSED pgs=206 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.237 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.237+0000 7f77adde3640 1 -- 192.168.123.107:0/2724410452 >> 192.168.123.107:0/2724410452 conn(0x7f77a80fe710 msgr2=0x7f77a80ffea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:46.237 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.237+0000 7f77adde3640 1 -- 192.168.123.107:0/2724410452 shutdown_connections 2026-03-09T20:44:46.237 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.237+0000 7f77adde3640 1 -- 192.168.123.107:0/2724410452 wait complete. 
2026-03-09T20:44:46.299 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":19,"flags":0,"active_gid":14225,"active_name":"vm07.xjrvch","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":4233182156},{"type":"v1","addr":"192.168.123.107:6801","nonce":4233182156}]},"active_addr":"192.168.123.107:6801/4233182156","active_change":"2026-03-09T20:43:19.070374+0000","active_mgr_features":4540138322906710015,"available":true,"standbys":[{"gid":14248,"name":"vm10.byqahe","mgr_features":4540138322906710015,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts 
to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to 
days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE 
capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the 
tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:9.4.7","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.5.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.0.0","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.43.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host 
check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are
removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],
"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"",
"max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFA
NA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"
default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_PO
LICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowe
d":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message 
of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value
":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. 
When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number.if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner 
threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"d
efault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts 
to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to 
days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE 
capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the 
tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:9.4.7","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.5.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.0.0","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.43.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host 
check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are 
removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],
"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"",
"max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFA
NA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"
default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_PO
LICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowe
d":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message 
of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value
":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. 
When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number.if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner 
threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"d
efault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.107:8443/","prometheus":"http://192.168.123.107:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":5,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":1991130546}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":2516772307}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":2106845752}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":831337671}]}]} 2026-03-09T20:44:46.300 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 
2026-03-09T20:44:46.300 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T20:44:46.301 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd dump --format=json 2026-03-09T20:44:46.459 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:46.482 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:46 vm07 ceph-mon[49120]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:46.482 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:46 vm07 ceph-mon[49120]: from='client.? 192.168.123.110:0/3368240472' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:44:46.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.720+0000 7fba3c1a2640 1 -- 192.168.123.107:0/3287052074 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fba34075720 msgr2=0x7fba34075b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:46.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.720+0000 7fba3c1a2640 1 --2- 192.168.123.107:0/3287052074 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fba34075720 0x7fba34075b00 secure :-1 s=READY pgs=207 cs=0 l=1 rev1=1 crypto rx=0x7fba28009a00 tx=0x7fba2802f280 comp rx=0 tx=0).stop 2026-03-09T20:44:46.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.721+0000 7fba3c1a2640 1 -- 192.168.123.107:0/3287052074 shutdown_connections 2026-03-09T20:44:46.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.721+0000 7fba3c1a2640 1 --2- 192.168.123.107:0/3287052074 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fba34076040 0x7fba34111330 
unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.721+0000 7fba3c1a2640 1 --2- 192.168.123.107:0/3287052074 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fba34075720 0x7fba34075b00 unknown :-1 s=CLOSED pgs=207 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.721+0000 7fba3c1a2640 1 -- 192.168.123.107:0/3287052074 >> 192.168.123.107:0/3287052074 conn(0x7fba340fe710 msgr2=0x7fba34100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:46.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.721+0000 7fba3c1a2640 1 -- 192.168.123.107:0/3287052074 shutdown_connections 2026-03-09T20:44:46.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.721+0000 7fba3c1a2640 1 -- 192.168.123.107:0/3287052074 wait complete. 2026-03-09T20:44:46.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.722+0000 7fba3c1a2640 1 Processor -- start 2026-03-09T20:44:46.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.722+0000 7fba3c1a2640 1 -- start start 2026-03-09T20:44:46.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.722+0000 7fba3c1a2640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fba34075720 0x7fba3419edf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:46.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.722+0000 7fba3c1a2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fba34076040 0x7fba3419f330 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:46.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.722+0000 7fba3c1a2640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 
0 v1 -- 0x7fba3419f9c0 con 0x7fba34076040 2026-03-09T20:44:46.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.722+0000 7fba3c1a2640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fba341a3730 con 0x7fba34075720 2026-03-09T20:44:46.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.723+0000 7fba39716640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fba34076040 0x7fba3419f330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:46.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.723+0000 7fba39716640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fba34076040 0x7fba3419f330 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:37024/0 (socket says 192.168.123.107:37024) 2026-03-09T20:44:46.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.723+0000 7fba39716640 1 -- 192.168.123.107:0/1912482327 learned_addr learned my addr 192.168.123.107:0/1912482327 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:46.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.723+0000 7fba39716640 1 -- 192.168.123.107:0/1912482327 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fba34075720 msgr2=0x7fba3419edf0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:44:46.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.723+0000 7fba39716640 1 --2- 192.168.123.107:0/1912482327 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fba34075720 0x7fba3419edf0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.723+0000 7fba39716640 1 -- 
192.168.123.107:0/1912482327 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fba28009660 con 0x7fba34076040 2026-03-09T20:44:46.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.723+0000 7fba39716640 1 --2- 192.168.123.107:0/1912482327 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fba34076040 0x7fba3419f330 secure :-1 s=READY pgs=208 cs=0 l=1 rev1=1 crypto rx=0x7fba2400da30 tx=0x7fba2400df00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:46.725 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.724+0000 7fba22ffd640 1 -- 192.168.123.107:0/1912482327 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fba2400bb80 con 0x7fba34076040 2026-03-09T20:44:46.725 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.724+0000 7fba22ffd640 1 -- 192.168.123.107:0/1912482327 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fba24004590 con 0x7fba34076040 2026-03-09T20:44:46.725 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.724+0000 7fba22ffd640 1 -- 192.168.123.107:0/1912482327 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fba24010460 con 0x7fba34076040 2026-03-09T20:44:46.725 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.724+0000 7fba3c1a2640 1 -- 192.168.123.107:0/1912482327 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fba341a3a10 con 0x7fba34076040 2026-03-09T20:44:46.725 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.724+0000 7fba3c1a2640 1 -- 192.168.123.107:0/1912482327 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fba341a3f60 con 0x7fba34076040 2026-03-09T20:44:46.728 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.726+0000 7fba22ffd640 1 -- 192.168.123.107:0/1912482327 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fba2400bce0 con 0x7fba34076040 2026-03-09T20:44:46.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.726+0000 7fba3c1a2640 1 -- 192.168.123.107:0/1912482327 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb9fc005350 con 0x7fba34076040 2026-03-09T20:44:46.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.726+0000 7fba22ffd640 1 --2- 192.168.123.107:0/1912482327 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fba0c0761c0 0x7fba0c078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:46.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.726+0000 7fba22ffd640 1 -- 192.168.123.107:0/1912482327 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7fba240975f0 con 0x7fba34076040 2026-03-09T20:44:46.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.729+0000 7fba22ffd640 1 -- 192.168.123.107:0/1912482327 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fba24060f40 con 0x7fba34076040 2026-03-09T20:44:46.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.729+0000 7fba39f17640 1 --2- 192.168.123.107:0/1912482327 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fba0c0761c0 0x7fba0c078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:46.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.729+0000 7fba39f17640 1 --2- 
192.168.123.107:0/1912482327 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fba0c0761c0 0x7fba0c078680 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7fba28009a00 tx=0x7fba280023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:46 vm10 ceph-mon[57011]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:46 vm10 ceph-mon[57011]: from='client.? 192.168.123.110:0/3368240472' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:44:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:46 vm10 ceph-mon[57011]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:44:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:46 vm10 ceph-mon[57011]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T20:44:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:46 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/2724410452' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:44:46.838 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.838+0000 7fba3c1a2640 1 -- 192.168.123.107:0/1912482327 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) v1 -- 0x7fb9fc0051c0 con 0x7fba34076040 2026-03-09T20:44:46.838 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:46 vm07 ceph-mon[49120]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:44:46.838 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:46 vm07 ceph-mon[49120]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T20:44:46.838 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:46 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/2724410452' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:44:46.840 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.839+0000 7fba22ffd640 1 -- 192.168.123.107:0/1912482327 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v37) v1 ==== 74+0+11585 (secure 0 0 0) 0x7fba240608e0 con 0x7fba34076040 2026-03-09T20:44:46.840 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:46.841 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":37,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","created":"2026-03-09T20:42:21.622782+0000","modified":"2026-03-09T20:44:42.146215+0000","last_up_change":"2026-03-09T20:44:41.142793+0000","last_in_change":"2026-03-09T20:44:30.230343+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":14,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"reef","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T20:44:15.118166+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"23","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_fo
rce_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_acting":6,"score_stable":6,"optimal_score":0.5,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"4ceba074-cc1e-460f-b8f1-b7d80b498d37","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":10,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6803","nonce":1974618076}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6805","nonce":1974618076}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6809","nonce":1974618076}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6807","nonce":1974618076}]},"public_addr":"192.168.123.107:6803/1974618076","cluster_addr":"192.168.123.107:6805/197461807
6","heartbeat_back_addr":"192.168.123.107:6809/1974618076","heartbeat_front_addr":"192.168.123.107:6807/1974618076","state":["exists","up"]},{"osd":1,"uuid":"7431b664-9dad-4df6-ac1e-d480eeb7d102","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":15,"up_thru":28,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6811","nonce":1875790749}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6813","nonce":1875790749}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6817","nonce":1875790749}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6815","nonce":1875790749}]},"public_addr":"192.168.123.107:6811/1875790749","cluster_addr":"192.168.123.107:6813/1875790749","heartbeat_back_addr":"192.168.123.107:6817/1875790749","heartbeat_front_addr":"192.168.123.107:6815/1875790749","state":["exists","up"]},{"osd":2,"uuid":"91efe4fd-879b-433f-ab7e-d98ab2676ea3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6819","nonce":436827330}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6821","nonce":436827330}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6825","nonce":436827330}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":436827330},{"type":"v1","addr":"192.168.123.107:68
23","nonce":436827330}]},"public_addr":"192.168.123.107:6819/436827330","cluster_addr":"192.168.123.107:6821/436827330","heartbeat_back_addr":"192.168.123.107:6825/436827330","heartbeat_front_addr":"192.168.123.107:6823/436827330","state":["exists","up"]},{"osd":3,"uuid":"5baaeff4-3fa0-43d6-81ca-ff28de0673a4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":27,"up_thru":31,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6800","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6801","nonce":2906542839}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6802","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6803","nonce":2906542839}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6806","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6807","nonce":2906542839}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6804","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6805","nonce":2906542839}]},"public_addr":"192.168.123.110:6801/2906542839","cluster_addr":"192.168.123.110:6803/2906542839","heartbeat_back_addr":"192.168.123.110:6807/2906542839","heartbeat_front_addr":"192.168.123.110:6805/2906542839","state":["exists","up"]},{"osd":4,"uuid":"5ced8315-7f95-41be-88c5-e29628c579a6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6808","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6809","nonce":1620701058}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6810","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6811","nonce":1620701058}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6814","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6815","nonce":1620701058}]},"heartbeat_front
_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6812","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6813","nonce":1620701058}]},"public_addr":"192.168.123.110:6809/1620701058","cluster_addr":"192.168.123.110:6811/1620701058","heartbeat_back_addr":"192.168.123.110:6815/1620701058","heartbeat_front_addr":"192.168.123.110:6813/1620701058","state":["exists","up"]},{"osd":5,"uuid":"bb0b085e-9ae4-46b4-9158-53e2edb3b952","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6816","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6817","nonce":1579901954}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6818","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6819","nonce":1579901954}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6822","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6823","nonce":1579901954}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6820","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6821","nonce":1579901954}]},"public_addr":"192.168.123.110:6817/1579901954","cluster_addr":"192.168.123.110:6819/1579901954","heartbeat_back_addr":"192.168.123.110:6823/1579901954","heartbeat_front_addr":"192.168.123.110:6821/1579901954","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:43:51.872016+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:02.436528+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2
026-03-09T20:44:12.399428+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:21.060426+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:29.970569+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:38.192755+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/3147220177":"2026-03-10T20:43:19.070116+0000","192.168.123.107:0/1632839730":"2026-03-10T20:42:33.181797+0000","192.168.123.107:6800/4166937886":"2026-03-10T20:43:19.070116+0000","192.168.123.107:0/860262504":"2026-03-10T20:42:33.181797+0000","192.168.123.107:6800/1859043218":"2026-03-10T20:42:33.181797+0000","192.168.123.107:0/2967433424":"2026-03-10T20:42:43.298801+0000","192.168.123.107:0/216055426":"2026-03-10T20:42:33.181797+0000","192.168.123.107:6801/1859043218":"2026-03-10T20:42:33.181797+0000","192.168.123.107:0/4075654402":"2026-03-10T20:42:43.298801+0000","192.168.123.107:0/1007394955":"2026-03-10T20:42:43.298801+0000","192.168.123.107:0/1552256432":"2026-03-10T20:43:19.070116+0000","192.168.123.107:6801/810998986":"2026-03-10T20:42:43.298801+0000","192.168.123.107:6800/810998986":"2026-03-10T20:42:43.298801+0000","192.168.123.107:6801/4166937886":"2026-03-10T20:43:19.070116+0000","192.168.123.107:0/1594973564":"2026-03-10T20:43:19.070116+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"str
etch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T20:44:46.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.844+0000 7fba3c1a2640 1 -- 192.168.123.107:0/1912482327 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fba0c0761c0 msgr2=0x7fba0c078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:46.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.844+0000 7fba3c1a2640 1 --2- 192.168.123.107:0/1912482327 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fba0c0761c0 0x7fba0c078680 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7fba28009a00 tx=0x7fba280023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.844+0000 7fba3c1a2640 1 -- 192.168.123.107:0/1912482327 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fba34076040 msgr2=0x7fba3419f330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:46.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.844+0000 7fba3c1a2640 1 --2- 192.168.123.107:0/1912482327 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fba34076040 0x7fba3419f330 secure :-1 s=READY pgs=208 cs=0 l=1 rev1=1 crypto rx=0x7fba2400da30 tx=0x7fba2400df00 comp rx=0 tx=0).stop 2026-03-09T20:44:46.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.845+0000 7fba3c1a2640 1 -- 192.168.123.107:0/1912482327 shutdown_connections 2026-03-09T20:44:46.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.845+0000 7fba3c1a2640 1 --2- 192.168.123.107:0/1912482327 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fba0c0761c0 0x7fba0c078680 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.845 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.845+0000 7fba3c1a2640 1 --2- 192.168.123.107:0/1912482327 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fba34076040 0x7fba3419f330 secure :-1 s=CLOSED pgs=208 cs=0 l=1 rev1=1 crypto rx=0x7fba2400da30 tx=0x7fba2400df00 comp rx=0 tx=0).stop 2026-03-09T20:44:46.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.845+0000 7fba3c1a2640 1 --2- 192.168.123.107:0/1912482327 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fba34075720 0x7fba3419edf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:46.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.845+0000 7fba3c1a2640 1 -- 192.168.123.107:0/1912482327 >> 192.168.123.107:0/1912482327 conn(0x7fba340fe710 msgr2=0x7fba340ffdf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:46.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.845+0000 7fba3c1a2640 1 -- 192.168.123.107:0/1912482327 shutdown_connections 2026-03-09T20:44:46.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:46.845+0000 7fba3c1a2640 1 -- 192.168.123.107:0/1912482327 wait complete. 2026-03-09T20:44:47.329 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-09T20:44:47.329 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd dump --format=json 2026-03-09T20:44:47.485 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:47.554 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:47 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/1912482327' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:44:47.754 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.754+0000 7efecfcec640 1 -- 192.168.123.107:0/1343455120 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efec80ffdb0 msgr2=0x7efec810cc80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:47.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.754+0000 7efecfcec640 1 --2- 192.168.123.107:0/1343455120 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efec80ffdb0 0x7efec810cc80 secure :-1 s=READY pgs=209 cs=0 l=1 rev1=1 crypto rx=0x7efeb0009a00 tx=0x7efeb002f280 comp rx=0 tx=0).stop 2026-03-09T20:44:47.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.755+0000 7efecfcec640 1 -- 192.168.123.107:0/1343455120 shutdown_connections 2026-03-09T20:44:47.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.755+0000 7efecfcec640 1 --2- 192.168.123.107:0/1343455120 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efec80ffdb0 0x7efec810cc80 unknown :-1 s=CLOSED pgs=209 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:47.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.755+0000 7efecfcec640 1 --2- 192.168.123.107:0/1343455120 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efec80ff490 0x7efec80ff870 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:47.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.755+0000 7efecfcec640 1 -- 192.168.123.107:0/1343455120 >> 192.168.123.107:0/1343455120 conn(0x7efec80fb340 msgr2=0x7efec80fd760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:47.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.755+0000 7efecfcec640 1 -- 192.168.123.107:0/1343455120 shutdown_connections 2026-03-09T20:44:47.755 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.755+0000 7efecfcec640 1 -- 192.168.123.107:0/1343455120 wait complete. 2026-03-09T20:44:47.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.755+0000 7efecfcec640 1 Processor -- start 2026-03-09T20:44:47.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.756+0000 7efecfcec640 1 -- start start 2026-03-09T20:44:47.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.756+0000 7efecfcec640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efec80ff490 0x7efec81017f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:47.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.756+0000 7efecfcec640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efec80ffdb0 0x7efec8101d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:47.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.756+0000 7efecfcec640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efec8105870 con 0x7efec80ffdb0 2026-03-09T20:44:47.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.756+0000 7efecfcec640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efec81059e0 con 0x7efec80ff490 2026-03-09T20:44:47.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.756+0000 7efecd260640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efec80ffdb0 0x7efec8101d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:47.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.756+0000 7efecd260640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efec80ffdb0 0x7efec8101d30 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:47524/0 (socket says 192.168.123.107:47524) 2026-03-09T20:44:47.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.756+0000 7efecd260640 1 -- 192.168.123.107:0/2627859679 learned_addr learned my addr 192.168.123.107:0/2627859679 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:47.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.756+0000 7efecda61640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efec80ff490 0x7efec81017f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:47.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.757+0000 7efecd260640 1 -- 192.168.123.107:0/2627859679 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efec80ff490 msgr2=0x7efec81017f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:47.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.757+0000 7efecd260640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efec80ff490 0x7efec81017f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:47.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.757+0000 7efecd260640 1 -- 192.168.123.107:0/2627859679 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7efeb0009660 con 0x7efec80ffdb0 2026-03-09T20:44:47.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.757+0000 7efecd260640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efec80ffdb0 0x7efec8101d30 secure :-1 s=READY pgs=210 cs=0 l=1 rev1=1 crypto 
rx=0x7efeb002f790 tx=0x7efeb0004300 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:47.759 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.757+0000 7efebeffd640 1 -- 192.168.123.107:0/2627859679 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efeb002fae0 con 0x7efec80ffdb0 2026-03-09T20:44:47.759 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.757+0000 7efebeffd640 1 -- 192.168.123.107:0/2627859679 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7efeb002fc40 con 0x7efec80ffdb0 2026-03-09T20:44:47.759 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.757+0000 7efebeffd640 1 -- 192.168.123.107:0/2627859679 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efeb00418c0 con 0x7efec80ffdb0 2026-03-09T20:44:47.759 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.757+0000 7efecfcec640 1 -- 192.168.123.107:0/2627859679 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7efec81022d0 con 0x7efec80ffdb0 2026-03-09T20:44:47.759 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.758+0000 7efecfcec640 1 -- 192.168.123.107:0/2627859679 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7efec81a90f0 con 0x7efec80ffdb0 2026-03-09T20:44:47.761 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.758+0000 7efecfcec640 1 -- 192.168.123.107:0/2627859679 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7efe8c005350 con 0x7efec80ffdb0 2026-03-09T20:44:47.763 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.762+0000 7efebeffd640 1 -- 192.168.123.107:0/2627859679 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 
0x7efeb003f070 con 0x7efec80ffdb0 2026-03-09T20:44:47.763 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.762+0000 7efebeffd640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7efea00761c0 0x7efea0078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:47.763 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.763+0000 7efebeffd640 1 -- 192.168.123.107:0/2627859679 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7efeb00bc720 con 0x7efec80ffdb0 2026-03-09T20:44:47.763 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.763+0000 7efebeffd640 1 -- 192.168.123.107:0/2627859679 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7efeb00bcb30 con 0x7efec80ffdb0 2026-03-09T20:44:47.763 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.763+0000 7efecda61640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7efea00761c0 0x7efea0078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:47.763 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.763+0000 7efecda61640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7efea00761c0 0x7efea0078680 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7efeb8004500 tx=0x7efeb8009290 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:47 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/1912482327' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:44:47.856 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.855+0000 7efecfcec640 1 -- 192.168.123.107:0/2627859679 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) v1 -- 0x7efe8c0051c0 con 0x7efec80ffdb0 2026-03-09T20:44:47.856 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.856+0000 7efebeffd640 1 -- 192.168.123.107:0/2627859679 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v37) v1 ==== 74+0+11585 (secure 0 0 0) 0x7efeb0086120 con 0x7efec80ffdb0 2026-03-09T20:44:47.856 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:47.856 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":37,"fsid":"589eab88-1bf8-11f1-9e50-71f3ab1833c4","created":"2026-03-09T20:42:21.622782+0000","modified":"2026-03-09T20:44:42.146215+0000","last_up_change":"2026-03-09T20:44:41.142793+0000","last_in_change":"2026-03-09T20:44:30.230343+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":14,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"reef","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T20:44:15.118166+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_pl
acement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"23","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_acting":6,"score_stable":6,"optimal_score":0.5,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"4ceba074-cc1e-460f-b8f1-b7d80b498d37","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":10,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6803","nonce":1974618076}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6805","nonce":1974618076}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":1974618076},{"type":"v1","addr":"1
92.168.123.107:6809","nonce":1974618076}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":1974618076},{"type":"v1","addr":"192.168.123.107:6807","nonce":1974618076}]},"public_addr":"192.168.123.107:6803/1974618076","cluster_addr":"192.168.123.107:6805/1974618076","heartbeat_back_addr":"192.168.123.107:6809/1974618076","heartbeat_front_addr":"192.168.123.107:6807/1974618076","state":["exists","up"]},{"osd":1,"uuid":"7431b664-9dad-4df6-ac1e-d480eeb7d102","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":15,"up_thru":28,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6811","nonce":1875790749}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6813","nonce":1875790749}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6817","nonce":1875790749}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":1875790749},{"type":"v1","addr":"192.168.123.107:6815","nonce":1875790749}]},"public_addr":"192.168.123.107:6811/1875790749","cluster_addr":"192.168.123.107:6813/1875790749","heartbeat_back_addr":"192.168.123.107:6817/1875790749","heartbeat_front_addr":"192.168.123.107:6815/1875790749","state":["exists","up"]},{"osd":2,"uuid":"91efe4fd-879b-433f-ab7e-d98ab2676ea3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6819","nonce":436827330}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6821","nonce":436827330}
]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6825","nonce":436827330}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":436827330},{"type":"v1","addr":"192.168.123.107:6823","nonce":436827330}]},"public_addr":"192.168.123.107:6819/436827330","cluster_addr":"192.168.123.107:6821/436827330","heartbeat_back_addr":"192.168.123.107:6825/436827330","heartbeat_front_addr":"192.168.123.107:6823/436827330","state":["exists","up"]},{"osd":3,"uuid":"5baaeff4-3fa0-43d6-81ca-ff28de0673a4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":27,"up_thru":31,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6800","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6801","nonce":2906542839}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6802","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6803","nonce":2906542839}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6806","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6807","nonce":2906542839}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6804","nonce":2906542839},{"type":"v1","addr":"192.168.123.110:6805","nonce":2906542839}]},"public_addr":"192.168.123.110:6801/2906542839","cluster_addr":"192.168.123.110:6803/2906542839","heartbeat_back_addr":"192.168.123.110:6807/2906542839","heartbeat_front_addr":"192.168.123.110:6805/2906542839","state":["exists","up"]},{"osd":4,"uuid":"5ced8315-7f95-41be-88c5-e29628c579a6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6808","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6809","nonce":1620701058}]},"cluster_addrs":{"addrvec":[{"type
":"v2","addr":"192.168.123.110:6810","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6811","nonce":1620701058}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6814","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6815","nonce":1620701058}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6812","nonce":1620701058},{"type":"v1","addr":"192.168.123.110:6813","nonce":1620701058}]},"public_addr":"192.168.123.110:6809/1620701058","cluster_addr":"192.168.123.110:6811/1620701058","heartbeat_back_addr":"192.168.123.110:6815/1620701058","heartbeat_front_addr":"192.168.123.110:6813/1620701058","state":["exists","up"]},{"osd":5,"uuid":"bb0b085e-9ae4-46b4-9158-53e2edb3b952","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6816","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6817","nonce":1579901954}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6818","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6819","nonce":1579901954}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6822","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6823","nonce":1579901954}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6820","nonce":1579901954},{"type":"v1","addr":"192.168.123.110:6821","nonce":1579901954}]},"public_addr":"192.168.123.110:6817/1579901954","cluster_addr":"192.168.123.110:6819/1579901954","heartbeat_back_addr":"192.168.123.110:6823/1579901954","heartbeat_front_addr":"192.168.123.110:6821/1579901954","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:43:51.872016+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_pro
bability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:02.436528+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:12.399428+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:21.060426+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:29.970569+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:44:38.192755+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/3147220177":"2026-03-10T20:43:19.070116+0000","192.168.123.107:0/1632839730":"2026-03-10T20:42:33.181797+0000","192.168.123.107:6800/4166937886":"2026-03-10T20:43:19.070116+0000","192.168.123.107:0/860262504":"2026-03-10T20:42:33.181797+0000","192.168.123.107:6800/1859043218":"2026-03-10T20:42:33.181797+0000","192.168.123.107:0/2967433424":"2026-03-10T20:42:43.298801+0000","192.168.123.107:0/216055426":"2026-03-10T20:42:33.181797+0000","192.168.123.107:6801/1859043218":"2026-03-10T20:42:33.181797+0000","192.168.123.107:0/4075654402":"2026-03-10T20:42:43.298801+0000","192.168.123.107:0/1007394955":"2026-03-10T20:42:43.298801+0000","192.168.123.107:0/1552256432":"2026-03-10T20:43:19.070116+0000","192.168.123.107:6801/810998986":"2026-03-10T20:42:43.298801+0000","192.168.123.107:6800/810998986":"2026-03-10T20:42:43.298801+0000","192.168.123.107:6801/4166937886":"2026-03-10T20:43:19.070116+0000","192.168.123.107:0/1594973564":"2026
-03-10T20:43:19.070116+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T20:44:47.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 -- 192.168.123.107:0/2627859679 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7efea00761c0 msgr2=0x7efea0078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:47.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7efea00761c0 0x7efea0078680 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7efeb8004500 tx=0x7efeb8009290 comp rx=0 tx=0).stop 2026-03-09T20:44:47.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 -- 192.168.123.107:0/2627859679 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efec80ffdb0 msgr2=0x7efec8101d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:47.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efec80ffdb0 0x7efec8101d30 secure :-1 s=READY pgs=210 cs=0 l=1 rev1=1 crypto rx=0x7efeb002f790 tx=0x7efeb0004300 comp rx=0 tx=0).stop 2026-03-09T20:44:47.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 -- 192.168.123.107:0/2627859679 shutdown_connections 2026-03-09T20:44:47.859 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7efea00761c0 0x7efea0078680 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:47.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efec80ffdb0 0x7efec8101d30 unknown :-1 s=CLOSED pgs=210 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:47.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 --2- 192.168.123.107:0/2627859679 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efec80ff490 0x7efec81017f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:47.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 -- 192.168.123.107:0/2627859679 >> 192.168.123.107:0/2627859679 conn(0x7efec80fb340 msgr2=0x7efec80fbc10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:47.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 -- 192.168.123.107:0/2627859679 shutdown_connections 2026-03-09T20:44:47.859 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:47.859+0000 7efecfcec640 1 -- 192.168.123.107:0/2627859679 wait complete. 
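The harness above confirms the cluster state ("all up!") and then runs `ceph osd dump --format=json`, whose output lists `up`/`in` flags per OSD. A minimal sketch of how such a dump can be checked programmatically is below; the trimmed `sample_dump` is a hypothetical stand-in for the full JSON in the log, and `all_osds_up_and_in` is an illustrative helper, not part of teuthology.

```python
import json

# Hypothetical, trimmed stand-in for the osd dump JSON shown in the log.
sample_dump = json.dumps({
    "epoch": 37,
    "osds": [
        {"osd": 0, "up": 1, "in": 1, "state": ["exists", "up"]},
        {"osd": 1, "up": 1, "in": 1, "state": ["exists", "up"]},
    ],
})

def all_osds_up_and_in(dump_json: str) -> bool:
    """Return True if every OSD in the dump is marked both up and in."""
    dump = json.loads(dump_json)
    return all(o["up"] == 1 and o["in"] == 1 for o in dump["osds"])

print(all_osds_up_and_in(sample_dump))  # True when every OSD is up and in
```

This mirrors the wait loop the harness performs before issuing the per-OSD `flush_pg_stats` commands that follow.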
2026-03-09T20:44:47.904 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph tell osd.0 flush_pg_stats
2026-03-09T20:44:47.904 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph tell osd.1 flush_pg_stats
2026-03-09T20:44:47.904 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph tell osd.2 flush_pg_stats
2026-03-09T20:44:47.904 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph tell osd.3 flush_pg_stats
2026-03-09T20:44:47.904 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph tell osd.4 flush_pg_stats
2026-03-09T20:44:47.904 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph tell osd.5 flush_pg_stats
2026-03-09T20:44:48.297 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:44:48.440 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:44:48.449 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:44:48.450 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:44:48.451 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:44:48.678 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:48 vm07 ceph-mon[49120]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail
2026-03-09T20:44:48.678 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:48 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2627859679' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T20:44:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:48 vm10 ceph-mon[57011]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail
2026-03-09T20:44:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:48 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/2627859679' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T20:44:48.800 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:44:48.936 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.934+0000 7f8e6308f640 1 -- 192.168.123.107:0/1233652537 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8e5c100780 msgr2=0x7f8e5c100be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:44:48.936 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.934+0000 7f8e6308f640 1 --2- 192.168.123.107:0/1233652537 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8e5c100780 0x7f8e5c100be0 secure :-1 s=READY pgs=211 cs=0 l=1 rev1=1 crypto rx=0x7f8e5000b0a0 tx=0x7f8e5002f4a0 comp rx=0 tx=0).stop
2026-03-09T20:44:48.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.938+0000 7f8e6308f640 1 -- 192.168.123.107:0/1233652537 shutdown_connections
2026-03-09T20:44:48.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.938+0000 7f8e6308f640 1 --2- 192.168.123.107:0/1233652537 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8e5c100780 0x7f8e5c100be0 unknown :-1 s=CLOSED pgs=211 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:48.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.938+0000 7f8e6308f640 1 --2- 192.168.123.107:0/1233652537 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8e5c106780 0x7f8e5c106b60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:48.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.938+0000 7f8e6308f640 1 -- 192.168.123.107:0/1233652537 >> 192.168.123.107:0/1233652537 conn(0x7f8e5c0fc460 msgr2=0x7f8e5c0fe880 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:44:48.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.938+0000 7f8e6308f640 1 -- 192.168.123.107:0/1233652537 shutdown_connections
2026-03-09T20:44:48.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.938+0000 7f8e6308f640 1 -- 192.168.123.107:0/1233652537 wait complete.
2026-03-09T20:44:48.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.938+0000 7f8e6308f640 1 Processor -- start
2026-03-09T20:44:48.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.938+0000 7f8e6308f640 1 -- start start
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.938+0000 7f8e6308f640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8e5c100780 0x7f8e5c10f7a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e6308f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8e5c106780 0x7f8e5c10fce0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e6308f640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8e5c1102b0 con 0x7f8e5c106780
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e6308f640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8e5c110420 con 0x7f8e5c100780
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e60e04640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8e5c100780 0x7f8e5c10f7a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e60e04640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8e5c100780 0x7f8e5c10f7a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60460/0 (socket says 192.168.123.107:60460)
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e60e04640 1 -- 192.168.123.107:0/2926000851 learned_addr learned my addr 192.168.123.107:0/2926000851 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e5bfff640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8e5c106780 0x7f8e5c10fce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e60e04640 1 -- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8e5c106780 msgr2=0x7f8e5c10fce0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e60e04640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8e5c106780 0x7f8e5c10fce0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:48.939 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e60e04640 1 -- 192.168.123.107:0/2926000851 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8e50009d00 con 0x7f8e5c100780
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.939+0000 7f8e60e04640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8e5c100780 0x7f8e5c10f7a0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f8e4800b700 tx=0x7f8e4800bbd0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.941+0000 7f8e59ffb640 1 -- 192.168.123.107:0/2926000851 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8e4800be90 con 0x7f8e5c100780
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.941+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8e5c114140 con 0x7f8e5c100780
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.941+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8e5c114690 con 0x7f8e5c100780
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.941+0000 7f8e59ffb640 1 -- 192.168.123.107:0/2926000851 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f8e48002ba0 con 0x7f8e5c100780
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.941+0000 7f8e59ffb640 1 -- 192.168.123.107:0/2926000851 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8e4800cab0 con 0x7f8e5c100780
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.943+0000 7f8e59ffb640 1 -- 192.168.123.107:0/2926000851 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f8e48004380 con 0x7f8e5c100780
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.944+0000 7f8e59ffb640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8e40076290 0x7f8e40078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.944+0000 7f8e5bfff640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8e40076290 0x7f8e40078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.944+0000 7f8e59ffb640 1 -- 192.168.123.107:0/2926000851 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f8e480968c0 con 0x7f8e5c100780
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.944+0000 7f8e6308f640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076] conn(0x7f8e240015e0 0x7f8e24003aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.944+0000 7f8e61605640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076] conn(0x7f8e240015e0 0x7f8e24003aa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:48.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.945+0000 7f8e5bfff640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8e40076290 0x7f8e40078750 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f8e5c110d00 tx=0x7f8e5003a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:44:48.947 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.945+0000 7f8e61605640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076] conn(0x7f8e240015e0 0x7f8e24003aa0 crc :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:44:48.947 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.945+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 --> [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f8e24006c40 con 0x7f8e240015e0
2026-03-09T20:44:48.947 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.945+0000 7f8e59ffb640 1 -- 192.168.123.107:0/2926000851 <== osd.0 v2:192.168.123.107:6802/1974618076 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+26930 (crc 0 0 0) 0x7f8e24006c40 con 0x7f8e240015e0
2026-03-09T20:44:48.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.954+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 --> [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f8e24005d20 con 0x7f8e240015e0
2026-03-09T20:44:48.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e59ffb640 1 -- 192.168.123.107:0/2926000851 <== osd.0 v2:192.168.123.107:6802/1974618076 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7f8e24005d20 con 0x7f8e240015e0
2026-03-09T20:44:48.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076] conn(0x7f8e240015e0 msgr2=0x7f8e24003aa0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:44:48.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076] conn(0x7f8e240015e0 0x7f8e24003aa0 crc :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:48.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8e40076290 msgr2=0x7f8e40078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8e40076290 0x7f8e40078750 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f8e5c110d00 tx=0x7f8e5003a040 comp rx=0 tx=0).stop
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8e5c100780 msgr2=0x7f8e5c10f7a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8e5c100780 0x7f8e5c10f7a0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f8e4800b700 tx=0x7f8e4800bbd0 comp rx=0 tx=0).stop
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 shutdown_connections
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6802/1974618076,v1:192.168.123.107:6803/1974618076] conn(0x7f8e240015e0 0x7f8e24003aa0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f8e40076290 0x7f8e40078750 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8e5c106780 0x7f8e5c10fce0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 --2- 192.168.123.107:0/2926000851 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8e5c100780 0x7f8e5c10f7a0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.955+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 >> 192.168.123.107:0/2926000851 conn(0x7f8e5c0fc460 msgr2=0x7f8e5c0fc840 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.956+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 shutdown_connections
2026-03-09T20:44:48.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:48.956+0000 7f8e6308f640 1 -- 192.168.123.107:0/2926000851 wait complete.
2026-03-09T20:44:49.098 INFO:teuthology.orchestra.run.vm07.stdout:42949672972
2026-03-09T20:44:49.098 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd last-stat-seq osd.0
2026-03-09T20:44:49.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.236+0000 7fee56ca6640 1 -- 192.168.123.107:0/3177981423 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee50072340 msgr2=0x7fee50072720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:44:49.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.236+0000 7fee56ca6640 1 --2- 192.168.123.107:0/3177981423 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee50072340 0x7fee50072720 secure :-1 s=READY pgs=212 cs=0 l=1 rev1=1 crypto rx=0x7fee440099b0 tx=0x7fee4402f240 comp rx=0 tx=0).stop
2026-03-09T20:44:49.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.274+0000 7fee56ca6640 1 -- 192.168.123.107:0/3177981423 shutdown_connections
2026-03-09T20:44:49.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.274+0000 7fee56ca6640 1 --2- 192.168.123.107:0/3177981423 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fee50072cf0 0x7fee5010cd90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:49.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.274+0000 7fee56ca6640 1 --2- 192.168.123.107:0/3177981423 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee50072340 0x7fee50072720 unknown :-1 s=CLOSED pgs=212 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:49.276 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.274+0000 7fee56ca6640 1 -- 192.168.123.107:0/3177981423 >> 192.168.123.107:0/3177981423 conn(0x7fee5006b7f0 msgr2=0x7fee5006bc00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:44:49.284 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.280+0000 7fee56ca6640 1 -- 192.168.123.107:0/3177981423 shutdown_connections
2026-03-09T20:44:49.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.283+0000 7fee56ca6640 1 -- 192.168.123.107:0/3177981423 wait complete.
2026-03-09T20:44:49.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.283+0000 7fee56ca6640 1 Processor -- start
2026-03-09T20:44:49.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.283+0000 7fee56ca6640 1 -- start start
2026-03-09T20:44:49.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.283+0000 7fee56ca6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee50072340 0x7fee50112c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:49.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.283+0000 7fee56ca6640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fee50072cf0 0x7fee501131a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:49.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.283+0000 7fee56ca6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fee50113880 con 0x7fee50072340
2026-03-09T20:44:49.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.283+0000 7fee56ca6640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fee501b9cf0 con 0x7fee50072cf0
2026-03-09T20:44:49.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.283+0000 7fee4ffff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fee50072cf0 0x7fee501131a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:49.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.283+0000 7fee4ffff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fee50072cf0 0x7fee501131a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60496/0 (socket says 192.168.123.107:60496)
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.283+0000 7fee4ffff640 1 -- 192.168.123.107:0/1169379014 learned_addr learned my addr 192.168.123.107:0/1169379014 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.292+0000 7fee54a1b640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee50072340 0x7fee50112c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.292+0000 7fee4ffff640 1 -- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee50072340 msgr2=0x7fee50112c60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.292+0000 7fee4ffff640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee50072340 0x7fee50112c60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.292+0000 7fee4ffff640 1 -- 192.168.123.107:0/1169379014 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fee44009660 con 0x7fee50072cf0
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.293+0000 7fee54a1b640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee50072340 0x7fee50112c60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.293+0000 7fee4ffff640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fee50072cf0 0x7fee501131a0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fee3800e970 tx=0x7fee3800ee40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.293+0000 7fee4dffb640 1 -- 192.168.123.107:0/1169379014 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fee3800cd10 con 0x7fee50072cf0
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.294+0000 7fee56ca6640 1 -- 192.168.123.107:0/1169379014 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fee501b9ef0 con 0x7fee50072cf0
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.294+0000 7fee56ca6640 1 -- 192.168.123.107:0/1169379014 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fee501ba3c0 con 0x7fee50072cf0
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.294+0000 7fee4dffb640 1 -- 192.168.123.107:0/1169379014 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fee3800ce70 con 0x7fee50072cf0
2026-03-09T20:44:49.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.294+0000 7fee4dffb640 1 -- 192.168.123.107:0/1169379014 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fee38010640 con 0x7fee50072cf0
2026-03-09T20:44:49.299 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.296+0000 7fee4dffb640 1 -- 192.168.123.107:0/1169379014 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fee380107a0 con 0x7fee50072cf0
2026-03-09T20:44:49.299 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.298+0000 7fee4dffb640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fee2c076290 0x7fee2c078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:49.300 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.299+0000 7f13d6a96640 1 -- 192.168.123.107:0/3721295395 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f13d0072a40 msgr2=0x7f13d010ca90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:44:49.300 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.299+0000 7f13d6a96640 1 --2- 192.168.123.107:0/3721295395 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f13d0072a40 0x7f13d010ca90 secure :-1 s=READY pgs=213 cs=0 l=1 rev1=1 crypto rx=0x7f13c40099b0 tx=0x7f13c402f240 comp rx=0 tx=0).stop
2026-03-09T20:44:49.300 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.300+0000 7f13d6a96640 1 -- 192.168.123.107:0/3721295395 shutdown_connections
2026-03-09T20:44:49.300 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.300+0000 7f13d6a96640 1 --2- 192.168.123.107:0/3721295395 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f13d0072a40 0x7f13d010ca90 unknown :-1 s=CLOSED pgs=213 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:49.300 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.300+0000 7f13d6a96640 1 --2- 192.168.123.107:0/3721295395 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f13d0072120 0x7f13d0072500 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:49.300 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.300+0000 7f13d6a96640 1 -- 192.168.123.107:0/3721295395 >> 192.168.123.107:0/3721295395 conn(0x7f13d006c7d0 msgr2=0x7f13d006cbe0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:44:49.301 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.300+0000 7fee54a1b640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fee2c076290 0x7fee2c078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:49.302 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.302+0000 7fee4dffb640 1 -- 192.168.123.107:0/1169379014 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7fee38014070 con 0x7fee50072cf0
2026-03-09T20:44:49.302 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.301+0000 7f13d6a96640 1 -- 192.168.123.107:0/3721295395 shutdown_connections
2026-03-09T20:44:49.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.302+0000 7f13d6a96640 1 -- 192.168.123.107:0/3721295395 wait complete.
2026-03-09T20:44:49.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.302+0000 7f13d6a96640 1 Processor -- start
2026-03-09T20:44:49.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d6a96640 1 -- start start
2026-03-09T20:44:49.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d6a96640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f13d01b5eb0 0x7f13d01b6290 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:49.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d6a96640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f13d01b67d0 0x7f13d0112bb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:49.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d6a96640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13d0113290 con 0x7f13d01b5eb0
2026-03-09T20:44:49.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d6a96640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13d01133d0 con 0x7f13d01b67d0
2026-03-09T20:44:49.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d5a94640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f13d01b5eb0 0x7f13d01b6290 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:49.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d5a94640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f13d01b5eb0 0x7f13d01b6290 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:47568/0 (socket says 192.168.123.107:47568)
2026-03-09T20:44:49.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d5a94640 1 -- 192.168.123.107:0/3457899422 learned_addr learned my addr 192.168.123.107:0/3457899422 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:44:49.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d5a94640 1 -- 192.168.123.107:0/3457899422 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f13d01b67d0 msgr2=0x7f13d0112bb0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down
2026-03-09T20:44:49.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d5a94640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f13d01b67d0 0x7f13d0112bb0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:44:49.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.303+0000 7f13d5a94640 1 -- 192.168.123.107:0/3457899422 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f13c4009660 con 0x7f13d01b5eb0
2026-03-09T20:44:49.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.304+0000 7f13d5a94640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f13d01b5eb0 0x7f13d01b6290 secure :-1 s=READY pgs=214 cs=0 l=1 rev1=1 crypto rx=0x7f13c000b470 tx=0x7f13c000b940 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:44:49.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.304+0000 7f13beffd640 1 -- 192.168.123.107:0/3457899422 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f13c0004280 con 0x7f13d01b5eb0
2026-03-09T20:44:49.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.304+0000 7f13d6a96640 1 -- 192.168.123.107:0/3457899422 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f13d01136b0 con 0x7f13d01b5eb0
2026-03-09T20:44:49.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.304+0000 7f13d6a96640 1 -- 192.168.123.107:0/3457899422 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f13d0113a30 con 0x7f13d01b5eb0
2026-03-09T20:44:49.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.306+0000 7fee54a1b640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fee2c076290 0x7fee2c078750 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7fee44002410 tx=0x7fee4403a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:44:49.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.309+0000 7fee56ca6640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330] conn(0x7fee180015e0 0x7fee18003aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:49.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.309+0000 7fee5521c640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330] conn(0x7fee180015e0 0x7fee18003aa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:44:49.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.309+0000 7fee56ca6640 1 -- 192.168.123.107:0/1169379014 --> [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7fee18006c40 con 0x7fee180015e0
2026-03-09T20:44:49.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.309+0000 7f13beffd640 1 -- 192.168.123.107:0/3457899422 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f13c00043e0 con 0x7f13d01b5eb0
2026-03-09T20:44:49.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.309+0000 7f13beffd640 1 -- 192.168.123.107:0/3457899422 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f13c0010a70 con 0x7f13d01b5eb0
2026-03-09T20:44:49.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.310+0000 7f13bcff9640 1 -- 192.168.123.107:0/3457899422 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_get_version(what=osdmap handle=1) v1 -- 0x7f13d011cca0 con 0x7f13d01b5eb0
2026-03-09T20:44:49.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.310+0000 7f13beffd640 1 -- 192.168.123.107:0/3457899422 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f13c0010cb0 con 0x7f13d01b5eb0
2026-03-09T20:44:49.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.311+0000 7fee5521c640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330] conn(0x7fee180015e0 0x7fee18003aa0 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:44:49.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.311+0000 7fee4dffb640 1 -- 192.168.123.107:0/1169379014 <== osd.2 v2:192.168.123.107:6818/436827330 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+26930 (crc 0 0 0) 0x7fee18006c40 con 0x7fee180015e0
2026-03-09T20:44:49.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.311+0000 7f13beffd640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f13a0076360 0x7f13a0078820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:44:49.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.311+0000 7f13d5293640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f13a0076360 0x7f13a0078820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.312+0000 7f13d5293640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f13a0076360 0x7f13a0078820 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f13d0119f50 tx=0x7f13c40023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:49.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.314+0000 7f13beffd640 1 -- 192.168.123.107:0/3457899422 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f13c00979d0 con 0x7f13d01b5eb0 2026-03-09T20:44:49.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.314+0000 7f13beffd640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954] conn(0x7f13a007be10 0x7f13a007e230 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.320 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.314+0000 7f13beffd640 1 -- 192.168.123.107:0/3457899422 --> [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f13a007e8e0 con 0x7f13a007be10 2026-03-09T20:44:49.320 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.314+0000 7f13beffd640 1 -- 192.168.123.107:0/3457899422 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_get_version_reply(handle=1 version=37) v2 ==== 24+0+0 
(secure 0 0 0) 0x7f13c0002dc0 con 0x7f13d01b5eb0 2026-03-09T20:44:49.320 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.314+0000 7f13d6295640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954] conn(0x7f13a007be10 0x7f13a007e230 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.320 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.315+0000 7f13d6295640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954] conn(0x7f13a007be10 0x7f13a007e230 crc :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.5 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:49.320 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.316+0000 7f13beffd640 1 -- 192.168.123.107:0/3457899422 <== osd.5 v2:192.168.123.110:6816/1579901954 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+26930 (crc 0 0 0) 0x7f13a007e8e0 con 0x7f13a007be10 2026-03-09T20:44:49.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.329+0000 7fee56ca6640 1 -- 192.168.123.107:0/1169379014 --> [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7fee18005d20 con 0x7fee180015e0 2026-03-09T20:44:49.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.333+0000 7fee4dffb640 1 -- 192.168.123.107:0/1169379014 <== osd.2 v2:192.168.123.107:6818/436827330 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7fee18005d20 con 0x7fee180015e0 2026-03-09T20:44:49.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.333+0000 7fee1f7fe640 1 -- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330] conn(0x7fee180015e0 msgr2=0x7fee18003aa0 crc :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T20:44:49.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.333+0000 7fee1f7fe640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330] conn(0x7fee180015e0 0x7fee18003aa0 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.333+0000 7fee1f7fe640 1 -- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fee2c076290 msgr2=0x7fee2c078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.334+0000 7fee1f7fe640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fee2c076290 0x7fee2c078750 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7fee44002410 tx=0x7fee4403a040 comp rx=0 tx=0).stop 2026-03-09T20:44:49.334 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.334+0000 7fee1f7fe640 1 -- 192.168.123.107:0/1169379014 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fee50072cf0 msgr2=0x7fee501131a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.334 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.334+0000 7fee1f7fe640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fee50072cf0 0x7fee501131a0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fee3800e970 tx=0x7fee3800ee40 comp rx=0 tx=0).stop 2026-03-09T20:44:49.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.336+0000 7fee1f7fe640 1 -- 192.168.123.107:0/1169379014 shutdown_connections 2026-03-09T20:44:49.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.336+0000 7fee1f7fe640 1 --2- 192.168.123.107:0/1169379014 >> 
[v2:192.168.123.107:6818/436827330,v1:192.168.123.107:6819/436827330] conn(0x7fee180015e0 0x7fee18003aa0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.336+0000 7fee1f7fe640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fee2c076290 0x7fee2c078750 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.337+0000 7fee1f7fe640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fee50072cf0 0x7fee501131a0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.337+0000 7fee1f7fe640 1 --2- 192.168.123.107:0/1169379014 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fee50072340 0x7fee50112c60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.337+0000 7fee1f7fe640 1 -- 192.168.123.107:0/1169379014 >> 192.168.123.107:0/1169379014 conn(0x7fee5006b7f0 msgr2=0x7fee5010df40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:49.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.337+0000 7fee1f7fe640 1 -- 192.168.123.107:0/1169379014 shutdown_connections 2026-03-09T20:44:49.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.337+0000 7fee1f7fe640 1 -- 192.168.123.107:0/1169379014 wait complete. 
2026-03-09T20:44:49.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.341+0000 7f54834f4640 1 -- 192.168.123.107:0/501908413 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f547c072a40 msgr2=0x7f547c10ca90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.341+0000 7f54834f4640 1 --2- 192.168.123.107:0/501908413 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f547c072a40 0x7f547c10ca90 secure :-1 s=READY pgs=215 cs=0 l=1 rev1=1 crypto rx=0x7f5470009a00 tx=0x7f547002f270 comp rx=0 tx=0).stop 2026-03-09T20:44:49.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.342+0000 7f54834f4640 1 -- 192.168.123.107:0/501908413 shutdown_connections 2026-03-09T20:44:49.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.342+0000 7f54834f4640 1 --2- 192.168.123.107:0/501908413 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f547c072a40 0x7f547c10ca90 unknown :-1 s=CLOSED pgs=215 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.342+0000 7f54834f4640 1 --2- 192.168.123.107:0/501908413 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f547c072120 0x7f547c072500 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.342+0000 7f54834f4640 1 -- 192.168.123.107:0/501908413 >> 192.168.123.107:0/501908413 conn(0x7f547c06c7d0 msgr2=0x7f547c06cbe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:49.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.342+0000 7f54834f4640 1 -- 192.168.123.107:0/501908413 shutdown_connections 2026-03-09T20:44:49.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.342+0000 7f54834f4640 1 -- 192.168.123.107:0/501908413 wait 
complete. 2026-03-09T20:44:49.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.342+0000 7f54834f4640 1 Processor -- start 2026-03-09T20:44:49.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.347+0000 7f54834f4640 1 -- start start 2026-03-09T20:44:49.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.347+0000 7f54834f4640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f547c072120 0x7f547c1a74b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.347+0000 7f54834f4640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f547c1a79f0 0x7f547c1abdf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.347+0000 7f54834f4640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f547c1a7ff0 con 0x7f547c072120 2026-03-09T20:44:49.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.347+0000 7f54834f4640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f547c1a8160 con 0x7f547c1a79f0 2026-03-09T20:44:49.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.347+0000 7f13bcff9640 1 -- 192.168.123.107:0/3457899422 --> [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f13d0108570 con 0x7f13a007be10 2026-03-09T20:44:49.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13beffd640 1 -- 192.168.123.107:0/3457899422 <== osd.5 v2:192.168.123.110:6816/1579901954 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+12 (crc 0 0 0) 0x7f13d0108570 con 0x7f13a007be10 2026-03-09T20:44:49.348 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 
7f13bcff9640 1 -- 192.168.123.107:0/3457899422 >> [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954] conn(0x7f13a007be10 msgr2=0x7f13a007e230 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.348 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13bcff9640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954] conn(0x7f13a007be10 0x7f13a007e230 crc :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.348 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13bcff9640 1 -- 192.168.123.107:0/3457899422 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f13a0076360 msgr2=0x7f13a0078820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.348 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13bcff9640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f13a0076360 0x7f13a0078820 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f13d0119f50 tx=0x7f13c40023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13bcff9640 1 -- 192.168.123.107:0/3457899422 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f13d01b5eb0 msgr2=0x7f13d01b6290 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13bcff9640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f13d01b5eb0 0x7f13d01b6290 secure :-1 s=READY pgs=214 cs=0 l=1 rev1=1 crypto rx=0x7f13c000b470 tx=0x7f13c000b940 comp rx=0 tx=0).stop 2026-03-09T20:44:49.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13bcff9640 1 -- 
192.168.123.107:0/3457899422 shutdown_connections 2026-03-09T20:44:49.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13bcff9640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.110:6816/1579901954,v1:192.168.123.110:6817/1579901954] conn(0x7f13a007be10 0x7f13a007e230 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13bcff9640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f13a0076360 0x7f13a0078820 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13bcff9640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f13d01b67d0 0x7f13d0112bb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.348+0000 7f13bcff9640 1 --2- 192.168.123.107:0/3457899422 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f13d01b5eb0 0x7f13d01b6290 secure :-1 s=CLOSED pgs=214 cs=0 l=1 rev1=1 crypto rx=0x7f13c000b470 tx=0x7f13c000b940 comp rx=0 tx=0).stop 2026-03-09T20:44:49.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.349+0000 7f13bcff9640 1 -- 192.168.123.107:0/3457899422 >> 192.168.123.107:0/3457899422 conn(0x7f13d006c7d0 msgr2=0x7f13d0070eb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:49.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.351+0000 7f54824f2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f547c072120 0x7f547c1a74b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T20:44:49.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.351+0000 7f13bcff9640 1 -- 192.168.123.107:0/3457899422 shutdown_connections 2026-03-09T20:44:49.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.351+0000 7f13bcff9640 1 -- 192.168.123.107:0/3457899422 wait complete. 2026-03-09T20:44:49.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.351+0000 7f54824f2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f547c072120 0x7f547c1a74b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:47584/0 (socket says 192.168.123.107:47584) 2026-03-09T20:44:49.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.351+0000 7f54824f2640 1 -- 192.168.123.107:0/1157039061 learned_addr learned my addr 192.168.123.107:0/1157039061 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:49.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.352+0000 7f54824f2640 1 -- 192.168.123.107:0/1157039061 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f547c1a79f0 msgr2=0x7f547c1abdf0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:44:49.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.352+0000 7f54824f2640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f547c1a79f0 0x7f547c1abdf0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.352+0000 7f54824f2640 1 -- 192.168.123.107:0/1157039061 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5470009660 con 0x7f547c072120 2026-03-09T20:44:49.354 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.354+0000 7f54824f2640 1 --2- 
192.168.123.107:0/1157039061 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f547c072120 0x7f547c1a74b0 secure :-1 s=READY pgs=216 cs=0 l=1 rev1=1 crypto rx=0x7f546c00da30 tx=0x7f546c00df00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:49.354 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.354+0000 7f546b7fe640 1 -- 192.168.123.107:0/1157039061 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f546c00bb80 con 0x7f547c072120 2026-03-09T20:44:49.354 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.354+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f547c1ac3f0 con 0x7f547c072120 2026-03-09T20:44:49.354 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.354+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f547c1ac940 con 0x7f547c072120 2026-03-09T20:44:49.355 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.355+0000 7f546b7fe640 1 -- 192.168.123.107:0/1157039061 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f546c004590 con 0x7f547c072120 2026-03-09T20:44:49.355 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.355+0000 7f546b7fe640 1 -- 192.168.123.107:0/1157039061 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f546c010460 con 0x7f547c072120 2026-03-09T20:44:49.357 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.357+0000 7f546b7fe640 1 -- 192.168.123.107:0/1157039061 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f546c0105c0 con 0x7f547c072120 2026-03-09T20:44:49.361 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.360+0000 
7f546b7fe640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f544c076290 0x7f544c078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.361+0000 7f5481cf1640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f544c076290 0x7f544c078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.362+0000 7f5481cf1640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f544c076290 0x7f544c078750 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f5470009a00 tx=0x7f54700023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:49.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.362+0000 7f546b7fe640 1 -- 192.168.123.107:0/1157039061 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f546c098690 con 0x7f547c072120 2026-03-09T20:44:49.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.362+0000 7f54834f4640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749] conn(0x7f54480015e0 0x7f5448003aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.363+0000 7f5482cf3640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749] conn(0x7f54480015e0 0x7f5448003aa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 
tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.368+0000 7f5482cf3640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749] conn(0x7f54480015e0 0x7f5448003aa0 crc :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:49.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.368+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 --> [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f5448006c40 con 0x7f54480015e0 2026-03-09T20:44:49.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.369+0000 7f546b7fe640 1 -- 192.168.123.107:0/1157039061 <== osd.1 v2:192.168.123.107:6810/1875790749 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+26930 (crc 0 0 0) 0x7f5448006c40 con 0x7f54480015e0 2026-03-09T20:44:49.399 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.397+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 --> [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f5448005d20 con 0x7f54480015e0 2026-03-09T20:44:49.403 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.400+0000 7f546b7fe640 1 -- 192.168.123.107:0/1157039061 <== osd.1 v2:192.168.123.107:6810/1875790749 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7f5448005d20 con 0x7f54480015e0 2026-03-09T20:44:49.408 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.408+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749] conn(0x7f54480015e0 msgr2=0x7f5448003aa0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.429 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.408+0000 7f54834f4640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749] conn(0x7f54480015e0 0x7f5448003aa0 crc :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f544c076290 msgr2=0x7f544c078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f544c076290 0x7f544c078750 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f5470009a00 tx=0x7f54700023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f547c072120 msgr2=0x7f547c1a74b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f547c072120 0x7f547c1a74b0 secure :-1 s=READY pgs=216 cs=0 l=1 rev1=1 crypto rx=0x7f546c00da30 tx=0x7f546c00df00 comp rx=0 tx=0).stop 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 shutdown_connections 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 --2- 192.168.123.107:0/1157039061 >> 
[v2:192.168.123.107:6810/1875790749,v1:192.168.123.107:6811/1875790749] conn(0x7f54480015e0 0x7f5448003aa0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f544c076290 0x7f544c078750 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f547c1a79f0 0x7f547c1abdf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 --2- 192.168.123.107:0/1157039061 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f547c072120 0x7f547c1a74b0 unknown :-1 s=CLOSED pgs=216 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 >> 192.168.123.107:0/1157039061 conn(0x7f547c06c7d0 msgr2=0x7f547c070eb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:49.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 shutdown_connections 2026-03-09T20:44:49.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.409+0000 7f54834f4640 1 -- 192.168.123.107:0/1157039061 wait complete. 
2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.520+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2669678747 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ad8072af0 msgr2=0x7f1ad810ba70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.520+0000 7f1ae09b9640 1 --2- 192.168.123.107:0/2669678747 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ad8072af0 0x7f1ad810ba70 secure :-1 s=READY pgs=217 cs=0 l=1 rev1=1 crypto rx=0x7f1acc0098e0 tx=0x7f1acc02f1b0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.520+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2669678747 shutdown_connections 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.520+0000 7f1ae09b9640 1 --2- 192.168.123.107:0/2669678747 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ad8072af0 0x7f1ad810ba70 unknown :-1 s=CLOSED pgs=217 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.520+0000 7f1ae09b9640 1 --2- 192.168.123.107:0/2669678747 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ad8072140 0x7f1ad8072520 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.520+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2669678747 >> 192.168.123.107:0/2669678747 conn(0x7f1ad806c7e0 msgr2=0x7f1ad806cbf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.520+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2669678747 shutdown_connections 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.520+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2669678747 
wait complete. 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.521+0000 7f1ae09b9640 1 Processor -- start 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.521+0000 7f1ae09b9640 1 -- start start 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.521+0000 7f1ae09b9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ad8072140 0x7f1ad807d4b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.521+0000 7f1ae09b9640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ad80843d0 0x7f1ad807da10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.521+0000 7f1ae09b9640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1ad807e0f0 con 0x7f1ad8072140 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.521+0000 7f1ae09b9640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1ad807e260 con 0x7f1ad80843d0 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.521+0000 7f1addf2d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ad80843d0 0x7f1ad807da10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.521+0000 7f1addf2d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ad80843d0 0x7f1ad807da10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 
says I am v2:192.168.123.107:60528/0 (socket says 192.168.123.107:60528) 2026-03-09T20:44:49.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.521+0000 7f1addf2d640 1 -- 192.168.123.107:0/2733654251 learned_addr learned my addr 192.168.123.107:0/2733654251 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:49.526 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.524+0000 7f1ade72e640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ad8072140 0x7f1ad807d4b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.526 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.524+0000 7f1ade72e640 1 -- 192.168.123.107:0/2733654251 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ad80843d0 msgr2=0x7f1ad807da10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.526 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.524+0000 7f1ade72e640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ad80843d0 0x7f1ad807da10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.526 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.524+0000 7f1ade72e640 1 -- 192.168.123.107:0/2733654251 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1acc009590 con 0x7f1ad8072140 2026-03-09T20:44:49.526 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.524+0000 7f1ade72e640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ad8072140 0x7f1ad807d4b0 secure :-1 s=READY pgs=218 cs=0 l=1 rev1=1 crypto rx=0x7f1ad40049e0 tx=0x7f1ad400d4a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T20:44:49.526 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.525+0000 7f1acb7fe640 1 -- 192.168.123.107:0/2733654251 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1ad40090d0 con 0x7f1ad8072140 2026-03-09T20:44:49.527 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.525+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2733654251 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1ad8082030 con 0x7f1ad8072140 2026-03-09T20:44:49.527 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.525+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2733654251 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1ad8082580 con 0x7f1ad8072140 2026-03-09T20:44:49.527 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.526+0000 7f1acb7fe640 1 -- 192.168.123.107:0/2733654251 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1ad400f040 con 0x7f1ad8072140 2026-03-09T20:44:49.527 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.526+0000 7f1acb7fe640 1 -- 192.168.123.107:0/2733654251 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1ad4013600 con 0x7f1ad8072140 2026-03-09T20:44:49.527 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.527+0000 7f1acb7fe640 1 -- 192.168.123.107:0/2733654251 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f1ad40137a0 con 0x7f1ad8072140 2026-03-09T20:44:49.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.527+0000 7f1acb7fe640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1ab8076360 0x7f1ab8078820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.528 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.528+0000 7f1addf2d640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1ab8076360 0x7f1ab8078820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.528+0000 7f1addf2d640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1ab8076360 0x7f1ab8078820 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f1ad807ebe0 tx=0x7f1acc03a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:49.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.528+0000 7f1acb7fe640 1 -- 192.168.123.107:0/2733654251 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f1ad4097c50 con 0x7f1ad8072140 2026-03-09T20:44:49.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.528+0000 7f1ac97fa640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839] conn(0x7f1a9c0015e0 0x7f1a9c003aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.528+0000 7f1ac97fa640 1 -- 192.168.123.107:0/2733654251 --> [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f1a9c006c40 con 0x7f1a9c0015e0 2026-03-09T20:44:49.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.528+0000 7f1adef2f640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839] conn(0x7f1a9c0015e0 0x7f1a9c003aa0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.533 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.532+0000 7f1adef2f640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839] conn(0x7f1a9c0015e0 0x7f1a9c003aa0 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.3 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:49.533 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.533+0000 7f1acb7fe640 1 -- 192.168.123.107:0/2733654251 <== osd.3 v2:192.168.123.110:6800/2906542839 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+26930 (crc 0 0 0) 0x7f1a9c006c40 con 0x7f1a9c0015e0 2026-03-09T20:44:49.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.558+0000 7f1ac97fa640 1 -- 192.168.123.107:0/2733654251 --> [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f1a9c005d20 con 0x7f1a9c0015e0 2026-03-09T20:44:49.577 INFO:teuthology.orchestra.run.vm07.stdout:81604378633 2026-03-09T20:44:49.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.575+0000 7f1acb7fe640 1 -- 192.168.123.107:0/2733654251 <== osd.3 v2:192.168.123.110:6800/2906542839 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+12 (crc 0 0 0) 0x7f1a9c005d20 con 0x7f1a9c0015e0 2026-03-09T20:44:49.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.576+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2733654251 >> [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839] conn(0x7f1a9c0015e0 msgr2=0x7f1a9c003aa0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.576+0000 7f1ae09b9640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839] 
conn(0x7f1a9c0015e0 0x7f1a9c003aa0 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.577+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1ab8076360 msgr2=0x7f1ab8078820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.577+0000 7f1ae09b9640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1ab8076360 0x7f1ab8078820 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f1ad807ebe0 tx=0x7f1acc03a040 comp rx=0 tx=0).stop 2026-03-09T20:44:49.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.577+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ad8072140 msgr2=0x7f1ad807d4b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.577+0000 7f1ae09b9640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ad8072140 0x7f1ad807d4b0 secure :-1 s=READY pgs=218 cs=0 l=1 rev1=1 crypto rx=0x7f1ad40049e0 tx=0x7f1ad400d4a0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.578 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd last-stat-seq osd.2 2026-03-09T20:44:49.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.579+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2733654251 shutdown_connections 2026-03-09T20:44:49.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.579+0000 7f1ae09b9640 1 --2- 192.168.123.107:0/2733654251 >> 
[v2:192.168.123.110:6800/2906542839,v1:192.168.123.110:6801/2906542839] conn(0x7f1a9c0015e0 0x7f1a9c003aa0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.579+0000 7f1ae09b9640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1ab8076360 0x7f1ab8078820 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.579+0000 7f1ae09b9640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ad80843d0 0x7f1ad807da10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.579+0000 7f1ae09b9640 1 --2- 192.168.123.107:0/2733654251 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ad8072140 0x7f1ad807d4b0 unknown :-1 s=CLOSED pgs=218 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.579+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2733654251 >> 192.168.123.107:0/2733654251 conn(0x7f1ad806c7e0 msgr2=0x7f1ad810a9e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:49.592 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.579+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2733654251 shutdown_connections 2026-03-09T20:44:49.592 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.579+0000 7f1ae09b9640 1 -- 192.168.123.107:0/2733654251 wait complete. 
2026-03-09T20:44:49.593 INFO:teuthology.orchestra.run.vm07.stdout:154618822659 2026-03-09T20:44:49.593 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd last-stat-seq osd.5 2026-03-09T20:44:49.608 INFO:teuthology.orchestra.run.vm07.stdout:64424509450 2026-03-09T20:44:49.608 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd last-stat-seq osd.1 2026-03-09T20:44:49.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:49 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.733+0000 7f3c9e4db640 1 -- 192.168.123.107:0/588035383 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3c980720e0 msgr2=0x7f3c98072520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.733+0000 7f3c9e4db640 1 --2- 192.168.123.107:0/588035383 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3c980720e0 0x7f3c98072520 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f3c9000b0a0 tx=0x7f3c9002f4c0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.733+0000 7f3c9e4db640 1 -- 192.168.123.107:0/588035383 shutdown_connections 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.733+0000 7f3c9e4db640 1 --2- 192.168.123.107:0/588035383 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3c980720e0 0x7f3c98072520 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.743 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.733+0000 7f3c9e4db640 1 --2- 192.168.123.107:0/588035383 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3c9810d110 0x7f3c9810d4f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.733+0000 7f3c9e4db640 1 -- 192.168.123.107:0/588035383 >> 192.168.123.107:0/588035383 conn(0x7f3c9806b7f0 msgr2=0x7f3c9806bc00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.733+0000 7f3c9e4db640 1 -- 192.168.123.107:0/588035383 shutdown_connections 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.733+0000 7f3c9e4db640 1 -- 192.168.123.107:0/588035383 wait complete. 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c9e4db640 1 Processor -- start 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c9e4db640 1 -- start start 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c9e4db640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3c9810d110 0x7f3c9807d4b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c9e4db640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3c980843d0 0x7f3c9807da10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c9e4db640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3c9807e0f0 con 0x7f3c9810d110 2026-03-09T20:44:49.743 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c9e4db640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3c9807e260 con 0x7f3c980843d0 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c97fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3c9810d110 0x7f3c9807d4b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c97fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3c9810d110 0x7f3c9807d4b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:47624/0 (socket says 192.168.123.107:47624) 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c97fff640 1 -- 192.168.123.107:0/1122240867 learned_addr learned my addr 192.168.123.107:0/1122240867 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c97fff640 1 -- 192.168.123.107:0/1122240867 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3c980843d0 msgr2=0x7f3c9807da10 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c97fff640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3c980843d0 0x7f3c9807da10 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c97fff640 1 -- 192.168.123.107:0/1122240867 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3c90009d00 con 0x7f3c9810d110 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.734+0000 7f3c97fff640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3c9810d110 0x7f3c9807d4b0 secure :-1 s=READY pgs=219 cs=0 l=1 rev1=1 crypto rx=0x7f3c88053cf0 tx=0x7f3c8800c6a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.739+0000 7f3c957fa640 1 -- 192.168.123.107:0/1122240867 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3c88053eb0 con 0x7f3c9810d110 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.739+0000 7f3c957fa640 1 -- 192.168.123.107:0/1122240867 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f3c88004590 con 0x7f3c9810d110 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.739+0000 7f3c957fa640 1 -- 192.168.123.107:0/1122240867 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3c88055650 con 0x7f3c9810d110 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.739+0000 7f3c9e4db640 1 -- 192.168.123.107:0/1122240867 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3c98082030 con 0x7f3c9810d110 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.739+0000 7f3c9e4db640 1 -- 192.168.123.107:0/1122240867 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3c98082550 con 0x7f3c9810d110 2026-03-09T20:44:49.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.742+0000 7f3c9e4db640 1 -- 
192.168.123.107:0/1122240867 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_get_version(what=osdmap handle=1) v1 -- 0x7f3c64000fc0 con 0x7f3c9810d110 2026-03-09T20:44:49.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.747+0000 7f3c957fa640 1 -- 192.168.123.107:0/1122240867 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f3c880029a0 con 0x7f3c9810d110 2026-03-09T20:44:49.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.747+0000 7f3c957fa640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3c80076290 0x7f3c80078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.747+0000 7f3c957fa640 1 -- 192.168.123.107:0/1122240867 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f3c88062030 con 0x7f3c9810d110 2026-03-09T20:44:49.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.747+0000 7f3c957fa640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058] conn(0x7f3c8007bd40 0x7f3c8007e160 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:49.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.747+0000 7f3c957fa640 1 -- 192.168.123.107:0/1122240867 --> [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f3c8007e810 con 0x7f3c8007bd40 2026-03-09T20:44:49.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.747+0000 7f3c957fa640 1 -- 192.168.123.107:0/1122240867 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_get_version_reply(handle=1 version=37) v2 ==== 24+0+0 (secure 0 0 0) 0x7f3c880dcea0 con 0x7f3c9810d110 
2026-03-09T20:44:49.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.748+0000 7f3c977fe640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3c80076290 0x7f3c80078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.748+0000 7f3c9ca51640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058] conn(0x7f3c8007bd40 0x7f3c8007e160 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:49.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.748+0000 7f3c977fe640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3c80076290 0x7f3c80078750 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f3c9807ebe0 tx=0x7f3c90002750 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:49.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.748+0000 7f3c9ca51640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058] conn(0x7f3c8007bd40 0x7f3c8007e160 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.4 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:49.759 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.749+0000 7f3c957fa640 1 -- 192.168.123.107:0/1122240867 <== osd.4 v2:192.168.123.110:6808/1620701058 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+26930 (crc 0 0 0) 0x7f3c8007e810 con 0x7f3c8007bd40 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.759+0000 7f3c9e4db640 
1 -- 192.168.123.107:0/1122240867 --> [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f3c64002de0 con 0x7f3c8007bd40 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.759+0000 7f3c957fa640 1 -- 192.168.123.107:0/1122240867 <== osd.4 v2:192.168.123.110:6808/1620701058 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+12 (crc 0 0 0) 0x7f3c64002de0 con 0x7f3c8007bd40 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 -- 192.168.123.107:0/1122240867 >> [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058] conn(0x7f3c8007bd40 msgr2=0x7f3c8007e160 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058] conn(0x7f3c8007bd40 0x7f3c8007e160 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 -- 192.168.123.107:0/1122240867 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3c80076290 msgr2=0x7f3c80078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3c80076290 0x7f3c80078750 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f3c9807ebe0 tx=0x7f3c90002750 comp rx=0 tx=0).stop 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 -- 192.168.123.107:0/1122240867 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7f3c9810d110 msgr2=0x7f3c9807d4b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3c9810d110 0x7f3c9807d4b0 secure :-1 s=READY pgs=219 cs=0 l=1 rev1=1 crypto rx=0x7f3c88053cf0 tx=0x7f3c8800c6a0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 -- 192.168.123.107:0/1122240867 shutdown_connections 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3c80076290 0x7f3c80078750 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.110:6808/1620701058,v1:192.168.123.110:6809/1620701058] conn(0x7f3c8007bd40 0x7f3c8007e160 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3c980843d0 0x7f3c9807da10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 --2- 192.168.123.107:0/1122240867 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3c9810d110 0x7f3c9807d4b0 unknown :-1 s=CLOSED pgs=219 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:49.766 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 -- 192.168.123.107:0/1122240867 >> 192.168.123.107:0/1122240867 conn(0x7f3c9806b7f0 msgr2=0x7f3c9807a180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 -- 192.168.123.107:0/1122240867 shutdown_connections 2026-03-09T20:44:49.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:49.760+0000 7f3c76ffd640 1 -- 192.168.123.107:0/1122240867 wait complete. 2026-03-09T20:44:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:49 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:49.802 INFO:teuthology.orchestra.run.vm07.stdout:115964117000 2026-03-09T20:44:49.802 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd last-stat-seq osd.3 2026-03-09T20:44:49.907 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:49.940 INFO:teuthology.orchestra.run.vm07.stdout:137438953477 2026-03-09T20:44:49.940 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd last-stat-seq osd.4 2026-03-09T20:44:49.981 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:49.983 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:50.356 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 
2026-03-09T20:44:50.501 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:50 vm07 ceph-mon[49120]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:50.548 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.545+0000 7f9217577640 1 -- 192.168.123.107:0/159069781 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9218072a40 msgr2=0x7f921810ca90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:50.548 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.545+0000 7f9217577640 1 --2- 192.168.123.107:0/159069781 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9218072a40 0x7f921810ca90 secure :-1 s=READY pgs=220 cs=0 l=1 rev1=1 crypto rx=0x7f921001c7f0 tx=0x7f9210040af0 comp rx=0 tx=0).stop 2026-03-09T20:44:50.548 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.548+0000 7f9217577640 1 -- 192.168.123.107:0/159069781 shutdown_connections 2026-03-09T20:44:50.548 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.548+0000 7f9217577640 1 --2- 192.168.123.107:0/159069781 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9218072a40 0x7f921810ca90 unknown :-1 s=CLOSED pgs=220 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:50.548 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.548+0000 7f9217577640 1 --2- 192.168.123.107:0/159069781 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9218072120 0x7f9218072500 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:50.548 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.548+0000 7f9217577640 1 -- 192.168.123.107:0/159069781 >> 192.168.123.107:0/159069781 conn(0x7f921806c7d0 msgr2=0x7f921806cbe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:50.548 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.548+0000 7f9217577640 1 -- 
192.168.123.107:0/159069781 shutdown_connections 2026-03-09T20:44:50.549 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.548+0000 7f9217577640 1 -- 192.168.123.107:0/159069781 wait complete. 2026-03-09T20:44:50.549 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.549+0000 7f9217577640 1 Processor -- start 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.550+0000 7f9217577640 1 -- start start 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.550+0000 7f9217577640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9218072120 0x7f921819ed80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.550+0000 7f9217577640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9218072a40 0x7f921819f2c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.550+0000 7f9217577640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f921819f950 con 0x7f9218072a40 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.550+0000 7f9217577640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f92181a36c0 con 0x7f9218072120 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.550+0000 7f9216575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9218072120 0x7f921819ed80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.550+0000 7f9216575640 1 --2- >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9218072120 0x7f921819ed80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60550/0 (socket says 192.168.123.107:60550) 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.550+0000 7f9216575640 1 -- 192.168.123.107:0/646670017 learned_addr learned my addr 192.168.123.107:0/646670017 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.550+0000 7f9215d74640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9218072a40 0x7f921819f2c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.551+0000 7f9216575640 1 -- 192.168.123.107:0/646670017 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9218072a40 msgr2=0x7f921819f2c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.551+0000 7f9216575640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9218072a40 0x7f921819f2c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.551+0000 7f9216575640 1 -- 192.168.123.107:0/646670017 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9210009d00 con 0x7f9218072120 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.551+0000 7f9215d74640 1 --2- 192.168.123.107:0/646670017 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9218072a40 0x7f921819f2c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T20:44:50.551 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.551+0000 7f9216575640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9218072120 0x7f921819ed80 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f920800c910 tx=0x7f920800cde0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:50.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.551+0000 7f92077fe640 1 -- 192.168.123.107:0/646670017 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9208007c20 con 0x7f9218072120 2026-03-09T20:44:50.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.552+0000 7f9217577640 1 -- 192.168.123.107:0/646670017 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f92181a39a0 con 0x7f9218072120 2026-03-09T20:44:50.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.552+0000 7f9217577640 1 -- 192.168.123.107:0/646670017 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f92181a3ef0 con 0x7f9218072120 2026-03-09T20:44:50.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.552+0000 7f92077fe640 1 -- 192.168.123.107:0/646670017 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9208007d80 con 0x7f9218072120 2026-03-09T20:44:50.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.553+0000 7f92077fe640 1 -- 192.168.123.107:0/646670017 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f920800f450 con 0x7f9218072120 2026-03-09T20:44:50.554 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.553+0000 7f92077fe640 1 -- 192.168.123.107:0/646670017 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f9208016030 con 0x7f9218072120 2026-03-09T20:44:50.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.553+0000 7f9217577640 1 -- 192.168.123.107:0/646670017 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9218108570 con 0x7f9218072120 2026-03-09T20:44:50.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.554+0000 7f92077fe640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f91e8076000 0x7f91e80784c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:50.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.554+0000 7f9215d74640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f91e8076000 0x7f91e80784c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:50.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.556+0000 7f9215d74640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f91e8076000 0x7f91e80784c0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f92181a0330 tx=0x7f92100077f0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:50.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.556+0000 7f92077fe640 1 -- 192.168.123.107:0/646670017 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f92080981d0 con 0x7f9218072120 2026-03-09T20:44:50.559 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.557+0000 7f92077fe640 1 -- 192.168.123.107:0/646670017 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f9208061b20 con 0x7f9218072120 2026-03-09T20:44:50.688 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.688+0000 7f9217577640 1 -- 192.168.123.107:0/646670017 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) v1 -- 0x7f9218072500 con 0x7f9218072120 2026-03-09T20:44:50.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.688+0000 7f92077fe640 1 -- 192.168.123.107:0/646670017 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7f92080614c0 con 0x7f9218072120 2026-03-09T20:44:50.689 INFO:teuthology.orchestra.run.vm07.stdout:42949672972 2026-03-09T20:44:50.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 -- 192.168.123.107:0/646670017 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f91e8076000 msgr2=0x7f91e80784c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:50.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f91e8076000 0x7f91e80784c0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f92181a0330 tx=0x7f92100077f0 comp rx=0 tx=0).stop 2026-03-09T20:44:50.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 -- 192.168.123.107:0/646670017 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9218072120 msgr2=0x7f921819ed80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:50.691 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9218072120 0x7f921819ed80 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f920800c910 tx=0x7f920800cde0 comp rx=0 tx=0).stop 2026-03-09T20:44:50.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 -- 192.168.123.107:0/646670017 shutdown_connections 2026-03-09T20:44:50.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f91e8076000 0x7f91e80784c0 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:50.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9218072a40 0x7f921819f2c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:50.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 --2- 192.168.123.107:0/646670017 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9218072120 0x7f921819ed80 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:50.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 -- 192.168.123.107:0/646670017 >> 192.168.123.107:0/646670017 conn(0x7f921806c7d0 msgr2=0x7f921806eea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:50.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 -- 192.168.123.107:0/646670017 shutdown_connections 2026-03-09T20:44:50.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:50.691+0000 7f9217577640 1 -- 
192.168.123.107:0/646670017 wait complete. 2026-03-09T20:44:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:50 vm10 ceph-mon[57011]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:51.041 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:51.042 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:51.109 INFO:tasks.cephadm.ceph_manager.ceph:need seq 42949672972 got 42949672972 for osd.0 2026-03-09T20:44:51.109 DEBUG:teuthology.parallel:result is None 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.413+0000 7f78ccc21640 1 -- 192.168.123.107:0/3741789702 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f78c00077d0 con 0x7f78c8072af0 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.413+0000 7f78cfeae640 1 -- 192.168.123.107:0/3741789702 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78c8072af0 msgr2=0x7f78c810ba70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.413+0000 7f78cfeae640 1 --2- 192.168.123.107:0/3741789702 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78c8072af0 0x7f78c810ba70 secure :-1 s=READY pgs=222 cs=0 l=1 rev1=1 crypto rx=0x7f78c000b600 tx=0x7f78c0030670 comp rx=0 tx=0).stop 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.413+0000 7f78cfeae640 1 -- 192.168.123.107:0/3741789702 shutdown_connections 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.413+0000 7f78cfeae640 1 --2- 192.168.123.107:0/3741789702 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78c8072af0 
0x7f78c810ba70 unknown :-1 s=CLOSED pgs=222 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.413+0000 7f78cfeae640 1 --2- 192.168.123.107:0/3741789702 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78c8072140 0x7f78c8072520 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.413+0000 7f78cfeae640 1 -- 192.168.123.107:0/3741789702 >> 192.168.123.107:0/3741789702 conn(0x7f78c806c7e0 msgr2=0x7f78c806cbf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.413+0000 7f78cfeae640 1 -- 192.168.123.107:0/3741789702 shutdown_connections 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.413+0000 7f78cfeae640 1 -- 192.168.123.107:0/3741789702 wait complete. 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cfeae640 1 Processor -- start 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cfeae640 1 -- start start 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cfeae640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78c8072140 0x7f78c80792b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cfeae640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78c8072af0 0x7f78c80797f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cfeae640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_getmap magic: 0 v1 -- 0x7f78c80803e0 con 0x7f78c8072140 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cfeae640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f78c8079d60 con 0x7f78c8072af0 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cdc23640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78c8072140 0x7f78c80792b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cdc23640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78c8072140 0x7f78c80792b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:47676/0 (socket says 192.168.123.107:47676) 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cdc23640 1 -- 192.168.123.107:0/68048463 learned_addr learned my addr 192.168.123.107:0/68048463 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cd422640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78c8072af0 0x7f78c80797f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cdc23640 1 -- 192.168.123.107:0/68048463 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78c8072af0 msgr2=0x7f78c80797f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.417 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cdc23640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78c8072af0 0x7f78c80797f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cdc23640 1 -- 192.168.123.107:0/68048463 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f78c0009d00 con 0x7f78c8072140 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.414+0000 7f78cdc23640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78c8072140 0x7f78c80792b0 secure :-1 s=READY pgs=223 cs=0 l=1 rev1=1 crypto rx=0x7f78c400e990 tx=0x7f78c400ee60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.415+0000 7f78beffd640 1 -- 192.168.123.107:0/68048463 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f78c400cd30 con 0x7f78c8072140 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.415+0000 7f78cfeae640 1 -- 192.168.123.107:0/68048463 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f78c807a040 con 0x7f78c8072140 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.415+0000 7f78cfeae640 1 -- 192.168.123.107:0/68048463 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f78c807e0a0 con 0x7f78c8072140 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.415+0000 7f78beffd640 1 -- 192.168.123.107:0/68048463 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 
(secure 0 0 0) 0x7f78c400ce90 con 0x7f78c8072140 2026-03-09T20:44:51.417 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.415+0000 7f78beffd640 1 -- 192.168.123.107:0/68048463 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f78c4010640 con 0x7f78c8072140 2026-03-09T20:44:51.418 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.416+0000 7f78beffd640 1 -- 192.168.123.107:0/68048463 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f78c40107a0 con 0x7f78c8072140 2026-03-09T20:44:51.418 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.417+0000 7f78beffd640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78b4076330 0x7f78b40787f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.418 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.417+0000 7f78cd422640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78b4076330 0x7f78b40787f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.418 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.418+0000 7f78beffd640 1 -- 192.168.123.107:0/68048463 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f78c4014070 con 0x7f78c8072140 2026-03-09T20:44:51.418 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.418+0000 7f78cd422640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78b4076330 0x7f78b40787f0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f78c0002790 tx=0x7f78c003a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T20:44:51.421 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.420+0000 7fbbbce70640 1 -- 192.168.123.107:0/2218012309 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb8072120 msgr2=0x7fbbb8072500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.421 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.420+0000 7fbbbce70640 1 --2- 192.168.123.107:0/2218012309 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb8072120 0x7fbbb8072500 secure :-1 s=READY pgs=221 cs=0 l=1 rev1=1 crypto rx=0x7fbbac009a00 tx=0x7fbbac02f290 comp rx=0 tx=0).stop 2026-03-09T20:44:51.421 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.421+0000 7fbbbce70640 1 -- 192.168.123.107:0/2218012309 shutdown_connections 2026-03-09T20:44:51.421 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.418+0000 7f78cfeae640 1 -- 192.168.123.107:0/68048463 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7898005350 con 0x7f78c8072140 2026-03-09T20:44:51.422 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.421+0000 7fbbbce70640 1 --2- 192.168.123.107:0/2218012309 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb8072a40 0x7fbbb810ca90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.422 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.421+0000 7fbbbce70640 1 --2- 192.168.123.107:0/2218012309 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb8072120 0x7fbbb8072500 unknown :-1 s=CLOSED pgs=221 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.422 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.421+0000 7fbbbce70640 1 -- 192.168.123.107:0/2218012309 >> 192.168.123.107:0/2218012309 conn(0x7fbbb806c7d0 msgr2=0x7fbbb806cbe0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T20:44:51.422 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.422+0000 7fbbbce70640 1 -- 192.168.123.107:0/2218012309 shutdown_connections 2026-03-09T20:44:51.422 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.421+0000 7f78beffd640 1 -- 192.168.123.107:0/68048463 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f78c40626b0 con 0x7f78c8072140 2026-03-09T20:44:51.424 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.422+0000 7fbbbce70640 1 -- 192.168.123.107:0/2218012309 wait complete. 2026-03-09T20:44:51.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.422+0000 7fbbbce70640 1 Processor -- start 2026-03-09T20:44:51.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.428+0000 7fbbbce70640 1 -- start start 2026-03-09T20:44:51.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.424+0000 7fb5ca326640 1 -- 192.168.123.107:0/1786832014 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb5c40720e0 msgr2=0x7fb5c4072520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.424+0000 7fb5ca326640 1 --2- 192.168.123.107:0/1786832014 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb5c40720e0 0x7fb5c4072520 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fb5b40098e0 tx=0x7fb5b402f190 comp rx=0 tx=0).stop 2026-03-09T20:44:51.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.428+0000 7fb5ca326640 1 -- 192.168.123.107:0/1786832014 shutdown_connections 2026-03-09T20:44:51.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.428+0000 7fbbbce70640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb8072a40 0x7fbbb8112c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 
2026-03-09T20:44:51.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.428+0000 7fbbbce70640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb8113180 0x7fbbb81b9ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.428+0000 7fbbbce70640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbbb8113670 con 0x7fbbb8072a40 2026-03-09T20:44:51.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.428+0000 7fbbbce70640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbbb81137e0 con 0x7fbbb8113180 2026-03-09T20:44:51.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.429+0000 7fbbb6ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb8113180 0x7fbbb81b9ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.430+0000 7fbbb6ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb8113180 0x7fbbb81b9ad0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60604/0 (socket says 192.168.123.107:60604) 2026-03-09T20:44:51.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.430+0000 7fbbb6ffd640 1 -- 192.168.123.107:0/1709440132 learned_addr learned my addr 192.168.123.107:0/1709440132 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:51.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.431+0000 7fbbb6ffd640 1 -- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb8072a40 msgr2=0x7fbbb8112c40 
unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.431+0000 7fbbb77fe640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb8072a40 0x7fbbb8112c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.428+0000 7fb5ca326640 1 --2- 192.168.123.107:0/1786832014 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb5c40720e0 0x7fb5c4072520 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.428+0000 7fb5ca326640 1 --2- 192.168.123.107:0/1786832014 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb5c410d110 0x7fb5c410d4f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.428+0000 7fb5ca326640 1 -- 192.168.123.107:0/1786832014 >> 192.168.123.107:0/1786832014 conn(0x7fb5c406b7f0 msgr2=0x7fb5c406bc00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:51.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.430+0000 7fb5ca326640 1 -- 192.168.123.107:0/1786832014 shutdown_connections 2026-03-09T20:44:51.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.430+0000 7fb5ca326640 1 -- 192.168.123.107:0/1786832014 wait complete. 
2026-03-09T20:44:51.434 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.431+0000 7fbbb6ffd640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb8072a40 0x7fbbb8112c40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.434 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.431+0000 7fbbb6ffd640 1 -- 192.168.123.107:0/1709440132 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbbac009660 con 0x7fbbb8113180 2026-03-09T20:44:51.434 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.434+0000 7fbbb77fe640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb8072a40 0x7fbbb8112c40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:44:51.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.434+0000 7fbbb6ffd640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb8113180 0x7fbbb81b9ad0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fbba000e990 tx=0x7fbba000ee60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:51.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.436+0000 7fbbb4ff9640 1 -- 192.168.123.107:0/1709440132 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbba000cd30 con 0x7fbbb8113180 2026-03-09T20:44:51.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.436+0000 7fbbbce70640 1 -- 192.168.123.107:0/1709440132 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbbb81ba070 con 0x7fbbb8113180 2026-03-09T20:44:51.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.436+0000 
7fbbbce70640 1 -- 192.168.123.107:0/1709440132 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbbb81ba590 con 0x7fbbb8113180 2026-03-09T20:44:51.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.436+0000 7fbbb4ff9640 1 -- 192.168.123.107:0/1709440132 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fbba000ce90 con 0x7fbbb8113180 2026-03-09T20:44:51.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.436+0000 7fbbb4ff9640 1 -- 192.168.123.107:0/1709440132 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbba0010640 con 0x7fbbb8113180 2026-03-09T20:44:51.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.441+0000 7fbbb4ff9640 1 -- 192.168.123.107:0/1709440132 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fbba00107a0 con 0x7fbbb8113180 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.430+0000 7fb5ca326640 1 Processor -- start 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.431+0000 7fb5ca326640 1 -- start start 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.431+0000 7fb5ca326640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb5c410d110 0x7fb5c41ba0a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.431+0000 7fb5ca326640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb5c41ba5e0 0x7fb5c407e8a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.431+0000 7fb5ca326640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 
v1 -- 0x7fb5c41bac00 con 0x7fb5c41ba5e0 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.431+0000 7fb5ca326640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb5c41bad40 con 0x7fb5c410d110 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.440+0000 7fb5c3fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb5c410d110 0x7fb5c41ba0a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.440+0000 7fb5c3fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb5c410d110 0x7fb5c41ba0a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60620/0 (socket says 192.168.123.107:60620) 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.440+0000 7fb5c3fff640 1 -- 192.168.123.107:0/2390660821 learned_addr learned my addr 192.168.123.107:0/2390660821 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.440+0000 7fb5c3fff640 1 -- 192.168.123.107:0/2390660821 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb5c41ba5e0 msgr2=0x7fb5c407e8a0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.440+0000 7fb5c3fff640 1 --2- 192.168.123.107:0/2390660821 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb5c41ba5e0 0x7fb5c407e8a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.440+0000 7fb5c3fff640 1 -- 
192.168.123.107:0/2390660821 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb5bc009d00 con 0x7fb5c410d110 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.440+0000 7fb5c3fff640 1 --2- 192.168.123.107:0/2390660821 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb5c410d110 0x7fb5c41ba0a0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fb5bc009610 tx=0x7fb5bc00e860 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.441+0000 7fb5c17fa640 1 -- 192.168.123.107:0/2390660821 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb5bc002c70 con 0x7fb5c410d110 2026-03-09T20:44:51.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.441+0000 7fb5ca326640 1 -- 192.168.123.107:0/2390660821 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb5b4009590 con 0x7fb5c410d110 2026-03-09T20:44:51.446 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.445+0000 7fbbb4ff9640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb88076290 0x7fbb88078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.446+0000 7fbbb77fe640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb88076290 0x7fbb88078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.441+0000 7fb5ca326640 1 -- 192.168.123.107:0/2390660821 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb5c407f1a0 con 0x7fb5c410d110 2026-03-09T20:44:51.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.442+0000 7fb5ca326640 1 -- 192.168.123.107:0/2390660821 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb58c005350 con 0x7fb5c410d110 2026-03-09T20:44:51.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.446+0000 7fb5c17fa640 1 -- 192.168.123.107:0/2390660821 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fb5bc00f040 con 0x7fb5c410d110 2026-03-09T20:44:51.449 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.447+0000 7fbbbce70640 1 -- 192.168.123.107:0/1709440132 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbbb8108570 con 0x7fbbb8113180 2026-03-09T20:44:51.449 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.448+0000 7fbbb77fe640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb88076290 0x7fbb88078750 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7fbbac005d20 tx=0x7fbbac005c50 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:51.450 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.448+0000 7fb5c17fa640 1 -- 192.168.123.107:0/2390660821 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb5bc013710 con 0x7fb5c410d110 2026-03-09T20:44:51.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.449+0000 7fb5c17fa640 1 -- 192.168.123.107:0/2390660821 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fb5bc0139e0 con 0x7fb5c410d110 2026-03-09T20:44:51.453 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.451+0000 7fbbb4ff9640 1 -- 192.168.123.107:0/1709440132 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7fbba0014070 con 0x7fbbb8113180 2026-03-09T20:44:51.453 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.452+0000 7fbbb4ff9640 1 -- 192.168.123.107:0/1709440132 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fbba0062680 con 0x7fbbb8113180 2026-03-09T20:44:51.453 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.453+0000 7fb5c17fa640 1 --2- 192.168.123.107:0/2390660821 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fb5a4076360 0x7fb5a4078820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.454+0000 7fb5c37fe640 1 --2- 192.168.123.107:0/2390660821 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fb5a4076360 0x7fb5a4078820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.455+0000 7fb5c37fe640 1 --2- 192.168.123.107:0/2390660821 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fb5a4076360 0x7fb5a4078820 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fb5b402f6a0 tx=0x7fb5b40023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:51.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.455+0000 7fb5c17fa640 1 -- 192.168.123.107:0/2390660821 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7fb5bc099140 con 
0x7fb5c410d110 2026-03-09T20:44:51.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.455+0000 7fb5c17fa640 1 -- 192.168.123.107:0/2390660821 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fb5bc099570 con 0x7fb5c410d110 2026-03-09T20:44:51.568 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:51 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/646670017' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:44:51.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.642+0000 7f78cfeae640 1 -- 192.168.123.107:0/68048463 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) v1 -- 0x7f78980051c0 con 0x7f78c8072140 2026-03-09T20:44:51.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.643+0000 7f78beffd640 1 -- 192.168.123.107:0/68048463 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7f78c4062050 con 0x7f78c8072140 2026-03-09T20:44:51.643 INFO:teuthology.orchestra.run.vm07.stdout:81604378633 2026-03-09T20:44:51.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.646+0000 7f78bcff9640 1 -- 192.168.123.107:0/68048463 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78b4076330 msgr2=0x7f78b40787f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.646+0000 7f78bcff9640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78b4076330 0x7f78b40787f0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f78c0002790 tx=0x7f78c003a040 comp rx=0 tx=0).stop 2026-03-09T20:44:51.647 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.646+0000 7f78bcff9640 1 -- 192.168.123.107:0/68048463 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78c8072140 msgr2=0x7f78c80792b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.646+0000 7f78bcff9640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78c8072140 0x7f78c80792b0 secure :-1 s=READY pgs=223 cs=0 l=1 rev1=1 crypto rx=0x7f78c400e990 tx=0x7f78c400ee60 comp rx=0 tx=0).stop 2026-03-09T20:44:51.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.647+0000 7f78bcff9640 1 -- 192.168.123.107:0/68048463 shutdown_connections 2026-03-09T20:44:51.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.647+0000 7f78bcff9640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78b4076330 0x7f78b40787f0 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.647+0000 7f78bcff9640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78c8072af0 0x7f78c80797f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.647+0000 7f78bcff9640 1 --2- 192.168.123.107:0/68048463 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78c8072140 0x7f78c80792b0 unknown :-1 s=CLOSED pgs=223 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.647+0000 7f78bcff9640 1 -- 192.168.123.107:0/68048463 >> 192.168.123.107:0/68048463 conn(0x7f78c806c7e0 msgr2=0x7f78c8071660 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-09T20:44:51.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.647+0000 7f78bcff9640 1 -- 192.168.123.107:0/68048463 shutdown_connections 2026-03-09T20:44:51.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.647+0000 7f78bcff9640 1 -- 192.168.123.107:0/68048463 wait complete. 2026-03-09T20:44:51.704 INFO:tasks.cephadm.ceph_manager.ceph:need seq 81604378633 got 81604378633 for osd.2 2026-03-09T20:44:51.704 DEBUG:teuthology.parallel:result is None 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.745+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2268309333 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2cc072af0 msgr2=0x7fe2cc10ba70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.745+0000 7fe2d42e0640 1 --2- 192.168.123.107:0/2268309333 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2cc072af0 0x7fe2cc10ba70 secure :-1 s=READY pgs=224 cs=0 l=1 rev1=1 crypto rx=0x7fe2c4009040 tx=0x7fe2c4031ab0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2268309333 shutdown_connections 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 --2- 192.168.123.107:0/2268309333 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2cc072af0 0x7fe2cc10ba70 unknown :-1 s=CLOSED pgs=224 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 --2- 192.168.123.107:0/2268309333 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2cc072140 0x7fe2cc072520 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.747 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2268309333 >> 192.168.123.107:0/2268309333 conn(0x7fe2cc06c7e0 msgr2=0x7fe2cc06cbf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2268309333 shutdown_connections 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2268309333 wait complete. 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 Processor -- start 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 -- start start 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2cc072140 0x7fe2cc07d580 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2cc07dac0 0x7fe2cc07df20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe2cc0845f0 con 0x7fe2cc072140 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d42e0640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe2cc082090 con 0x7fe2cc07dac0 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d1854640 1 --2- >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2cc07dac0 0x7fe2cc07df20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d1854640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2cc07dac0 0x7fe2cc07df20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60640/0 (socket says 192.168.123.107:60640) 2026-03-09T20:44:51.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d1854640 1 -- 192.168.123.107:0/2315439633 learned_addr learned my addr 192.168.123.107:0/2315439633 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:51.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.746+0000 7fe2d2055640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2cc072140 0x7fe2cc07d580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.747+0000 7fe2d2055640 1 -- 192.168.123.107:0/2315439633 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2cc07dac0 msgr2=0x7fe2cc07df20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.747+0000 7fe2d2055640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2cc07dac0 0x7fe2cc07df20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.747+0000 7fe2d2055640 1 -- 
192.168.123.107:0/2315439633 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe2c4008cf0 con 0x7fe2cc072140 2026-03-09T20:44:51.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.747+0000 7fe2d2055640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2cc072140 0x7fe2cc07d580 secure :-1 s=READY pgs=225 cs=0 l=1 rev1=1 crypto rx=0x7fe2c800b4d0 tx=0x7fe2c800b9a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:51.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.747+0000 7fe2c37fe640 1 -- 192.168.123.107:0/2315439633 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe2c8004280 con 0x7fe2cc072140 2026-03-09T20:44:51.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.747+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2315439633 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe2cc082370 con 0x7fe2cc072140 2026-03-09T20:44:51.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.747+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2315439633 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe2cc0828c0 con 0x7fe2cc072140 2026-03-09T20:44:51.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.750+0000 7fe2c37fe640 1 -- 192.168.123.107:0/2315439633 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fe2c80043e0 con 0x7fe2cc072140 2026-03-09T20:44:51.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.750+0000 7fe2c37fe640 1 -- 192.168.123.107:0/2315439633 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe2c8010a40 con 0x7fe2cc072140 2026-03-09T20:44:51.750 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.750+0000 7fe2c37fe640 1 -- 192.168.123.107:0/2315439633 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fe2c8010c60 con 0x7fe2cc072140 2026-03-09T20:44:51.751 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.751+0000 7fe2c37fe640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe2b4076360 0x7fe2b4078820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.752 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.751+0000 7fe2d1854640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe2b4076360 0x7fe2b4078820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.752 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.752+0000 7fe2d1854640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe2b4076360 0x7fe2b4078820 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7fe2c4006070 tx=0x7fe2c4006000 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:51.752 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.752+0000 7fe2c37fe640 1 -- 192.168.123.107:0/2315439633 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7fe2c80978f0 con 0x7fe2cc072140 2026-03-09T20:44:51.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.752+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2315439633 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe2cc108570 con 0x7fe2cc072140 
2026-03-09T20:44:51.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.755+0000 7fe2c37fe640 1 -- 192.168.123.107:0/2315439633 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fe2c8061240 con 0x7fe2cc072140 2026-03-09T20:44:51.773 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.772+0000 7fb5ca326640 1 -- 192.168.123.107:0/2390660821 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) v1 -- 0x7fb58c0051c0 con 0x7fb5c410d110 2026-03-09T20:44:51.773 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.774+0000 7fb5c17fa640 1 -- 192.168.123.107:0/2390660821 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) v1 ==== 74+0+13 (secure 0 0 0) 0x7fb5bc062a90 con 0x7fb5c410d110 2026-03-09T20:44:51.776 INFO:teuthology.orchestra.run.vm07.stdout:154618822659 2026-03-09T20:44:51.785 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.785+0000 7fb5ca326640 1 -- 192.168.123.107:0/2390660821 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fb5a4076360 msgr2=0x7fb5a4078820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.785 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.785+0000 7f4702fa6640 1 -- 192.168.123.107:0/3375690343 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46fc10ae70 msgr2=0x7f46fc10b250 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.785 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.785+0000 7f4702fa6640 1 --2- 192.168.123.107:0/3375690343 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46fc10ae70 0x7f46fc10b250 secure :-1 s=READY pgs=226 cs=0 l=1 rev1=1 crypto rx=0x7f46ec009a00 tx=0x7f46ec02f290 comp rx=0 tx=0).stop 2026-03-09T20:44:51.785 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.785+0000 7fb5ca326640 1 --2- 192.168.123.107:0/2390660821 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fb5a4076360 0x7fb5a4078820 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fb5b402f6a0 tx=0x7fb5b40023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.785 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.786+0000 7fb5ca326640 1 -- 192.168.123.107:0/2390660821 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb5c410d110 msgr2=0x7fb5c41ba0a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.786 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.786+0000 7fb5ca326640 1 --2- 192.168.123.107:0/2390660821 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb5c410d110 0x7fb5c41ba0a0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fb5bc009610 tx=0x7fb5bc00e860 comp rx=0 tx=0).stop 2026-03-09T20:44:51.786 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.786+0000 7fb5ca326640 1 -- 192.168.123.107:0/2390660821 shutdown_connections 2026-03-09T20:44:51.786 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.787+0000 7fb5ca326640 1 --2- 192.168.123.107:0/2390660821 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fb5a4076360 0x7fb5a4078820 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.786 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.787+0000 7fb5ca326640 1 --2- 192.168.123.107:0/2390660821 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb5c41ba5e0 0x7fb5c407e8a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:51 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/646670017' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:44:51.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.787+0000 7fb5ca326640 1 --2- 192.168.123.107:0/2390660821 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb5c410d110 0x7fb5c41ba0a0 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.787+0000 7fb5ca326640 1 -- 192.168.123.107:0/2390660821 >> 192.168.123.107:0/2390660821 conn(0x7fb5c406b7f0 msgr2=0x7fb5c4070d20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:51.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.787+0000 7fb5ca326640 1 -- 192.168.123.107:0/2390660821 shutdown_connections 2026-03-09T20:44:51.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.787+0000 7fb5ca326640 1 -- 192.168.123.107:0/2390660821 wait complete. 2026-03-09T20:44:51.788 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.789+0000 7f4702fa6640 1 -- 192.168.123.107:0/3375690343 shutdown_connections 2026-03-09T20:44:51.789 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.789+0000 7f4702fa6640 1 --2- 192.168.123.107:0/3375690343 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f46fc071ec0 0x7f46fc072320 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.789 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.789+0000 7f4702fa6640 1 --2- 192.168.123.107:0/3375690343 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46fc10ae70 0x7f46fc10b250 unknown :-1 s=CLOSED pgs=226 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.789 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.789+0000 7f4702fa6640 1 -- 192.168.123.107:0/3375690343 >> 192.168.123.107:0/3375690343 conn(0x7f46fc06c7d0 
msgr2=0x7f46fc06cbe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:51.789 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.789+0000 7f4702fa6640 1 -- 192.168.123.107:0/3375690343 shutdown_connections 2026-03-09T20:44:51.789 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.789+0000 7f4702fa6640 1 -- 192.168.123.107:0/3375690343 wait complete. 2026-03-09T20:44:51.789 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.789+0000 7f4702fa6640 1 Processor -- start 2026-03-09T20:44:51.789 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.789+0000 7f4702fa6640 1 -- start start 2026-03-09T20:44:51.790 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.790+0000 7f4702fa6640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f46fc071ec0 0x7f46fc10ba20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.790+0000 7f4702fa6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46fc10ae70 0x7f46fc10bf60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.790+0000 7f4702fa6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f46fc10faa0 con 0x7f46fc10ae70 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.790+0000 7f4702fa6640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f46fc10fc10 con 0x7f46fc071ec0 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.790+0000 7f47017a3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46fc10ae70 0x7f46fc10bf60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.790+0000 7f47017a3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46fc10ae70 0x7f46fc10bf60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:47724/0 (socket says 192.168.123.107:47724) 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.790+0000 7f47017a3640 1 -- 192.168.123.107:0/1545211760 learned_addr learned my addr 192.168.123.107:0/1545211760 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.790+0000 7f47017a3640 1 -- 192.168.123.107:0/1545211760 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f46fc071ec0 msgr2=0x7f46fc10ba20 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.790+0000 7f47017a3640 1 --2- 192.168.123.107:0/1545211760 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f46fc071ec0 0x7f46fc10ba20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.790+0000 7f47017a3640 1 -- 192.168.123.107:0/1545211760 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f46ec009660 con 0x7f46fc10ae70 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.791+0000 7f47017a3640 1 --2- 192.168.123.107:0/1545211760 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46fc10ae70 0x7f46fc10bf60 secure :-1 s=READY pgs=227 cs=0 l=1 rev1=1 crypto rx=0x7f46f000ece0 tx=0x7f46f000c6a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T20:44:51.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.791+0000 7f46eaffd640 1 -- 192.168.123.107:0/1545211760 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f46f000eea0 con 0x7f46fc10ae70 2026-03-09T20:44:51.792 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.791+0000 7f4702fa6640 1 -- 192.168.123.107:0/1545211760 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f46fc10c560 con 0x7f46fc10ae70 2026-03-09T20:44:51.792 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.791+0000 7f4702fa6640 1 -- 192.168.123.107:0/1545211760 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f46fc1b1ae0 con 0x7f46fc10ae70 2026-03-09T20:44:51.792 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.791+0000 7f46eaffd640 1 -- 192.168.123.107:0/1545211760 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f46f0004590 con 0x7f46fc10ae70 2026-03-09T20:44:51.792 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.791+0000 7f46eaffd640 1 -- 192.168.123.107:0/1545211760 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f46f0010640 con 0x7f46fc10ae70 2026-03-09T20:44:51.792 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.792+0000 7f4702fa6640 1 -- 192.168.123.107:0/1545211760 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f46c4005350 con 0x7f46fc10ae70 2026-03-09T20:44:51.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.795+0000 7f46eaffd640 1 -- 192.168.123.107:0/1545211760 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f46f00040d0 con 0x7f46fc10ae70 2026-03-09T20:44:51.814 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.803+0000 7f46eaffd640 1 --2- 192.168.123.107:0/1545211760 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f46d80761c0 0x7f46d8078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:51.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.803+0000 7f46eaffd640 1 -- 192.168.123.107:0/1545211760 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f46f001d030 con 0x7f46fc10ae70 2026-03-09T20:44:51.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.803+0000 7f46eaffd640 1 -- 192.168.123.107:0/1545211760 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f46f0097b20 con 0x7f46fc10ae70 2026-03-09T20:44:51.816 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.816+0000 7f4701fa4640 1 --2- 192.168.123.107:0/1545211760 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f46d80761c0 0x7f46d8078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:51.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.819+0000 7f4701fa4640 1 --2- 192.168.123.107:0/1545211760 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f46d80761c0 0x7f46d8078680 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f46ec009a00 tx=0x7f46ec0023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:51.857 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.857+0000 7fbbbce70640 1 -- 192.168.123.107:0/1709440132 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) v1 -- 
0x7fbbb811c9c0 con 0x7fbbb8113180 2026-03-09T20:44:51.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.858+0000 7fbbb4ff9640 1 -- 192.168.123.107:0/1709440132 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7fbba0078690 con 0x7fbbb8113180 2026-03-09T20:44:51.858 INFO:teuthology.orchestra.run.vm07.stdout:64424509450 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.862+0000 7fbb967fc640 1 -- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb88076290 msgr2=0x7fbb88078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.862+0000 7fbb967fc640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb88076290 0x7fbb88078750 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7fbbac005d20 tx=0x7fbbac005c50 comp rx=0 tx=0).stop 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.862+0000 7fbb967fc640 1 -- 192.168.123.107:0/1709440132 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb8113180 msgr2=0x7fbbb81b9ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.862+0000 7fbb967fc640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb8113180 0x7fbbb81b9ad0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fbba000e990 tx=0x7fbba000ee60 comp rx=0 tx=0).stop 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.863+0000 7fbb967fc640 1 -- 192.168.123.107:0/1709440132 shutdown_connections 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.863+0000 
7fbb967fc640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb88076290 0x7fbb88078750 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.863+0000 7fbb967fc640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb8113180 0x7fbbb81b9ad0 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.863+0000 7fbb967fc640 1 --2- 192.168.123.107:0/1709440132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb8072a40 0x7fbbb8112c40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.863+0000 7fbb967fc640 1 -- 192.168.123.107:0/1709440132 >> 192.168.123.107:0/1709440132 conn(0x7fbbb806c7d0 msgr2=0x7fbbb8070eb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.863+0000 7fbb967fc640 1 -- 192.168.123.107:0/1709440132 shutdown_connections 2026-03-09T20:44:51.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.863+0000 7fbb967fc640 1 -- 192.168.123.107:0/1709440132 wait complete. 
2026-03-09T20:44:51.876 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.876+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2315439633 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 3} v 0) v1 -- 0x7fe2cc072af0 con 0x7fe2cc072140 2026-03-09T20:44:51.876 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.876+0000 7fe2c37fe640 1 -- 192.168.123.107:0/2315439633 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 3}]=0 v0) v1 ==== 74+0+13 (secure 0 0 0) 0x7fe2c8060be0 con 0x7fe2cc072140 2026-03-09T20:44:51.876 INFO:teuthology.orchestra.run.vm07.stdout:115964117000 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.878+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe2b4076360 msgr2=0x7fe2b4078820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.878+0000 7fe2d42e0640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe2b4076360 0x7fe2b4078820 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7fe2c4006070 tx=0x7fe2c4006000 comp rx=0 tx=0).stop 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.879+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2cc072140 msgr2=0x7fe2cc07d580 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.879+0000 7fe2d42e0640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2cc072140 0x7fe2cc07d580 secure :-1 s=READY pgs=225 cs=0 l=1 rev1=1 crypto rx=0x7fe2c800b4d0 tx=0x7fe2c800b9a0 comp rx=0 
tx=0).stop 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.879+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2315439633 shutdown_connections 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.879+0000 7fe2d42e0640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe2b4076360 0x7fe2b4078820 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.879+0000 7fe2d42e0640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2cc07dac0 0x7fe2cc07df20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.879+0000 7fe2d42e0640 1 --2- 192.168.123.107:0/2315439633 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2cc072140 0x7fe2cc07d580 unknown :-1 s=CLOSED pgs=225 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.879+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2315439633 >> 192.168.123.107:0/2315439633 conn(0x7fe2cc06c7e0 msgr2=0x7fe2cc10a9e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.879+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2315439633 shutdown_connections 2026-03-09T20:44:51.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.879+0000 7fe2d42e0640 1 -- 192.168.123.107:0/2315439633 wait complete. 
2026-03-09T20:44:51.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.967+0000 7f4702fa6640 1 -- 192.168.123.107:0/1545211760 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 4} v 0) v1 -- 0x7f46c40051c0 con 0x7f46fc10ae70 2026-03-09T20:44:51.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.967+0000 7f46eaffd640 1 -- 192.168.123.107:0/1545211760 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 4}]=0 v0) v1 ==== 74+0+13 (secure 0 0 0) 0x7f46f0061250 con 0x7f46fc10ae70 2026-03-09T20:44:51.968 INFO:teuthology.orchestra.run.vm07.stdout:137438953477 2026-03-09T20:44:51.970 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.970+0000 7f4702fa6640 1 -- 192.168.123.107:0/1545211760 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f46d80761c0 msgr2=0x7f46d8078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.970 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.970+0000 7f4702fa6640 1 --2- 192.168.123.107:0/1545211760 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f46d80761c0 0x7f46d8078680 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f46ec009a00 tx=0x7f46ec0023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.970 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.970+0000 7f4702fa6640 1 -- 192.168.123.107:0/1545211760 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46fc10ae70 msgr2=0x7f46fc10bf60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:51.970 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.970+0000 7f4702fa6640 1 --2- 192.168.123.107:0/1545211760 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46fc10ae70 0x7f46fc10bf60 secure :-1 s=READY pgs=227 cs=0 l=1 rev1=1 crypto rx=0x7f46f000ece0 tx=0x7f46f000c6a0 comp rx=0 
tx=0).stop 2026-03-09T20:44:51.970 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.970+0000 7f4702fa6640 1 -- 192.168.123.107:0/1545211760 shutdown_connections 2026-03-09T20:44:51.970 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.970+0000 7f4702fa6640 1 --2- 192.168.123.107:0/1545211760 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f46d80761c0 0x7f46d8078680 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.970 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.970+0000 7f4702fa6640 1 --2- 192.168.123.107:0/1545211760 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46fc10ae70 0x7f46fc10bf60 unknown :-1 s=CLOSED pgs=227 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.970 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.971+0000 7f4702fa6640 1 --2- 192.168.123.107:0/1545211760 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f46fc071ec0 0x7f46fc10ba20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:51.971 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.971+0000 7f4702fa6640 1 -- 192.168.123.107:0/1545211760 >> 192.168.123.107:0/1545211760 conn(0x7f46fc06c7d0 msgr2=0x7f46fc10fe60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:51.971 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.971+0000 7f4702fa6640 1 -- 192.168.123.107:0/1545211760 shutdown_connections 2026-03-09T20:44:51.971 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:51.971+0000 7f4702fa6640 1 -- 192.168.123.107:0/1545211760 wait complete. 
2026-03-09T20:44:52.052 INFO:tasks.cephadm.ceph_manager.ceph:need seq 154618822659 got 154618822659 for osd.5 2026-03-09T20:44:52.052 DEBUG:teuthology.parallel:result is None 2026-03-09T20:44:52.078 INFO:tasks.cephadm.ceph_manager.ceph:need seq 137438953477 got 137438953477 for osd.4 2026-03-09T20:44:52.078 DEBUG:teuthology.parallel:result is None 2026-03-09T20:44:52.106 INFO:tasks.cephadm.ceph_manager.ceph:need seq 64424509450 got 64424509450 for osd.1 2026-03-09T20:44:52.106 DEBUG:teuthology.parallel:result is None 2026-03-09T20:44:52.109 INFO:tasks.cephadm.ceph_manager.ceph:need seq 115964117000 got 115964117000 for osd.3 2026-03-09T20:44:52.109 DEBUG:teuthology.parallel:result is None 2026-03-09T20:44:52.109 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T20:44:52.110 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph pg dump --format=json 2026-03-09T20:44:52.308 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:52.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.578+0000 7f4d7967b640 1 -- 192.168.123.107:0/1491547407 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4d74076040 msgr2=0x7f4d74111330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:52.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.578+0000 7f4d7967b640 1 --2- 192.168.123.107:0/1491547407 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4d74076040 0x7f4d74111330 secure :-1 s=READY pgs=228 cs=0 l=1 rev1=1 crypto rx=0x7f4d680099b0 tx=0x7f4d6802f220 comp rx=0 tx=0).stop 2026-03-09T20:44:52.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.579+0000 7f4d7967b640 1 -- 192.168.123.107:0/1491547407 shutdown_connections 2026-03-09T20:44:52.579 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.579+0000 7f4d7967b640 1 --2- 192.168.123.107:0/1491547407 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4d74076040 0x7f4d74111330 unknown :-1 s=CLOSED pgs=228 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:52.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.579+0000 7f4d7967b640 1 --2- 192.168.123.107:0/1491547407 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4d74075720 0x7f4d74075b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:52.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.579+0000 7f4d7967b640 1 -- 192.168.123.107:0/1491547407 >> 192.168.123.107:0/1491547407 conn(0x7f4d740fe710 msgr2=0x7f4d74100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:52.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.580+0000 7f4d7967b640 1 -- 192.168.123.107:0/1491547407 shutdown_connections 2026-03-09T20:44:52.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.580+0000 7f4d7967b640 1 -- 192.168.123.107:0/1491547407 wait complete. 
2026-03-09T20:44:52.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.580+0000 7f4d7967b640 1 Processor -- start 2026-03-09T20:44:52.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.580+0000 7f4d7967b640 1 -- start start 2026-03-09T20:44:52.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.581+0000 7f4d7967b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4d74075720 0x7f4d7419eda0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:52.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.581+0000 7f4d7967b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4d74076040 0x7f4d7419f2e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:52.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.581+0000 7f4d7967b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4d7419f970 con 0x7f4d74076040 2026-03-09T20:44:52.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.581+0000 7f4d7967b640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4d741a36e0 con 0x7f4d74075720 2026-03-09T20:44:52.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.581+0000 7f4d727fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4d74076040 0x7f4d7419f2e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:52.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.581+0000 7f4d727fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4d74076040 0x7f4d7419f2e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:47740/0 (socket says 192.168.123.107:47740) 2026-03-09T20:44:52.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.581+0000 7f4d727fc640 1 -- 192.168.123.107:0/1029392299 learned_addr learned my addr 192.168.123.107:0/1029392299 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:52.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.581+0000 7f4d727fc640 1 -- 192.168.123.107:0/1029392299 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4d74075720 msgr2=0x7f4d7419eda0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:52.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.581+0000 7f4d72ffd640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4d74075720 0x7f4d7419eda0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:52.582 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.582+0000 7f4d727fc640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4d74075720 0x7f4d7419eda0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:52.582 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.582+0000 7f4d727fc640 1 -- 192.168.123.107:0/1029392299 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4d68009660 con 0x7f4d74076040 2026-03-09T20:44:52.582 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.582+0000 7f4d72ffd640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4d74075720 0x7f4d7419eda0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T20:44:52.582 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.582+0000 7f4d727fc640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4d74076040 0x7f4d7419f2e0 secure :-1 s=READY pgs=229 cs=0 l=1 rev1=1 crypto rx=0x7f4d6802f730 tx=0x7f4d680043d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:52.582 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.582+0000 7f4d53fff640 1 -- 192.168.123.107:0/1029392299 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4d6803d070 con 0x7f4d74076040 2026-03-09T20:44:52.582 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.582+0000 7f4d53fff640 1 -- 192.168.123.107:0/1029392299 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f4d6802fc90 con 0x7f4d74076040 2026-03-09T20:44:52.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.582+0000 7f4d53fff640 1 -- 192.168.123.107:0/1029392299 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4d680417b0 con 0x7f4d74076040 2026-03-09T20:44:52.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.582+0000 7f4d7967b640 1 -- 192.168.123.107:0/1029392299 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4d741a3960 con 0x7f4d74076040 2026-03-09T20:44:52.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.582+0000 7f4d7967b640 1 -- 192.168.123.107:0/1029392299 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4d741a3e50 con 0x7f4d74076040 2026-03-09T20:44:52.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.583+0000 7f4d7967b640 1 -- 192.168.123.107:0/1029392299 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f4d38005350 con 0x7f4d74076040 2026-03-09T20:44:52.587 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:52 vm07 ceph-mon[49120]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:52.587 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:52 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/68048463' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:44:52.587 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:52 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2390660821' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T20:44:52.587 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:52 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/1709440132' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:44:52.587 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:52 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2315439633' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T20:44:52.587 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:52 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/1545211760' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T20:44:52.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.586+0000 7f4d53fff640 1 -- 192.168.123.107:0/1029392299 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f4d68038730 con 0x7f4d74076040 2026-03-09T20:44:52.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.587+0000 7f4d53fff640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4d48076290 0x7f4d48078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:52.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.587+0000 7f4d53fff640 1 -- 192.168.123.107:0/1029392299 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f4d680bc6c0 con 0x7f4d74076040 2026-03-09T20:44:52.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.587+0000 7f4d72ffd640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4d48076290 0x7f4d48078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:52.591 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.587+0000 7f4d53fff640 1 -- 192.168.123.107:0/1029392299 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f4d68086010 con 0x7f4d74076040 2026-03-09T20:44:52.591 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.591+0000 7f4d72ffd640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4d48076290 0x7f4d48078750 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 
crypto rx=0x7f4d5c0059c0 tx=0x7f4d5c005e30 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:52.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.674+0000 7f4d7967b640 1 -- 192.168.123.107:0/1029392299 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) v1 -- 0x7f4d38002bf0 con 0x7f4d48076290 2026-03-09T20:44:52.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.675+0000 7f4d53fff640 1 -- 192.168.123.107:0/1029392299 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 dumped all) v1 ==== 18+0+19165 (secure 0 0 0) 0x7f4d38002bf0 con 0x7f4d48076290 2026-03-09T20:44:52.675 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:52.676 INFO:teuthology.orchestra.run.vm07.stderr:dumped all 2026-03-09T20:44:52.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.677+0000 7f4d7967b640 1 -- 192.168.123.107:0/1029392299 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4d48076290 msgr2=0x7f4d48078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:52.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.677+0000 7f4d7967b640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4d48076290 0x7f4d48078750 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f4d5c0059c0 tx=0x7f4d5c005e30 comp rx=0 tx=0).stop 2026-03-09T20:44:52.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.677+0000 7f4d7967b640 1 -- 192.168.123.107:0/1029392299 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4d74076040 msgr2=0x7f4d7419f2e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:52.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.678+0000 
7f4d7967b640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4d74076040 0x7f4d7419f2e0 secure :-1 s=READY pgs=229 cs=0 l=1 rev1=1 crypto rx=0x7f4d6802f730 tx=0x7f4d680043d0 comp rx=0 tx=0).stop 2026-03-09T20:44:52.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.678+0000 7f4d7967b640 1 -- 192.168.123.107:0/1029392299 shutdown_connections 2026-03-09T20:44:52.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.678+0000 7f4d7967b640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4d48076290 0x7f4d48078750 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:52.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.678+0000 7f4d7967b640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4d74076040 0x7f4d7419f2e0 unknown :-1 s=CLOSED pgs=229 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:52.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.678+0000 7f4d7967b640 1 --2- 192.168.123.107:0/1029392299 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4d74075720 0x7f4d7419eda0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:52.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.678+0000 7f4d7967b640 1 -- 192.168.123.107:0/1029392299 >> 192.168.123.107:0/1029392299 conn(0x7f4d740fe710 msgr2=0x7f4d740ffd90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:52.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.678+0000 7f4d7967b640 1 -- 192.168.123.107:0/1029392299 shutdown_connections 2026-03-09T20:44:52.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:52.679+0000 7f4d7967b640 1 -- 192.168.123.107:0/1029392299 wait complete. 
2026-03-09T20:44:52.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:52 vm10 ceph-mon[57011]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:52.790 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:52 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/68048463' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:44:52.790 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:52 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/2390660821' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T20:44:52.790 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:52 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/1709440132' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:44:52.790 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:52 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/2315439633' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T20:44:52.790 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:52 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/1545211760' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T20:44:52.846 INFO:teuthology.orchestra.run.vm07.stdout:{"pg_ready":true,"pg_map":{"version":71,"stamp":"2026-03-09T20:44:51.114475+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":3,"kb":125804544,"kb_used":586400,"kb_used_data":3324,"kb_used_omap":9,"kb_used_meta":173366,"kb_avail":125218144,"statfs":{"total":128823853056,"available":128223379456,"internally_reserved":0,"allocated":3403776,"data_stored":2140860,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9530,"internal_metadata":177527494},"hb_peers":[],"snap_trim_q
ueue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"9.968458"},"pg_stats":[{"pgid":"1.0","version":"22'32","reported_seq":80,"reported_epoch":36,"state":"active+clean","last_fresh":"2026-03-09T20:44:41.550014+0000","last_change":"2026-03-09T20:44:31.242099+0000","last_active":"2026-03-09T20:44:41.550014+0000","last_peered":"2026-03-09T20:44:41.550014+0000","last_clean":"2026-03-09T20:44:41.550014+0000","last_became_active":"2026-03-09T20:44:31.241899+0000","last_became_peered":"2026-03-09T20:44:31.241899+0000","last_unstale":"2026-03-09T20:44:41.550014+0000","last_undegraded":"2026-03-09T20:4
4:41.550014+0000","last_fullsized":"2026-03-09T20:44:41.550014+0000","mapping_epoch":31,"log_start":"0'0","ondisk_log_start":"0'0","created":21,"last_epoch_clean":32,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:44:15.534377+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:44:15.534377+0000","last_clean_scrub_stamp":"2026-03-09T20:44:15.534377+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:20:52.062761+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,
"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":5,"up_from":36,"seq":154618822659,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":440920,"kb_used_data":328,"kb_used_omap":1,"kb_used_meta":30974,"kb_avail":20526504,"statfs":{"total":21470642176,"available":21019140096,"internally_reserved":0,"allocated":335872,"data_stored":127170,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":31717835},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.51700000000000002}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47299999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.70799999999999996}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.55100000000000005}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.58799999999999997}]}]},{"osd":4,"up_from":32,"seq":137438953477,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27160,"kb_used_data":328,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940264,"statfs":{"total":21470642176,"available":21442830336,"internally_reserved":0,"allocated":335872,"data_stored":127170,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.54500000000000004}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.59199999999999997}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.51900000000000002}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.61299999999999999}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.41099999999999998}]}]},{"osd":3,"up_from":27,"seq":115964117000,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27612,"kb_used_data":780,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939812,"statfs":{"total":21470642176,"available":21442367488,"internally_reserved":0,"allocated":798720,"data_stored":586450,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1587,"internal_metadata":27457997},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":3.2370000000000001}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":6.8179999999999996}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":3.173}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.40899999999999997}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":3.012}]}]},{"osd":2,"up_from":19,"seq":81604378633,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27164,"kb_used_data":328,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940260,"statfs":{"total":21470642176,"available":21442826240,"internally_reserved":0,"allocated":335872,"data_stored":127170,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47099999999999997}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.45300000000000001}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.75600000000000001}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.64900000000000002}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.54000000000000004}]}]},{"osd":1,"up_from":15,"seq":64424509450,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":31772,"kb_used_data":780,"kb_used_omap":1,"kb_used_meta":30974,"kb_avail":20935652,"statfs":{"total":21470642176,"available":21438107648,"internally_reserved":0,"allocated":798720,"data_stored":586450,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":31717836},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.42499999999999999}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.45900000000000002}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.51600000000000001}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.55700000000000005}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.35499999999999998}]}]},{"osd":0,"up_from":10,"seq":42949672973,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":31772,"kb_used_data":780,"kb_used_omap":1,"kb_used_meta":30974,"kb_avail":20935652,"statfs":{"total":21470642176,"available":21438107648,"internally_reserved":0,"allocated":798720,"data_stored":586450,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":31717836},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.69899999999999995}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.71599999999999997}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.70699999999999996}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47699999999999998}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.628}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 
2026-03-09T20:44:52.846 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph pg dump --format=json 2026-03-09T20:44:53.001 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:53.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.250+0000 7f176d222640 1 -- 192.168.123.107:0/3241260330 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17681067c0 msgr2=0x7f1768106ba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:53.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.250+0000 7f176d222640 1 --2- 192.168.123.107:0/3241260330 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17681067c0 0x7f1768106ba0 secure :-1 s=READY pgs=230 cs=0 l=1 rev1=1 crypto rx=0x7f1758009a00 tx=0x7f175802f280 comp rx=0 tx=0).stop 2026-03-09T20:44:53.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.250+0000 7f176d222640 1 -- 192.168.123.107:0/3241260330 shutdown_connections 2026-03-09T20:44:53.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.250+0000 7f176d222640 1 --2- 192.168.123.107:0/3241260330 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17680fe650 0x7f17680fead0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:53.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.250+0000 7f176d222640 1 --2- 192.168.123.107:0/3241260330 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17681067c0 0x7f1768106ba0 unknown :-1 s=CLOSED pgs=230 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:53.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.250+0000 7f176d222640 1 -- 192.168.123.107:0/3241260330 >> 192.168.123.107:0/3241260330 conn(0x7f17680fa4a0 
msgr2=0x7f17680fc8c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:53.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.251+0000 7f176d222640 1 -- 192.168.123.107:0/3241260330 shutdown_connections 2026-03-09T20:44:53.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.251+0000 7f176d222640 1 -- 192.168.123.107:0/3241260330 wait complete. 2026-03-09T20:44:53.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.251+0000 7f176d222640 1 Processor -- start 2026-03-09T20:44:53.251 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.251+0000 7f176d222640 1 -- start start 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f176d222640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17680fe650 0x7f17681ad470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f176d222640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17681067c0 0x7f17681ad9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f176d222640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f176806d6f0 con 0x7f17680fe650 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f176d222640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f176806d860 con 0x7f17681067c0 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f17677fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17681067c0 0x7f17681ad9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f17677fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17681067c0 0x7f17681ad9b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60676/0 (socket says 192.168.123.107:60676) 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f17677fe640 1 -- 192.168.123.107:0/3508021680 learned_addr learned my addr 192.168.123.107:0/3508021680 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f17677fe640 1 -- 192.168.123.107:0/3508021680 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17680fe650 msgr2=0x7f17681ad470 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f17677fe640 1 --2- 192.168.123.107:0/3508021680 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17680fe650 0x7f17681ad470 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f17677fe640 1 -- 192.168.123.107:0/3508021680 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1758009660 con 0x7f17681067c0 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f17677fe640 1 --2- 192.168.123.107:0/3508021680 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17681067c0 0x7f17681ad9b0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f175400e9b0 tx=0x7f175400ee80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.252+0000 7f17657fa640 1 -- 192.168.123.107:0/3508021680 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f175400cd90 con 0x7f17681067c0 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.253+0000 7f17657fa640 1 -- 192.168.123.107:0/3508021680 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1754004590 con 0x7f17681067c0 2026-03-09T20:44:53.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.253+0000 7f176d222640 1 -- 192.168.123.107:0/3508021680 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f176806da60 con 0x7f17681067c0 2026-03-09T20:44:53.253 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.253+0000 7f17657fa640 1 -- 192.168.123.107:0/3508021680 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1754010620 con 0x7f17681067c0 2026-03-09T20:44:53.253 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.253+0000 7f176d222640 1 -- 192.168.123.107:0/3508021680 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f176806dfb0 con 0x7f17681067c0 2026-03-09T20:44:53.253 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.253+0000 7f176d222640 1 -- 192.168.123.107:0/3508021680 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f172c005350 con 0x7f17681067c0 2026-03-09T20:44:53.254 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.254+0000 7f17657fa640 1 -- 192.168.123.107:0/3508021680 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f17540040d0 con 0x7f17681067c0 2026-03-09T20:44:53.254 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.254+0000 7f17657fa640 1 --2- 192.168.123.107:0/3508021680 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f173c0761c0 0x7f173c078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:53.255 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.255+0000 7f17657fa640 1 -- 192.168.123.107:0/3508021680 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f1754014070 con 0x7f17681067c0 2026-03-09T20:44:53.255 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.255+0000 7f1767fff640 1 --2- 192.168.123.107:0/3508021680 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f173c0761c0 0x7f173c078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:53.255 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.255+0000 7f1767fff640 1 --2- 192.168.123.107:0/3508021680 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f173c0761c0 0x7f173c078680 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f1758009a00 tx=0x7f17580023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:53.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.258+0000 7f17657fa640 1 -- 192.168.123.107:0/3508021680 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f1754060f20 con 0x7f17681067c0 2026-03-09T20:44:53.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.343+0000 7f176d222640 1 -- 192.168.123.107:0/3508021680 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "pg dump", 
"target": ["mon-mgr", ""], "format": "json"}) v1 -- 0x7f172c002bf0 con 0x7f173c0761c0 2026-03-09T20:44:53.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.344+0000 7f17657fa640 1 -- 192.168.123.107:0/3508021680 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 dumped all) v1 ==== 18+0+19165 (secure 0 0 0) 0x7f172c002bf0 con 0x7f173c0761c0 2026-03-09T20:44:53.344 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:53.344 INFO:teuthology.orchestra.run.vm07.stderr:dumped all 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.346+0000 7f176d222640 1 -- 192.168.123.107:0/3508021680 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f173c0761c0 msgr2=0x7f173c078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.346+0000 7f176d222640 1 --2- 192.168.123.107:0/3508021680 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f173c0761c0 0x7f173c078680 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f1758009a00 tx=0x7f17580023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.346+0000 7f176d222640 1 -- 192.168.123.107:0/3508021680 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17681067c0 msgr2=0x7f17681ad9b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.346+0000 7f176d222640 1 --2- 192.168.123.107:0/3508021680 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17681067c0 0x7f17681ad9b0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f175400e9b0 tx=0x7f175400ee80 comp rx=0 tx=0).stop 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.346+0000 7f176d222640 1 -- 192.168.123.107:0/3508021680 
shutdown_connections 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.346+0000 7f176d222640 1 --2- 192.168.123.107:0/3508021680 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f173c0761c0 0x7f173c078680 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.346+0000 7f176d222640 1 --2- 192.168.123.107:0/3508021680 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17681067c0 0x7f17681ad9b0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.346+0000 7f176d222640 1 --2- 192.168.123.107:0/3508021680 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17680fe650 0x7f17681ad470 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.346+0000 7f176d222640 1 -- 192.168.123.107:0/3508021680 >> 192.168.123.107:0/3508021680 conn(0x7f17680fa4a0 msgr2=0x7f1768105490 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.347+0000 7f176d222640 1 -- 192.168.123.107:0/3508021680 shutdown_connections 2026-03-09T20:44:53.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.347+0000 7f176d222640 1 -- 192.168.123.107:0/3508021680 wait complete. 
2026-03-09T20:44:53.552 INFO:teuthology.orchestra.run.vm07.stdout:{"pg_ready":true,"pg_map":{"version":72,"stamp":"2026-03-09T20:44:53.114762+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":3,"kb":125804544,"kb_used":586400,"kb_used_data":3324,"kb_used_omap":9,"kb_used_meta":173366,"kb_avail":125218144,"statfs":{"total":128823853056,"available":128223379456,"internally_reserved":0,"allocated":3403776,"data_stored":2140860,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9530,"internal_metadata":177527494},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1
},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"10.788903"},"pg_stats":[{"pgid":"1.0","version":"22'32","reported_seq":80,"reported_epoch":36,"state":"active+clean","last_fresh":"2026-03-09T20:44:41.550014+0000","last_change":"2026-03-09T20:44:31.242099+0000","last_active":"2026-03-09T20:44:41.550014+0000","last_peered":"2026-03-09T20:44:41.550014+0000","last_clean":"2026-03-09T20:44:41.550014+0000","last_became_active":"2026-03-09T20:44:31.241899+0000","last_became_peered":"2026-03-09T20:44:31.241899+0000","last_unstale":"2026-03-09T20:44:41.550014+0000","last_undegraded":"2026-03-09T20:44:41.550014+0000","last_fullsized":"2026-03-09T20:44:41.550014+0000","mapping_epoch":31,"log_start":"0'0","o
ndisk_log_start":"0'0","created":21,"last_epoch_clean":32,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:44:15.534377+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:44:15.534377+0000","last_clean_scrub_stamp":"2026-03-09T20:44:15.534377+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:20:52.062761+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_
objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":5,"up_from":36,"seq":154618822660,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":440920,"kb_used_data":328,"kb_used_omap":1,"kb_used_meta":30974,"kb_avail":20526504,"statfs":{"total":21470642176,"available":21019140096,"internally_reserved":0,"allocated":335872,"data_stored":127170,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":31717835},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.34499999999999997}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.36299999999999999}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.55500000000000005}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.32600000000000001}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.26300000000000001}]}]},{"osd":4,"up_from":32,"seq":137438953478,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27160,"kb_used_data":328,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940264,"statfs":{"total":21470642176,"available":21442830336,"internally_reserved":0,"allocated":335872,"data_stored":127170,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.1359999999999999}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.69599999999999995}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.65700000000000003}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.42099999999999999}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.40500000000000003}]}]},{"osd":3,"up_from":27,"seq":115964117000,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27612,"kb_used_data":780,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939812,"statfs":{"total":21470642176,"available":21442367488,"internally_reserved":0,"allocated":798720,"data_stored":586450,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1587,"internal_metadata":27457997},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":3.2370000000000001}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":6.8179999999999996}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":3.173}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.40899999999999997}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":3.012}]}]},{"osd":2,"up_from":19,"seq":81604378633,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27164,"kb_used_data":328,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940260,"statfs":{"total":21470642176,"available":21442826240,"internally_reserved":0,"allocated":335872,"data_stored":127170,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47099999999999997}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.45300000000000001}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.75600000000000001}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.64900000000000002}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.54000000000000004}]}]},{"osd":1,"up_from":15,"seq":64424509451,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":31772,"kb_used_data":780,"kb_used_omap":1,"kb_used_meta":30974,"kb_avail":20935652,"statfs":{"total":21470642176,"available":21438107648,"internally_reserved":0,"allocated":798720,"data_stored":586450,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":31717836},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.51900000000000002}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.48699999999999999}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.53400000000000003}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57699999999999996}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.56899999999999995}]}]},{"osd":0,"up_from":10,"seq":42949672973,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":31772,"kb_used_data":780,"kb_used_omap":1,"kb_used_meta":30974,"kb_avail":20935652,"statfs":{"total":21470642176,"available":21438107648,"internally_reserved":0,"allocated":798720,"data_stored":586450,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":31717836},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.69899999999999995}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.71599999999999997}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.70699999999999996}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47699999999999998}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.628}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 
2026-03-09T20:44:53.552 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T20:44:53.552 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-09T20:44:53.552 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T20:44:53.553 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph health --format=json 2026-03-09T20:44:53.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:53 vm07 ceph-mon[49120]: from='client.14436 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:44:53.715 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:53.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:53 vm10 ceph-mon[57011]: from='client.14436 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:44:53.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.998+0000 7f3a71d70640 1 -- 192.168.123.107:0/4143824552 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3a6c073510 msgr2=0x7f3a6c0738f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:53.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.998+0000 7f3a71d70640 1 --2- 192.168.123.107:0/4143824552 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3a6c073510 0x7f3a6c0738f0 secure :-1 s=READY pgs=231 cs=0 l=1 rev1=1 crypto rx=0x7f3a50009930 tx=0x7f3a5002f230 comp rx=0 tx=0).stop 2026-03-09T20:44:53.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.999+0000 7f3a71d70640 1 -- 192.168.123.107:0/4143824552 shutdown_connections 2026-03-09T20:44:53.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.999+0000 7f3a71d70640 1 --2- 
192.168.123.107:0/4143824552 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3a6c073e30 0x7f3a6c10cb80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:53.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.999+0000 7f3a71d70640 1 --2- 192.168.123.107:0/4143824552 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3a6c073510 0x7f3a6c0738f0 unknown :-1 s=CLOSED pgs=231 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:53.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.999+0000 7f3a71d70640 1 -- 192.168.123.107:0/4143824552 >> 192.168.123.107:0/4143824552 conn(0x7f3a6c0fc460 msgr2=0x7f3a6c0fe880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:53.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.999+0000 7f3a71d70640 1 -- 192.168.123.107:0/4143824552 shutdown_connections 2026-03-09T20:44:53.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:53.999+0000 7f3a71d70640 1 -- 192.168.123.107:0/4143824552 wait complete. 
2026-03-09T20:44:54.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.000+0000 7f3a71d70640 1 Processor -- start 2026-03-09T20:44:54.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.000+0000 7f3a71d70640 1 -- start start 2026-03-09T20:44:54.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.000+0000 7f3a71d70640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3a6c073510 0x7f3a6c1a0680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:54.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.000+0000 7f3a71d70640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3a6c073e30 0x7f3a6c1a0bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:54.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.000+0000 7f3a71d70640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3a6c19a770 con 0x7f3a6c073510 2026-03-09T20:44:54.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.000+0000 7f3a71d70640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3a6c19a8e0 con 0x7f3a6c073e30 2026-03-09T20:44:54.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.000+0000 7f3a6b7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3a6c073510 0x7f3a6c1a0680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.000+0000 7f3a6affd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3a6c073e30 0x7f3a6c1a0bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.000+0000 7f3a6affd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3a6c073e30 0x7f3a6c1a0bc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60696/0 (socket says 192.168.123.107:60696) 2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.000+0000 7f3a6affd640 1 -- 192.168.123.107:0/954367434 learned_addr learned my addr 192.168.123.107:0/954367434 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.001+0000 7f3a6affd640 1 -- 192.168.123.107:0/954367434 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3a6c073510 msgr2=0x7f3a6c1a0680 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.001+0000 7f3a6affd640 1 --2- 192.168.123.107:0/954367434 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3a6c073510 0x7f3a6c1a0680 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.001+0000 7f3a6affd640 1 -- 192.168.123.107:0/954367434 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3a50009590 con 0x7f3a6c073e30 2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.001+0000 7f3a6b7fe640 1 --2- 192.168.123.107:0/954367434 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3a6c073510 0x7f3a6c1a0680 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.001+0000 7f3a6affd640 1 --2- 192.168.123.107:0/954367434 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3a6c073e30 0x7f3a6c1a0bc0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f3a5800e990 tx=0x7f3a5800ee60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.001+0000 7f3a68ff9640 1 -- 192.168.123.107:0/954367434 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3a5800cd30 con 0x7f3a6c073e30 2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.001+0000 7f3a68ff9640 1 -- 192.168.123.107:0/954367434 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f3a5800ce90 con 0x7f3a6c073e30 2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.001+0000 7f3a71d70640 1 -- 192.168.123.107:0/954367434 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3a6c19abc0 con 0x7f3a6c073e30 2026-03-09T20:44:54.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.002+0000 7f3a71d70640 1 -- 192.168.123.107:0/954367434 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3a6c19b110 con 0x7f3a6c073e30 2026-03-09T20:44:54.003 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.003+0000 7f3a68ff9640 1 -- 192.168.123.107:0/954367434 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3a58010640 con 0x7f3a6c073e30 2026-03-09T20:44:54.003 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.003+0000 7f3a71d70640 1 -- 192.168.123.107:0/954367434 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) 
v1 -- 0x7f3a30005350 con 0x7f3a6c073e30 2026-03-09T20:44:54.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.004+0000 7f3a68ff9640 1 -- 192.168.123.107:0/954367434 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f3a58002990 con 0x7f3a6c073e30 2026-03-09T20:44:54.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.004+0000 7f3a68ff9640 1 --2- 192.168.123.107:0/954367434 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3a400761c0 0x7f3a40078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:54.005 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.005+0000 7f3a68ff9640 1 -- 192.168.123.107:0/954367434 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f3a58014070 con 0x7f3a6c073e30 2026-03-09T20:44:54.005 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.005+0000 7f3a6b7fe640 1 --2- 192.168.123.107:0/954367434 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3a400761c0 0x7f3a40078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:54.005 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.005+0000 7f3a6b7fe640 1 --2- 192.168.123.107:0/954367434 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3a400761c0 0x7f3a40078680 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f3a50004750 tx=0x7f3a500023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:54.007 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.007+0000 7f3a68ff9640 1 -- 192.168.123.107:0/954367434 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 
72+0+186382 (secure 0 0 0) 0x7f3a58061660 con 0x7f3a6c073e30 2026-03-09T20:44:54.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.114+0000 7f3a71d70640 1 -- 192.168.123.107:0/954367434 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "health", "format": "json"} v 0) v1 -- 0x7f3a30005b80 con 0x7f3a6c073e30 2026-03-09T20:44:54.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.115+0000 7f3a68ff9640 1 -- 192.168.123.107:0/954367434 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "health", "format": "json"}]=0 v0) v1 ==== 72+0+46 (secure 0 0 0) 0x7f3a58061000 con 0x7f3a6c073e30 2026-03-09T20:44:54.118 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:44:54.118 INFO:teuthology.orchestra.run.vm07.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T20:44:54.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.120+0000 7f3a71d70640 1 -- 192.168.123.107:0/954367434 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3a400761c0 msgr2=0x7f3a40078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:54.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.120+0000 7f3a71d70640 1 --2- 192.168.123.107:0/954367434 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3a400761c0 0x7f3a40078680 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f3a50004750 tx=0x7f3a500023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.120+0000 7f3a71d70640 1 -- 192.168.123.107:0/954367434 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3a6c073e30 msgr2=0x7f3a6c1a0bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:54.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.120+0000 7f3a71d70640 1 --2- 192.168.123.107:0/954367434 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3a6c073e30 0x7f3a6c1a0bc0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f3a5800e990 tx=0x7f3a5800ee60 comp rx=0 tx=0).stop 2026-03-09T20:44:54.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.120+0000 7f3a71d70640 1 -- 192.168.123.107:0/954367434 shutdown_connections 2026-03-09T20:44:54.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.120+0000 7f3a71d70640 1 --2- 192.168.123.107:0/954367434 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3a400761c0 0x7f3a40078680 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.121 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.121+0000 7f3a71d70640 1 --2- 192.168.123.107:0/954367434 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3a6c073e30 0x7f3a6c1a0bc0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.121 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.121+0000 7f3a71d70640 1 --2- 192.168.123.107:0/954367434 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3a6c073510 0x7f3a6c1a0680 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.121 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.121+0000 7f3a71d70640 1 -- 192.168.123.107:0/954367434 >> 192.168.123.107:0/954367434 conn(0x7f3a6c0fc460 msgr2=0x7f3a6c10c1c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:54.121 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.121+0000 7f3a71d70640 1 -- 192.168.123.107:0/954367434 shutdown_connections 2026-03-09T20:44:54.121 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.121+0000 7f3a71d70640 1 -- 192.168.123.107:0/954367434 wait complete. 
2026-03-09T20:44:54.182 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T20:44:54.182 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T20:44:54.182 INFO:teuthology.run_tasks:Running task print... 2026-03-09T20:44:54.184 INFO:teuthology.task.print:**** done end installing reef cephadm ... 2026-03-09T20:44:54.184 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T20:44:54.186 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local 2026-03-09T20:44:54.187 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph config set mgr mgr/cephadm/use_repo_digest true --force' 2026-03-09T20:44:54.340 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:54.681 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:54 vm07 ceph-mon[49120]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:54.681 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:54 vm07 ceph-mon[49120]: from='client.24259 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:44:54.681 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:54 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/954367434' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:44:54.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.715+0000 7f4dcca8d640 1 -- 192.168.123.107:0/642362863 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4dc81007f0 msgr2=0x7f4dc8100bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:54.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.715+0000 7f4dcca8d640 1 --2- 192.168.123.107:0/642362863 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4dc81007f0 0x7f4dc8100bf0 secure :-1 s=READY pgs=232 cs=0 l=1 rev1=1 crypto rx=0x7f4dbc0099b0 tx=0x7f4dbc02f220 comp rx=0 tx=0).stop 2026-03-09T20:44:54.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.715+0000 7f4dcca8d640 1 -- 192.168.123.107:0/642362863 shutdown_connections 2026-03-09T20:44:54.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.715+0000 7f4dcca8d640 1 --2- 192.168.123.107:0/642362863 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4dc81019f0 0x7f4dc8101e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.715+0000 7f4dcca8d640 1 --2- 192.168.123.107:0/642362863 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4dc81007f0 0x7f4dc8100bf0 unknown :-1 s=CLOSED pgs=232 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.715+0000 7f4dcca8d640 1 -- 192.168.123.107:0/642362863 >> 192.168.123.107:0/642362863 conn(0x7f4dc80fbf80 msgr2=0x7f4dc80fe3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:54.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.716+0000 7f4dcca8d640 1 -- 192.168.123.107:0/642362863 shutdown_connections 2026-03-09T20:44:54.715 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.716+0000 7f4dcca8d640 1 -- 192.168.123.107:0/642362863 wait complete. 2026-03-09T20:44:54.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.716+0000 7f4dcca8d640 1 Processor -- start 2026-03-09T20:44:54.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.716+0000 7f4dcca8d640 1 -- start start 2026-03-09T20:44:54.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.716+0000 7f4dcca8d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4dc81007f0 0x7f4dc819a470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:54.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.716+0000 7f4dcca8d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4dc81019f0 0x7f4dc819a9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:54.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.716+0000 7f4dcca8d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4dc819af80 con 0x7f4dc81007f0 2026-03-09T20:44:54.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.716+0000 7f4dcca8d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4dc819b0f0 con 0x7f4dc81019f0 2026-03-09T20:44:54.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4dc6575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4dc81007f0 0x7f4dc819a470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:54.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4dc6575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4dc81007f0 0x7f4dc819a470 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:47796/0 (socket says 192.168.123.107:47796) 2026-03-09T20:44:54.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4dc6575640 1 -- 192.168.123.107:0/516190216 learned_addr learned my addr 192.168.123.107:0/516190216 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:54.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4dc6575640 1 -- 192.168.123.107:0/516190216 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4dc81019f0 msgr2=0x7f4dc819a9b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:54.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4dc5d74640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4dc81019f0 0x7f4dc819a9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:54.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4dc6575640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4dc81019f0 0x7f4dc819a9b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4dc6575640 1 -- 192.168.123.107:0/516190216 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4dbc009660 con 0x7f4dc81007f0 2026-03-09T20:44:54.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4dc5d74640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4dc81019f0 0x7f4dc819a9b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:44:54.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4dc6575640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4dc81007f0 0x7f4dc819a470 secure :-1 s=READY pgs=233 cs=0 l=1 rev1=1 crypto rx=0x7f4dbc002940 tx=0x7f4dbc002970 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:54.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4daf7fe640 1 -- 192.168.123.107:0/516190216 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4dbc03d070 con 0x7f4dc81007f0 2026-03-09T20:44:54.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.717+0000 7f4dcca8d640 1 -- 192.168.123.107:0/516190216 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4dc819fb30 con 0x7f4dc81007f0 2026-03-09T20:44:54.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.718+0000 7f4daf7fe640 1 -- 192.168.123.107:0/516190216 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f4dbc02fd50 con 0x7f4dc81007f0 2026-03-09T20:44:54.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.718+0000 7f4daf7fe640 1 -- 192.168.123.107:0/516190216 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4dbc0419e0 con 0x7f4dc81007f0 2026-03-09T20:44:54.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.718+0000 7f4dcca8d640 1 -- 192.168.123.107:0/516190216 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4dc81a0020 con 0x7f4dc81007f0 2026-03-09T20:44:54.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.719+0000 7f4daf7fe640 1 -- 192.168.123.107:0/516190216 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 
98480+0+0 (secure 0 0 0) 0x7f4dbc049050 con 0x7f4dc81007f0 2026-03-09T20:44:54.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.719+0000 7f4dcca8d640 1 -- 192.168.123.107:0/516190216 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4d8c005350 con 0x7f4dc81007f0 2026-03-09T20:44:54.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.720+0000 7f4daf7fe640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4da0076170 0x7f4da0078630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:54.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.720+0000 7f4dc5d74640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4da0076170 0x7f4da0078630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:54.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.720+0000 7f4daf7fe640 1 -- 192.168.123.107:0/516190216 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f4dbc038e10 con 0x7f4dc81007f0 2026-03-09T20:44:54.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.720+0000 7f4dc5d74640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4da0076170 0x7f4da0078630 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f4dc819b990 tx=0x7f4db0005e30 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:54.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.723+0000 7f4daf7fe640 1 -- 192.168.123.107:0/516190216 <== mon.0 v2:192.168.123.107:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f4dbc0c1050 con 0x7f4dc81007f0 2026-03-09T20:44:54.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:54 vm10 ceph-mon[57011]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:54.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:54 vm10 ceph-mon[57011]: from='client.24259 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:44:54.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:54 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/954367434' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:44:54.820 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.820+0000 7f4dcca8d640 1 -- 192.168.123.107:0/516190216 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1 -- 0x7f4d8c0051c0 con 0x7f4dc81007f0 2026-03-09T20:44:54.830 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.829+0000 7f4daf7fe640 1 -- 192.168.123.107:0/516190216 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/use_repo_digest}]=0 v18) v1 ==== 143+0+0 (secure 0 0 0) 0x7f4dbc0853b0 con 0x7f4dc81007f0 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 -- 192.168.123.107:0/516190216 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4da0076170 msgr2=0x7f4da0078630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4da0076170 0x7f4da0078630 
secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f4dc819b990 tx=0x7f4db0005e30 comp rx=0 tx=0).stop 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 -- 192.168.123.107:0/516190216 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4dc81007f0 msgr2=0x7f4dc819a470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4dc81007f0 0x7f4dc819a470 secure :-1 s=READY pgs=233 cs=0 l=1 rev1=1 crypto rx=0x7f4dbc002940 tx=0x7f4dbc002970 comp rx=0 tx=0).stop 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 -- 192.168.123.107:0/516190216 shutdown_connections 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4da0076170 0x7f4da0078630 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4dc81019f0 0x7f4dc819a9b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 --2- 192.168.123.107:0/516190216 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4dc81007f0 0x7f4dc819a470 unknown :-1 s=CLOSED pgs=233 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 -- 
192.168.123.107:0/516190216 >> 192.168.123.107:0/516190216 conn(0x7f4dc80fbf80 msgr2=0x7f4dc80fdad0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 -- 192.168.123.107:0/516190216 shutdown_connections 2026-03-09T20:44:54.835 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:54.835+0000 7f4dcca8d640 1 -- 192.168.123.107:0/516190216 wait complete. 2026-03-09T20:44:54.916 INFO:teuthology.run_tasks:Running task print... 2026-03-09T20:44:54.918 INFO:teuthology.task.print:**** done cephadm.shell ceph config set mgr... 2026-03-09T20:44:54.918 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T20:44:54.920 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local 2026-03-09T20:44:54.920 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph orch status' 2026-03-09T20:44:55.081 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:55.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.343+0000 7f3ddbe5b640 1 -- 192.168.123.107:0/988588246 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3dd4073b40 msgr2=0x7f3dd4073fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:55.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.343+0000 7f3ddbe5b640 1 --2- 192.168.123.107:0/988588246 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3dd4073b40 0x7f3dd4073fa0 secure :-1 s=READY pgs=234 cs=0 l=1 rev1=1 crypto rx=0x7f3dc80099b0 tx=0x7f3dc802f240 comp rx=0 tx=0).stop 2026-03-09T20:44:55.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.343+0000 7f3ddbe5b640 1 -- 
192.168.123.107:0/988588246 shutdown_connections 2026-03-09T20:44:55.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.343+0000 7f3ddbe5b640 1 --2- 192.168.123.107:0/988588246 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3dd4073b40 0x7f3dd4073fa0 unknown :-1 s=CLOSED pgs=234 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:55.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.343+0000 7f3ddbe5b640 1 --2- 192.168.123.107:0/988588246 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3dd40751a0 0x7f3dd4073600 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:55.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.343+0000 7f3ddbe5b640 1 -- 192.168.123.107:0/988588246 >> 192.168.123.107:0/988588246 conn(0x7f3dd40fbf80 msgr2=0x7f3dd40fe3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:55.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.344+0000 7f3ddbe5b640 1 -- 192.168.123.107:0/988588246 shutdown_connections 2026-03-09T20:44:55.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.344+0000 7f3ddbe5b640 1 -- 192.168.123.107:0/988588246 wait complete. 
2026-03-09T20:44:55.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.344+0000 7f3ddbe5b640 1 Processor -- start 2026-03-09T20:44:55.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.344+0000 7f3ddbe5b640 1 -- start start 2026-03-09T20:44:55.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.345+0000 7f3ddbe5b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3dd4073b40 0x7f3dd419a430 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:55.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.345+0000 7f3ddbe5b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3dd40751a0 0x7f3dd419a970 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:55.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.345+0000 7f3ddbe5b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3dd419af40 con 0x7f3dd4073b40 2026-03-09T20:44:55.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.345+0000 7f3dd9bd0640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3dd4073b40 0x7f3dd419a430 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:55.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.345+0000 7f3dd9bd0640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3dd4073b40 0x7f3dd419a430 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:47810/0 (socket says 192.168.123.107:47810) 2026-03-09T20:44:55.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.345+0000 7f3dd9bd0640 1 -- 192.168.123.107:0/3491469376 learned_addr learned my addr 
192.168.123.107:0/3491469376 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:55.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.345+0000 7f3ddbe5b640 1 -- 192.168.123.107:0/3491469376 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3dd419b0b0 con 0x7f3dd40751a0 2026-03-09T20:44:55.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.345+0000 7f3dd93cf640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3dd40751a0 0x7f3dd419a970 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:55.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.346+0000 7f3dd93cf640 1 -- 192.168.123.107:0/3491469376 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3dd4073b40 msgr2=0x7f3dd419a430 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:55.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.346+0000 7f3dd93cf640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3dd4073b40 0x7f3dd419a430 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:55.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.346+0000 7f3dd93cf640 1 -- 192.168.123.107:0/3491469376 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3dc8009660 con 0x7f3dd40751a0 2026-03-09T20:44:55.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.346+0000 7f3dd93cf640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3dd40751a0 0x7f3dd419a970 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f3dc802f750 tx=0x7f3dc8004290 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 
in_seq=0 out_seq=0 2026-03-09T20:44:55.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.346+0000 7f3dc2ffd640 1 -- 192.168.123.107:0/3491469376 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3dc803d070 con 0x7f3dd40751a0 2026-03-09T20:44:55.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.346+0000 7f3ddbe5b640 1 -- 192.168.123.107:0/3491469376 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3dd419faf0 con 0x7f3dd40751a0 2026-03-09T20:44:55.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.346+0000 7f3ddbe5b640 1 -- 192.168.123.107:0/3491469376 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3dd419ffe0 con 0x7f3dd40751a0 2026-03-09T20:44:55.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.346+0000 7f3dc2ffd640 1 -- 192.168.123.107:0/3491469376 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f3dc8038730 con 0x7f3dd40751a0 2026-03-09T20:44:55.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.347+0000 7f3dc2ffd640 1 -- 192.168.123.107:0/3491469376 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3dc80417f0 con 0x7f3dd40751a0 2026-03-09T20:44:55.348 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.348+0000 7f3dc2ffd640 1 -- 192.168.123.107:0/3491469376 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f3dc8038cd0 con 0x7f3dd40751a0 2026-03-09T20:44:55.348 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.348+0000 7f3dc2ffd640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3db00761c0 0x7f3db0078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:55.348 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.348+0000 7f3dd9bd0640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3db00761c0 0x7f3db0078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:55.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.349+0000 7f3dc2ffd640 1 -- 192.168.123.107:0/3491469376 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f3dc80bc160 con 0x7f3dd40751a0 2026-03-09T20:44:55.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.349+0000 7f3dd9bd0640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3db00761c0 0x7f3db0078680 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f3dc4005fd0 tx=0x7f3dc40074e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:55.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.349+0000 7f3dc0ff9640 1 -- 192.168.123.107:0/3491469376 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3dd410b680 con 0x7f3dd40751a0 2026-03-09T20:44:55.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.352+0000 7f3dc2ffd640 1 -- 192.168.123.107:0/3491469376 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f3dc8085ae0 con 0x7f3dd40751a0 2026-03-09T20:44:55.456 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.455+0000 7f3dc0ff9640 1 -- 192.168.123.107:0/3491469376 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch status", "target": ["mon-mgr", ""]}) v1 -- 
0x7f3dd410b810 con 0x7f3db00761c0 2026-03-09T20:44:55.456 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.456+0000 7f3dc2ffd640 1 -- 192.168.123.107:0/3491469376 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+43 (secure 0 0 0) 0x7f3dd410b810 con 0x7f3db00761c0 2026-03-09T20:44:55.457 INFO:teuthology.orchestra.run.vm07.stdout:Backend: cephadm 2026-03-09T20:44:55.457 INFO:teuthology.orchestra.run.vm07.stdout:Available: Yes 2026-03-09T20:44:55.457 INFO:teuthology.orchestra.run.vm07.stdout:Paused: No 2026-03-09T20:44:55.459 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.459+0000 7f3dc0ff9640 1 -- 192.168.123.107:0/3491469376 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3db00761c0 msgr2=0x7f3db0078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:55.459 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.459+0000 7f3dc0ff9640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3db00761c0 0x7f3db0078680 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f3dc4005fd0 tx=0x7f3dc40074e0 comp rx=0 tx=0).stop 2026-03-09T20:44:55.459 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.459+0000 7f3dc0ff9640 1 -- 192.168.123.107:0/3491469376 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3dd40751a0 msgr2=0x7f3dd419a970 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:55.459 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.459+0000 7f3dc0ff9640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3dd40751a0 0x7f3dd419a970 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f3dc802f750 tx=0x7f3dc8004290 comp rx=0 tx=0).stop 2026-03-09T20:44:55.459 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.459+0000 7f3dc0ff9640 1 -- 
192.168.123.107:0/3491469376 shutdown_connections 2026-03-09T20:44:55.459 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.460+0000 7f3dc0ff9640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f3db00761c0 0x7f3db0078680 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:55.460 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.460+0000 7f3dc0ff9640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3dd40751a0 0x7f3dd419a970 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:55.460 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.460+0000 7f3dc0ff9640 1 --2- 192.168.123.107:0/3491469376 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3dd4073b40 0x7f3dd419a430 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:55.460 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.460+0000 7f3dc0ff9640 1 -- 192.168.123.107:0/3491469376 >> 192.168.123.107:0/3491469376 conn(0x7f3dd40fbf80 msgr2=0x7f3dd40fdab0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:55.460 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.460+0000 7f3dc0ff9640 1 -- 192.168.123.107:0/3491469376 shutdown_connections 2026-03-09T20:44:55.460 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.460+0000 7f3dc0ff9640 1 -- 192.168.123.107:0/3491469376 wait complete. 
2026-03-09T20:44:55.503 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph orch ps' 2026-03-09T20:44:55.653 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:55.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.928+0000 7f042aae3640 1 -- 192.168.123.107:0/208429708 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0424102a80 msgr2=0x7f0424102e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:55.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.928+0000 7f042aae3640 1 --2- 192.168.123.107:0/208429708 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0424102a80 0x7f0424102e80 secure :-1 s=READY pgs=235 cs=0 l=1 rev1=1 crypto rx=0x7f040c0099b0 tx=0x7f040c02f220 comp rx=0 tx=0).stop 2026-03-09T20:44:55.929 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.929+0000 7f042aae3640 1 -- 192.168.123.107:0/208429708 shutdown_connections 2026-03-09T20:44:55.929 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.929+0000 7f042aae3640 1 --2- 192.168.123.107:0/208429708 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0424103c80 0x7f0424104100 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:55.929 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.929+0000 7f042aae3640 1 --2- 192.168.123.107:0/208429708 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0424102a80 0x7f0424102e80 unknown :-1 s=CLOSED pgs=235 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:55.929 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.929+0000 7f042aae3640 1 -- 192.168.123.107:0/208429708 
>> 192.168.123.107:0/208429708 conn(0x7f04240fe250 msgr2=0x7f0424100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:55.929 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.929+0000 7f042aae3640 1 -- 192.168.123.107:0/208429708 shutdown_connections 2026-03-09T20:44:55.929 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.929+0000 7f042aae3640 1 -- 192.168.123.107:0/208429708 wait complete. 2026-03-09T20:44:55.929 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.929+0000 7f042aae3640 1 Processor -- start 2026-03-09T20:44:55.929 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.929+0000 7f042aae3640 1 -- start start 2026-03-09T20:44:55.929 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f042aae3640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0424102a80 0x7f0424078fa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:55.929 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f042aae3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0424103c80 0x7f04240794e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:55.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f042aae3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0424075a00 con 0x7f0424103c80 2026-03-09T20:44:55.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f042aae3640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0424075b70 con 0x7f0424102a80 2026-03-09T20:44:55.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f0428858640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0424102a80 0x7f0424078fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto 
rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:55.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f041bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0424103c80 0x7f04240794e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:55.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f0428858640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0424102a80 0x7f0424078fa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60748/0 (socket says 192.168.123.107:60748) 2026-03-09T20:44:55.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f0428858640 1 -- 192.168.123.107:0/1815670595 learned_addr learned my addr 192.168.123.107:0/1815670595 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:55.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f041bfff640 1 -- 192.168.123.107:0/1815670595 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0424102a80 msgr2=0x7f0424078fa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:55.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f041bfff640 1 --2- 192.168.123.107:0/1815670595 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0424102a80 0x7f0424078fa0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:55.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f041bfff640 1 -- 192.168.123.107:0/1815670595 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f040c009660 con 0x7f0424103c80 
2026-03-09T20:44:55.930 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.930+0000 7f041bfff640 1 --2- 192.168.123.107:0/1815670595 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0424103c80 0x7f04240794e0 secure :-1 s=READY pgs=236 cs=0 l=1 rev1=1 crypto rx=0x7f041400d8d0 tx=0x7f041400dda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:55.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.931+0000 7f0419ffb640 1 -- 192.168.123.107:0/1815670595 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0414004490 con 0x7f0424103c80 2026-03-09T20:44:55.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.931+0000 7f0419ffb640 1 -- 192.168.123.107:0/1815670595 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0414007e70 con 0x7f0424103c80 2026-03-09T20:44:55.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.931+0000 7f0419ffb640 1 -- 192.168.123.107:0/1815670595 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0414005020 con 0x7f0424103c80 2026-03-09T20:44:55.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.931+0000 7f042aae3640 1 -- 192.168.123.107:0/1815670595 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0424075e50 con 0x7f0424103c80 2026-03-09T20:44:55.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.931+0000 7f042aae3640 1 -- 192.168.123.107:0/1815670595 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f04240763a0 con 0x7f0424103c80 2026-03-09T20:44:55.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.932+0000 7f0419ffb640 1 -- 192.168.123.107:0/1815670595 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f0414005290 con 
0x7f0424103c80 2026-03-09T20:44:55.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.932+0000 7f042aae3640 1 -- 192.168.123.107:0/1815670595 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f03ec005350 con 0x7f0424103c80 2026-03-09T20:44:55.935 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.935+0000 7f0419ffb640 1 --2- 192.168.123.107:0/1815670595 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f03fc076170 0x7f03fc078630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:55.935 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.935+0000 7f0428858640 1 --2- 192.168.123.107:0/1815670595 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f03fc076170 0x7f03fc078630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:55.936 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.936+0000 7f0419ffb640 1 -- 192.168.123.107:0/1815670595 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f0414097240 con 0x7f0424103c80 2026-03-09T20:44:55.936 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.936+0000 7f0428858640 1 --2- 192.168.123.107:0/1815670595 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f03fc076170 0x7f03fc078630 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f040c002410 tx=0x7f040c03a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:55.936 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:55.936+0000 7f0419ffb640 1 -- 192.168.123.107:0/1815670595 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) 
v1 ==== 72+0+186382 (secure 0 0 0) 0x7f0414099880 con 0x7f0424103c80 2026-03-09T20:44:56.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.027+0000 7f042aae3640 1 -- 192.168.123.107:0/1815670595 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f03ec002bf0 con 0x7f03fc076170 2026-03-09T20:44:56.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:55 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/516190216' entity='client.admin' 2026-03-09T20:44:56.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:55 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:56.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:55 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:56.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:55 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:56.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:55 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:56.034 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.034+0000 7f0419ffb640 1 -- 192.168.123.107:0/1815670595 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+2920 (secure 0 0 0) 0x7f03ec002bf0 con 0x7f03fc076170 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 
running (75s) 44s ago 118s 21.5M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (2m) 44s ago 2m 8178k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (92s) 19s ago 92s 8321k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (2m) 44s ago 2m 7620k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 8dda9981b08b 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (91s) 19s ago 91s 7616k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 eba80e79586f 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (74s) 44s ago 106s 74.7M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:9283,8765,8443 running (2m) 44s ago 2m 529M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 7a35a71cbc43 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (87s) 19s ago 87s 485M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 91b60c6e69dc 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (2m) 44s ago 2m 45.4M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 f3e88bdaa0dd 2026-03-09T20:44:56.061 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (86s) 19s ago 86s 43.6M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 4e5d7d18c660 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (2m) 44s ago 2m 13.9M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (88s) 19s ago 88s 14.5M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 
running (66s) 44s ago 66s 61.9M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 482878bd7721 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (56s) 44s ago 56s 62.1M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 15564e5032c9 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (46s) 44s ago 46s 15.3M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (37s) 19s ago 37s 63.4M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (29s) 19s ago 29s 39.8M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (20s) 19s ago 20s 12.1M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (69s) 44s ago 101s 31.9M - 2.43.0 a07b618ecd1d 08a586cd1392 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 -- 192.168.123.107:0/1815670595 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f03fc076170 msgr2=0x7f03fc078630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 --2- 192.168.123.107:0/1815670595 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f03fc076170 0x7f03fc078630 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f040c002410 tx=0x7f040c03a040 comp rx=0 tx=0).stop 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 -- 192.168.123.107:0/1815670595 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0424103c80 msgr2=0x7f04240794e0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 --2- 192.168.123.107:0/1815670595 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0424103c80 0x7f04240794e0 secure :-1 s=READY pgs=236 cs=0 l=1 rev1=1 crypto rx=0x7f041400d8d0 tx=0x7f041400dda0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 -- 192.168.123.107:0/1815670595 shutdown_connections 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 --2- 192.168.123.107:0/1815670595 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f03fc076170 0x7f03fc078630 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 --2- 192.168.123.107:0/1815670595 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0424103c80 0x7f04240794e0 unknown :-1 s=CLOSED pgs=236 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 --2- 192.168.123.107:0/1815670595 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0424102a80 0x7f0424078fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 -- 192.168.123.107:0/1815670595 >> 192.168.123.107:0/1815670595 conn(0x7f04240fe250 msgr2=0x7f04240ffd70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:56.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 -- 192.168.123.107:0/1815670595 shutdown_connections 2026-03-09T20:44:56.062 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.036+0000 7f042aae3640 1 -- 192.168.123.107:0/1815670595 wait complete. 2026-03-09T20:44:56.149 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph orch ls' 2026-03-09T20:44:56.287 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:56.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:55 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/516190216' entity='client.admin' 2026-03-09T20:44:56.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:55 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:56.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:55 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:56.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:55 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:56.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:55 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:44:56.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.522+0000 7f78f515e640 1 -- 192.168.123.107:0/3026075083 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78f0102a60 msgr2=0x7f78f0102e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:56.523 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.522+0000 7f78f515e640 1 --2- 192.168.123.107:0/3026075083 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78f0102a60 0x7f78f0102e60 secure :-1 s=READY pgs=237 cs=0 l=1 rev1=1 crypto rx=0x7f78dc0099b0 tx=0x7f78dc02f220 comp rx=0 tx=0).stop 2026-03-09T20:44:56.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.523+0000 7f78f515e640 1 -- 192.168.123.107:0/3026075083 shutdown_connections 2026-03-09T20:44:56.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.523+0000 7f78f515e640 1 --2- 192.168.123.107:0/3026075083 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78f0103c60 0x7f78f01040e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.523+0000 7f78f515e640 1 --2- 192.168.123.107:0/3026075083 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78f0102a60 0x7f78f0102e60 unknown :-1 s=CLOSED pgs=237 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.523+0000 7f78f515e640 1 -- 192.168.123.107:0/3026075083 >> 192.168.123.107:0/3026075083 conn(0x7f78f00fe250 msgr2=0x7f78f0100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:56.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.523+0000 7f78f515e640 1 -- 192.168.123.107:0/3026075083 shutdown_connections 2026-03-09T20:44:56.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.523+0000 7f78f515e640 1 -- 192.168.123.107:0/3026075083 wait complete. 
2026-03-09T20:44:56.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.524+0000 7f78f515e640 1 Processor -- start 2026-03-09T20:44:56.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.524+0000 7f78f515e640 1 -- start start 2026-03-09T20:44:56.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.524+0000 7f78f515e640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78f0102a60 0x7f78f019a4e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:56.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.524+0000 7f78f515e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78f0103c60 0x7f78f019aa20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:56.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.524+0000 7f78f515e640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f78f019aff0 con 0x7f78f0103c60 2026-03-09T20:44:56.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.524+0000 7f78f515e640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f78f019b160 con 0x7f78f0102a60 2026-03-09T20:44:56.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.524+0000 7f78ee575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78f0103c60 0x7f78f019aa20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:56.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.524+0000 7f78ee575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78f0103c60 0x7f78f019aa20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:47848/0 (socket says 192.168.123.107:47848) 2026-03-09T20:44:56.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.524+0000 7f78ee575640 1 -- 192.168.123.107:0/3838282699 learned_addr learned my addr 192.168.123.107:0/3838282699 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:56.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.524+0000 7f78ee575640 1 -- 192.168.123.107:0/3838282699 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78f0102a60 msgr2=0x7f78f019a4e0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T20:44:56.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.525+0000 7f78ee575640 1 --2- 192.168.123.107:0/3838282699 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78f0102a60 0x7f78f019a4e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.525+0000 7f78ee575640 1 -- 192.168.123.107:0/3838282699 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f78dc009660 con 0x7f78f0103c60 2026-03-09T20:44:56.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.525+0000 7f78ee575640 1 --2- 192.168.123.107:0/3838282699 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78f0103c60 0x7f78f019aa20 secure :-1 s=READY pgs=238 cs=0 l=1 rev1=1 crypto rx=0x7f78e400b730 tx=0x7f78e400bc00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:56.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.525+0000 7f78d3fff640 1 -- 192.168.123.107:0/3838282699 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f78e4004280 con 0x7f78f0103c60 2026-03-09T20:44:56.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.525+0000 7f78d3fff640 1 -- 
192.168.123.107:0/3838282699 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f78e40043e0 con 0x7f78f0103c60 2026-03-09T20:44:56.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.525+0000 7f78d3fff640 1 -- 192.168.123.107:0/3838282699 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f78e400ca90 con 0x7f78f0103c60 2026-03-09T20:44:56.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.525+0000 7f78f515e640 1 -- 192.168.123.107:0/3838282699 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f78f019fc00 con 0x7f78f0103c60 2026-03-09T20:44:56.526 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.526+0000 7f78f515e640 1 -- 192.168.123.107:0/3838282699 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f78f01a01a0 con 0x7f78f0103c60 2026-03-09T20:44:56.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.527+0000 7f78f515e640 1 -- 192.168.123.107:0/3838282699 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f78f0102e90 con 0x7f78f0103c60 2026-03-09T20:44:56.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.528+0000 7f78d3fff640 1 -- 192.168.123.107:0/3838282699 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f78e400cbf0 con 0x7f78f0103c60 2026-03-09T20:44:56.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.531+0000 7f78d3fff640 1 --2- 192.168.123.107:0/3838282699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78c8076170 0x7f78c8078630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:56.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.531+0000 7f78eed76640 1 --2- 
192.168.123.107:0/3838282699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78c8076170 0x7f78c8078630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:56.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.531+0000 7f78eed76640 1 --2- 192.168.123.107:0/3838282699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78c8076170 0x7f78c8078630 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f78dc002af0 tx=0x7f78dc0023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:56.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.532+0000 7f78d3fff640 1 -- 192.168.123.107:0/3838282699 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f78e4097c50 con 0x7f78f0103c60 2026-03-09T20:44:56.532 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.532+0000 7f78d3fff640 1 -- 192.168.123.107:0/3838282699 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f78e409b050 con 0x7f78f0103c60 2026-03-09T20:44:56.623 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.622+0000 7f78f515e640 1 -- 192.168.123.107:0/3838282699 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch ls", "target": ["mon-mgr", ""]}) v1 -- 0x7f78f01080d0 con 0x7f78c8076170 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.627+0000 7f78d3fff640 1 -- 192.168.123.107:0/3838282699 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+1150 (secure 0 0 0) 0x7f78f01080d0 con 0x7f78c8076170 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stdout:NAME 
PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager ?:9093,9094 1/1 45s ago 2m count:1 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter 2/2 45s ago 2m * 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stdout:crash 2/2 45s ago 2m * 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stdout:grafana ?:3000 1/1 45s ago 2m count:1 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stdout:mgr 2/2 45s ago 2m count:2 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stdout:mon 2/2 45s ago 2m vm07:192.168.123.107=vm07;vm10:192.168.123.110=vm10;count:2 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter ?:9100 2/2 45s ago 2m * 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stdout:osd 6 45s ago - 2026-03-09T20:44:56.627 INFO:teuthology.orchestra.run.vm07.stdout:prometheus ?:9095 1/1 45s ago 2m count:1 2026-03-09T20:44:56.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 -- 192.168.123.107:0/3838282699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78c8076170 msgr2=0x7f78c8078630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:56.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 --2- 192.168.123.107:0/3838282699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78c8076170 0x7f78c8078630 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f78dc002af0 tx=0x7f78dc0023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 -- 192.168.123.107:0/3838282699 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78f0103c60 msgr2=0x7f78f019aa20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:56.630 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 --2- 192.168.123.107:0/3838282699 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78f0103c60 0x7f78f019aa20 secure :-1 s=READY pgs=238 cs=0 l=1 rev1=1 crypto rx=0x7f78e400b730 tx=0x7f78e400bc00 comp rx=0 tx=0).stop 2026-03-09T20:44:56.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 -- 192.168.123.107:0/3838282699 shutdown_connections 2026-03-09T20:44:56.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 --2- 192.168.123.107:0/3838282699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78c8076170 0x7f78c8078630 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 --2- 192.168.123.107:0/3838282699 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78f0103c60 0x7f78f019aa20 unknown :-1 s=CLOSED pgs=238 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 --2- 192.168.123.107:0/3838282699 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78f0102a60 0x7f78f019a4e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:56.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 -- 192.168.123.107:0/3838282699 >> 192.168.123.107:0/3838282699 conn(0x7f78f00fe250 msgr2=0x7f78f00ffa30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:56.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 -- 192.168.123.107:0/3838282699 shutdown_connections 2026-03-09T20:44:56.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:56.630+0000 7f78f515e640 1 -- 
192.168.123.107:0/3838282699 wait complete. 2026-03-09T20:44:56.678 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph orch host ls' 2026-03-09T20:44:56.821 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:56.875 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:56 vm07 ceph-mon[49120]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:56.875 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:56 vm07 ceph-mon[49120]: from='client.24267 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:57.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.056+0000 7ff24be4d640 1 -- 192.168.123.107:0/3944829062 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff244073b40 msgr2=0x7ff244073fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:57.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.056+0000 7ff24be4d640 1 --2- 192.168.123.107:0/3944829062 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff244073b40 0x7ff244073fa0 secure :-1 s=READY pgs=239 cs=0 l=1 rev1=1 crypto rx=0x7ff2340099b0 tx=0x7ff23402f240 comp rx=0 tx=0).stop 2026-03-09T20:44:57.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.057+0000 7ff24be4d640 1 -- 192.168.123.107:0/3944829062 shutdown_connections 2026-03-09T20:44:57.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.057+0000 7ff24be4d640 1 --2- 192.168.123.107:0/3944829062 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff244073b40 0x7ff244073fa0 unknown :-1 s=CLOSED pgs=239 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T20:44:57.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.057+0000 7ff24be4d640 1 --2- 192.168.123.107:0/3944829062 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff2440751a0 0x7ff244073600 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.057+0000 7ff24be4d640 1 -- 192.168.123.107:0/3944829062 >> 192.168.123.107:0/3944829062 conn(0x7ff2440fbfb0 msgr2=0x7ff2440fe3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:57.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.057+0000 7ff24be4d640 1 -- 192.168.123.107:0/3944829062 shutdown_connections 2026-03-09T20:44:57.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.057+0000 7ff24be4d640 1 -- 192.168.123.107:0/3944829062 wait complete. 2026-03-09T20:44:57.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.057+0000 7ff24be4d640 1 Processor -- start 2026-03-09T20:44:57.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff24be4d640 1 -- start start 2026-03-09T20:44:57.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff24be4d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff244073b40 0x7ff244071670 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:57.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff24be4d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff2440751a0 0x7ff244071bb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:57.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff24be4d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff2440730b0 con 0x7ff244073b40 2026-03-09T20:44:57.058 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff24be4d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff244073220 con 0x7ff2440751a0 2026-03-09T20:44:57.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff249bc2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff244073b40 0x7ff244071670 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:57.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff249bc2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff244073b40 0x7ff244071670 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:47870/0 (socket says 192.168.123.107:47870) 2026-03-09T20:44:57.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff249bc2640 1 -- 192.168.123.107:0/755910231 learned_addr learned my addr 192.168.123.107:0/755910231 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:57.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff249bc2640 1 -- 192.168.123.107:0/755910231 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff2440751a0 msgr2=0x7ff244071bb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:57.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff2493c1640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff2440751a0 0x7ff244071bb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:57.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff249bc2640 1 --2- 
192.168.123.107:0/755910231 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff2440751a0 0x7ff244071bb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff249bc2640 1 -- 192.168.123.107:0/755910231 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff234009660 con 0x7ff244073b40 2026-03-09T20:44:57.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.058+0000 7ff2493c1640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff2440751a0 0x7ff244071bb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:44:57.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.059+0000 7ff249bc2640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff244073b40 0x7ff244071670 secure :-1 s=READY pgs=240 cs=0 l=1 rev1=1 crypto rx=0x7ff23800d8d0 tx=0x7ff23800dda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:57.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.059+0000 7ff232ffd640 1 -- 192.168.123.107:0/755910231 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff238004490 con 0x7ff244073b40 2026-03-09T20:44:57.060 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.059+0000 7ff24be4d640 1 -- 192.168.123.107:0/755910231 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff244104380 con 0x7ff244073b40 2026-03-09T20:44:57.060 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.059+0000 7ff24be4d640 1 -- 192.168.123.107:0/755910231 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_subscribe({osdmap=0}) v3 -- 0x7ff244072310 con 0x7ff244073b40 2026-03-09T20:44:57.060 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.059+0000 7ff232ffd640 1 -- 192.168.123.107:0/755910231 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7ff23800bd00 con 0x7ff244073b40 2026-03-09T20:44:57.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.060+0000 7ff232ffd640 1 -- 192.168.123.107:0/755910231 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff238010460 con 0x7ff244073b40 2026-03-09T20:44:57.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.060+0000 7ff24be4d640 1 -- 192.168.123.107:0/755910231 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff210005350 con 0x7ff244073b40 2026-03-09T20:44:57.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.063+0000 7ff232ffd640 1 -- 192.168.123.107:0/755910231 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7ff23800b840 con 0x7ff244073b40 2026-03-09T20:44:57.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.064+0000 7ff232ffd640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff20c076290 0x7ff20c078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:57.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.064+0000 7ff2493c1640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff20c076290 0x7ff20c078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:57.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.064+0000 7ff232ffd640 1 -- 
192.168.123.107:0/755910231 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7ff238097a20 con 0x7ff244073b40 2026-03-09T20:44:57.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.064+0000 7ff232ffd640 1 -- 192.168.123.107:0/755910231 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7ff2380c58c0 con 0x7ff244073b40 2026-03-09T20:44:57.065 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.065+0000 7ff2493c1640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff20c076290 0x7ff20c078750 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7ff244072da0 tx=0x7ff2340023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:57.156 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.156+0000 7ff24be4d640 1 -- 192.168.123.107:0/755910231 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""]}) v1 -- 0x7ff210002bf0 con 0x7ff20c076290 2026-03-09T20:44:57.157 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.157+0000 7ff232ffd640 1 -- 192.168.123.107:0/755910231 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+139 (secure 0 0 0) 0x7ff210002bf0 con 0x7ff20c076290 2026-03-09T20:44:57.157 INFO:teuthology.orchestra.run.vm07.stdout:HOST ADDR LABELS STATUS 2026-03-09T20:44:57.157 INFO:teuthology.orchestra.run.vm07.stdout:vm07 192.168.123.107 2026-03-09T20:44:57.157 INFO:teuthology.orchestra.run.vm07.stdout:vm10 192.168.123.110 2026-03-09T20:44:57.157 INFO:teuthology.orchestra.run.vm07.stdout:2 hosts in cluster 2026-03-09T20:44:57.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.159+0000 7ff24be4d640 1 -- 
192.168.123.107:0/755910231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff20c076290 msgr2=0x7ff20c078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:57.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.159+0000 7ff24be4d640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff20c076290 0x7ff20c078750 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7ff244072da0 tx=0x7ff2340023d0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.159+0000 7ff24be4d640 1 -- 192.168.123.107:0/755910231 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff244073b40 msgr2=0x7ff244071670 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:57.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.160+0000 7ff24be4d640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff244073b40 0x7ff244071670 secure :-1 s=READY pgs=240 cs=0 l=1 rev1=1 crypto rx=0x7ff23800d8d0 tx=0x7ff23800dda0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.160+0000 7ff24be4d640 1 -- 192.168.123.107:0/755910231 shutdown_connections 2026-03-09T20:44:57.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.160+0000 7ff24be4d640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7ff20c076290 0x7ff20c078750 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.160+0000 7ff24be4d640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff2440751a0 0x7ff244071bb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T20:44:57.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.160+0000 7ff24be4d640 1 --2- 192.168.123.107:0/755910231 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff244073b40 0x7ff244071670 unknown :-1 s=CLOSED pgs=240 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.160+0000 7ff24be4d640 1 -- 192.168.123.107:0/755910231 >> 192.168.123.107:0/755910231 conn(0x7ff2440fbfb0 msgr2=0x7ff2440fdad0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:57.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.160+0000 7ff24be4d640 1 -- 192.168.123.107:0/755910231 shutdown_connections 2026-03-09T20:44:57.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.160+0000 7ff24be4d640 1 -- 192.168.123.107:0/755910231 wait complete. 2026-03-09T20:44:57.206 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph orch device ls' 2026-03-09T20:44:57.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:56 vm10 ceph-mon[57011]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:57.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:56 vm10 ceph-mon[57011]: from='client.24267 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:57.354 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:57.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.611+0000 7f7cb5023640 1 -- 192.168.123.107:0/1351958930 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cb0103c80 msgr2=0x7f7cb0104100 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:57.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.611+0000 7f7cb5023640 1 --2- 192.168.123.107:0/1351958930 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cb0103c80 0x7f7cb0104100 secure :-1 s=READY pgs=241 cs=0 l=1 rev1=1 crypto rx=0x7f7c9c0099b0 tx=0x7f7c9c02f220 comp rx=0 tx=0).stop 2026-03-09T20:44:57.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.612+0000 7f7cb5023640 1 -- 192.168.123.107:0/1351958930 shutdown_connections 2026-03-09T20:44:57.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.612+0000 7f7cb5023640 1 --2- 192.168.123.107:0/1351958930 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cb0103c80 0x7f7cb0104100 unknown :-1 s=CLOSED pgs=241 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.612+0000 7f7cb5023640 1 --2- 192.168.123.107:0/1351958930 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7cb0102a80 0x7f7cb0102e80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.612+0000 7f7cb5023640 1 -- 192.168.123.107:0/1351958930 >> 192.168.123.107:0/1351958930 conn(0x7f7cb00fe250 msgr2=0x7f7cb0100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:57.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.612+0000 7f7cb5023640 1 -- 192.168.123.107:0/1351958930 shutdown_connections 2026-03-09T20:44:57.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.612+0000 7f7cb5023640 1 -- 192.168.123.107:0/1351958930 wait complete. 
2026-03-09T20:44:57.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.612+0000 7f7cb5023640 1 Processor -- start 2026-03-09T20:44:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.613+0000 7f7cb5023640 1 -- start start 2026-03-09T20:44:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.613+0000 7f7cb5023640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cb0102a80 0x7f7cb019a430 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.613+0000 7f7cb5023640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7cb0103c80 0x7f7cb019a970 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.613+0000 7f7cb5023640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7cb019af40 con 0x7f7cb0102a80 2026-03-09T20:44:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.613+0000 7f7cb5023640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7cb019b0b0 con 0x7f7cb0103c80 2026-03-09T20:44:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.613+0000 7f7cae575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7cb0103c80 0x7f7cb019a970 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.613+0000 7f7cae575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7cb0103c80 0x7f7cb019a970 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:58814/0 (socket says 192.168.123.107:58814) 2026-03-09T20:44:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.613+0000 7f7cae575640 1 -- 192.168.123.107:0/828696178 learned_addr learned my addr 192.168.123.107:0/828696178 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.613+0000 7f7caed76640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cb0102a80 0x7f7cb019a430 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:57.614 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.614+0000 7f7caed76640 1 -- 192.168.123.107:0/828696178 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7cb0103c80 msgr2=0x7f7cb019a970 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:57.614 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.614+0000 7f7caed76640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7cb0103c80 0x7f7cb019a970 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.614 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.614+0000 7f7caed76640 1 -- 192.168.123.107:0/828696178 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7c9c009660 con 0x7f7cb0102a80 2026-03-09T20:44:57.614 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.614+0000 7f7cae575640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7cb0103c80 0x7f7cb019a970 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T20:44:57.614 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.614+0000 7f7caed76640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cb0102a80 0x7f7cb019a430 secure :-1 s=READY pgs=242 cs=0 l=1 rev1=1 crypto rx=0x7f7ca400d6b0 tx=0x7f7ca400db80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.614+0000 7f7c93fff640 1 -- 192.168.123.107:0/828696178 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7ca4004280 con 0x7f7cb0102a80 2026-03-09T20:44:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.614+0000 7f7c93fff640 1 -- 192.168.123.107:0/828696178 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7ca400bd00 con 0x7f7cb0102a80 2026-03-09T20:44:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.614+0000 7f7c93fff640 1 -- 192.168.123.107:0/828696178 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7ca4010460 con 0x7f7cb0102a80 2026-03-09T20:44:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.614+0000 7f7cb5023640 1 -- 192.168.123.107:0/828696178 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7cb019fb50 con 0x7f7cb0102a80 2026-03-09T20:44:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.615+0000 7f7cb5023640 1 -- 192.168.123.107:0/828696178 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7cb0075910 con 0x7f7cb0102a80 2026-03-09T20:44:57.619 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.616+0000 7f7c93fff640 1 -- 192.168.123.107:0/828696178 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f7ca40105c0 con 
0x7f7cb0102a80 2026-03-09T20:44:57.619 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.616+0000 7f7cb5023640 1 -- 192.168.123.107:0/828696178 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7c7c005350 con 0x7f7cb0102a80 2026-03-09T20:44:57.619 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.617+0000 7f7c93fff640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7c880761c0 0x7f7c88078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:57.619 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.617+0000 7f7c93fff640 1 -- 192.168.123.107:0/828696178 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f7ca40974e0 con 0x7f7cb0102a80 2026-03-09T20:44:57.619 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.617+0000 7f7cae575640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7c880761c0 0x7f7c88078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:57.620 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.620+0000 7f7cae575640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7c880761c0 0x7f7c88078680 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f7cb019b950 tx=0x7f7c9c03a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:57.620 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.620+0000 7f7c93fff640 1 -- 192.168.123.107:0/828696178 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 
==== 72+0+186382 (secure 0 0 0) 0x7f7ca4061ef0 con 0x7f7cb0102a80 2026-03-09T20:44:57.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.716+0000 7f7cb5023640 1 -- 192.168.123.107:0/828696178 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch device ls", "target": ["mon-mgr", ""]}) v1 -- 0x7f7c7c002bf0 con 0x7f7c880761c0 2026-03-09T20:44:57.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.720+0000 7f7c93fff640 1 -- 192.168.123.107:0/828696178 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+1617 (secure 0 0 0) 0x7f7c7c002bf0 con 0x7f7c880761c0 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stdout:vm07 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 45s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stdout:vm07 /dev/vdb hdd DWNBRSTVMM07001 20.0G Yes 45s ago 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stdout:vm07 /dev/vdc hdd DWNBRSTVMM07002 20.0G No 45s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stdout:vm07 /dev/vdd hdd DWNBRSTVMM07003 20.0G No 45s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stdout:vm07 /dev/vde hdd DWNBRSTVMM07004 20.0G No 45s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stdout:vm10 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 19s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stdout:vm10 /dev/vdb hdd DWNBRSTVMM10001 20.0G Yes 19s ago 2026-03-09T20:44:57.726 
INFO:teuthology.orchestra.run.vm07.stdout:vm10 /dev/vdc hdd DWNBRSTVMM10002 20.0G No 19s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stdout:vm10 /dev/vdd hdd DWNBRSTVMM10003 20.0G No 19s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stdout:vm10 /dev/vde hdd DWNBRSTVMM10004 20.0G No 19s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.722+0000 7f7cb5023640 1 -- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7c880761c0 msgr2=0x7f7c88078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.722+0000 7f7cb5023640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7c880761c0 0x7f7c88078680 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f7cb019b950 tx=0x7f7c9c03a040 comp rx=0 tx=0).stop 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.722+0000 7f7cb5023640 1 -- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cb0102a80 msgr2=0x7f7cb019a430 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.722+0000 7f7cb5023640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cb0102a80 0x7f7cb019a430 secure :-1 s=READY pgs=242 cs=0 l=1 rev1=1 crypto rx=0x7f7ca400d6b0 tx=0x7f7ca400db80 comp rx=0 tx=0).stop 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.723+0000 7f7cb5023640 1 -- 192.168.123.107:0/828696178 
shutdown_connections 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.723+0000 7f7cb5023640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7c880761c0 0x7f7c88078680 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.723+0000 7f7cb5023640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7cb0103c80 0x7f7cb019a970 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.723+0000 7f7cb5023640 1 --2- 192.168.123.107:0/828696178 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7cb0102a80 0x7f7cb019a430 unknown :-1 s=CLOSED pgs=242 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.723+0000 7f7cb5023640 1 -- 192.168.123.107:0/828696178 >> 192.168.123.107:0/828696178 conn(0x7f7cb00fe250 msgr2=0x7f7cb00ffd30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.723+0000 7f7cb5023640 1 -- 192.168.123.107:0/828696178 shutdown_connections 2026-03-09T20:44:57.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:57.723+0000 7f7cb5023640 1 -- 192.168.123.107:0/828696178 wait complete. 2026-03-09T20:44:57.792 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-09T20:44:57.794 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local 2026-03-09T20:44:57.794 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph fs volume create cephfs --placement=4' 2026-03-09T20:44:57.958 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:44:57.985 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:57 vm07 ceph-mon[49120]: from='client.14452 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:57.985 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:57 vm07 ceph-mon[49120]: from='client.14456 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:58.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:57 vm10 ceph-mon[57011]: from='client.14452 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:58.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:57 vm10 ceph-mon[57011]: from='client.14456 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:58.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.187+0000 7f0c21200640 1 -- 192.168.123.107:0/3320936067 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c1c103c60 msgr2=0x7f0c1c1040e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:58.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.187+0000 7f0c21200640 1 --2- 192.168.123.107:0/3320936067 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c1c103c60 0x7f0c1c1040e0 secure :-1 s=READY pgs=243 cs=0 l=1 rev1=1 crypto 
rx=0x7f0c100099b0 tx=0x7f0c1002f220 comp rx=0 tx=0).stop 2026-03-09T20:44:58.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.187+0000 7f0c21200640 1 -- 192.168.123.107:0/3320936067 shutdown_connections 2026-03-09T20:44:58.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.187+0000 7f0c21200640 1 --2- 192.168.123.107:0/3320936067 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c1c103c60 0x7f0c1c1040e0 unknown :-1 s=CLOSED pgs=243 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:58.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.187+0000 7f0c21200640 1 --2- 192.168.123.107:0/3320936067 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c1c102a60 0x7f0c1c102e60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:58.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.187+0000 7f0c21200640 1 -- 192.168.123.107:0/3320936067 >> 192.168.123.107:0/3320936067 conn(0x7f0c1c0fe250 msgr2=0x7f0c1c100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:58.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.188+0000 7f0c21200640 1 -- 192.168.123.107:0/3320936067 shutdown_connections 2026-03-09T20:44:58.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.188+0000 7f0c21200640 1 -- 192.168.123.107:0/3320936067 wait complete. 
2026-03-09T20:44:58.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c21200640 1 Processor -- start 2026-03-09T20:44:58.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c21200640 1 -- start start 2026-03-09T20:44:58.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c21200640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c1c102a60 0x7f0c1c19a440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:58.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c21200640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c1c103c60 0x7f0c1c19a980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:58.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c21200640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0c1c19af50 con 0x7f0c1c102a60 2026-03-09T20:44:58.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c21200640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0c1c19b0c0 con 0x7f0c1c103c60 2026-03-09T20:44:58.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c1ad76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c1c102a60 0x7f0c1c19a440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:58.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c1ad76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c1c102a60 0x7f0c1c19a440 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:49958/0 (socket says 192.168.123.107:49958) 2026-03-09T20:44:58.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c1ad76640 1 -- 192.168.123.107:0/3250750614 learned_addr learned my addr 192.168.123.107:0/3250750614 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:44:58.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c1ad76640 1 -- 192.168.123.107:0/3250750614 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c1c103c60 msgr2=0x7f0c1c19a980 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T20:44:58.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c1a575640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c1c103c60 0x7f0c1c19a980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:58.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c1ad76640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c1c103c60 0x7f0c1c19a980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:58.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.189+0000 7f0c1ad76640 1 -- 192.168.123.107:0/3250750614 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0c10009660 con 0x7f0c1c102a60 2026-03-09T20:44:58.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.190+0000 7f0c1ad76640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c1c102a60 0x7f0c1c19a440 secure :-1 s=READY pgs=244 cs=0 l=1 rev1=1 crypto rx=0x7f0c0400d930 tx=0x7f0c0400de00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:44:58.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.190+0000 7f0bfbfff640 1 -- 192.168.123.107:0/3250750614 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0c04004490 con 0x7f0c1c102a60 2026-03-09T20:44:58.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.190+0000 7f0c21200640 1 -- 192.168.123.107:0/3250750614 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0c1c19fb60 con 0x7f0c1c102a60 2026-03-09T20:44:58.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.191+0000 7f0c21200640 1 -- 192.168.123.107:0/3250750614 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0c1c1a00b0 con 0x7f0c1c102a60 2026-03-09T20:44:58.192 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.190+0000 7f0bfbfff640 1 -- 192.168.123.107:0/3250750614 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0c04007620 con 0x7f0c1c102a60 2026-03-09T20:44:58.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.193+0000 7f0bfbfff640 1 -- 192.168.123.107:0/3250750614 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0c04010460 con 0x7f0c1c102a60 2026-03-09T20:44:58.195 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.193+0000 7f0c21200640 1 -- 192.168.123.107:0/3250750614 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0c1c10b690 con 0x7f0c1c102a60 2026-03-09T20:44:58.196 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.196+0000 7f0bfbfff640 1 -- 192.168.123.107:0/3250750614 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f0c04010680 con 0x7f0c1c102a60 2026-03-09T20:44:58.196 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.196+0000 
7f0bfbfff640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0bec076080 0x7f0bec078540 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:44:58.196 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.196+0000 7f0bfbfff640 1 -- 192.168.123.107:0/3250750614 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 4618+0+0 (secure 0 0 0) 0x7f0c04098a70 con 0x7f0c1c102a60 2026-03-09T20:44:58.196 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.196+0000 7f0bfbfff640 1 -- 192.168.123.107:0/3250750614 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f0c040026e0 con 0x7f0c1c102a60 2026-03-09T20:44:58.197 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.197+0000 7f0c1a575640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0bec076080 0x7f0bec078540 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:44:58.198 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.198+0000 7f0c1a575640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0bec076080 0x7f0bec078540 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f0c10004870 tx=0x7f0c100047e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:44:58.298 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:58.298+0000 7f0c21200640 1 -- 192.168.123.107:0/3250750614 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "fs volume create", "name": "cephfs", "placement": "4", "target": ["mon-mgr", ""]}) v1 
-- 0x7f0c1c1a0360 con 0x7f0bec076080 2026-03-09T20:44:59.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:58 vm07 ceph-mon[49120]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:59.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:58 vm07 ceph-mon[49120]: from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:59.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:58 vm07 ceph-mon[49120]: from='client.14464 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:59.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:58 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch 2026-03-09T20:44:59.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:58 vm10 ceph-mon[57011]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:44:59.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:58 vm10 ceph-mon[57011]: from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:59.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:58 vm10 ceph-mon[57011]: from='client.14464 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:44:59.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:58 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch 2026-03-09T20:44:59.912 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.912+0000 7f0bfbfff640 1 -- 192.168.123.107:0/3250750614 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== 
mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7f0c1c1a0360 con 0x7f0bec076080 2026-03-09T20:44:59.915 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 -- 192.168.123.107:0/3250750614 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0bec076080 msgr2=0x7f0bec078540 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:59.915 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0bec076080 0x7f0bec078540 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f0c10004870 tx=0x7f0c100047e0 comp rx=0 tx=0).stop 2026-03-09T20:44:59.915 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 -- 192.168.123.107:0/3250750614 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c1c102a60 msgr2=0x7f0c1c19a440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:44:59.915 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c1c102a60 0x7f0c1c19a440 secure :-1 s=READY pgs=244 cs=0 l=1 rev1=1 crypto rx=0x7f0c0400d930 tx=0x7f0c0400de00 comp rx=0 tx=0).stop 2026-03-09T20:44:59.915 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 -- 192.168.123.107:0/3250750614 shutdown_connections 2026-03-09T20:44:59.915 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0bec076080 0x7f0bec078540 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:59.915 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c1c103c60 0x7f0c1c19a980 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:59.915 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 --2- 192.168.123.107:0/3250750614 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c1c102a60 0x7f0c1c19a440 unknown :-1 s=CLOSED pgs=244 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:44:59.915 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 -- 192.168.123.107:0/3250750614 >> 192.168.123.107:0/3250750614 conn(0x7f0c1c0fe250 msgr2=0x7f0c1c0ffd50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:44:59.915 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 -- 192.168.123.107:0/3250750614 shutdown_connections 2026-03-09T20:44:59.915 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:44:59.915+0000 7f0c21200640 1 -- 192.168.123.107:0/3250750614 wait complete. 
2026-03-09T20:44:59.961 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph fs dump'
2026-03-09T20:45:00.136 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:45:00.178 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:59 vm07 ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07[49116]: 2026-03-09T20:44:59.885+0000 7fac9e71b640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-09T20:45:00.178 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:59 vm07 ceph-mon[49120]: from='client.14468 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "4", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T20:45:00.178 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:59 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]': finished
2026-03-09T20:45:00.178 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:59 vm07 ceph-mon[49120]: osdmap e38: 6 total, 6 up, 6 in
2026-03-09T20:45:00.178 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:59 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
2026-03-09T20:45:00.178 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:59 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]': finished
2026-03-09T20:45:00.178 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:44:59 vm07 ceph-mon[49120]: osdmap
e39: 6 total, 6 up, 6 in
2026-03-09T20:45:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:59 vm10 ceph-mon[57011]: from='client.14468 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "4", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T20:45:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:59 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]': finished
2026-03-09T20:45:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:59 vm10 ceph-mon[57011]: osdmap e38: 6 total, 6 up, 6 in
2026-03-09T20:45:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:59 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
2026-03-09T20:45:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:59 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]': finished
2026-03-09T20:45:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:44:59 vm10 ceph-mon[57011]: osdmap e39: 6 total, 6 up, 6 in
2026-03-09T20:45:00.652 INFO:teuthology.orchestra.run.vm07.stdout:e2
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1)
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:epoch 2
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:flags 12 joinable allow_snaps allow_multimds_snaps
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:44:59.885511+0000
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:root 0
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {}
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 1
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:in
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:up {}
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:failed
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:damaged
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:stopped
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3]
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:balancer
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 0
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.486+0000 7f661d98a640 1 -- 192.168.123.107:0/1758194008 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6618103c60 msgr2=0x7f66181040e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.486+0000 7f661d98a640 1 --2- 192.168.123.107:0/1758194008 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6618103c60 0x7f66181040e0 secure :-1 s=READY pgs=245 cs=0 l=1 rev1=1 crypto rx=0x7f660c0099b0 tx=0x7f660c02f220 comp rx=0 tx=0).stop
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 -- 192.168.123.107:0/1758194008 shutdown_connections
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 --2- 192.168.123.107:0/1758194008 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6618103c60 0x7f66181040e0 unknown :-1 s=CLOSED pgs=245 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000
7f661d98a640 1 --2- 192.168.123.107:0/1758194008 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6618102a60 0x7f6618102e60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 -- 192.168.123.107:0/1758194008 >> 192.168.123.107:0/1758194008 conn(0x7f66180fe250 msgr2=0x7f6618100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 -- 192.168.123.107:0/1758194008 shutdown_connections 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 -- 192.168.123.107:0/1758194008 wait complete. 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 Processor -- start 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 -- start start 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6618102a60 0x7f661819a340 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6618103c60 0x7f661819a880 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f661819ae50 con 0x7f6618102a60 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.487+0000 7f661d98a640 1 -- --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f661819afc0 con 0x7f6618103c60 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.488+0000 7f6616ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6618102a60 0x7f661819a340 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.488+0000 7f6616ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6618102a60 0x7f661819a340 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:49978/0 (socket says 192.168.123.107:49978) 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.488+0000 7f6616ffd640 1 -- 192.168.123.107:0/2974278133 learned_addr learned my addr 192.168.123.107:0/2974278133 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:00.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.488+0000 7f66167fc640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6618103c60 0x7f661819a880 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.488+0000 7f6616ffd640 1 -- 192.168.123.107:0/2974278133 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6618103c60 msgr2=0x7f661819a880 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.488+0000 7f6616ffd640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 
conn(0x7f6618103c60 0x7f661819a880 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.488+0000 7f6616ffd640 1 -- 192.168.123.107:0/2974278133 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f660c009660 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.488+0000 7f6616ffd640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6618102a60 0x7f661819a340 secure :-1 s=READY pgs=246 cs=0 l=1 rev1=1 crypto rx=0x7f660400e990 tx=0x7f660400ee60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.488+0000 7f661c988640 1 -- 192.168.123.107:0/2974278133 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6604009800 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.488+0000 7f661c988640 1 -- 192.168.123.107:0/2974278133 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f6604004590 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.489+0000 7f661c988640 1 -- 192.168.123.107:0/2974278133 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6604010640 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.489+0000 7f661d98a640 1 -- 192.168.123.107:0/2974278133 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f661819fa60 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.489+0000 7f661d98a640 1 -- 
192.168.123.107:0/2974278133 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f661819ff30 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.490+0000 7f661c988640 1 -- 192.168.123.107:0/2974278133 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f660400cd30 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.490+0000 7f661d98a640 1 -- 192.168.123.107:0/2974278133 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f65dc005350 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.493+0000 7f661c988640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65ec075f60 0x7f65ec078420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.493+0000 7f66167fc640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65ec075f60 0x7f65ec078420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.493+0000 7f661c988640 1 -- 192.168.123.107:0/2974278133 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f660401d030 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.493+0000 7f66167fc640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65ec075f60 0x7f65ec078420 secure :-1 
s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f660c002c20 tx=0x7f660c03a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.494+0000 7f661c988640 1 -- 192.168.123.107:0/2974278133 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f6604099dd0 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.597+0000 7f661d98a640 1 -- 192.168.123.107:0/2974278133 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f65dc005e10 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.597+0000 7f661c988640 1 -- 192.168.123.107:0/2974278133 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 2 v2) v1 ==== 75+0+1114 (secure 0 0 0) 0x7f6604060f80 con 0x7f6618102a60 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 2 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.599+0000 7f661d98a640 1 -- 192.168.123.107:0/2974278133 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65ec075f60 msgr2=0x7f65ec078420 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.599+0000 7f661d98a640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65ec075f60 0x7f65ec078420 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f660c002c20 tx=0x7f660c03a040 comp rx=0 tx=0).stop 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.599+0000 7f661d98a640 1 -- 192.168.123.107:0/2974278133 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6618102a60 msgr2=0x7f661819a340 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.599+0000 7f661d98a640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6618102a60 0x7f661819a340 secure :-1 s=READY pgs=246 cs=0 l=1 rev1=1 crypto rx=0x7f660400e990 tx=0x7f660400ee60 comp rx=0 tx=0).stop 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.599+0000 7f661d98a640 1 -- 192.168.123.107:0/2974278133 shutdown_connections 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.599+0000 7f661d98a640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65ec075f60 0x7f65ec078420 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.599+0000 7f661d98a640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6618103c60 0x7f661819a880 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.600+0000 7f661d98a640 1 --2- 192.168.123.107:0/2974278133 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6618102a60 0x7f661819a340 unknown :-1 s=CLOSED pgs=246 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.600+0000 7f661d98a640 1 -- 192.168.123.107:0/2974278133 >> 192.168.123.107:0/2974278133 conn(0x7f66180fe250 msgr2=0x7f66180ffce0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.600+0000 7f661d98a640 1 -- 
192.168.123.107:0/2974278133 shutdown_connections
2026-03-09T20:45:00.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:00.600+0000 7f661d98a640 1 -- 192.168.123.107:0/2974278133 wait complete.
2026-03-09T20:45:01.033 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-09T20:45:01.036 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local
2026-03-09T20:45:01.036 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph fs set cephfs max_mds 2'
2026-03-09T20:45:01.101 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: pgmap v76: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail
2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data":
"cephfs.cephfs.data"}]': finished 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: osdmap e40: 6 total, 6 up, 6 in 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: fsmap cephfs:0 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: Saving service mds.cephfs spec with placement count:4 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.rovdbp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' 
entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.rovdbp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: Deploying daemon mds.cephfs.vm07.rovdbp on vm07 2026-03-09T20:45:01.102 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:00 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2974278133' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: pgmap v76: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX) 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' 
entity='mgr.vm07.xjrvch' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: osdmap e40: 6 total, 6 up, 6 in 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: fsmap cephfs:0 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: Saving service mds.cephfs spec with placement count:4 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.rovdbp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:45:01.287 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.rovdbp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: Deploying daemon mds.cephfs.vm07.rovdbp on vm07 2026-03-09T20:45:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:00 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/2974278133' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:45:01.374 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:01.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.645+0000 7f2396762640 1 -- 192.168.123.107:0/900427698 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23901019f0 msgr2=0x7f2390101e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:01.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.645+0000 7f2396762640 1 --2- 192.168.123.107:0/900427698 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23901019f0 0x7f2390101e70 secure :-1 s=READY pgs=249 cs=0 l=1 rev1=1 crypto rx=0x7f237c009a00 tx=0x7f237c02f280 comp rx=0 tx=0).stop 2026-03-09T20:45:01.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.646+0000 7f2396762640 1 -- 192.168.123.107:0/900427698 shutdown_connections 2026-03-09T20:45:01.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.646+0000 7f2396762640 1 --2- 
192.168.123.107:0/900427698 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23901019f0 0x7f2390101e70 unknown :-1 s=CLOSED pgs=249 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:01.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.646+0000 7f2396762640 1 --2- 192.168.123.107:0/900427698 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f23901007f0 0x7f2390100bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:01.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.646+0000 7f2396762640 1 -- 192.168.123.107:0/900427698 >> 192.168.123.107:0/900427698 conn(0x7f23900fbf80 msgr2=0x7f23900fe3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:01.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.646+0000 7f2396762640 1 -- 192.168.123.107:0/900427698 shutdown_connections 2026-03-09T20:45:01.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.647+0000 7f2396762640 1 -- 192.168.123.107:0/900427698 wait complete. 
2026-03-09T20:45:01.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.647+0000 7f2396762640  1  Processor -- start
2026-03-09T20:45:01.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.647+0000 7f2396762640  1 -- start start
2026-03-09T20:45:01.647 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.647+0000 7f2396762640  1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f23901007f0 0x7f2390074f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:45:01.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f2396762640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23901019f0 0x7f2390073560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:45:01.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f2396762640  1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f23900754e0 con 0x7f23901019f0
2026-03-09T20:45:01.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f238f7fe640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23901019f0 0x7f2390073560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:45:01.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f238f7fe640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23901019f0 0x7f2390073560 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:49998/0 (socket says 192.168.123.107:49998)
2026-03-09T20:45:01.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f238f7fe640  1 -- 192.168.123.107:0/2276706709 learned_addr learned my addr 192.168.123.107:0/2276706709 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:45:01.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f2396762640  1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2390073ad0 con 0x7f23901007f0
2026-03-09T20:45:01.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f238ffff640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f23901007f0 0x7f2390074f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:45:01.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f238f7fe640  1 -- 192.168.123.107:0/2276706709 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f23901007f0 msgr2=0x7f2390074f10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:45:01.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f238f7fe640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f23901007f0 0x7f2390074f10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:01.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f238f7fe640  1 -- 192.168.123.107:0/2276706709 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f237c009660 con 0x7f23901019f0
2026-03-09T20:45:01.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.648+0000 7f238ffff640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f23901007f0 0x7f2390074f10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-09T20:45:01.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.649+0000 7f238f7fe640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23901019f0 0x7f2390073560 secure :-1 s=READY pgs=250 cs=0 l=1 rev1=1 crypto rx=0x7f237c009b30 tx=0x7f237c0043d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:45:01.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.649+0000 7f238d7fa640  1 -- 192.168.123.107:0/2276706709 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f237c02fae0 con 0x7f23901019f0
2026-03-09T20:45:01.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.649+0000 7f238d7fa640  1 -- 192.168.123.107:0/2276706709 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f237c02fc40 con 0x7f23901019f0
2026-03-09T20:45:01.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.649+0000 7f238d7fa640  1 -- 192.168.123.107:0/2276706709 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f237c041a10 con 0x7f23901019f0
2026-03-09T20:45:01.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.649+0000 7f2396762640  1 -- 192.168.123.107:0/2276706709 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2390073d50 con 0x7f23901019f0
2026-03-09T20:45:01.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.649+0000 7f2396762640  1 -- 192.168.123.107:0/2276706709 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2390074240 con 0x7f23901019f0
2026-03-09T20:45:01.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.650+0000 7f2396762640  1 -- 192.168.123.107:0/2276706709 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2354005350 con 0x7f23901019f0
2026-03-09T20:45:01.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.651+0000 7f238d7fa640  1 -- 192.168.123.107:0/2276706709 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f237c03f070 con 0x7f23901019f0
2026-03-09T20:45:01.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.651+0000 7f238d7fa640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2364076290 0x7f2364078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:45:01.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.651+0000 7f238d7fa640  1 -- 192.168.123.107:0/2276706709 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f237c0bc0d0 con 0x7f23901019f0
2026-03-09T20:45:01.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.654+0000 7f238ffff640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2364076290 0x7f2364078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:45:01.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.654+0000 7f238d7fa640  1 -- 192.168.123.107:0/2276706709 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0  v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f237c0856e0 con 0x7f23901019f0
2026-03-09T20:45:01.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.654+0000 7f238ffff640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2364076290 0x7f2364078750 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f2390101850 tx=0x7f2380009290 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:45:01.775 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.772+0000 7f2396762640  1 -- 192.168.123.107:0/2276706709 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "2"} v 0) v1 -- 0x7f2354005b80 con 0x7f23901019f0
2026-03-09T20:45:01.997 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:01.997+0000 7f238d7fa640  1 -- 192.168.123.107:0/2276706709 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "2"}]=0  v3) v1 ==== 105+0+0 (secure 0 0 0) 0x7f237c085080 con 0x7f23901019f0
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 -- 192.168.123.107:0/2276706709 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2364076290 msgr2=0x7f2364078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2364076290 0x7f2364078750 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f2390101850 tx=0x7f2380009290 comp rx=0 tx=0).stop
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 -- 192.168.123.107:0/2276706709 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23901019f0 msgr2=0x7f2390073560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23901019f0 0x7f2390073560 secure :-1 s=READY pgs=250 cs=0 l=1 rev1=1 crypto rx=0x7f237c009b30 tx=0x7f237c0043d0 comp rx=0 tx=0).stop
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 -- 192.168.123.107:0/2276706709 shutdown_connections
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f2364076290 0x7f2364078750 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f23901019f0 0x7f2390073560 unknown :-1 s=CLOSED pgs=250 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 --2- 192.168.123.107:0/2276706709 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f23901007f0 0x7f2390074f10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 -- 192.168.123.107:0/2276706709 >> 192.168.123.107:0/2276706709 conn(0x7f23900fbf80 msgr2=0x7f23900fdab0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 -- 192.168.123.107:0/2276706709 shutdown_connections
2026-03-09T20:45:02.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.000+0000 7f2396762640  1 -- 192.168.123.107:0/2276706709 wait complete.
2026-03-09T20:45:02.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:01 vm07 ceph-mon[49120]: osdmap e41: 6 total, 6 up, 6 in
2026-03-09T20:45:02.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:01 vm07 ceph-mon[49120]: pgmap v80: 65 pgs: 15 creating+peering, 4 active+clean, 46 unknown; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail
2026-03-09T20:45:02.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:01 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:45:02.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:01 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:45:02.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:01 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:45:02.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:01 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.qpltwp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T20:45:02.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:01 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.qpltwp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-09T20:45:02.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:01 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:45:02.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:01 vm07 ceph-mon[49120]: Deploying daemon mds.cephfs.vm10.qpltwp on vm10
2026-03-09T20:45:02.027 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:01 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2276706709' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "2"}]: dispatch
2026-03-09T20:45:02.048 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-09T20:45:02.051 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local
2026-03-09T20:45:02.052 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph fs set cephfs allow_standby_replay true'
2026-03-09T20:45:02.207 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:45:02.251 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:01 vm10 ceph-mon[57011]: osdmap e41: 6 total, 6 up, 6 in
2026-03-09T20:45:02.251 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:02 vm10 ceph-mon[57011]: pgmap v80: 65 pgs: 15 creating+peering, 4 active+clean, 46 unknown; 449 KiB data, 573 MiB used, 119 GiB / 120 GiB avail
2026-03-09T20:45:02.251 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:45:02.251 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:45:02.251 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch'
2026-03-09T20:45:02.251 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.qpltwp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T20:45:02.251 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.qpltwp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-09T20:45:02.251 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:02 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:45:02.251 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:02 vm10 ceph-mon[57011]: Deploying daemon mds.cephfs.vm10.qpltwp on vm10
2026-03-09T20:45:02.251 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:02 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/2276706709' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "2"}]: dispatch
2026-03-09T20:45:02.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.496+0000 7fd3395ee640  1 -- 192.168.123.107:0/175154692 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd3340719a0 msgr2=0x7fd334071da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:45:02.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.496+0000 7fd3395ee640  1 --2- 192.168.123.107:0/175154692 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd3340719a0 0x7fd334071da0 secure :-1 s=READY pgs=251 cs=0 l=1 rev1=1 crypto rx=0x7fd324009a00 tx=0x7fd32402f290 comp rx=0 tx=0).stop
2026-03-09T20:45:02.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.496+0000 7fd3395ee640  1 -- 192.168.123.107:0/175154692 shutdown_connections
2026-03-09T20:45:02.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.496+0000 7fd3395ee640  1 --2- 192.168.123.107:0/175154692 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd3340722e0 0x7fd334110d20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:02.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.496+0000 7fd3395ee640  1 --2- 192.168.123.107:0/175154692 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd3340719a0 0x7fd334071da0 unknown :-1 s=CLOSED pgs=251 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:02.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.496+0000 7fd3395ee640  1 -- 192.168.123.107:0/175154692 >> 192.168.123.107:0/175154692 conn(0x7fd33406d4f0 msgr2=0x7fd33406f930 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:45:02.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.496+0000 7fd3395ee640  1 -- 192.168.123.107:0/175154692 shutdown_connections
2026-03-09T20:45:02.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.496+0000 7fd3395ee640  1 -- 192.168.123.107:0/175154692 wait complete.
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3395ee640  1  Processor -- start
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3395ee640  1 -- start start
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3395ee640  1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd3340722e0 0x7fd334117af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3395ee640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd334118030 0x7fd33411d0a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3395ee640  1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd3341184b0 con 0x7fd334118030
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3395ee640  1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd334118620 con 0x7fd3340722e0
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3337fe640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd334118030 0x7fd33411d0a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3337fe640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd334118030 0x7fd33411d0a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:50024/0 (socket says 192.168.123.107:50024)
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3337fe640  1 -- 192.168.123.107:0/3612462447 learned_addr learned my addr 192.168.123.107:0/3612462447 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd333fff640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd3340722e0 0x7fd334117af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3337fe640  1 -- 192.168.123.107:0/3612462447 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd3340722e0 msgr2=0x7fd334117af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3337fe640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd3340722e0 0x7fd334117af0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:02.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.497+0000 7fd3337fe640  1 -- 192.168.123.107:0/3612462447 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd324009660 con 0x7fd334118030
2026-03-09T20:45:02.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.499+0000 7fd3337fe640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd334118030 0x7fd33411d0a0 secure :-1 s=READY pgs=252 cs=0 l=1 rev1=1 crypto rx=0x7fd32c004a70 tx=0x7fd32c052580 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:45:02.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.499+0000 7fd3317fa640  1 -- 192.168.123.107:0/3612462447 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd32c0090d0 con 0x7fd334118030
2026-03-09T20:45:02.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.499+0000 7fd3395ee640  1 -- 192.168.123.107:0/3612462447 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd33411d640 con 0x7fd334118030
2026-03-09T20:45:02.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.499+0000 7fd3395ee640  1 -- 192.168.123.107:0/3612462447 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd33411db40 con 0x7fd334118030
2026-03-09T20:45:02.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.501+0000 7fd3317fa640  1 -- 192.168.123.107:0/3612462447 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fd32c009230 con 0x7fd334118030
2026-03-09T20:45:02.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.501+0000 7fd3317fa640  1 -- 192.168.123.107:0/3612462447 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd32c0587d0 con 0x7fd334118030
2026-03-09T20:45:02.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.501+0000 7fd3317fa640  1 -- 192.168.123.107:0/3612462447 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fd32c062430 con 0x7fd334118030
2026-03-09T20:45:02.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.501+0000 7fd3317fa640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fd320076390 0x7fd320078850 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:45:02.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.501+0000 7fd333fff640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fd320076390 0x7fd320078850 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:45:02.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.502+0000 7fd3317fa640  1 -- 192.168.123.107:0/3612462447 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fd32c0dd310 con 0x7fd334118030
2026-03-09T20:45:02.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.502+0000 7fd333fff640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fd320076390 0x7fd320078850 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fd324005ec0 tx=0x7fd324005e50 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:45:02.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.503+0000 7fd3395ee640  1 -- 192.168.123.107:0/3612462447 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd2fc005350 con 0x7fd334118030
2026-03-09T20:45:02.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.506+0000 7fd3317fa640  1 -- 192.168.123.107:0/3612462447 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0  v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fd32c0a6890 con 0x7fd334118030
2026-03-09T20:45:02.628 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:02.627+0000 7fd3395ee640  1 -- 192.168.123.107:0/3612462447 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"} v 0) v1 -- 0x7fd2fc005b80 con 0x7fd334118030
2026-03-09T20:45:03.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.014+0000 7fd3317fa640  1 -- 192.168.123.107:0/3612462447 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]=0  v5) v1 ==== 121+0+0 (secure 0 0 0) 0x7fd32c0a6230 con 0x7fd334118030
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 -- 192.168.123.107:0/3612462447 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fd320076390 msgr2=0x7fd320078850 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fd320076390 0x7fd320078850 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fd324005ec0 tx=0x7fd324005e50 comp rx=0 tx=0).stop
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 -- 192.168.123.107:0/3612462447 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd334118030 msgr2=0x7fd33411d0a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd334118030 0x7fd33411d0a0 secure :-1 s=READY pgs=252 cs=0 l=1 rev1=1 crypto rx=0x7fd32c004a70 tx=0x7fd32c052580 comp rx=0 tx=0).stop
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 -- 192.168.123.107:0/3612462447 shutdown_connections
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fd320076390 0x7fd320078850 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd334118030 0x7fd33411d0a0 unknown :-1 s=CLOSED pgs=252 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 --2- 192.168.123.107:0/3612462447 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd3340722e0 0x7fd334117af0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 -- 192.168.123.107:0/3612462447 >> 192.168.123.107:0/3612462447 conn(0x7fd33406d4f0 msgr2=0x7fd33406e4a0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 -- 192.168.123.107:0/3612462447 shutdown_connections
2026-03-09T20:45:03.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.019+0000 7fd312ffd640  1 -- 192.168.123.107:0/3612462447 wait complete.
2026-03-09T20:45:03.105 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-09T20:45:03.107 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local 2026-03-09T20:45:03.107 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph fs set cephfs inline_data false' 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2276706709' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "2"}]': finished 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: mds.? [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] up:boot 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: daemon mds.cephfs.vm07.rovdbp assigned to filesystem cephfs as rank 0 (now has 1 ranks) 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: fsmap cephfs:0 1 up:standby 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: osdmap e42: 6 total, 6 up, 6 in 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:45:03 vm07 ceph-mon[49120]: fsmap cephfs:1 {0=cephfs.vm07.rovdbp=up:creating} 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: daemon mds.cephfs.vm07.rovdbp is now active in filesystem cephfs as rank 0 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.potfau", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.potfau", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T20:45:03.184 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:03.185 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: Deploying daemon mds.cephfs.vm07.potfau on vm07 2026-03-09T20:45:03.185 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:03 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/3612462447' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]: dispatch 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/2276706709' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "2"}]': finished 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: mds.? [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] up:boot 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: daemon mds.cephfs.vm07.rovdbp assigned to filesystem cephfs as rank 0 (now has 1 ranks) 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: fsmap cephfs:0 1 up:standby 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: osdmap e42: 6 total, 6 up, 6 in 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: fsmap cephfs:1 {0=cephfs.vm07.rovdbp=up:creating} 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: daemon mds.cephfs.vm07.rovdbp is now active in filesystem 
cephfs as rank 0 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.potfau", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.potfau", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: Deploying daemon mds.cephfs.vm07.potfau on vm07 2026-03-09T20:45:03.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:03 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/3612462447' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]: dispatch 2026-03-09T20:45:03.309 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:03.553 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.553+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/934369440 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a18102a80 msgr2=0x7f2a18102e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:03.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.553+0000 7f2a1f3ad640 1 --2- 192.168.123.107:0/934369440 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a18102a80 0x7f2a18102e80 secure :-1 s=READY pgs=254 cs=0 l=1 rev1=1 crypto rx=0x7f2a080099b0 tx=0x7f2a0802f220 comp rx=0 tx=0).stop 2026-03-09T20:45:03.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.554+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/934369440 shutdown_connections 2026-03-09T20:45:03.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.554+0000 7f2a1f3ad640 1 --2- 192.168.123.107:0/934369440 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2a18103c80 0x7f2a18104100 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:03.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.554+0000 7f2a1f3ad640 1 --2- 192.168.123.107:0/934369440 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a18102a80 0x7f2a18102e80 unknown :-1 s=CLOSED pgs=254 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:03.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.554+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/934369440 >> 192.168.123.107:0/934369440 conn(0x7f2a180fe250 msgr2=0x7f2a18100670 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-09T20:45:03.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.554+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/934369440 shutdown_connections 2026-03-09T20:45:03.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.554+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/934369440 wait complete. 2026-03-09T20:45:03.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1f3ad640 1 Processor -- start 2026-03-09T20:45:03.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1f3ad640 1 -- start start 2026-03-09T20:45:03.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1f3ad640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2a18102a80 0x7f2a18071660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:03.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1f3ad640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a18103c80 0x7f2a18071ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:03.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1f3ad640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2a180730a0 con 0x7f2a18103c80 2026-03-09T20:45:03.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1f3ad640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2a18073210 con 0x7f2a18102a80 2026-03-09T20:45:03.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1c921640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a18103c80 0x7f2a18071ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:03.555 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1c921640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a18103c80 0x7f2a18071ba0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:50066/0 (socket says 192.168.123.107:50066) 2026-03-09T20:45:03.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1c921640 1 -- 192.168.123.107:0/3464438937 learned_addr learned my addr 192.168.123.107:0/3464438937 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:03.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1c921640 1 -- 192.168.123.107:0/3464438937 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2a18102a80 msgr2=0x7f2a18071660 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:45:03.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1c921640 1 --2- 192.168.123.107:0/3464438937 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2a18102a80 0x7f2a18071660 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:03.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.555+0000 7f2a1c921640 1 -- 192.168.123.107:0/3464438937 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2a08009660 con 0x7f2a18103c80 2026-03-09T20:45:03.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.556+0000 7f2a1c921640 1 --2- 192.168.123.107:0/3464438937 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a18103c80 0x7f2a18071ba0 secure :-1 s=READY pgs=255 cs=0 l=1 rev1=1 crypto rx=0x7f2a0000b790 tx=0x7f2a0000bc60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:03.556 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.556+0000 7f2a0e7fc640 1 -- 192.168.123.107:0/3464438937 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2a00004070 con 0x7f2a18103c80 2026-03-09T20:45:03.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.556+0000 7f2a0e7fc640 1 -- 192.168.123.107:0/3464438937 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f2a000026e0 con 0x7f2a18103c80 2026-03-09T20:45:03.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.556+0000 7f2a0e7fc640 1 -- 192.168.123.107:0/3464438937 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2a0000cad0 con 0x7f2a18103c80 2026-03-09T20:45:03.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.556+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/3464438937 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2a180797a0 con 0x7f2a18103c80 2026-03-09T20:45:03.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.556+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/3464438937 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2a18072330 con 0x7f2a18103c80 2026-03-09T20:45:03.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.558+0000 7f2a0e7fc640 1 -- 192.168.123.107:0/3464438937 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f2a0000cc30 con 0x7f2a18103c80 2026-03-09T20:45:03.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.558+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/3464438937 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f29e0005350 con 0x7f2a18103c80 2026-03-09T20:45:03.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.561+0000 7f2a0e7fc640 1 --2- 
192.168.123.107:0/3464438937 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f29f40761c0 0x7f29f4078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:03.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.561+0000 7f2a0e7fc640 1 -- 192.168.123.107:0/3464438937 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f2a00096c30 con 0x7f2a18103c80 2026-03-09T20:45:03.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.561+0000 7f2a1d122640 1 --2- 192.168.123.107:0/3464438937 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f29f40761c0 0x7f29f4078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:03.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.561+0000 7f2a0e7fc640 1 -- 192.168.123.107:0/3464438937 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f2a000613b0 con 0x7f2a18103c80 2026-03-09T20:45:03.562 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.562+0000 7f2a1d122640 1 --2- 192.168.123.107:0/3464438937 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f29f40761c0 0x7f29f4078680 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f2a08002af0 tx=0x7f2a080023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:03.668 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:03.668+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/3464438937 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"} v 0) v1 -- 0x7f29e0005b80 con 0x7f2a18103c80 
2026-03-09T20:45:04.109 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.109+0000 7f2a0e7fc640 1 -- 192.168.123.107:0/3464438937 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]=0 inline data disabled v7) v1 ==== 133+0+0 (secure 0 0 0) 0x7f2a00060d50 con 0x7f2a18103c80 2026-03-09T20:45:04.109 INFO:teuthology.orchestra.run.vm07.stderr:inline data disabled 2026-03-09T20:45:04.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.113+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/3464438937 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f29f40761c0 msgr2=0x7f29f4078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:04.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.113+0000 7f2a1f3ad640 1 --2- 192.168.123.107:0/3464438937 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f29f40761c0 0x7f29f4078680 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f2a08002af0 tx=0x7f2a080023d0 comp rx=0 tx=0).stop 2026-03-09T20:45:04.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.113+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/3464438937 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a18103c80 msgr2=0x7f2a18071ba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:04.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.113+0000 7f2a1f3ad640 1 --2- 192.168.123.107:0/3464438937 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a18103c80 0x7f2a18071ba0 secure :-1 s=READY pgs=255 cs=0 l=1 rev1=1 crypto rx=0x7f2a0000b790 tx=0x7f2a0000bc60 comp rx=0 tx=0).stop 2026-03-09T20:45:04.115 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.115+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/3464438937 shutdown_connections 2026-03-09T20:45:04.115 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.115+0000 7f2a1f3ad640 1 --2- 192.168.123.107:0/3464438937 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f29f40761c0 0x7f29f4078680 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:04.115 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.115+0000 7f2a1f3ad640 1 --2- 192.168.123.107:0/3464438937 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a18103c80 0x7f2a18071ba0 unknown :-1 s=CLOSED pgs=255 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:04.115 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.115+0000 7f2a1f3ad640 1 --2- 192.168.123.107:0/3464438937 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2a18102a80 0x7f2a18071660 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:04.115 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.115+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/3464438937 >> 192.168.123.107:0/3464438937 conn(0x7f2a180fe250 msgr2=0x7f2a180ffd70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:04.115 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.115+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/3464438937 shutdown_connections 2026-03-09T20:45:04.115 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.116+0000 7f2a1f3ad640 1 -- 192.168.123.107:0/3464438937 wait complete. 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/3612462447' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]': finished 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: mds.? 
[v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] up:active 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: mds.? [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] up:boot 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: daemon mds.cephfs.vm10.qpltwp assigned to filesystem cephfs as rank 1 (now has 2 ranks) 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY) 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds) 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: fsmap cephfs:1 {0=cephfs.vm07.rovdbp=up:active} 1 up:standby 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:creating} 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: daemon mds.cephfs.vm10.qpltwp is now active in filesystem cephfs as rank 1 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: pgmap v82: 65 pgs: 15 creating+peering, 35 active+clean, 15 unknown; 449 KiB data, 173 MiB used, 120 GiB / 120 GiB avail 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.hzyuyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.hzyuyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: Deploying daemon mds.cephfs.vm10.hzyuyq on vm10 2026-03-09T20:45:04.157 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:04 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/3464438937' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]: dispatch 2026-03-09T20:45:04.199 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-09T20:45:04.203 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local 2026-03-09T20:45:04.203 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph fs dump' 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3612462447' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]': finished 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: mds.? [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] up:active 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: mds.? [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] up:boot 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: daemon mds.cephfs.vm10.qpltwp assigned to filesystem cephfs as rank 1 (now has 2 ranks) 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY) 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds) 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: fsmap cephfs:1 {0=cephfs.vm07.rovdbp=up:active} 1 up:standby 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": 
"cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:creating} 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: daemon mds.cephfs.vm10.qpltwp is now active in filesystem cephfs as rank 1 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: pgmap v82: 65 pgs: 15 creating+peering, 35 active+clean, 15 unknown; 449 KiB data, 173 MiB used, 120 GiB / 120 GiB avail 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.hzyuyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.hzyuyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: Deploying daemon mds.cephfs.vm10.hzyuyq on vm10 2026-03-09T20:45:04.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:04 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3464438937' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]: dispatch 2026-03-09T20:45:04.392 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:04.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.676+0000 7f5f02ed1640 1 -- 192.168.123.107:0/1845709262 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5efc0719a0 msgr2=0x7f5efc071da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:04.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.676+0000 7f5f02ed1640 1 --2- 192.168.123.107:0/1845709262 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5efc0719a0 0x7f5efc071da0 secure :-1 s=READY pgs=257 cs=0 l=1 rev1=1 crypto rx=0x7f5eec0099b0 tx=0x7f5eec02f220 comp rx=0 tx=0).stop 2026-03-09T20:45:04.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.676+0000 7f5f02ed1640 1 -- 192.168.123.107:0/1845709262 shutdown_connections 2026-03-09T20:45:04.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.676+0000 7f5f02ed1640 1 --2- 192.168.123.107:0/1845709262 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5efc072370 0x7f5efc10c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:04.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.676+0000 7f5f02ed1640 1 --2- 192.168.123.107:0/1845709262 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5efc0719a0 
0x7f5efc071da0 unknown :-1 s=CLOSED pgs=257 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:04.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.676+0000 7f5f02ed1640 1 -- 192.168.123.107:0/1845709262 >> 192.168.123.107:0/1845709262 conn(0x7f5efc06d4f0 msgr2=0x7f5efc06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:04.680 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.678+0000 7f5f02ed1640 1 -- 192.168.123.107:0/1845709262 shutdown_connections 2026-03-09T20:45:04.680 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f02ed1640 1 -- 192.168.123.107:0/1845709262 wait complete. 2026-03-09T20:45:04.680 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f02ed1640 1 Processor -- start 2026-03-09T20:45:04.680 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f02ed1640 1 -- start start 2026-03-09T20:45:04.680 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f02ed1640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5efc0719a0 0x7f5efc1a7150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:04.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f02ed1640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5efc072370 0x7f5efc1a7690 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:04.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f02ed1640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5efc1a7c60 con 0x7f5efc072370 2026-03-09T20:45:04.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f02ed1640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5efc1a7dd0 con 0x7f5efc0719a0 2026-03-09T20:45:04.681 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f016ce640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5efc072370 0x7f5efc1a7690 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:04.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f016ce640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5efc072370 0x7f5efc1a7690 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:50098/0 (socket says 192.168.123.107:50098) 2026-03-09T20:45:04.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f016ce640 1 -- 192.168.123.107:0/2701077375 learned_addr learned my addr 192.168.123.107:0/2701077375 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:04.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.680+0000 7f5f01ecf640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5efc0719a0 0x7f5efc1a7150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:04.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.681+0000 7f5f016ce640 1 -- 192.168.123.107:0/2701077375 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5efc0719a0 msgr2=0x7f5efc1a7150 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:04.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.681+0000 7f5f016ce640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5efc0719a0 0x7f5efc1a7150 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:04.681 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.681+0000 7f5f016ce640 1 -- 192.168.123.107:0/2701077375 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5eec009660 con 0x7f5efc072370 2026-03-09T20:45:04.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.681+0000 7f5f016ce640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5efc072370 0x7f5efc1a7690 secure :-1 s=READY pgs=258 cs=0 l=1 rev1=1 crypto rx=0x7f5ef000e990 tx=0x7f5ef000ee60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:04.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.682+0000 7f5eeaffd640 1 -- 192.168.123.107:0/2701077375 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5ef000cd30 con 0x7f5efc072370 2026-03-09T20:45:04.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.682+0000 7f5f02ed1640 1 -- 192.168.123.107:0/2701077375 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5efc077560 con 0x7f5efc072370 2026-03-09T20:45:04.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.682+0000 7f5f02ed1640 1 -- 192.168.123.107:0/2701077375 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5efc077ab0 con 0x7f5efc072370 2026-03-09T20:45:04.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.682+0000 7f5eeaffd640 1 -- 192.168.123.107:0/2701077375 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5ef000ce90 con 0x7f5efc072370 2026-03-09T20:45:04.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.682+0000 7f5eeaffd640 1 -- 192.168.123.107:0/2701077375 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5ef0010640 con 
0x7f5efc072370 2026-03-09T20:45:04.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.684+0000 7f5ee8ff9640 1 -- 192.168.123.107:0/2701077375 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5efc1183e0 con 0x7f5efc072370 2026-03-09T20:45:04.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.686+0000 7f5eeaffd640 1 -- 192.168.123.107:0/2701077375 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f5ef0002900 con 0x7f5efc072370 2026-03-09T20:45:04.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.687+0000 7f5eeaffd640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5ed00761c0 0x7f5ed0078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:04.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.687+0000 7f5f01ecf640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5ed00761c0 0x7f5ed0078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:04.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.687+0000 7f5eeaffd640 1 -- 192.168.123.107:0/2701077375 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f5ef0014070 con 0x7f5efc072370 2026-03-09T20:45:04.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.690+0000 7f5f01ecf640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5ed00761c0 0x7f5ed0078680 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f5eec002c20 tx=0x7f5eec03a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 
in_seq=0 out_seq=0 2026-03-09T20:45:04.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.690+0000 7f5eeaffd640 1 -- 192.168.123.107:0/2701077375 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f5ef0060d70 con 0x7f5efc072370 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.802+0000 7f5ee8ff9640 1 -- 192.168.123.107:0/2701077375 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f5efc1185f0 con 0x7f5efc072370 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.802+0000 7f5eeaffd640 1 -- 192.168.123.107:0/2701077375 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 8 v8) v1 ==== 75+0+1813 (secure 0 0 0) 0x7f5ef0067020 con 0x7f5efc072370 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:e8 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:epoch 8 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 
2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:04.116464+0000 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:45:04.803 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:45:04.804 
INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 2 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 2 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:45:04.804 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 8 2026-03-09T20:45:04.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.804+0000 7f5ee8ff9640 1 -- 192.168.123.107:0/2701077375 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5ed00761c0 msgr2=0x7f5ed0078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:04.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.804+0000 7f5ee8ff9640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] 
conn(0x7f5ed00761c0 0x7f5ed0078680 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f5eec002c20 tx=0x7f5eec03a040 comp rx=0 tx=0).stop 2026-03-09T20:45:04.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.805+0000 7f5ee8ff9640 1 -- 192.168.123.107:0/2701077375 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5efc072370 msgr2=0x7f5efc1a7690 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:04.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.805+0000 7f5ee8ff9640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5efc072370 0x7f5efc1a7690 secure :-1 s=READY pgs=258 cs=0 l=1 rev1=1 crypto rx=0x7f5ef000e990 tx=0x7f5ef000ee60 comp rx=0 tx=0).stop 2026-03-09T20:45:04.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.805+0000 7f5ee8ff9640 1 -- 192.168.123.107:0/2701077375 shutdown_connections 2026-03-09T20:45:04.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.805+0000 7f5ee8ff9640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f5ed00761c0 0x7f5ed0078680 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:04.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.805+0000 7f5ee8ff9640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5efc072370 0x7f5efc1a7690 unknown :-1 s=CLOSED pgs=258 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:04.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.805+0000 7f5ee8ff9640 1 --2- 192.168.123.107:0/2701077375 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5efc0719a0 0x7f5efc1a7150 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:04.805 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.805+0000 7f5ee8ff9640 1 -- 192.168.123.107:0/2701077375 >> 192.168.123.107:0/2701077375 conn(0x7f5efc06d4f0 msgr2=0x7f5efc0706f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:04.806 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.806+0000 7f5ee8ff9640 1 -- 192.168.123.107:0/2701077375 shutdown_connections 2026-03-09T20:45:04.806 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:04.806+0000 7f5ee8ff9640 1 -- 192.168.123.107:0/2701077375 wait complete. 2026-03-09T20:45:04.865 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph --format=json fs dump | jq -e ".filesystems | length == 1"' 2026-03-09T20:45:05.029 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: Health check cleared: MDS_INSUFFICIENT_STANDBY (was: insufficient standby MDS daemons available) 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: Cluster is now healthy 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/3464438937' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]': finished 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: mds.? [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] up:active 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: mds.? 
[v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] up:boot 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 1 up:standby 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: mds.? [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] up:boot 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 
09 20:45:05 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:05.030 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:05 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/2701077375' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:45:05.090 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: Health check cleared: MDS_INSUFFICIENT_STANDBY (was: insufficient standby MDS daemons available) 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: Cluster is now healthy 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3464438937' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]': finished 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: mds.? [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] up:active 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: mds.? 
[v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] up:boot 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 1 up:standby 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: mds.? [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] up:boot 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:45:05.280 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:45:05.281 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.281 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:05.281 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.281 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.281 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 
09 20:45:05 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.281 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:05.281 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:05.281 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:05 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/2701077375' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:45:05.378 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.377+0000 7f9e309cb640 1 -- 192.168.123.107:0/2130675274 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e28071d40 msgr2=0x7f9e28072140 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:05.378 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.377+0000 7f9e309cb640 1 --2- 192.168.123.107:0/2130675274 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e28071d40 0x7f9e28072140 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f9e140098e0 tx=0x7f9e1402f1d0 comp rx=0 tx=0).stop 2026-03-09T20:45:05.378 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.377+0000 7f9e309cb640 1 -- 192.168.123.107:0/2130675274 shutdown_connections 2026-03-09T20:45:05.378 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.377+0000 7f9e309cb640 1 --2- 192.168.123.107:0/2130675274 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e28072710 0x7f9e2810c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:05.378 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.377+0000 7f9e309cb640 1 --2- 192.168.123.107:0/2130675274 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e28071d40 0x7f9e28072140 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:05.378 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.377+0000 7f9e309cb640 1 -- 192.168.123.107:0/2130675274 >> 192.168.123.107:0/2130675274 conn(0x7f9e2806d660 msgr2=0x7f9e2806faa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:05.378 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.379+0000 7f9e309cb640 1 -- 192.168.123.107:0/2130675274 shutdown_connections 2026-03-09T20:45:05.379 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.379+0000 7f9e309cb640 1 -- 192.168.123.107:0/2130675274 wait complete. 2026-03-09T20:45:05.379 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.379+0000 7f9e309cb640 1 Processor -- start 2026-03-09T20:45:05.379 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.379+0000 7f9e309cb640 1 -- start start 2026-03-09T20:45:05.379 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.379+0000 7f9e309cb640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e28071d40 0x7f9e28116b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:05.379 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.379+0000 7f9e309cb640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e28072710 0x7f9e28117040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:05.379 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.379+0000 7f9e309cb640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9e281184f0 con 0x7f9e28072710 2026-03-09T20:45:05.379 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.379+0000 7f9e309cb640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f9e28118660 con 0x7f9e28071d40 2026-03-09T20:45:05.380 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.380+0000 7f9e2df3f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e28072710 0x7f9e28117040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:05.380 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.380+0000 7f9e2df3f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e28072710 0x7f9e28117040 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:50110/0 (socket says 192.168.123.107:50110) 2026-03-09T20:45:05.380 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.380+0000 7f9e2df3f640 1 -- 192.168.123.107:0/3419013358 learned_addr learned my addr 192.168.123.107:0/3419013358 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:05.380 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.380+0000 7f9e2e740640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e28071d40 0x7f9e28116b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:05.380 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.380+0000 7f9e2df3f640 1 -- 192.168.123.107:0/3419013358 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e28071d40 msgr2=0x7f9e28116b00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:05.380 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.380+0000 7f9e2df3f640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e28071d40 0x7f9e28116b00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:05.380 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.380+0000 7f9e2df3f640 1 -- 192.168.123.107:0/3419013358 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9e14009590 con 0x7f9e28072710 2026-03-09T20:45:05.380 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.380+0000 7f9e2df3f640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e28072710 0x7f9e28117040 secure :-1 s=READY pgs=259 cs=0 l=1 rev1=1 crypto rx=0x7f9e240049b0 tx=0x7f9e2400d4a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:05.385 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.382+0000 7f9e1b7fe640 1 -- 192.168.123.107:0/3419013358 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9e240090d0 con 0x7f9e28072710 2026-03-09T20:45:05.385 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.382+0000 7f9e309cb640 1 -- 192.168.123.107:0/3419013358 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9e28117640 con 0x7f9e28072710 2026-03-09T20:45:05.385 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.382+0000 7f9e309cb640 1 -- 192.168.123.107:0/3419013358 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9e28117970 con 0x7f9e28072710 2026-03-09T20:45:05.385 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.382+0000 7f9e1b7fe640 1 -- 192.168.123.107:0/3419013358 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9e2400f040 con 0x7f9e28072710 2026-03-09T20:45:05.385 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.382+0000 7f9e1b7fe640 1 -- 192.168.123.107:0/3419013358 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map 
magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9e240135e0 con 0x7f9e28072710 2026-03-09T20:45:05.385 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.385+0000 7f9e309cb640 1 -- 192.168.123.107:0/3419013358 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9df0005350 con 0x7f9e28072710 2026-03-09T20:45:05.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.389+0000 7f9e1b7fe640 1 -- 192.168.123.107:0/3419013358 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f9e240075c0 con 0x7f9e28072710 2026-03-09T20:45:05.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.390+0000 7f9e1b7fe640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9e04076290 0x7f9e04078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:05.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.390+0000 7f9e2e740640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9e04076290 0x7f9e04078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:05.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.391+0000 7f9e1b7fe640 1 -- 192.168.123.107:0/3419013358 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f9e24098140 con 0x7f9e28072710 2026-03-09T20:45:05.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.391+0000 7f9e1b7fe640 1 -- 192.168.123.107:0/3419013358 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f9e240985c0 con 0x7f9e28072710 2026-03-09T20:45:05.391 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.391+0000 7f9e2e740640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9e04076290 0x7f9e04078750 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7f9e14002410 tx=0x7f9e1403a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:05.519 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.518+0000 7f9e309cb640 1 -- 192.168.123.107:0/3419013358 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "format": "json"} v 0) v1 -- 0x7f9df00051c0 con 0x7f9e28072710 2026-03-09T20:45:05.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.520+0000 7f9e1b7fe640 1 -- 192.168.123.107:0/3419013358 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "format": "json"}]=0 dumped fsmap epoch 8 v8) v1 ==== 93+0+4866 (secure 0 0 0) 0x7f9e24061810 con 0x7f9e28072710 2026-03-09T20:45:05.521 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 8 2026-03-09T20:45:05.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.527+0000 7f9e197fa640 1 -- 192.168.123.107:0/3419013358 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9e04076290 msgr2=0x7f9e04078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:05.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.527+0000 7f9e197fa640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9e04076290 0x7f9e04078750 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7f9e14002410 tx=0x7f9e1403a040 comp rx=0 tx=0).stop 2026-03-09T20:45:05.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.528+0000 7f9e197fa640 1 -- 192.168.123.107:0/3419013358 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e28072710 msgr2=0x7f9e28117040 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:05.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.528+0000 7f9e197fa640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e28072710 0x7f9e28117040 secure :-1 s=READY pgs=259 cs=0 l=1 rev1=1 crypto rx=0x7f9e240049b0 tx=0x7f9e2400d4a0 comp rx=0 tx=0).stop 2026-03-09T20:45:05.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.528+0000 7f9e197fa640 1 -- 192.168.123.107:0/3419013358 shutdown_connections 2026-03-09T20:45:05.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.528+0000 7f9e197fa640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f9e04076290 0x7f9e04078750 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:05.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.528+0000 7f9e197fa640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e28072710 0x7f9e28117040 secure :-1 s=CLOSED pgs=259 cs=0 l=1 rev1=1 crypto rx=0x7f9e240049b0 tx=0x7f9e2400d4a0 comp rx=0 tx=0).stop 2026-03-09T20:45:05.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.528+0000 7f9e197fa640 1 --2- 192.168.123.107:0/3419013358 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e28071d40 0x7f9e28116b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:05.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.528+0000 7f9e197fa640 1 -- 192.168.123.107:0/3419013358 >> 192.168.123.107:0/3419013358 conn(0x7f9e2806d660 msgr2=0x7f9e2810a7c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:05.532 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.532+0000 7f9e197fa640 1 -- 192.168.123.107:0/3419013358 shutdown_connections 2026-03-09T20:45:05.532 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:05.533+0000 7f9e197fa640 1 -- 192.168.123.107:0/3419013358 wait complete. 2026-03-09T20:45:05.541 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:45:05.629 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'while ! ceph --format=json mds versions | jq -e ". | add == 4"; do sleep 1; done' 2026-03-09T20:45:05.833 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:06.081 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:06 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:06.081 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:06 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:06.082 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:06 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:06.082 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:06 vm07 ceph-mon[49120]: pgmap v83: 65 pgs: 6 creating+peering, 59 active+clean; 450 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 1.1 KiB/s rd, 779 B/s wr, 10 op/s 2026-03-09T20:45:06.082 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:06 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/3419013358' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T20:45:06.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.139+0000 7f01a69fc640 1 -- 192.168.123.107:0/268171894 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01a0072370 msgr2=0x7f01a010c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:06.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.139+0000 7f01a69fc640 1 --2- 192.168.123.107:0/268171894 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01a0072370 0x7f01a010c590 secure :-1 s=READY pgs=260 cs=0 l=1 rev1=1 crypto rx=0x7f01940099b0 tx=0x7f019402f240 comp rx=0 tx=0).stop 2026-03-09T20:45:06.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.139+0000 7f01a69fc640 1 -- 192.168.123.107:0/268171894 shutdown_connections 2026-03-09T20:45:06.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.139+0000 7f01a69fc640 1 --2- 192.168.123.107:0/268171894 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01a0072370 0x7f01a010c590 unknown :-1 s=CLOSED pgs=260 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.139+0000 7f01a69fc640 1 --2- 192.168.123.107:0/268171894 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01a00719a0 0x7f01a0071da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.139+0000 7f01a69fc640 1 -- 192.168.123.107:0/268171894 >> 192.168.123.107:0/268171894 conn(0x7f01a006d4f0 msgr2=0x7f01a006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:06.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.140+0000 7f01a69fc640 1 -- 192.168.123.107:0/268171894 shutdown_connections 2026-03-09T20:45:06.140 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.140+0000 7f01a69fc640 1 -- 192.168.123.107:0/268171894 wait complete. 2026-03-09T20:45:06.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.140+0000 7f01a69fc640 1 Processor -- start 2026-03-09T20:45:06.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.140+0000 7f01a69fc640 1 -- start start 2026-03-09T20:45:06.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.140+0000 7f01a69fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01a00719a0 0x7f01a00753b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:06.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.140+0000 7f01a69fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01a0072370 0x7f01a0077900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:06.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.140+0000 7f01a69fc640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f01a0077e40 con 0x7f01a0072370 2026-03-09T20:45:06.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.140+0000 7f01a69fc640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f01a0077fb0 con 0x7f01a00719a0 2026-03-09T20:45:06.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.141+0000 7f01a51f9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01a0072370 0x7f01a0077900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:06.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.141+0000 7f01a51f9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01a0072370 0x7f01a0077900 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:50126/0 (socket says 192.168.123.107:50126) 2026-03-09T20:45:06.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.141+0000 7f01a51f9640 1 -- 192.168.123.107:0/3990693903 learned_addr learned my addr 192.168.123.107:0/3990693903 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:06.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.141+0000 7f01a59fa640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01a00719a0 0x7f01a00753b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:06.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.141+0000 7f01a51f9640 1 -- 192.168.123.107:0/3990693903 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01a00719a0 msgr2=0x7f01a00753b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:06.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.141+0000 7f01a51f9640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01a00719a0 0x7f01a00753b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.141+0000 7f01a51f9640 1 -- 192.168.123.107:0/3990693903 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0194009660 con 0x7f01a0072370 2026-03-09T20:45:06.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.141+0000 7f01a51f9640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01a0072370 0x7f01a0077900 secure :-1 s=READY pgs=261 cs=0 l=1 rev1=1 crypto 
rx=0x7f0194009630 tx=0x7f0194004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:06.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.142+0000 7f018effd640 1 -- 192.168.123.107:0/3990693903 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f019403d070 con 0x7f01a0072370 2026-03-09T20:45:06.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.142+0000 7f018effd640 1 -- 192.168.123.107:0/3990693903 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f01940043c0 con 0x7f01a0072370 2026-03-09T20:45:06.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.142+0000 7f018effd640 1 -- 192.168.123.107:0/3990693903 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0194041780 con 0x7f01a0072370 2026-03-09T20:45:06.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.142+0000 7f01a69fc640 1 -- 192.168.123.107:0/3990693903 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f01a0078230 con 0x7f01a0072370 2026-03-09T20:45:06.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.142+0000 7f01a69fc640 1 -- 192.168.123.107:0/3990693903 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f01a0078720 con 0x7f01a0072370 2026-03-09T20:45:06.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.142+0000 7f01a69fc640 1 -- 192.168.123.107:0/3990693903 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f01a010eef0 con 0x7f01a0072370 2026-03-09T20:45:06.144 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.144+0000 7f018effd640 1 -- 192.168.123.107:0/3990693903 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 
0x7f01940418e0 con 0x7f01a0072370 2026-03-09T20:45:06.144 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.144+0000 7f018effd640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f017c0761c0 0x7f017c078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:06.144 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.144+0000 7f018effd640 1 -- 192.168.123.107:0/3990693903 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f01940bc810 con 0x7f01a0072370 2026-03-09T20:45:06.147 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.147+0000 7f01a59fa640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f017c0761c0 0x7f017c078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:06.147 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.147+0000 7f018effd640 1 -- 192.168.123.107:0/3990693903 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f0194085ed0 con 0x7f01a0072370 2026-03-09T20:45:06.151 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.151+0000 7f01a59fa640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f017c0761c0 0x7f017c078680 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f01a0117560 tx=0x7f0190009290 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:06.292 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.290+0000 7f01a69fc640 1 -- 192.168.123.107:0/3990693903 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- 
mon_command({"prefix": "mds versions", "format": "json"} v 0) v1 -- 0x7f01a010f0b0 con 0x7f01a0072370 2026-03-09T20:45:06.292 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.291+0000 7f018effd640 1 -- 192.168.123.107:0/3990693903 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "mds versions", "format": "json"}]=0 v8) v1 ==== 78+0+98 (secure 0 0 0) 0x7f0194085870 con 0x7f01a0072370 2026-03-09T20:45:06.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.294+0000 7f01a69fc640 1 -- 192.168.123.107:0/3990693903 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f017c0761c0 msgr2=0x7f017c078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:06.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.294+0000 7f01a69fc640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f017c0761c0 0x7f017c078680 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f01a0117560 tx=0x7f0190009290 comp rx=0 tx=0).stop 2026-03-09T20:45:06.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.294+0000 7f01a69fc640 1 -- 192.168.123.107:0/3990693903 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01a0072370 msgr2=0x7f01a0077900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:06.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.294+0000 7f01a69fc640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01a0072370 0x7f01a0077900 secure :-1 s=READY pgs=261 cs=0 l=1 rev1=1 crypto rx=0x7f0194009630 tx=0x7f0194004290 comp rx=0 tx=0).stop 2026-03-09T20:45:06.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.295+0000 7f01a69fc640 1 -- 192.168.123.107:0/3990693903 shutdown_connections 2026-03-09T20:45:06.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.295+0000 
7f01a69fc640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f017c0761c0 0x7f017c078680 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.295+0000 7f01a69fc640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01a0072370 0x7f01a0077900 unknown :-1 s=CLOSED pgs=261 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.295+0000 7f01a69fc640 1 --2- 192.168.123.107:0/3990693903 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01a00719a0 0x7f01a00753b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.295+0000 7f01a69fc640 1 -- 192.168.123.107:0/3990693903 >> 192.168.123.107:0/3990693903 conn(0x7f01a006d4f0 msgr2=0x7f01a010a860 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:06.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.295+0000 7f01a69fc640 1 -- 192.168.123.107:0/3990693903 shutdown_connections 2026-03-09T20:45:06.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.295+0000 7f01a69fc640 1 -- 192.168.123.107:0/3990693903 wait complete. 2026-03-09T20:45:06.302 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:45:06.343 INFO:teuthology.run_tasks:Running task fs.pre_upgrade_save... 
2026-03-09T20:45:06.346 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 2026-03-09T20:45:06.519 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:06 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:06 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:06 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:06 vm10 ceph-mon[57011]: pgmap v83: 65 pgs: 6 creating+peering, 59 active+clean; 450 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 1.1 KiB/s rd, 779 B/s wr, 10 op/s 2026-03-09T20:45:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:06 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/3419013358' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T20:45:06.817 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.816+0000 7f17d8b64640 1 -- 192.168.123.107:0/2659379210 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17d40fe780 msgr2=0x7f17d40febe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:06.817 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.816+0000 7f17d8b64640 1 --2- 192.168.123.107:0/2659379210 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17d40fe780 0x7f17d40febe0 secure :-1 s=READY pgs=262 cs=0 l=1 rev1=1 crypto rx=0x7f17c4007510 tx=0x7f17c402fee0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.817 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.816+0000 7f17d8b64640 1 -- 192.168.123.107:0/2659379210 shutdown_connections 2026-03-09T20:45:06.817 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.816+0000 7f17d8b64640 1 --2- 192.168.123.107:0/2659379210 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17d40fe780 0x7f17d40febe0 unknown :-1 s=CLOSED pgs=262 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.817 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.816+0000 7f17d8b64640 1 --2- 192.168.123.107:0/2659379210 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17d4104780 0x7f17d4104b60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.817 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.816+0000 7f17d8b64640 1 -- 192.168.123.107:0/2659379210 >> 192.168.123.107:0/2659379210 conn(0x7f17d40fa4a0 msgr2=0x7f17d40fc8c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:06.817 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.817+0000 7f17d8b64640 1 -- 192.168.123.107:0/2659379210 shutdown_connections 2026-03-09T20:45:06.817 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.817+0000 7f17d8b64640 1 -- 192.168.123.107:0/2659379210 wait complete. 2026-03-09T20:45:06.817 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.817+0000 7f17d8b64640 1 Processor -- start 2026-03-09T20:45:06.817 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.817+0000 7f17d8b64640 1 -- start start 2026-03-09T20:45:06.817 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.817+0000 7f17d8b64640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17d40fe780 0x7f17d419aef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:06.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.817+0000 7f17d8b64640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17d4104780 0x7f17d419b430 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:06.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.817+0000 7f17d8b64640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f17d4196010 con 0x7f17d4104780 2026-03-09T20:45:06.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.817+0000 7f17d8b64640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f17d4196180 con 0x7f17d40fe780 2026-03-09T20:45:06.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.818+0000 7f17d37fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17d40fe780 0x7f17d419aef0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:06.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.818+0000 7f17d37fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17d40fe780 0x7f17d419aef0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:59022/0 (socket says 192.168.123.107:59022) 2026-03-09T20:45:06.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.818+0000 7f17d37fe640 1 -- 192.168.123.107:0/498964437 learned_addr learned my addr 192.168.123.107:0/498964437 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:06.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.818+0000 7f17d37fe640 1 -- 192.168.123.107:0/498964437 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17d4104780 msgr2=0x7f17d419b430 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:06.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.818+0000 7f17d37fe640 1 --2- 192.168.123.107:0/498964437 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17d4104780 0x7f17d419b430 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.818+0000 7f17d37fe640 1 -- 192.168.123.107:0/498964437 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f17c40071c0 con 0x7f17d40fe780 2026-03-09T20:45:06.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.819+0000 7f17d37fe640 1 --2- 192.168.123.107:0/498964437 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17d40fe780 0x7f17d419aef0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f17c000e990 tx=0x7f17c000ee60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:06.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.819+0000 7f17d0ff9640 1 -- 192.168.123.107:0/498964437 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f17c000cd30 con 
0x7f17d40fe780 2026-03-09T20:45:06.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.819+0000 7f17d0ff9640 1 -- 192.168.123.107:0/498964437 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f17c000ce90 con 0x7f17d40fe780 2026-03-09T20:45:06.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.819+0000 7f17d0ff9640 1 -- 192.168.123.107:0/498964437 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f17c00106d0 con 0x7f17d40fe780 2026-03-09T20:45:06.820 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.819+0000 7f17d8b64640 1 -- 192.168.123.107:0/498964437 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f17d41963e0 con 0x7f17d40fe780 2026-03-09T20:45:06.820 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.819+0000 7f17d8b64640 1 -- 192.168.123.107:0/498964437 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f17d41968b0 con 0x7f17d40fe780 2026-03-09T20:45:06.820 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.820+0000 7f17b67fc640 1 -- 192.168.123.107:0/498964437 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f17d40ffec0 con 0x7f17d40fe780 2026-03-09T20:45:06.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.821+0000 7f17d0ff9640 1 -- 192.168.123.107:0/498964437 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f17c00026e0 con 0x7f17d40fe780 2026-03-09T20:45:06.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.822+0000 7f17d0ff9640 1 --2- 192.168.123.107:0/498964437 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17a4075f60 0x7f17a4078420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 
2026-03-09T20:45:06.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.822+0000 7f17d0ff9640 1 -- 192.168.123.107:0/498964437 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f17c001d030 con 0x7f17d40fe780 2026-03-09T20:45:06.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.822+0000 7f17d2ffd640 1 --2- 192.168.123.107:0/498964437 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17a4075f60 0x7f17a4078420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:06.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.822+0000 7f17d2ffd640 1 --2- 192.168.123.107:0/498964437 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17a4075f60 0x7f17a4078420 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f17c4005e00 tx=0x7f17c4005d90 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:06.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.823+0000 7f17d0ff9640 1 -- 192.168.123.107:0/498964437 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f17c0060dd0 con 0x7f17d40fe780 2026-03-09T20:45:06.940 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.940+0000 7f17b67fc640 1 -- 192.168.123.107:0/498964437 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "format": "json"} v 0) v1 -- 0x7f17d419fdf0 con 0x7f17d40fe780 2026-03-09T20:45:06.940 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.940+0000 7f17d0ff9640 1 -- 192.168.123.107:0/498964437 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "format": "json"}]=0 dumped fsmap epoch 8 v8) v1 ==== 
93+0+4866 (secure 0 0 0) 0x7f17c00029c0 con 0x7f17d40fe780 2026-03-09T20:45:06.940 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:45:06.941 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":8,"default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[],"filesystems":[{"mdsmap":{"epoch":8,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:45:04.116464+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":2,"in":[0,1],"up":{"mds_0":14476,"mds_1":24291},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14476":{"gid":14476,"name":"cephfs.vm07.rovdbp","rank":0,"incarnation":4,"state":"up:active","state_seq":2,"addr":"192.168.123.107:6827/2216764941","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2216764941},{"type":"v1","addr":"192.168.123.107:6827","nonce":2216764941}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}},"gid_14490":{"gid":14490,"name":"cephfs.vm07.potfau","rank":1,"incarnation":0,"state":"up:standby-replay","state_seq":1,"addr":"192.168.123.107:6829/3289699342","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3289699342},{"type":"v1","addr":"192.168.123.107:6829","nonce":3289699342}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}},"gid_14498":{"gid":14498,"name":"cephfs.vm10.hzyuyq","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":1,"addr":"192.168.123.110:6827/3212743251","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":3212743251},{"type":"v1","addr":"192.168.123.110:6827","nonce":3212743251}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}},"gid_24291":{"gid":24291,"name":"cephfs.vm10.qpltwp","rank":1,"incarnation":6,"state":"up:active","state_seq":2,"addr":"192.168.123.110:6825/61492274","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":61492274},{"type":"v1","addr":"192.168.123.110:6825","nonce":61492274}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1},"id":1}]} 2026-03-09T20:45:06.941 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 8 2026-03-09T20:45:06.942 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.942+0000 7f17b67fc640 1 -- 192.168.123.107:0/498964437 >> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17a4075f60 msgr2=0x7f17a4078420 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:06.942 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.942+0000 7f17b67fc640 1 --2- 192.168.123.107:0/498964437 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17a4075f60 0x7f17a4078420 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f17c4005e00 tx=0x7f17c4005d90 comp rx=0 tx=0).stop 2026-03-09T20:45:06.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.943+0000 7f17b67fc640 1 -- 192.168.123.107:0/498964437 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17d40fe780 msgr2=0x7f17d419aef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:06.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.943+0000 7f17b67fc640 1 --2- 192.168.123.107:0/498964437 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17d40fe780 0x7f17d419aef0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f17c000e990 tx=0x7f17c000ee60 comp rx=0 tx=0).stop 2026-03-09T20:45:06.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.943+0000 7f17b67fc640 1 -- 192.168.123.107:0/498964437 shutdown_connections 2026-03-09T20:45:06.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.943+0000 7f17b67fc640 1 --2- 192.168.123.107:0/498964437 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17a4075f60 0x7f17a4078420 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.943+0000 7f17b67fc640 1 --2- 192.168.123.107:0/498964437 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f17d4104780 0x7f17d419b430 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:45:06.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.943+0000 7f17b67fc640 1 --2- 192.168.123.107:0/498964437 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f17d40fe780 0x7f17d419aef0 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:06.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.943+0000 7f17b67fc640 1 -- 192.168.123.107:0/498964437 >> 192.168.123.107:0/498964437 conn(0x7f17d40fa4a0 msgr2=0x7f17d40fbdb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:06.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.943+0000 7f17b67fc640 1 -- 192.168.123.107:0/498964437 shutdown_connections 2026-03-09T20:45:06.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:06.943+0000 7f17b67fc640 1 -- 192.168.123.107:0/498964437 wait complete. 2026-03-09T20:45:06.996 DEBUG:tasks.fs:fs fscid=1,name=cephfs state = {'epoch': 8, 'max_mds': 2, 'flags': 50} 2026-03-09T20:45:06.997 INFO:teuthology.run_tasks:Running task ceph-fuse... 2026-03-09T20:45:07.007 INFO:tasks.ceph_fuse:Running ceph_fuse task... 
2026-03-09T20:45:07.007 INFO:tasks.ceph_fuse:config is {'client.0': {}, 'client.1': {}} 2026-03-09T20:45:07.007 INFO:tasks.ceph_fuse:client.0 config is {} 2026-03-09T20:45:07.007 INFO:tasks.cephfs.mount:cephfs_mntpt = None 2026-03-09T20:45:07.007 INFO:tasks.cephfs.mount:hostfs_mntpt = /home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:07.007 INFO:tasks.ceph_fuse:client.1 config is {} 2026-03-09T20:45:07.007 INFO:tasks.cephfs.mount:cephfs_mntpt = None 2026-03-09T20:45:07.007 INFO:tasks.cephfs.mount:hostfs_mntpt = /home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:07.007 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:07.007 DEBUG:teuthology.orchestra.run.vm07:> ip netns list 2026-03-09T20:45:07.056 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:07.056 DEBUG:teuthology.orchestra.run.vm07:> sudo ip link delete ceph-brx 2026-03-09T20:45:07.131 INFO:teuthology.orchestra.run.vm07.stderr:Cannot find device "ceph-brx" 2026-03-09T20:45:07.133 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:45:07.133 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:07.133 DEBUG:teuthology.orchestra.run.vm10:> ip netns list 2026-03-09T20:45:07.174 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:07.174 DEBUG:teuthology.orchestra.run.vm10:> sudo ip link delete ceph-brx 2026-03-09T20:45:07.239 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:07 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:07.239 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:07 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:07.239 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:07 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T20:45:07.239 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:07 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:07.239 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:07 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:07.239 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:07 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/3990693903' entity='client.admin' cmd=[{"prefix": "mds versions", "format": "json"}]: dispatch 2026-03-09T20:45:07.239 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:07 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:07.239 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:07 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/498964437' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T20:45:07.239 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:07 vm10 ceph-mon[57011]: mds.? [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] up:active 2026-03-09T20:45:07.239 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:07 vm10 ceph-mon[57011]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:45:07.254 INFO:teuthology.orchestra.run.vm10.stderr:Cannot find device "ceph-brx" 2026-03-09T20:45:07.255 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:45:07.256 INFO:tasks.ceph_fuse:Mounting ceph-fuse clients... 
2026-03-09T20:45:07.256 DEBUG:tasks.ceph_fuse:passing mntargs=[] 2026-03-09T20:45:07.256 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs ls 2026-03-09T20:45:07.454 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:07.489 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:07 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:07.489 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:07 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:07.489 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:07 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:07.489 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:07 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:07.489 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:07 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:07.489 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:07 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/3990693903' entity='client.admin' cmd=[{"prefix": "mds versions", "format": "json"}]: dispatch 2026-03-09T20:45:07.489 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:07 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:07.489 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:07 vm07 ceph-mon[49120]: from='client.? 
192.168.123.107:0/498964437' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T20:45:07.489 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:07 vm07 ceph-mon[49120]: mds.? [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] up:active 2026-03-09T20:45:07.489 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:07 vm07 ceph-mon[49120]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:45:07.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.870+0000 7f292d099640 1 -- 192.168.123.107:0/3188792488 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f29280720b0 msgr2=0x7f2928072490 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:07.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.870+0000 7f292d099640 1 --2- 192.168.123.107:0/3188792488 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f29280720b0 0x7f2928072490 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f291800b0a0 tx=0x7f291802f550 comp rx=0 tx=0).stop 2026-03-09T20:45:07.875 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.873+0000 7f292d099640 1 -- 192.168.123.107:0/3188792488 shutdown_connections 2026-03-09T20:45:07.876 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.873+0000 7f292d099640 1 --2- 192.168.123.107:0/3188792488 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f29280729d0 0x7f292810b9f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:07.876 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.873+0000 7f292d099640 1 --2- 192.168.123.107:0/3188792488 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f29280720b0 0x7f2928072490 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:07.876 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.873+0000 7f292d099640 1 -- 192.168.123.107:0/3188792488 >> 192.168.123.107:0/3188792488 conn(0x7f292806c7e0 msgr2=0x7f292806cbf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.876+0000 7f292d099640 1 -- 192.168.123.107:0/3188792488 shutdown_connections 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.877+0000 7f292d099640 1 -- 192.168.123.107:0/3188792488 wait complete. 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.877+0000 7f292d099640 1 Processor -- start 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.877+0000 7f292d099640 1 -- start start 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.877+0000 7f292d099640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f29280720b0 0x7f2928116500 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.877+0000 7f292d099640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f29280729d0 0x7f2928116a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.877+0000 7f292d099640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f292811a620 con 0x7f29280729d0 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.877+0000 7f292d099640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2928116f80 con 0x7f29280720b0 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.877+0000 7f2926575640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f29280729d0 0x7f2928116a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.877+0000 7f2926575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f29280729d0 0x7f2928116a40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53746/0 (socket says 192.168.123.107:53746) 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.877+0000 7f2926575640 1 -- 192.168.123.107:0/565859907 learned_addr learned my addr 192.168.123.107:0/565859907 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.878+0000 7f2926d76640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f29280720b0 0x7f2928116500 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.878+0000 7f2926d76640 1 -- 192.168.123.107:0/565859907 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f29280729d0 msgr2=0x7f2928116a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.878+0000 7f2926d76640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f29280729d0 0x7f2928116a40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:07.878 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.878+0000 7f2926d76640 1 -- 
192.168.123.107:0/565859907 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2918009d00 con 0x7f29280720b0 2026-03-09T20:45:07.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.878+0000 7f2926d76640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f29280720b0 0x7f2928116500 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f2918002790 tx=0x7f2918004060 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:07.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.881+0000 7f2907fff640 1 -- 192.168.123.107:0/565859907 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f291803d070 con 0x7f29280720b0 2026-03-09T20:45:07.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.881+0000 7f292d099640 1 -- 192.168.123.107:0/565859907 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2928117200 con 0x7f29280720b0 2026-03-09T20:45:07.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.881+0000 7f292d099640 1 -- 192.168.123.107:0/565859907 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f29281ba4a0 con 0x7f29280720b0 2026-03-09T20:45:07.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.881+0000 7f2907fff640 1 -- 192.168.123.107:0/565859907 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f2918004590 con 0x7f29280720b0 2026-03-09T20:45:07.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.881+0000 7f2907fff640 1 -- 192.168.123.107:0/565859907 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2918042a90 con 0x7f29280720b0 2026-03-09T20:45:07.882 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.882+0000 7f292d099640 1 -- 192.168.123.107:0/565859907 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f28f4005350 con 0x7f29280720b0 2026-03-09T20:45:07.884 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.883+0000 7f2907fff640 1 -- 192.168.123.107:0/565859907 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f29180074e0 con 0x7f29280720b0 2026-03-09T20:45:07.884 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.884+0000 7f2907fff640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f28f80761c0 0x7f28f8078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:07.884 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.884+0000 7f2926575640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f28f80761c0 0x7f28f8078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:07.884 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.884+0000 7f2907fff640 1 -- 192.168.123.107:0/565859907 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f29180bd440 con 0x7f29280720b0 2026-03-09T20:45:07.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.885+0000 7f2926575640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f28f80761c0 0x7f28f8078680 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f292810f240 tx=0x7f291c009290 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:07.887 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:07.886+0000 7f2907fff640 1 -- 192.168.123.107:0/565859907 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f2918086bc0 con 0x7f29280720b0 2026-03-09T20:45:08.032 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.031+0000 7f292d099640 1 -- 192.168.123.107:0/565859907 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs ls"} v 0) v1 -- 0x7f28f40058d0 con 0x7f29280720b0 2026-03-09T20:45:08.032 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.032+0000 7f2907fff640 1 -- 192.168.123.107:0/565859907 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs ls"}]=0 v9) v1 ==== 53+0+83 (secure 0 0 0) 0x7f2918086560 con 0x7f29280720b0 2026-03-09T20:45:08.032 INFO:teuthology.orchestra.run.vm07.stdout:name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ] 2026-03-09T20:45:08.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.037+0000 7f2905ffb640 1 -- 192.168.123.107:0/565859907 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f28f80761c0 msgr2=0x7f28f8078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:08.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.037+0000 7f2905ffb640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f28f80761c0 0x7f28f8078680 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f292810f240 tx=0x7f291c009290 comp rx=0 tx=0).stop 2026-03-09T20:45:08.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.037+0000 7f2905ffb640 1 -- 192.168.123.107:0/565859907 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f29280720b0 msgr2=0x7f2928116500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T20:45:08.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.037+0000 7f2905ffb640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f29280720b0 0x7f2928116500 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f2918002790 tx=0x7f2918004060 comp rx=0 tx=0).stop 2026-03-09T20:45:08.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.038+0000 7f2905ffb640 1 -- 192.168.123.107:0/565859907 shutdown_connections 2026-03-09T20:45:08.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.038+0000 7f2905ffb640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f28f80761c0 0x7f28f8078680 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:08.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.038+0000 7f2905ffb640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f29280729d0 0x7f2928116a40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:08.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.038+0000 7f2905ffb640 1 --2- 192.168.123.107:0/565859907 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f29280720b0 0x7f2928116500 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:08.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.038+0000 7f2905ffb640 1 -- 192.168.123.107:0/565859907 >> 192.168.123.107:0/565859907 conn(0x7f292806c7e0 msgr2=0x7f2928070ae0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:08.040 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.041+0000 7f2905ffb640 1 -- 192.168.123.107:0/565859907 shutdown_connections 2026-03-09T20:45:08.041 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08.041+0000 7f2905ffb640 1 -- 
192.168.123.107:0/565859907 wait complete. 2026-03-09T20:45:08.146 INFO:tasks.cephfs.mount:Mounting default Ceph FS; just confirmed its presence on cluster 2026-03-09T20:45:08.147 INFO:tasks.cephfs.mount:Mounting Ceph FS. Following are details of mount; remember "None" represents Python type None - 2026-03-09T20:45:08.147 INFO:tasks.cephfs.mount:self.client_remote.hostname = vm07.local 2026-03-09T20:45:08.147 INFO:tasks.cephfs.mount:self.client.name = client.0 2026-03-09T20:45:08.147 INFO:tasks.cephfs.mount:self.hostfs_mntpt = /home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:08.147 INFO:tasks.cephfs.mount:self.cephfs_name = None 2026-03-09T20:45:08.147 INFO:tasks.cephfs.mount:self.cephfs_mntpt = None 2026-03-09T20:45:08.147 INFO:tasks.cephfs.mount:self.client_keyring_path = None 2026-03-09T20:45:08.147 INFO:tasks.cephfs.mount:Setting the 'None' netns for '/home/ubuntu/cephtest/mnt.0' 2026-03-09T20:45:08.147 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:08.147 DEBUG:teuthology.orchestra.run.vm07:> ip addr 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout:1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: inet 127.0.0.1/8 scope host lo 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: valid_lft forever preferred_lft forever 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: inet6 ::1/128 scope host 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: valid_lft forever preferred_lft forever 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout:2: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: link/ether 52:55:00:00:00:07 brd ff:ff:ff:ff:ff:ff 2026-03-09T20:45:08.177 
INFO:teuthology.orchestra.run.vm07.stdout: altname enp0s3 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: altname ens3 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: inet 192.168.123.107/24 brd 192.168.123.255 scope global dynamic noprefixroute eth0 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: valid_lft 2842sec preferred_lft 2842sec 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: inet6 fe80::5055:ff:fe00:7/64 scope link noprefixroute 2026-03-09T20:45:08.177 INFO:teuthology.orchestra.run.vm07.stdout: valid_lft forever preferred_lft forever 2026-03-09T20:45:08.177 INFO:tasks.cephfs.mount:Setuping the 'ceph-brx' with 192.168.159.254/20 2026-03-09T20:45:08.177 DEBUG:teuthology.orchestra.run.vm07:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T20:45:08.177 DEBUG:teuthology.orchestra.run.vm07:> set -e 2026-03-09T20:45:08.178 DEBUG:teuthology.orchestra.run.vm07:> sudo ip link add name ceph-brx type bridge 2026-03-09T20:45:08.178 DEBUG:teuthology.orchestra.run.vm07:> sudo ip addr flush dev ceph-brx 2026-03-09T20:45:08.178 DEBUG:teuthology.orchestra.run.vm07:> sudo ip link set ceph-brx up 2026-03-09T20:45:08.178 DEBUG:teuthology.orchestra.run.vm07:> sudo ip addr add 192.168.159.254/20 brd 192.168.159.255 dev ceph-brx 2026-03-09T20:45:08.178 DEBUG:teuthology.orchestra.run.vm07:> ') 2026-03-09T20:45:08.256 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T20:45:08.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T20:45:08.388 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:08.388 DEBUG:teuthology.orchestra.run.vm07:> echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward 2026-03-09T20:45:08.502 INFO:teuthology.orchestra.run.vm07.stdout:1 2026-03-09T20:45:08.503 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:08.503 DEBUG:teuthology.orchestra.run.vm07:> ip r 2026-03-09T20:45:08.530 INFO:teuthology.orchestra.run.vm07.stdout:default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.107 metric 100 2026-03-09T20:45:08.530 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.107 metric 100 2026-03-09T20:45:08.530 INFO:teuthology.orchestra.run.vm07.stdout:192.168.144.0/20 dev ceph-brx proto kernel scope link src 192.168.159.254 2026-03-09T20:45:08.530 DEBUG:teuthology.orchestra.run.vm07:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T20:45:08.530 DEBUG:teuthology.orchestra.run.vm07:> set -e 2026-03-09T20:45:08.530 DEBUG:teuthology.orchestra.run.vm07:> sudo iptables -A FORWARD -o eth0 -i ceph-brx -j ACCEPT 2026-03-09T20:45:08.530 DEBUG:teuthology.orchestra.run.vm07:> sudo iptables -A FORWARD -i eth0 -o ceph-brx -j ACCEPT 2026-03-09T20:45:08.530 DEBUG:teuthology.orchestra.run.vm07:> sudo iptables -t nat -A POSTROUTING -s 192.168.159.254/20 -o eth0 -j MASQUERADE 2026-03-09T20:45:08.530 DEBUG:teuthology.orchestra.run.vm07:> ') 2026-03-09T20:45:08.614 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:08 vm10 ceph-mon[57011]: pgmap v84: 65 pgs: 65 active+clean; 452 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.5 KiB/s rd, 3.6 KiB/s wr, 14 op/s 2026-03-09T20:45:08.614 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:08 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:08.614 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:08 vm10 ceph-mon[57011]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:08.614 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:08 vm10 ceph-mon[57011]: from='client.? 192.168.123.107:0/565859907' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-09T20:45:08.614 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:08 vm10 ceph-mon[57011]: mds.? [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] up:active 2026-03-09T20:45:08.614 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:08 vm10 ceph-mon[57011]: mds.? [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] up:standby-replay 2026-03-09T20:45:08.614 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:08 vm10 ceph-mon[57011]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:45:08.620 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T20:45:08.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:08 vm07 ceph-mon[49120]: pgmap v84: 65 pgs: 65 active+clean; 452 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.5 KiB/s rd, 3.6 KiB/s wr, 14 op/s 2026-03-09T20:45:08.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:08 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:08.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:08 vm07 ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:08.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:08 vm07 ceph-mon[49120]: from='client.? 192.168.123.107:0/565859907' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-09T20:45:08.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:08 vm07 ceph-mon[49120]: mds.? 
[v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] up:active 2026-03-09T20:45:08.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:08 vm07 ceph-mon[49120]: mds.? [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] up:standby-replay 2026-03-09T20:45:08.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:08 vm07 ceph-mon[49120]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:45:08.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08 stdin-killer INFO: command exited with status 0: exiting normally with same code! 2026-03-09T20:45:08.737 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:08.737 DEBUG:teuthology.orchestra.run.vm07:> ip netns list 2026-03-09T20:45:08.758 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:08.758 DEBUG:teuthology.orchestra.run.vm07:> ip netns list-id 2026-03-09T20:45:08.818 DEBUG:teuthology.orchestra.run.vm07:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T20:45:08.818 DEBUG:teuthology.orchestra.run.vm07:> set -e 2026-03-09T20:45:08.818 DEBUG:teuthology.orchestra.run.vm07:> sudo ip netns add ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T20:45:08.818 DEBUG:teuthology.orchestra.run.vm07:> sudo ip netns set ceph-ns--home-ubuntu-cephtest-mnt.0 0 2026-03-09T20:45:08.818 DEBUG:teuthology.orchestra.run.vm07:> ') 2026-03-09T20:45:08.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T20:45:08.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T20:45:08.922 INFO:tasks.cephfs.mount:Setuping the netns 'ceph-ns--home-ubuntu-cephtest-mnt.0' with 192.168.144.1/20 2026-03-09T20:45:08.922 DEBUG:teuthology.orchestra.run.vm07:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T20:45:08.922 DEBUG:teuthology.orchestra.run.vm07:> set -e 2026-03-09T20:45:08.922 DEBUG:teuthology.orchestra.run.vm07:> sudo ip link add veth0 netns ceph-ns--home-ubuntu-cephtest-mnt.0 type veth peer name brx.0 2026-03-09T20:45:08.922 DEBUG:teuthology.orchestra.run.vm07:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip addr add 192.168.144.1/20 brd 192.168.159.255 dev veth0 2026-03-09T20:45:08.922 DEBUG:teuthology.orchestra.run.vm07:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip link set veth0 up 2026-03-09T20:45:08.922 DEBUG:teuthology.orchestra.run.vm07:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip link set lo up 2026-03-09T20:45:08.922 DEBUG:teuthology.orchestra.run.vm07:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip route add default via 192.168.159.254 2026-03-09T20:45:08.922 DEBUG:teuthology.orchestra.run.vm07:> ') 2026-03-09T20:45:08.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:08 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T20:45:09.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:09 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T20:45:09.076 DEBUG:teuthology.orchestra.run.vm07:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T20:45:09.076 DEBUG:teuthology.orchestra.run.vm07:> set -e 2026-03-09T20:45:09.076 DEBUG:teuthology.orchestra.run.vm07:> sudo ip link set brx.0 up 2026-03-09T20:45:09.076 DEBUG:teuthology.orchestra.run.vm07:> sudo ip link set dev brx.0 master ceph-brx 2026-03-09T20:45:09.076 DEBUG:teuthology.orchestra.run.vm07:> ') 2026-03-09T20:45:09.154 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:09 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T20:45:09.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:09 stdin-killer INFO: command exited with status 0: exiting normally with same code! 2026-03-09T20:45:09.189 INFO:tasks.cephfs.fuse_mount:Client client.0 config is {} 2026-03-09T20:45:09.189 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-09T20:45:09.189 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -v /home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:09.247 INFO:teuthology.orchestra.run.vm07.stdout:mkdir: created directory '/home/ubuntu/cephtest/mnt.0' 2026-03-09T20:45:09.247 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-09T20:45:09.247 DEBUG:teuthology.orchestra.run.vm07:> chmod 0000 /home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:09.304 DEBUG:teuthology.orchestra.run.vm07:> sudo modprobe fuse 2026-03-09T20:45:09.369 DEBUG:teuthology.orchestra.run.vm07:> cat /proc/self/mounts | awk '{print $2}' 2026-03-09T20:45:09.428 INFO:teuthology.orchestra.run.vm07.stdout:/proc 2026-03-09T20:45:09.428 INFO:teuthology.orchestra.run.vm07.stdout:/sys 2026-03-09T20:45:09.428 INFO:teuthology.orchestra.run.vm07.stdout:/dev 2026-03-09T20:45:09.428 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/security 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/dev/shm 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/dev/pts 
2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/run 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/cgroup 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/pstore 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/bpf 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/config 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/ 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/nfs/rpc_pipefs 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/selinux 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/dev/mqueue 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/dev/hugepages 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/debug 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/tracing 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-sysctl.service 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/fuse/connections 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-sysusers.service 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/run/user/1000 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/9a587aa9c3270df8d16b082d261b851707c09425c16b4e54e9500cfb81796374/merged 2026-03-09T20:45:09.429 
INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/3f7dc80f34edcbbe64a249821cabebe690707b54d960c2a958e33e685d84a1f4/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/run/user/0 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/991c503ff834fa03f99f875bd15daeae66e3a183e3aab6ef75ee8794cc51a704/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/e2e16ec660733a90afe6bed98c0cc6d6c8c6cf6bcaf9e4e8f7d2751126f652cf/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/7889397a66e64a9bf11c33e82bd95728b3a129bc60ad95ac3279340d46675dc2/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/cb1656f19352ba6a161d028b9facd3440be0c4488bfb3ffcf39cbdd92b325893/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/15e1c131ed9f0ff12a28a142f8ee8a366606fb7ae1e975486b4c56441bc573ce/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/346d8322d256ff66b563bf2d22d660f1c061e05f1f9444b66401dd7d73b4d5a2/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/f616847028afe65d11fd8dcc49f6e6eda227be555b8b8e46125aa2d98348bc7a/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/3e367de832becfa5289874e581a80e56c907ae161df24ec346e9126d4ea46395/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/224ac220cd95d91771e49a7deca652be0b5ad6910ebca1ce9465d1a0ff2cdfd7/merged 2026-03-09T20:45:09.429 
INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/dcdb0d85981cb56ef84efa5a7e19746dd66ef11903dead0e491525b9709bee23/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/d9c810ed9de3bb22dd970ab9900a0fd293fb3d783e53c20e437bbafcdb60a9dc/merged 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/run/netns 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run.vm07.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T20:45:09.429 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:09.429 DEBUG:teuthology.orchestra.run.vm07:> ls /sys/fs/fuse/connections 2026-03-09T20:45:09.487 INFO:tasks.cephfs.fuse_mount:Pre-mount connections: [] 2026-03-09T20:45:09.487 DEBUG:teuthology.orchestra.run.vm07:> (cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.0 --id 0) 2026-03-09T20:45:09.529 DEBUG:teuthology.orchestra.run.vm07:> sudo modprobe fuse 2026-03-09T20:45:09.559 DEBUG:teuthology.orchestra.run.vm07:> cat /proc/self/mounts | awk '{print $2}' 2026-03-09T20:45:09.608 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm07.stderr:2026-03-09T20:45:09.608+0000 7f39dd544580 -1 init, newargv = 0x55b1c9106ff0 newargc=15 2026-03-09T20:45:09.608 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm07.stderr:ceph-fuse[96587]: starting ceph client 2026-03-09T20:45:09.618 INFO:teuthology.orchestra.run.vm07.stdout:/proc 2026-03-09T20:45:09.643 INFO:teuthology.orchestra.run.vm07.stdout:/sys 2026-03-09T20:45:09.643 INFO:teuthology.orchestra.run.vm07.stdout:/dev 2026-03-09T20:45:09.643 
INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/security 2026-03-09T20:45:09.643 INFO:teuthology.orchestra.run.vm07.stdout:/dev/shm 2026-03-09T20:45:09.643 INFO:teuthology.orchestra.run.vm07.stdout:/dev/pts 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/run 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/cgroup 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/pstore 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/bpf 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/config 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/ 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/nfs/rpc_pipefs 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/selinux 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/dev/mqueue 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/dev/hugepages 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/debug 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/tracing 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-sysctl.service 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/fuse/connections 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-sysusers.service 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/run/user/1000 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay 
2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/9a587aa9c3270df8d16b082d261b851707c09425c16b4e54e9500cfb81796374/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/3f7dc80f34edcbbe64a249821cabebe690707b54d960c2a958e33e685d84a1f4/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/run/user/0 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/991c503ff834fa03f99f875bd15daeae66e3a183e3aab6ef75ee8794cc51a704/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/e2e16ec660733a90afe6bed98c0cc6d6c8c6cf6bcaf9e4e8f7d2751126f652cf/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/7889397a66e64a9bf11c33e82bd95728b3a129bc60ad95ac3279340d46675dc2/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/cb1656f19352ba6a161d028b9facd3440be0c4488bfb3ffcf39cbdd92b325893/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/15e1c131ed9f0ff12a28a142f8ee8a366606fb7ae1e975486b4c56441bc573ce/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/346d8322d256ff66b563bf2d22d660f1c061e05f1f9444b66401dd7d73b4d5a2/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/f616847028afe65d11fd8dcc49f6e6eda227be555b8b8e46125aa2d98348bc7a/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/3e367de832becfa5289874e581a80e56c907ae161df24ec346e9126d4ea46395/merged 2026-03-09T20:45:09.644 
INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/224ac220cd95d91771e49a7deca652be0b5ad6910ebca1ce9465d1a0ff2cdfd7/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/dcdb0d85981cb56ef84efa5a7e19746dd66ef11903dead0e491525b9709bee23/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/d9c810ed9de3bb22dd970ab9900a0fd293fb3d783e53c20e437bbafcdb60a9dc/merged 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/run/netns 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T20:45:09.644 INFO:teuthology.orchestra.run.vm07.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T20:45:09.645 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:09.645 DEBUG:teuthology.orchestra.run.vm07:> ls /sys/fs/fuse/connections 2026-03-09T20:45:09.778 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm07.stderr:ceph-fuse[96587]: starting fuse 2026-03-09T20:45:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:09 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:09 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:09 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:09 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:09.884 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:09 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:09 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:09 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:09 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:09 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:09 vm10 ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:10.800 DEBUG:teuthology.orchestra.run.vm07:> sudo modprobe fuse 2026-03-09T20:45:10.829 DEBUG:teuthology.orchestra.run.vm07:> cat /proc/self/mounts | awk '{print $2}' 2026-03-09T20:45:10.886 INFO:teuthology.orchestra.run.vm07.stdout:/proc 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/sys 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/dev 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/security 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/dev/shm 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/dev/pts 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/run 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/cgroup 2026-03-09T20:45:10.901 
INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/pstore 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/bpf 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/config 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/ 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/nfs/rpc_pipefs 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/selinux 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/dev/mqueue 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/dev/hugepages 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/debug 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/sys/kernel/tracing 2026-03-09T20:45:10.901 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-sysctl.service 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/sys/fs/fuse/connections 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-sysusers.service 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/run/user/1000 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/9a587aa9c3270df8d16b082d261b851707c09425c16b4e54e9500cfb81796374/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/3f7dc80f34edcbbe64a249821cabebe690707b54d960c2a958e33e685d84a1f4/merged 2026-03-09T20:45:10.902 
INFO:teuthology.orchestra.run.vm07.stdout:/run/user/0 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/991c503ff834fa03f99f875bd15daeae66e3a183e3aab6ef75ee8794cc51a704/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/e2e16ec660733a90afe6bed98c0cc6d6c8c6cf6bcaf9e4e8f7d2751126f652cf/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/7889397a66e64a9bf11c33e82bd95728b3a129bc60ad95ac3279340d46675dc2/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/cb1656f19352ba6a161d028b9facd3440be0c4488bfb3ffcf39cbdd92b325893/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/15e1c131ed9f0ff12a28a142f8ee8a366606fb7ae1e975486b4c56441bc573ce/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/346d8322d256ff66b563bf2d22d660f1c061e05f1f9444b66401dd7d73b4d5a2/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/f616847028afe65d11fd8dcc49f6e6eda227be555b8b8e46125aa2d98348bc7a/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/3e367de832becfa5289874e581a80e56c907ae161df24ec346e9126d4ea46395/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/224ac220cd95d91771e49a7deca652be0b5ad6910ebca1ce9465d1a0ff2cdfd7/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/dcdb0d85981cb56ef84efa5a7e19746dd66ef11903dead0e491525b9709bee23/merged 2026-03-09T20:45:10.902 
INFO:teuthology.orchestra.run.vm07.stdout:/var/lib/containers/storage/overlay/d9c810ed9de3bb22dd970ab9900a0fd293fb3d783e53c20e437bbafcdb60a9dc/merged 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/run/netns 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run.vm07.stdout:/home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:10.902 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:10.902 DEBUG:teuthology.orchestra.run.vm07:> ls /sys/fs/fuse/connections 2026-03-09T20:45:10.959 INFO:teuthology.orchestra.run.vm07.stdout:75 2026-03-09T20:45:10.959 INFO:tasks.cephfs.fuse_mount:Post-mount connections: [75] 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> sudo stdin-killer -- python3 -c ' 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> import glob 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> import re 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> import os 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> import subprocess 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> def _find_admin_socket(client_name): 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> asok_path = "/var/run/ceph/ceph-client.0.*.asok" 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> files = glob.glob(asok_path) 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> mountpoint = "/home/ubuntu/cephtest/mnt.0" 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> # Given a non-glob path, it better be there 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> if "*" not in asok_path: 
2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> assert(len(files) == 1) 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> return files[0] 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> for f in files: 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> pid = re.match(".*\.(\d+)\.asok$", f).group(1) 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> if os.path.exists("/proc/{0}".format(pid)): 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> with open("/proc/{0}/cmdline".format(pid), '"'"'r'"'"') as proc_f: 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> contents = proc_f.read() 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> if mountpoint in contents: 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> return f 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> raise RuntimeError("Client socket {0} not found".format(client_name)) 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> print(_find_admin_socket("client.0")) 2026-03-09T20:45:10.959 DEBUG:teuthology.orchestra.run.vm07:> ' 2026-03-09T20:45:11.020 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:10 vm07.local ceph-mon[49120]: pgmap v85: 65 pgs: 65 active+clean; 452 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.2 KiB/s rd, 3.2 KiB/s wr, 13 op/s 2026-03-09T20:45:11.055 INFO:teuthology.orchestra.run.vm07.stdout:/var/run/ceph/ceph-client.0.96587.asok 2026-03-09T20:45:11.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T20:45:11.062 INFO:tasks.cephfs.fuse_mount:Found client admin socket at /var/run/ceph/ceph-client.0.96587.asok 2026-03-09T20:45:11.062 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:11.063 DEBUG:teuthology.orchestra.run.vm07:> sudo ceph --admin-daemon /var/run/ceph/ceph-client.0.96587.asok status 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "metadata": { 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "ceph_sha1": "ab47f43c099b2cbae6e21342fe673ce251da54d6", 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "ceph_version": "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)", 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "entity_id": "0", 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "hostname": "vm07.local", 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "mount_point": "/home/ubuntu/cephtest/mnt.0", 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "pid": "96587", 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "root": "/" 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "dentry_count": 0, 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "dentry_pinned_count": 0, 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "id": 14518, 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "inst": { 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "name": { 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "type": "client", 2026-03-09T20:45:11.170 INFO:teuthology.orchestra.run.vm07.stdout: "num": 14518 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:11.171 
INFO:teuthology.orchestra.run.vm07.stdout: "addr": { 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "type": "v1", 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "addr": "192.168.144.1:0", 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "nonce": 2371384686 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "addr": { 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "type": "v1", 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "addr": "192.168.144.1:0", 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "nonce": 2371384686 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "inst_str": "client.14518 192.168.144.1:0/2371384686", 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "addr_str": "192.168.144.1:0/2371384686", 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "inode_count": 1, 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "mds_epoch": 10, 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "osd_epoch": 42, 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "osd_epoch_barrier": 0, 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "blocklisted": false, 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout: "fs_name": "cephfs" 2026-03-09T20:45:11.171 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:45:11.177 DEBUG:tasks.ceph_fuse:passing mntargs=[] 2026-03-09T20:45:11.177 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs ls 2026-03-09T20:45:11.287 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:10 vm10 ceph-mon[57011]: pgmap v85: 65 pgs: 65 active+clean; 452 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.2 KiB/s rd, 3.2 KiB/s wr, 13 op/s 2026-03-09T20:45:11.335 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:11.595 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.594+0000 7f280df43640 1 -- 192.168.123.107:0/1206493367 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28081089d0 msgr2=0x7f2808108db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:11.595 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.594+0000 7f280df43640 1 --2- 192.168.123.107:0/1206493367 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28081089d0 0x7f2808108db0 secure :-1 s=READY pgs=265 cs=0 l=1 rev1=1 crypto rx=0x7f27f80099b0 tx=0x7f27f802f220 comp rx=0 tx=0).stop 2026-03-09T20:45:11.595 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.595+0000 7f280df43640 1 -- 192.168.123.107:0/1206493367 shutdown_connections 2026-03-09T20:45:11.596 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.595+0000 7f280df43640 1 --2- 192.168.123.107:0/1206493367 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28081029d0 0x7f2808102e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:11.596 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.595+0000 7f280df43640 1 --2- 192.168.123.107:0/1206493367 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28081089d0 0x7f2808108db0 unknown :-1 s=CLOSED pgs=265 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:11.596 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.595+0000 7f280df43640 1 -- 192.168.123.107:0/1206493367 >> 192.168.123.107:0/1206493367 conn(0x7f28080fe710 
msgr2=0x7f2808100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:11.596 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.596+0000 7f280df43640 1 -- 192.168.123.107:0/1206493367 shutdown_connections 2026-03-09T20:45:11.596 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.596+0000 7f280df43640 1 -- 192.168.123.107:0/1206493367 wait complete. 2026-03-09T20:45:11.596 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.596+0000 7f280df43640 1 Processor -- start 2026-03-09T20:45:11.596 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.596+0000 7f280df43640 1 -- start start 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f280df43640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28081029d0 0x7f28081a0700 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f280df43640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28081089d0 0x7f28081a0c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f280df43640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f280819a7f0 con 0x7f28081029d0 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f280df43640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f280819a960 con 0x7f28081089d0 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f28077fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28081029d0 0x7f28081a0700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f28077fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28081029d0 0x7f28081a0700 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53762/0 (socket says 192.168.123.107:53762) 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f28077fe640 1 -- 192.168.123.107:0/3312708593 learned_addr learned my addr 192.168.123.107:0/3312708593 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f2806ffd640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28081089d0 0x7f28081a0c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f28077fe640 1 -- 192.168.123.107:0/3312708593 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28081089d0 msgr2=0x7f28081a0c40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f28077fe640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28081089d0 0x7f28081a0c40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:11.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.597+0000 7f28077fe640 1 -- 192.168.123.107:0/3312708593 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f27f8009660 con 0x7f28081029d0 
2026-03-09T20:45:11.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.598+0000 7f28077fe640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28081029d0 0x7f28081a0700 secure :-1 s=READY pgs=266 cs=0 l=1 rev1=1 crypto rx=0x7f27f8002410 tx=0x7f27f8004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:11.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.598+0000 7f2804ff9640 1 -- 192.168.123.107:0/3312708593 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f27f803d070 con 0x7f28081029d0 2026-03-09T20:45:11.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.598+0000 7f2804ff9640 1 -- 192.168.123.107:0/3312708593 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f27f80043b0 con 0x7f28081029d0 2026-03-09T20:45:11.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.598+0000 7f2804ff9640 1 -- 192.168.123.107:0/3312708593 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f27f8041880 con 0x7f28081029d0 2026-03-09T20:45:11.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.598+0000 7f280df43640 1 -- 192.168.123.107:0/3312708593 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f280819abe0 con 0x7f28081029d0 2026-03-09T20:45:11.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.598+0000 7f280df43640 1 -- 192.168.123.107:0/3312708593 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f280819b0d0 con 0x7f28081029d0 2026-03-09T20:45:11.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.599+0000 7f2804ff9640 1 -- 192.168.123.107:0/3312708593 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f27f804b430 con 
0x7f28081029d0 2026-03-09T20:45:11.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.600+0000 7f280df43640 1 -- 192.168.123.107:0/3312708593 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f27cc005350 con 0x7f28081029d0 2026-03-09T20:45:11.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.601+0000 7f2804ff9640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f27dc0761c0 0x7f27dc078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:11.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.601+0000 7f2804ff9640 1 -- 192.168.123.107:0/3312708593 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f27f80bd3b0 con 0x7f28081029d0 2026-03-09T20:45:11.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.603+0000 7f2806ffd640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f27dc0761c0 0x7f27dc078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:11.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.603+0000 7f2806ffd640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f27dc0761c0 0x7f27dc078680 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f280819bc60 tx=0x7f27f4008040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:11.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.603+0000 7f2804ff9640 1 -- 192.168.123.107:0/3312708593 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 
v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f27f8086940 con 0x7f28081029d0 2026-03-09T20:45:11.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.708+0000 7f280df43640 1 -- 192.168.123.107:0/3312708593 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs ls"} v 0) v1 -- 0x7f27cc005e10 con 0x7f28081029d0 2026-03-09T20:45:11.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.708+0000 7f2804ff9640 1 -- 192.168.123.107:0/3312708593 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs ls"}]=0 v10) v1 ==== 53+0+83 (secure 0 0 0) 0x7f27f80862e0 con 0x7f28081029d0 2026-03-09T20:45:11.708 INFO:teuthology.orchestra.run.vm07.stdout:name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ] 2026-03-09T20:45:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.710+0000 7f280df43640 1 -- 192.168.123.107:0/3312708593 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f27dc0761c0 msgr2=0x7f27dc078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.710+0000 7f280df43640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f27dc0761c0 0x7f27dc078680 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f280819bc60 tx=0x7f27f4008040 comp rx=0 tx=0).stop 2026-03-09T20:45:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.710+0000 7f280df43640 1 -- 192.168.123.107:0/3312708593 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28081029d0 msgr2=0x7f28081a0700 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.710+0000 7f280df43640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28081029d0 
0x7f28081a0700 secure :-1 s=READY pgs=266 cs=0 l=1 rev1=1 crypto rx=0x7f27f8002410 tx=0x7f27f8004290 comp rx=0 tx=0).stop 2026-03-09T20:45:11.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.711+0000 7f280df43640 1 -- 192.168.123.107:0/3312708593 shutdown_connections 2026-03-09T20:45:11.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.711+0000 7f280df43640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f27dc0761c0 0x7f27dc078680 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:11.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.711+0000 7f280df43640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28081089d0 0x7f28081a0c40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:11.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.711+0000 7f280df43640 1 --2- 192.168.123.107:0/3312708593 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28081029d0 0x7f28081a0700 unknown :-1 s=CLOSED pgs=266 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:11.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.711+0000 7f280df43640 1 -- 192.168.123.107:0/3312708593 >> 192.168.123.107:0/3312708593 conn(0x7f28080fe710 msgr2=0x7f280810c9e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:11.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.711+0000 7f280df43640 1 -- 192.168.123.107:0/3312708593 shutdown_connections 2026-03-09T20:45:11.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:11.711+0000 7f280df43640 1 -- 192.168.123.107:0/3312708593 wait complete. 
2026-03-09T20:45:11.773 INFO:tasks.cephfs.mount:Mounting default Ceph FS; just confirmed its presence on cluster 2026-03-09T20:45:11.773 INFO:tasks.cephfs.mount:Mounting Ceph FS. Following are details of mount; remember "None" represents Python type None - 2026-03-09T20:45:11.773 INFO:tasks.cephfs.mount:self.client_remote.hostname = vm10.local 2026-03-09T20:45:11.773 INFO:tasks.cephfs.mount:self.client.name = client.1 2026-03-09T20:45:11.773 INFO:tasks.cephfs.mount:self.hostfs_mntpt = /home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:11.773 INFO:tasks.cephfs.mount:self.cephfs_name = None 2026-03-09T20:45:11.773 INFO:tasks.cephfs.mount:self.cephfs_mntpt = None 2026-03-09T20:45:11.773 INFO:tasks.cephfs.mount:self.client_keyring_path = None 2026-03-09T20:45:11.773 INFO:tasks.cephfs.mount:Setting the 'None' netns for '/home/ubuntu/cephtest/mnt.1' 2026-03-09T20:45:11.774 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:11.774 DEBUG:teuthology.orchestra.run.vm10:> ip addr 2026-03-09T20:45:11.789 INFO:teuthology.orchestra.run.vm10.stdout:1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: inet 127.0.0.1/8 scope host lo 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: valid_lft forever preferred_lft forever 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: inet6 ::1/128 scope host 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: valid_lft forever preferred_lft forever 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout:2: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: link/ether 52:55:00:00:00:0a brd ff:ff:ff:ff:ff:ff 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: altname enp0s3 
2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: altname ens3 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: inet 192.168.123.110/24 brd 192.168.123.255 scope global dynamic noprefixroute eth0 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: valid_lft 2869sec preferred_lft 2869sec 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: inet6 fe80::5055:ff:fe00:a/64 scope link noprefixroute 2026-03-09T20:45:11.790 INFO:teuthology.orchestra.run.vm10.stdout: valid_lft forever preferred_lft forever 2026-03-09T20:45:11.790 INFO:tasks.cephfs.mount:Setuping the 'ceph-brx' with 192.168.159.254/20 2026-03-09T20:45:11.790 DEBUG:teuthology.orchestra.run.vm10:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T20:45:11.790 DEBUG:teuthology.orchestra.run.vm10:> set -e 2026-03-09T20:45:11.790 DEBUG:teuthology.orchestra.run.vm10:> sudo ip link add name ceph-brx type bridge 2026-03-09T20:45:11.790 DEBUG:teuthology.orchestra.run.vm10:> sudo ip addr flush dev ceph-brx 2026-03-09T20:45:11.790 DEBUG:teuthology.orchestra.run.vm10:> sudo ip link set ceph-brx up 2026-03-09T20:45:11.790 DEBUG:teuthology.orchestra.run.vm10:> sudo ip addr add 192.168.159.254/20 brd 192.168.159.255 dev ceph-brx 2026-03-09T20:45:11.790 DEBUG:teuthology.orchestra.run.vm10:> ') 2026-03-09T20:45:11.870 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:11 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T20:45:11.954 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:11 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T20:45:11.958 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:11.958 DEBUG:teuthology.orchestra.run.vm10:> echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward 2026-03-09T20:45:12.028 INFO:teuthology.orchestra.run.vm10.stdout:1 2026-03-09T20:45:12.029 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:12.029 DEBUG:teuthology.orchestra.run.vm10:> ip r 2026-03-09T20:45:12.083 INFO:teuthology.orchestra.run.vm10.stdout:default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.110 metric 100 2026-03-09T20:45:12.083 INFO:teuthology.orchestra.run.vm10.stdout:192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.110 metric 100 2026-03-09T20:45:12.083 INFO:teuthology.orchestra.run.vm10.stdout:192.168.144.0/20 dev ceph-brx proto kernel scope link src 192.168.159.254 2026-03-09T20:45:12.084 DEBUG:teuthology.orchestra.run.vm10:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T20:45:12.084 DEBUG:teuthology.orchestra.run.vm10:> set -e 2026-03-09T20:45:12.084 DEBUG:teuthology.orchestra.run.vm10:> sudo iptables -A FORWARD -o eth0 -i ceph-brx -j ACCEPT 2026-03-09T20:45:12.084 DEBUG:teuthology.orchestra.run.vm10:> sudo iptables -A FORWARD -i eth0 -o ceph-brx -j ACCEPT 2026-03-09T20:45:12.084 DEBUG:teuthology.orchestra.run.vm10:> sudo iptables -t nat -A POSTROUTING -s 192.168.159.254/20 -o eth0 -j MASQUERADE 2026-03-09T20:45:12.084 DEBUG:teuthology.orchestra.run.vm10:> ') 2026-03-09T20:45:12.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:11 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/3312708593' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-09T20:45:12.161 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:12 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T20:45:12.168 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:11 vm10 ceph-mon[57011]: from='client.? 
192.168.123.107:0/3312708593' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-09T20:45:12.226 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:12 stdin-killer INFO: command exited with status 0: exiting normally with same code! 2026-03-09T20:45:12.229 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:12.229 DEBUG:teuthology.orchestra.run.vm10:> ip netns list 2026-03-09T20:45:12.285 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:12.285 DEBUG:teuthology.orchestra.run.vm10:> ip netns list-id 2026-03-09T20:45:12.342 DEBUG:teuthology.orchestra.run.vm10:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T20:45:12.342 DEBUG:teuthology.orchestra.run.vm10:> set -e 2026-03-09T20:45:12.342 DEBUG:teuthology.orchestra.run.vm10:> sudo ip netns add ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-09T20:45:12.342 DEBUG:teuthology.orchestra.run.vm10:> sudo ip netns set ceph-ns--home-ubuntu-cephtest-mnt.1 0 2026-03-09T20:45:12.342 DEBUG:teuthology.orchestra.run.vm10:> ') 2026-03-09T20:45:12.416 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:12 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T20:45:12.441 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:12 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T20:45:12.445 INFO:tasks.cephfs.mount:Setuping the netns 'ceph-ns--home-ubuntu-cephtest-mnt.1' with 192.168.144.1/20 2026-03-09T20:45:12.445 DEBUG:teuthology.orchestra.run.vm10:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T20:45:12.445 DEBUG:teuthology.orchestra.run.vm10:> set -e 2026-03-09T20:45:12.445 DEBUG:teuthology.orchestra.run.vm10:> sudo ip link add veth0 netns ceph-ns--home-ubuntu-cephtest-mnt.1 type veth peer name brx.0 2026-03-09T20:45:12.445 DEBUG:teuthology.orchestra.run.vm10:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip addr add 192.168.144.1/20 brd 192.168.159.255 dev veth0 2026-03-09T20:45:12.445 DEBUG:teuthology.orchestra.run.vm10:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip link set veth0 up 2026-03-09T20:45:12.445 DEBUG:teuthology.orchestra.run.vm10:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip link set lo up 2026-03-09T20:45:12.445 DEBUG:teuthology.orchestra.run.vm10:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip route add default via 192.168.159.254 2026-03-09T20:45:12.445 DEBUG:teuthology.orchestra.run.vm10:> ') 2026-03-09T20:45:12.521 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:12 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T20:45:12.587 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:12 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
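The bridge/netns wiring that the mount task performs across the commands above can be summarized as one ordered command list: create the `ceph-brx` bridge, create the namespace, add a veth pair whose `veth0` end lives inside the namespace, then attach the `brx.0` end to the bridge. A sketch that reproduces that sequence as strings (the function and its defaults are illustrative, not the harness's own helper):

```python
def netns_setup_cmds(ns: str, bridge: str = "ceph-brx",
                     brx_addr: str = "192.168.159.254/20",
                     ns_addr: str = "192.168.144.1/20",
                     brd: str = "192.168.159.255") -> list[str]:
    """Recreate the ip-command sequence from the log: bridge first,
    then a veth pair whose veth0 peer lives inside the netns."""
    gateway = brx_addr.split('/')[0]  # bridge address doubles as default gw
    return [
        f"ip link add name {bridge} type bridge",
        f"ip addr add {brx_addr} brd {brd} dev {bridge}",
        f"ip link set {bridge} up",
        f"ip netns add {ns}",
        f"ip link add veth0 netns {ns} type veth peer name brx.0",
        f"ip netns exec {ns} ip addr add {ns_addr} brd {brd} dev veth0",
        f"ip netns exec {ns} ip link set veth0 up",
        f"ip netns exec {ns} ip route add default via {gateway}",
        "ip link set brx.0 up",
        f"ip link set dev brx.0 master {bridge}",
    ]

for cmd in netns_setup_cmds("ceph-ns--home-ubuntu-cephtest-mnt.1"):
    print("sudo", cmd)
```

On the host side this is paired with `ip_forward=1` and the iptables FORWARD/MASQUERADE rules logged earlier, so traffic from the namespaced client can reach the monitors on `eth0`.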
2026-03-09T20:45:12.590 DEBUG:teuthology.orchestra.run.vm10:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T20:45:12.590 DEBUG:teuthology.orchestra.run.vm10:> set -e 2026-03-09T20:45:12.590 DEBUG:teuthology.orchestra.run.vm10:> sudo ip link set brx.0 up 2026-03-09T20:45:12.590 DEBUG:teuthology.orchestra.run.vm10:> sudo ip link set dev brx.0 master ceph-brx 2026-03-09T20:45:12.590 DEBUG:teuthology.orchestra.run.vm10:> ') 2026-03-09T20:45:12.668 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:12 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T20:45:12.694 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:12 stdin-killer INFO: command exited with status 0: exiting normally with same code! 2026-03-09T20:45:12.699 INFO:tasks.cephfs.fuse_mount:Client client.1 config is {} 2026-03-09T20:45:12.699 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-09T20:45:12.699 DEBUG:teuthology.orchestra.run.vm10:> mkdir -p -v /home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:12.754 INFO:teuthology.orchestra.run.vm10.stdout:mkdir: created directory '/home/ubuntu/cephtest/mnt.1' 2026-03-09T20:45:12.755 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-09T20:45:12.755 DEBUG:teuthology.orchestra.run.vm10:> chmod 0000 /home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:12.811 DEBUG:teuthology.orchestra.run.vm10:> sudo modprobe fuse 2026-03-09T20:45:12.877 DEBUG:teuthology.orchestra.run.vm10:> cat /proc/self/mounts | awk '{print $2}' 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/proc 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/sys 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/dev 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/sys/kernel/security 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/dev/shm 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/dev/pts 
2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/run 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/sys/fs/cgroup 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/sys/fs/pstore 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/sys/fs/bpf 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/sys/kernel/config 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/ 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/nfs/rpc_pipefs 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/sys/fs/selinux 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/dev/hugepages 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/dev/mqueue 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/sys/kernel/debug 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/sys/kernel/tracing 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/run/credentials/systemd-sysctl.service 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/sys/fs/fuse/connections 2026-03-09T20:45:12.933 INFO:teuthology.orchestra.run.vm10.stdout:/run/credentials/systemd-sysusers.service 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/run/user/1000 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/run/user/0 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay 2026-03-09T20:45:12.934 
INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/63d6432c6c694456a2bcf5d1fe2e4a232389f22b26602396e0d01f72caeb41e6/merged 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/067617fcaf80765c8d6bd60f51e8e015aa7e5fb117d030c7336d705beda2721f/merged 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/5c05c8320e04f8c06cd4c548ee99ba992113a822725ab0509c22ef6079e93bbd/merged 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/7a97a8c431a4c88e4dbb6ec2227f551e99f3480eb0228ef5af94c4e9ed42c354/merged 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/c880916847fc854717ddd8ebf35b3423875ff09dc88e2568df478bc96370e217/merged 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/17ee7c1dc13c212959abf454ae03d8920e6b014a098667d0b09ff658893fa894/merged 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/ff63a747433f5e8596f4c21dab840bc573aae130f2c9f0c25c449dff9ce5dcc4/merged 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/9cd40bf287db9e0785550f50b9264b77bd6525ea6e2e3406925f979e4b8a017f/merged 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/4481a0c40ee4c4f89d3c4914c2813dea57964bc7ec08997297f546b684e18070/merged 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/d00f48b1f14bdc10c2963fe0f34b41394f6aa99eee31e08d6ec01ab4907bb420/merged 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/run/netns 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run.vm10.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-09T20:45:12.934 
INFO:teuthology.orchestra.run.vm10.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-09T20:45:12.934 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:12.934 DEBUG:teuthology.orchestra.run.vm10:> ls /sys/fs/fuse/connections 2026-03-09T20:45:12.991 INFO:tasks.cephfs.fuse_mount:Pre-mount connections: [] 2026-03-09T20:45:12.991 DEBUG:teuthology.orchestra.run.vm10:> (cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.1 --id 1) 2026-03-09T20:45:13.033 DEBUG:teuthology.orchestra.run.vm10:> sudo modprobe fuse 2026-03-09T20:45:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:12 vm10.local ceph-mon[57011]: pgmap v86: 65 pgs: 65 active+clean; 452 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 11 op/s 2026-03-09T20:45:13.060 DEBUG:teuthology.orchestra.run.vm10:> cat /proc/self/mounts | awk '{print $2}' 2026-03-09T20:45:13.105 INFO:tasks.cephfs.fuse_mount.ceph-fuse.1.vm10.stderr:2026-03-09T20:45:13.102+0000 7fa7c53d5580 -1 init, newargv = 0x55cc9fbc05c0 newargc=15 2026-03-09T20:45:13.105 INFO:tasks.cephfs.fuse_mount.ceph-fuse.1.vm10.stderr:ceph-fuse[82512]: starting ceph client 2026-03-09T20:45:13.115 INFO:tasks.cephfs.fuse_mount.ceph-fuse.1.vm10.stderr:ceph-fuse[82512]: starting fuse 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/proc 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/sys 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/dev 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/sys/kernel/security 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/dev/shm 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/dev/pts 2026-03-09T20:45:13.129 
INFO:teuthology.orchestra.run.vm10.stdout:/run 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/sys/fs/cgroup 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/sys/fs/pstore 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/sys/fs/bpf 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/sys/kernel/config 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/ 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/nfs/rpc_pipefs 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/sys/fs/selinux 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/dev/hugepages 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/dev/mqueue 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/sys/kernel/debug 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/sys/kernel/tracing 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/run/credentials/systemd-sysctl.service 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/sys/fs/fuse/connections 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/run/credentials/systemd-sysusers.service 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-09T20:45:13.129 INFO:teuthology.orchestra.run.vm10.stdout:/run/user/1000 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/run/user/0 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay 2026-03-09T20:45:13.130 
INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/63d6432c6c694456a2bcf5d1fe2e4a232389f22b26602396e0d01f72caeb41e6/merged 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/067617fcaf80765c8d6bd60f51e8e015aa7e5fb117d030c7336d705beda2721f/merged 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/5c05c8320e04f8c06cd4c548ee99ba992113a822725ab0509c22ef6079e93bbd/merged 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/7a97a8c431a4c88e4dbb6ec2227f551e99f3480eb0228ef5af94c4e9ed42c354/merged 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/c880916847fc854717ddd8ebf35b3423875ff09dc88e2568df478bc96370e217/merged 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/17ee7c1dc13c212959abf454ae03d8920e6b014a098667d0b09ff658893fa894/merged 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/ff63a747433f5e8596f4c21dab840bc573aae130f2c9f0c25c449dff9ce5dcc4/merged 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/9cd40bf287db9e0785550f50b9264b77bd6525ea6e2e3406925f979e4b8a017f/merged 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/4481a0c40ee4c4f89d3c4914c2813dea57964bc7ec08997297f546b684e18070/merged 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/var/lib/containers/storage/overlay/d00f48b1f14bdc10c2963fe0f34b41394f6aa99eee31e08d6ec01ab4907bb420/merged 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/run/netns 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-09T20:45:13.130 
INFO:teuthology.orchestra.run.vm10.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run.vm10.stdout:/home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:13.130 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:13.130 DEBUG:teuthology.orchestra.run.vm10:> ls /sys/fs/fuse/connections 2026-03-09T20:45:13.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:12 vm07.local ceph-mon[49120]: pgmap v86: 65 pgs: 65 active+clean; 452 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 11 op/s 2026-03-09T20:45:13.189 INFO:teuthology.orchestra.run.vm10.stdout:90 2026-03-09T20:45:13.189 INFO:tasks.cephfs.fuse_mount:Post-mount connections: [90] 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> sudo stdin-killer -- python3 -c ' 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> import glob 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> import re 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> import os 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> import subprocess 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> def _find_admin_socket(client_name): 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> asok_path = "/var/run/ceph/ceph-client.1.*.asok" 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> files = glob.glob(asok_path) 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> mountpoint = "/home/ubuntu/cephtest/mnt.1" 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> # Given a non-glob path, it better be there 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> if "*" not in asok_path: 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> assert(len(files) == 1) 2026-03-09T20:45:13.189 
DEBUG:teuthology.orchestra.run.vm10:> return files[0] 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> for f in files: 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> pid = re.match(".*\.(\d+)\.asok$", f).group(1) 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> if os.path.exists("/proc/{0}".format(pid)): 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> with open("/proc/{0}/cmdline".format(pid), '"'"'r'"'"') as proc_f: 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> contents = proc_f.read() 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> if mountpoint in contents: 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> return f 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> raise RuntimeError("Client socket {0} not found".format(client_name)) 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> print(_find_admin_socket("client.1")) 2026-03-09T20:45:13.189 DEBUG:teuthology.orchestra.run.vm10:> ' 2026-03-09T20:45:13.289 INFO:teuthology.orchestra.run.vm10.stdout:/var/run/ceph/ceph-client.1.82512.asok 2026-03-09T20:45:13.291 INFO:teuthology.orchestra.run.vm10.stderr:2026-03-09T20:45:13 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T20:45:13.296 INFO:tasks.cephfs.fuse_mount:Found client admin socket at /var/run/ceph/ceph-client.1.82512.asok 2026-03-09T20:45:13.296 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:13.296 DEBUG:teuthology.orchestra.run.vm10:> sudo ceph --admin-daemon /var/run/ceph/ceph-client.1.82512.asok status 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout:{ 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "metadata": { 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "ceph_sha1": "ab47f43c099b2cbae6e21342fe673ce251da54d6", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "ceph_version": "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "entity_id": "1", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "hostname": "vm10.local", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "mount_point": "/home/ubuntu/cephtest/mnt.1", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "pid": "82512", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "root": "/" 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: }, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "dentry_count": 0, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "dentry_pinned_count": 0, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "id": 24331, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "inst": { 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "name": { 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "type": "client", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "num": 24331 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: }, 2026-03-09T20:45:13.404 
INFO:teuthology.orchestra.run.vm10.stdout: "addr": { 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "type": "v1", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "addr": "192.168.123.110:0", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "nonce": 2539319683 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: } 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: }, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "addr": { 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "type": "v1", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "addr": "192.168.123.110:0", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "nonce": 2539319683 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: }, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "inst_str": "client.24331 192.168.123.110:0/2539319683", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "addr_str": "192.168.123.110:0/2539319683", 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "inode_count": 1, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "mds_epoch": 11, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "osd_epoch": 42, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "osd_epoch_barrier": 0, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "blocklisted": false, 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout: "fs_name": "cephfs" 2026-03-09T20:45:13.404 INFO:teuthology.orchestra.run.vm10.stdout:} 2026-03-09T20:45:13.410 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:13.410 DEBUG:teuthology.orchestra.run.vm07:> stat --file-system '--printf=%T 2026-03-09T20:45:13.410 DEBUG:teuthology.orchestra.run.vm07:> ' -- /home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:13.426 
INFO:teuthology.orchestra.run.vm07.stdout:fuseblk 2026-03-09T20:45:13.426 INFO:tasks.cephfs.fuse_mount:ceph-fuse is mounted on /home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:13.426 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:13.426 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 1777 /home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:13.496 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:13.496 DEBUG:teuthology.orchestra.run.vm10:> stat --file-system '--printf=%T 2026-03-09T20:45:13.496 DEBUG:teuthology.orchestra.run.vm10:> ' -- /home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:13.512 INFO:teuthology.orchestra.run.vm10.stdout:fuseblk 2026-03-09T20:45:13.512 INFO:tasks.cephfs.fuse_mount:ceph-fuse is mounted on /home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:13.512 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T20:45:13.512 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod 1777 /home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:14.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:13 vm07.local ceph-mon[49120]: mds.? [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] up:standby-replay 2026-03-09T20:45:14.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:13 vm07.local ceph-mon[49120]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:45:14.256 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:13 vm10.local ceph-mon[57011]: mds.? 
[v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] up:standby-replay 2026-03-09T20:45:14.256 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:13 vm10.local ceph-mon[57011]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:45:15.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:14 vm07.local ceph-mon[49120]: pgmap v87: 65 pgs: 65 active+clean; 453 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.6 KiB/s rd, 2.4 KiB/s wr, 11 op/s 2026-03-09T20:45:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:14 vm10.local ceph-mon[57011]: pgmap v87: 65 pgs: 65 active+clean; 453 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.6 KiB/s rd, 2.4 KiB/s wr, 11 op/s 2026-03-09T20:45:16.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:16 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:16.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:16 vm07.local ceph-mon[49120]: pgmap v88: 65 pgs: 65 active+clean; 453 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 3.0 KiB/s rd, 2.4 KiB/s wr, 11 op/s 2026-03-09T20:45:16.411 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:16 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:16.411 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:16 vm10.local ceph-mon[57011]: pgmap v88: 65 pgs: 65 active+clean; 453 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 3.0 KiB/s rd, 2.4 KiB/s wr, 11 op/s 2026-03-09T20:45:16.411 INFO:teuthology.run_tasks:Running task print... 2026-03-09T20:45:16.414 INFO:teuthology.task.print:**** done client 2026-03-09T20:45:16.414 INFO:teuthology.run_tasks:Running task parallel... 2026-03-09T20:45:16.417 INFO:teuthology.task.parallel:starting parallel... 2026-03-09T20:45:16.417 INFO:teuthology.task.parallel:In parallel, running task sequential... 
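The `stat --file-system '--printf=%T'` probe in the preceding entries is how the fuse_mount task decides a mount is live: ceph-fuse mounts report the `fuseblk` type. An equivalent check (a sketch, not teuthology's code) can scan `/proc/self/mounts`:

```python
def fs_type(mountpoint):
    """Return the filesystem type mounted at exactly `mountpoint`, or None.

    /proc/self/mounts lines are: device mountpoint fstype options dump pass.
    The last matching line wins, since later mounts shadow earlier ones.
    """
    fstype = None
    with open("/proc/self/mounts") as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 3 and fields[1] == mountpoint:
                fstype = fields[2]
    return fstype
```

For `/home/ubuntu/cephtest/mnt.0` and `/home/ubuntu/cephtest/mnt.1` this would return `fuseblk`, matching the `stat` output logged for both mounts.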
2026-03-09T20:45:16.417 INFO:teuthology.task.sequential:In sequential, running task cephadm.shell... 2026-03-09T20:45:16.417 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local 2026-03-09T20:45:16.418 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mgr mgr/orchestrator/fail_fs false || true' 2026-03-09T20:45:16.418 INFO:teuthology.task.parallel:In parallel, running task sequential... 2026-03-09T20:45:16.418 INFO:teuthology.task.sequential:In sequential, running task workunit... 2026-03-09T20:45:16.420 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T20:45:16.420 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-09T20:45:16.420 INFO:tasks.workunit:timeout=3h 2026-03-09T20:45:16.420 INFO:tasks.workunit:cleanup=True 2026-03-09T20:45:16.420 DEBUG:teuthology.orchestra.run.vm07:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:16.440 INFO:teuthology.orchestra.run.vm07.stdout: File: /home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:16.440 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 1 IO Block: 4194304 directory 2026-03-09T20:45:16.440 INFO:teuthology.orchestra.run.vm07.stdout:Device: 4bh/75d Inode: 1 Links: 2 2026-03-09T20:45:16.440 INFO:teuthology.orchestra.run.vm07.stdout:Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-09T20:45:16.440 INFO:teuthology.orchestra.run.vm07.stdout:Context: system_u:object_r:fusefs_t:s0 2026-03-09T20:45:16.440 INFO:teuthology.orchestra.run.vm07.stdout:Access: 1970-01-01 00:00:00.000000000 +0000 2026-03-09T20:45:16.440 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 20:45:02.019833206 +0000 2026-03-09T20:45:16.440 
INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 20:45:13.493791812 +0000 2026-03-09T20:45:16.440 INFO:teuthology.orchestra.run.vm07.stdout: Birth: - 2026-03-09T20:45:16.440 INFO:tasks.workunit:Did not need to create dir /home/ubuntu/cephtest/mnt.0 2026-03-09T20:45:16.440 DEBUG:teuthology.orchestra.run.vm07:> cd -- /home/ubuntu/cephtest/mnt.0 && sudo install -d -m 0755 --owner=ubuntu -- client.0 2026-03-09T20:45:16.511 DEBUG:teuthology.orchestra.run.vm10:> stat -- /home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:16.531 INFO:teuthology.orchestra.run.vm10.stdout: File: /home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:16.531 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 1 IO Block: 4194304 directory 2026-03-09T20:45:16.531 INFO:teuthology.orchestra.run.vm10.stdout:Device: 5ah/90d Inode: 1 Links: 3 2026-03-09T20:45:16.531 INFO:teuthology.orchestra.run.vm10.stdout:Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-09T20:45:16.531 INFO:teuthology.orchestra.run.vm10.stdout:Context: system_u:object_r:fusefs_t:s0 2026-03-09T20:45:16.531 INFO:teuthology.orchestra.run.vm10.stdout:Access: 1970-01-01 00:00:00.000000000 +0000 2026-03-09T20:45:16.531 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-09 20:45:16.506970455 +0000 2026-03-09T20:45:16.532 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-09 20:45:16.506970455 +0000 2026-03-09T20:45:16.532 INFO:teuthology.orchestra.run.vm10.stdout: Birth: - 2026-03-09T20:45:16.532 INFO:tasks.workunit:Did not need to create dir /home/ubuntu/cephtest/mnt.1 2026-03-09T20:45:16.532 DEBUG:teuthology.orchestra.run.vm10:> cd -- /home/ubuntu/cephtest/mnt.1 && sudo install -d -m 0755 --owner=ubuntu -- client.1 2026-03-09T20:45:16.569 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:16.606 DEBUG:teuthology.orchestra.run.vm07:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone 
https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T20:45:16.606 DEBUG:teuthology.orchestra.run.vm10:> rm -rf /home/ubuntu/cephtest/clone.client.1 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.1 && cd /home/ubuntu/cephtest/clone.client.1 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T20:45:16.639 INFO:tasks.workunit.client.0.vm07.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-09T20:45:16.665 INFO:tasks.workunit.client.1.vm10.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.1'... 2026-03-09T20:45:16.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.823+0000 7f00e7703640 1 -- 192.168.123.107:0/3950800818 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f00e0073b40 msgr2=0x7f00e0073fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:16.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.823+0000 7f00e7703640 1 --2- 192.168.123.107:0/3950800818 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f00e0073b40 0x7f00e0073fa0 secure :-1 s=READY pgs=268 cs=0 l=1 rev1=1 crypto rx=0x7f00d40099b0 tx=0x7f00d402f220 comp rx=0 tx=0).stop 2026-03-09T20:45:16.824 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.824+0000 7f00e7703640 1 -- 192.168.123.107:0/3950800818 shutdown_connections 2026-03-09T20:45:16.824 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.824+0000 7f00e7703640 1 --2- 192.168.123.107:0/3950800818 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f00e0073b40 0x7f00e0073fa0 unknown :-1 s=CLOSED pgs=268 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:16.824 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.824+0000 7f00e7703640 1 --2- 192.168.123.107:0/3950800818 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f00e00751a0 0x7f00e0073600 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:16.824 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.824+0000 7f00e7703640 1 -- 192.168.123.107:0/3950800818 >> 192.168.123.107:0/3950800818 conn(0x7f00e00fbfb0 msgr2=0x7f00e00fe3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:16.824 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.824+0000 7f00e7703640 1 -- 192.168.123.107:0/3950800818 shutdown_connections 2026-03-09T20:45:16.824 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.824+0000 7f00e7703640 1 -- 192.168.123.107:0/3950800818 wait complete. 2026-03-09T20:45:16.824 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.824+0000 7f00e7703640 1 Processor -- start 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e7703640 1 -- start start 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e7703640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f00e0073b40 0x7f00e019e870 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e7703640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f00e00751a0 0x7f00e019edb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e7703640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f00e019f380 con 0x7f00e0073b40 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e7703640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f00e019f4f0 con 0x7f00e00751a0 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e5478640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f00e0073b40 0x7f00e019e870 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e5478640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f00e0073b40 0x7f00e019e870 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53774/0 (socket says 192.168.123.107:53774) 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e5478640 1 -- 192.168.123.107:0/1030059638 learned_addr learned my addr 192.168.123.107:0/1030059638 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e5478640 1 -- 192.168.123.107:0/1030059638 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f00e00751a0 msgr2=0x7f00e019edb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e5478640 1 --2- 192.168.123.107:0/1030059638 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f00e00751a0 0x7f00e019edb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e5478640 1 -- 192.168.123.107:0/1030059638 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f00d4009660 con 0x7f00e0073b40 2026-03-09T20:45:16.825 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.825+0000 7f00e5478640 1 --2- 192.168.123.107:0/1030059638 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f00e0073b40 0x7f00e019e870 secure :-1 s=READY pgs=269 cs=0 l=1 rev1=1 crypto rx=0x7f00d000b790 tx=0x7f00d000bc60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:16.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.826+0000 7f00ce7fc640 1 -- 192.168.123.107:0/1030059638 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f00d0004070 con 0x7f00e0073b40 2026-03-09T20:45:16.826 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.826+0000 7f00ce7fc640 1 -- 192.168.123.107:0/1030059638 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f00d00026e0 con 0x7f00e0073b40 2026-03-09T20:45:16.826 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.826+0000 7f00ce7fc640 1 -- 192.168.123.107:0/1030059638 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f00d000caf0 con 0x7f00e0073b40 2026-03-09T20:45:16.827 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.826+0000 7f00e7703640 1 -- 192.168.123.107:0/1030059638 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f00e0104870 con 0x7f00e0073b40 2026-03-09T20:45:16.827 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.827+0000 7f00e7703640 1 -- 192.168.123.107:0/1030059638 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f00e0104d40 con 0x7f00e0073b40 2026-03-09T20:45:16.828 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.827+0000 7f00ce7fc640 1 -- 192.168.123.107:0/1030059638 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f00d000cc50 con 0x7f00e0073b40 
2026-03-09T20:45:16.828 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.828+0000 7f00e7703640 1 -- 192.168.123.107:0/1030059638 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f00a8005350 con 0x7f00e0073b40 2026-03-09T20:45:16.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.828+0000 7f00ce7fc640 1 --2- 192.168.123.107:0/1030059638 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f00bc075fb0 0x7f00bc078470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:16.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.829+0000 7f00e4c77640 1 --2- 192.168.123.107:0/1030059638 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f00bc075fb0 0x7f00bc078470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:16.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.829+0000 7f00ce7fc640 1 -- 192.168.123.107:0/1030059638 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f00d0097ce0 con 0x7f00e0073b40 2026-03-09T20:45:16.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.830+0000 7f00e4c77640 1 --2- 192.168.123.107:0/1030059638 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f00bc075fb0 0x7f00bc078470 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f00d4004870 tx=0x7f00d4004800 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:16.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.831+0000 7f00ce7fc640 1 -- 192.168.123.107:0/1030059638 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 
72+0+186382 (secure 0 0 0) 0x7f00d009b050 con 0x7f00e0073b40 2026-03-09T20:45:16.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.918+0000 7f00e7703640 1 -- 192.168.123.107:0/1030059638 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command([{prefix=config set, name=mgr/orchestrator/fail_fs}] v 0) v1 -- 0x7f00a80058d0 con 0x7f00e0073b40 2026-03-09T20:45:16.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.923+0000 7f00ce7fc640 1 -- 192.168.123.107:0/1030059638 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/orchestrator/fail_fs}]=0 v20) v1 ==== 126+0+0 (secure 0 0 0) 0x7f00d005edd0 con 0x7f00e0073b40 2026-03-09T20:45:16.931 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.931+0000 7f00e7703640 1 -- 192.168.123.107:0/1030059638 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f00bc075fb0 msgr2=0x7f00bc078470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:16.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.931+0000 7f00e7703640 1 --2- 192.168.123.107:0/1030059638 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f00bc075fb0 0x7f00bc078470 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f00d4004870 tx=0x7f00d4004800 comp rx=0 tx=0).stop 2026-03-09T20:45:16.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.931+0000 7f00e7703640 1 -- 192.168.123.107:0/1030059638 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f00e0073b40 msgr2=0x7f00e019e870 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:16.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.931+0000 7f00e7703640 1 --2- 192.168.123.107:0/1030059638 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f00e0073b40 0x7f00e019e870 secure :-1 s=READY pgs=269 cs=0 l=1 rev1=1 crypto rx=0x7f00d000b790 
tx=0x7f00d000bc60 comp rx=0 tx=0).stop 2026-03-09T20:45:16.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.932+0000 7f00e7703640 1 -- 192.168.123.107:0/1030059638 shutdown_connections 2026-03-09T20:45:16.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.932+0000 7f00e7703640 1 --2- 192.168.123.107:0/1030059638 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f00bc075fb0 0x7f00bc078470 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:16.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.932+0000 7f00e7703640 1 --2- 192.168.123.107:0/1030059638 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f00e00751a0 0x7f00e019edb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:16.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.932+0000 7f00e7703640 1 --2- 192.168.123.107:0/1030059638 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f00e0073b40 0x7f00e019e870 unknown :-1 s=CLOSED pgs=269 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:16.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.932+0000 7f00e7703640 1 -- 192.168.123.107:0/1030059638 >> 192.168.123.107:0/1030059638 conn(0x7f00e00fbfb0 msgr2=0x7f00e00fda40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:16.932 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.933+0000 7f00e7703640 1 -- 192.168.123.107:0/1030059638 shutdown_connections 2026-03-09T20:45:16.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:16.933+0000 7f00e7703640 1 -- 192.168.123.107:0/1030059638 wait complete. 2026-03-09T20:45:16.990 INFO:teuthology.task.sequential:In sequential, running task cephadm.shell... 
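Both cephadm.shell invocations in this run share one wrapper shape: `sudo cephadm --image <image> shell -c <conf> -k <keyring> --fsid <fsid> -e k=v -- bash -c '<command>'`. A sketch of composing that argv (paths and flags copied from the log; the helper itself is hypothetical, not teuthology's code):

```python
def cephadm_shell_argv(image, fsid, command, env=None):
    """Build the argv used to run one command inside `cephadm shell`
    (illustrative reconstruction of the command lines seen in this log)."""
    argv = [
        "sudo", "/home/ubuntu/cephtest/cephadm",
        "--image", image,
        "shell",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", fsid,
    ]
    for key, value in (env or {}).items():
        argv += ["-e", "{0}={1}".format(key, value)]
    # Everything after "--" runs inside the shell container.
    argv += ["--", "bash", "-c", command]
    return argv
```

Note the `|| true` suffix on the first command in the log, which lets `ceph config set mgr mgr/orchestrator/fail_fs false` fail harmlessly (e.g. when the option is unknown to the installed release).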
2026-03-09T20:45:16.990 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local 2026-03-09T20:45:16.990 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force' 2026-03-09T20:45:17.188 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:17.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.429+0000 7f388aa12640 1 -- 192.168.123.107:0/2322346134 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3884073b40 msgr2=0x7f3884073fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:17.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.429+0000 7f388aa12640 1 --2- 192.168.123.107:0/2322346134 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3884073b40 0x7f3884073fa0 secure :-1 s=READY pgs=270 cs=0 l=1 rev1=1 crypto rx=0x7f3870009a00 tx=0x7f387002f280 comp rx=0 tx=0).stop 2026-03-09T20:45:17.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.430+0000 7f388aa12640 1 -- 192.168.123.107:0/2322346134 shutdown_connections 2026-03-09T20:45:17.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.430+0000 7f388aa12640 1 --2- 192.168.123.107:0/2322346134 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3884073b40 0x7f3884073fa0 unknown :-1 s=CLOSED pgs=270 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:17.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.430+0000 7f388aa12640 1 --2- 192.168.123.107:0/2322346134 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f38840751a0 0x7f3884073600 unknown 
:-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:17.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.430+0000 7f388aa12640 1 -- 192.168.123.107:0/2322346134 >> 192.168.123.107:0/2322346134 conn(0x7f38840fbf80 msgr2=0x7f38840fe3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:17.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.430+0000 7f388aa12640 1 -- 192.168.123.107:0/2322346134 shutdown_connections 2026-03-09T20:45:17.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.430+0000 7f388aa12640 1 -- 192.168.123.107:0/2322346134 wait complete. 2026-03-09T20:45:17.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.430+0000 7f388aa12640 1 Processor -- start 2026-03-09T20:45:17.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.430+0000 7f388aa12640 1 -- start start 2026-03-09T20:45:17.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.430+0000 7f388aa12640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3884073b40 0x7f388419e8f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:17.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f388aa12640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f38840751a0 0x7f388419ee30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:17.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f388aa12640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f388419f400 con 0x7f3884073b40 2026-03-09T20:45:17.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f388aa12640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f388419f570 con 0x7f38840751a0 2026-03-09T20:45:17.431 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f3883fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3884073b40 0x7f388419e8f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:17.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f3883fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3884073b40 0x7f388419e8f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:53796/0 (socket says 192.168.123.107:53796) 2026-03-09T20:45:17.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f3883fff640 1 -- 192.168.123.107:0/587136177 learned_addr learned my addr 192.168.123.107:0/587136177 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:17.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f38837fe640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f38840751a0 0x7f388419ee30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:17.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f3883fff640 1 -- 192.168.123.107:0/587136177 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f38840751a0 msgr2=0x7f388419ee30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:17.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f3883fff640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f38840751a0 0x7f388419ee30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:17.431 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f3883fff640 1 -- 192.168.123.107:0/587136177 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3870009660 con 0x7f3884073b40 2026-03-09T20:45:17.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.431+0000 7f3883fff640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3884073b40 0x7f388419e8f0 secure :-1 s=READY pgs=271 cs=0 l=1 rev1=1 crypto rx=0x7f387400d930 tx=0x7f387400de00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:17.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.432+0000 7f38817fa640 1 -- 192.168.123.107:0/587136177 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3874004490 con 0x7f3884073b40 2026-03-09T20:45:17.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.432+0000 7f38817fa640 1 -- 192.168.123.107:0/587136177 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f3874007610 con 0x7f3884073b40 2026-03-09T20:45:17.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.432+0000 7f38817fa640 1 -- 192.168.123.107:0/587136177 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3874005080 con 0x7f3884073b40 2026-03-09T20:45:17.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.432+0000 7f388aa12640 1 -- 192.168.123.107:0/587136177 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f38841a3fb0 con 0x7f3884073b40 2026-03-09T20:45:17.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.432+0000 7f388aa12640 1 -- 192.168.123.107:0/587136177 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f38841a4500 con 0x7f3884073b40 
2026-03-09T20:45:17.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.433+0000 7f38817fa640 1 -- 192.168.123.107:0/587136177 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f38740052f0 con 0x7f3884073b40 2026-03-09T20:45:17.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.433+0000 7f38817fa640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f38580761c0 0x7f3858078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:17.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.433+0000 7f38817fa640 1 -- 192.168.123.107:0/587136177 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f38740986e0 con 0x7f3884073b40 2026-03-09T20:45:17.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.433+0000 7f38837fe640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f38580761c0 0x7f3858078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:17.434 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.434+0000 7f388aa12640 1 -- 192.168.123.107:0/587136177 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f388410fba0 con 0x7f3884073b40 2026-03-09T20:45:17.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.437+0000 7f38837fe640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f38580761c0 0x7f3858078680 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f3870002c80 tx=0x7f3870005c70 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:45:17.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.437+0000 7f38817fa640 1 -- 192.168.123.107:0/587136177 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f3874061c70 con 0x7f3884073b40 2026-03-09T20:45:17.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.525+0000 7f388aa12640 1 -- 192.168.123.107:0/587136177 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim}] v 0) v1 -- 0x7f388410fda0 con 0x7f3884073b40 2026-03-09T20:45:17.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.529+0000 7f38817fa640 1 -- 192.168.123.107:0/587136177 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim}]=0 v20)=0 v20) v1 ==== 155+0+0 (secure 0 0 0) 0x7f387401d0e0 con 0x7f3884073b40 2026-03-09T20:45:17.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.531+0000 7f388aa12640 1 -- 192.168.123.107:0/587136177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f38580761c0 msgr2=0x7f3858078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:17.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.531+0000 7f388aa12640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f38580761c0 0x7f3858078680 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f3870002c80 tx=0x7f3870005c70 comp rx=0 tx=0).stop 2026-03-09T20:45:17.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.531+0000 7f388aa12640 1 -- 192.168.123.107:0/587136177 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3884073b40 msgr2=0x7f388419e8f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:17.531 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.531+0000 7f388aa12640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3884073b40 0x7f388419e8f0 secure :-1 s=READY pgs=271 cs=0 l=1 rev1=1 crypto rx=0x7f387400d930 tx=0x7f387400de00 comp rx=0 tx=0).stop 2026-03-09T20:45:17.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.531+0000 7f388aa12640 1 -- 192.168.123.107:0/587136177 shutdown_connections 2026-03-09T20:45:17.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.531+0000 7f388aa12640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f38580761c0 0x7f3858078680 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:17.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.531+0000 7f388aa12640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f38840751a0 0x7f388419ee30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:17.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.531+0000 7f388aa12640 1 --2- 192.168.123.107:0/587136177 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3884073b40 0x7f388419e8f0 unknown :-1 s=CLOSED pgs=271 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:17.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.532+0000 7f388aa12640 1 -- 192.168.123.107:0/587136177 >> 192.168.123.107:0/587136177 conn(0x7f38840fbf80 msgr2=0x7f38840fdc50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:17.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.532+0000 7f388aa12640 1 -- 192.168.123.107:0/587136177 shutdown_connections 2026-03-09T20:45:17.532 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.532+0000 7f388aa12640 1 -- 
192.168.123.107:0/587136177 wait complete. 2026-03-09T20:45:17.591 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force' 2026-03-09T20:45:17.754 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:17.997 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.997+0000 7f5543dbe640 1 -- 192.168.123.107:0/4180259383 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f553c075ba0 msgr2=0x7f553c075fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:17.997 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.997+0000 7f5543dbe640 1 --2- 192.168.123.107:0/4180259383 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f553c075ba0 0x7f553c075fa0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f55300099b0 tx=0x7f553002f220 comp rx=0 tx=0).stop 2026-03-09T20:45:17.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.998+0000 7f5543dbe640 1 -- 192.168.123.107:0/4180259383 shutdown_connections 2026-03-09T20:45:17.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.998+0000 7f5543dbe640 1 --2- 192.168.123.107:0/4180259383 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f553c076df0 0x7f553c077250 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:17.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.998+0000 7f5543dbe640 1 --2- 192.168.123.107:0/4180259383 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f553c075ba0 0x7f553c075fa0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-09T20:45:17.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.998+0000 7f5543dbe640 1 -- 192.168.123.107:0/4180259383 >> 192.168.123.107:0/4180259383 conn(0x7f553c0fe250 msgr2=0x7f553c100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:17.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.998+0000 7f5543dbe640 1 -- 192.168.123.107:0/4180259383 shutdown_connections 2026-03-09T20:45:17.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.998+0000 7f5543dbe640 1 -- 192.168.123.107:0/4180259383 wait complete. 2026-03-09T20:45:17.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.998+0000 7f5543dbe640 1 Processor -- start 2026-03-09T20:45:17.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.998+0000 7f5543dbe640 1 -- start start 2026-03-09T20:45:17.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 7f5543dbe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f553c075ba0 0x7f553c19e900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 7f5543dbe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f553c076df0 0x7f553c19ee40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 7f5543dbe640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f553c19f410 con 0x7f553c076df0 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 7f5543dbe640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f553c19f580 con 0x7f553c075ba0 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 
7f5541b33640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f553c075ba0 0x7f553c19e900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 7f5541b33640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f553c075ba0 0x7f553c19e900 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:44290/0 (socket says 192.168.123.107:44290) 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 7f5541b33640 1 -- 192.168.123.107:0/4044274225 learned_addr learned my addr 192.168.123.107:0/4044274225 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 7f5541332640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f553c076df0 0x7f553c19ee40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 7f5541b33640 1 -- 192.168.123.107:0/4044274225 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f553c076df0 msgr2=0x7f553c19ee40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 7f5541b33640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f553c076df0 0x7f553c19ee40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 
7f5541b33640 1 -- 192.168.123.107:0/4044274225 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f552c009590 con 0x7f553c075ba0 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:17.999+0000 7f5541332640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f553c076df0 0x7f553c19ee40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:45:17.999 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.000+0000 7f5541b33640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f553c075ba0 0x7f553c19e900 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f5530004580 tx=0x7f55300029a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:18.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.000+0000 7f552affd640 1 -- 192.168.123.107:0/4044274225 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f553003d070 con 0x7f553c075ba0 2026-03-09T20:45:18.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.000+0000 7f5543dbe640 1 -- 192.168.123.107:0/4044274225 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5530009660 con 0x7f553c075ba0 2026-03-09T20:45:18.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.000+0000 7f5543dbe640 1 -- 192.168.123.107:0/4044274225 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f553c1a42a0 con 0x7f553c075ba0 2026-03-09T20:45:18.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.001+0000 7f552affd640 1 -- 192.168.123.107:0/4044274225 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5530031df0 con 
0x7f553c075ba0 2026-03-09T20:45:18.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.001+0000 7f552affd640 1 -- 192.168.123.107:0/4044274225 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5530031880 con 0x7f553c075ba0 2026-03-09T20:45:18.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.001+0000 7f552affd640 1 -- 192.168.123.107:0/4044274225 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f5530038490 con 0x7f553c075ba0 2026-03-09T20:45:18.001 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.002+0000 7f5543dbe640 1 -- 192.168.123.107:0/4044274225 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f553c075fa0 con 0x7f553c075ba0 2026-03-09T20:45:18.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.002+0000 7f552affd640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f55140761c0 0x7f5514078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:18.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.002+0000 7f5541332640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f55140761c0 0x7f5514078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:18.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.002+0000 7f552affd640 1 -- 192.168.123.107:0/4044274225 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f55300bc2f0 con 0x7f553c075ba0 2026-03-09T20:45:18.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.002+0000 7f5541332640 1 --2- 192.168.123.107:0/4044274225 
>> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f55140761c0 0x7f5514078680 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f553c19fe20 tx=0x7f552c009290 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:18.005 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.006+0000 7f552affd640 1 -- 192.168.123.107:0/4044274225 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f5530086840 con 0x7f553c075ba0 2026-03-09T20:45:18.096 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.096+0000 7f5543dbe640 1 -- 192.168.123.107:0/4044274225 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim_allowed}] v 0) v1 -- 0x7f553c10fb80 con 0x7f553c075ba0 2026-03-09T20:45:18.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:17 vm07.local ceph-mon[49120]: from='client.? 
192.168.123.107:0/1030059638' entity='client.admin' 2026-03-09T20:45:18.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:17 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:18.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:17 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:18.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:17 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:18.096 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:17 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:18.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.099+0000 7f552affd640 1 -- 192.168.123.107:0/4044274225 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim_allowed}]=0 v20)=0 v20) v1 ==== 163+0+0 (secure 0 0 0) 0x7f55300861e0 con 0x7f553c075ba0 2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.101+0000 7f5543dbe640 1 -- 192.168.123.107:0/4044274225 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f55140761c0 msgr2=0x7f5514078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.101+0000 7f5543dbe640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f55140761c0 0x7f5514078680 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f553c19fe20 tx=0x7f552c009290 comp rx=0 tx=0).stop 
2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.101+0000 7f5543dbe640 1 -- 192.168.123.107:0/4044274225 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f553c075ba0 msgr2=0x7f553c19e900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.101+0000 7f5543dbe640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f553c075ba0 0x7f553c19e900 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f5530004580 tx=0x7f55300029a0 comp rx=0 tx=0).stop 2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.101+0000 7f5543dbe640 1 -- 192.168.123.107:0/4044274225 shutdown_connections 2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.101+0000 7f5543dbe640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f55140761c0 0x7f5514078680 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.101+0000 7f5543dbe640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f553c076df0 0x7f553c19ee40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.101+0000 7f5543dbe640 1 --2- 192.168.123.107:0/4044274225 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f553c075ba0 0x7f553c19e900 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.101+0000 7f5543dbe640 1 -- 192.168.123.107:0/4044274225 >> 192.168.123.107:0/4044274225 conn(0x7f553c0fe250 msgr2=0x7f553c0ffaf0 unknown 
:-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.101+0000 7f5543dbe640 1 -- 192.168.123.107:0/4044274225 shutdown_connections 2026-03-09T20:45:18.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.102+0000 7f5543dbe640 1 -- 192.168.123.107:0/4044274225 wait complete. 2026-03-09T20:45:18.157 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set global log_to_journald false --force' 2026-03-09T20:45:18.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:17 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/1030059638' entity='client.admin' 2026-03-09T20:45:18.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:17 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:18.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:17 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:18.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:17 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:18.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:17 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:18.302 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:18.528 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.528+0000 7f503e56b640 1 -- 192.168.123.107:0/1527564324 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5038073b40 msgr2=0x7f5038073fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:18.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.528+0000 7f503e56b640 1 --2- 192.168.123.107:0/1527564324 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5038073b40 0x7f5038073fa0 secure :-1 s=READY pgs=272 cs=0 l=1 rev1=1 crypto rx=0x7f5024009a00 tx=0x7f502402f280 comp rx=0 tx=0).stop 2026-03-09T20:45:18.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.528+0000 7f503e56b640 1 -- 192.168.123.107:0/1527564324 shutdown_connections 2026-03-09T20:45:18.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.528+0000 7f503e56b640 1 --2- 192.168.123.107:0/1527564324 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5038073b40 0x7f5038073fa0 unknown :-1 s=CLOSED pgs=272 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:18.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.529+0000 7f503e56b640 1 --2- 192.168.123.107:0/1527564324 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f50380751a0 0x7f5038073600 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:18.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.529+0000 7f503e56b640 1 -- 192.168.123.107:0/1527564324 >> 192.168.123.107:0/1527564324 conn(0x7f50380fbf80 msgr2=0x7f50380fe3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:18.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.529+0000 7f503e56b640 1 -- 192.168.123.107:0/1527564324 shutdown_connections 2026-03-09T20:45:18.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.529+0000 7f503e56b640 1 -- 192.168.123.107:0/1527564324 wait complete. 
2026-03-09T20:45:18.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.529+0000 7f503e56b640 1 Processor -- start 2026-03-09T20:45:18.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.529+0000 7f503e56b640 1 -- start start 2026-03-09T20:45:18.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f503e56b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5038073b40 0x7f503819a450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f503e56b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f50380751a0 0x7f503819a990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f503e56b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f503819af60 con 0x7f50380751a0 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f503e56b640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f503819b0d0 con 0x7f5038073b40 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f50377fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f50380751a0 0x7f503819a990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f5037fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5038073b40 0x7f503819a450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f50377fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f50380751a0 0x7f503819a990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:39614/0 (socket says 192.168.123.107:39614) 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f50377fe640 1 -- 192.168.123.107:0/3599238860 learned_addr learned my addr 192.168.123.107:0/3599238860 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f50377fe640 1 -- 192.168.123.107:0/3599238860 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5038073b40 msgr2=0x7f503819a450 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f50377fe640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5038073b40 0x7f503819a450 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f50377fe640 1 -- 192.168.123.107:0/3599238860 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5024009660 con 0x7f50380751a0 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f5037fff640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5038073b40 0x7f503819a450 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.530+0000 7f50377fe640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f50380751a0 0x7f503819a990 secure :-1 s=READY pgs=273 cs=0 l=1 rev1=1 crypto rx=0x7f5024031d30 tx=0x7f5024031d60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:18.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.531+0000 7f50357fa640 1 -- 192.168.123.107:0/3599238860 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5024031f00 con 0x7f50380751a0 2026-03-09T20:45:18.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.531+0000 7f50357fa640 1 -- 192.168.123.107:0/3599238860 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5024031110 con 0x7f50380751a0 2026-03-09T20:45:18.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.531+0000 7f50357fa640 1 -- 192.168.123.107:0/3599238860 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5024038660 con 0x7f50380751a0 2026-03-09T20:45:18.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.531+0000 7f503e56b640 1 -- 192.168.123.107:0/3599238860 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f503819fb10 con 0x7f50380751a0 2026-03-09T20:45:18.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.531+0000 7f503e56b640 1 -- 192.168.123.107:0/3599238860 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f503819ff50 con 0x7f50380751a0 2026-03-09T20:45:18.532 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.532+0000 7f503e56b640 1 -- 192.168.123.107:0/3599238860 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f4ffc005350 con 0x7f50380751a0 2026-03-09T20:45:18.535 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.535+0000 7f50357fa640 1 -- 192.168.123.107:0/3599238860 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f502403f030 con 0x7f50380751a0 2026-03-09T20:45:18.535 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.536+0000 7f50357fa640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f500c076290 0x7f500c078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:18.536 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.536+0000 7f5037fff640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f500c076290 0x7f500c078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:18.536 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.536+0000 7f50357fa640 1 -- 192.168.123.107:0/3599238860 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f50240bc960 con 0x7f50380751a0 2026-03-09T20:45:18.536 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.536+0000 7f5037fff640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f500c076290 0x7f500c078750 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7f50280041f0 tx=0x7f5028009290 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:18.536 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.536+0000 7f50357fa640 1 -- 192.168.123.107:0/3599238860 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f50240ea8c0 con 0x7f50380751a0 2026-03-09T20:45:18.628 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.627+0000 7f503e56b640 1 -- 192.168.123.107:0/3599238860 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command([{prefix=config set, name=log_to_journald}] v 0) v1 -- 0x7f4ffc0058d0 con 0x7f50380751a0 2026-03-09T20:45:18.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.629+0000 7f50357fa640 1 -- 192.168.123.107:0/3599238860 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{prefix=config set, name=log_to_journald}]=0 v20)=0 v20) v1 ==== 135+0+0 (secure 0 0 0) 0x7f5024085f70 con 0x7f50380751a0 2026-03-09T20:45:18.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.632+0000 7f503e56b640 1 -- 192.168.123.107:0/3599238860 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f500c076290 msgr2=0x7f500c078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:18.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.632+0000 7f503e56b640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f500c076290 0x7f500c078750 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7f50280041f0 tx=0x7f5028009290 comp rx=0 tx=0).stop 2026-03-09T20:45:18.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.633+0000 7f503e56b640 1 -- 192.168.123.107:0/3599238860 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f50380751a0 msgr2=0x7f503819a990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:18.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.633+0000 7f503e56b640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f50380751a0 0x7f503819a990 secure :-1 s=READY pgs=273 cs=0 l=1 rev1=1 crypto 
rx=0x7f5024031d30 tx=0x7f5024031d60 comp rx=0 tx=0).stop 2026-03-09T20:45:18.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.633+0000 7f503e56b640 1 -- 192.168.123.107:0/3599238860 shutdown_connections 2026-03-09T20:45:18.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.633+0000 7f503e56b640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f500c076290 0x7f500c078750 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:18.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.633+0000 7f503e56b640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f50380751a0 0x7f503819a990 unknown :-1 s=CLOSED pgs=273 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:18.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.633+0000 7f503e56b640 1 --2- 192.168.123.107:0/3599238860 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5038073b40 0x7f503819a450 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:18.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.634+0000 7f503e56b640 1 -- 192.168.123.107:0/3599238860 >> 192.168.123.107:0/3599238860 conn(0x7f50380fbf80 msgr2=0x7f50380fdab0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:18.634 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.634+0000 7f503e56b640 1 -- 192.168.123.107:0/3599238860 shutdown_connections 2026-03-09T20:45:18.634 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:18.634+0000 7f503e56b640 1 -- 192.168.123.107:0/3599238860 wait complete. 
2026-03-09T20:45:18.696 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1' 2026-03-09T20:45:18.852 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:18 vm07.local ceph-mon[49120]: pgmap v89: 65 pgs: 65 active+clean; 454 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 3.0 KiB/s rd, 2.2 KiB/s wr, 8 op/s 2026-03-09T20:45:19.156 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.156+0000 7f04895ff640 1 -- 192.168.123.107:0/1797234453 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0484072390 msgr2=0x7f048410c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:19.156 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.156+0000 7f04895ff640 1 --2- 192.168.123.107:0/1797234453 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0484072390 0x7f048410c590 secure :-1 s=READY pgs=274 cs=0 l=1 rev1=1 crypto rx=0x7f047c00b0a0 tx=0x7f047c02f4c0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.156 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.156+0000 7f04895ff640 1 -- 192.168.123.107:0/1797234453 shutdown_connections 2026-03-09T20:45:19.156 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.156+0000 7f04895ff640 1 --2- 192.168.123.107:0/1797234453 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0484072390 0x7f048410c590 unknown :-1 s=CLOSED pgs=274 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.156 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.156+0000 7f04895ff640 1 --2- 
192.168.123.107:0/1797234453 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f04840719c0 0x7f0484071dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.156 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.156+0000 7f04895ff640 1 -- 192.168.123.107:0/1797234453 >> 192.168.123.107:0/1797234453 conn(0x7f048406d4f0 msgr2=0x7f048406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:19.157 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.157+0000 7f04895ff640 1 -- 192.168.123.107:0/1797234453 shutdown_connections 2026-03-09T20:45:19.157 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.157+0000 7f04895ff640 1 -- 192.168.123.107:0/1797234453 wait complete. 2026-03-09T20:45:19.157 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.157+0000 7f04895ff640 1 Processor -- start 2026-03-09T20:45:19.157 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.157+0000 7f04895ff640 1 -- start start 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.157+0000 7f04895ff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f04840719c0 0x7f04841a7140 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.157+0000 7f04895ff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04841a7680 0x7f04841ac6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.157+0000 7f04895ff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f04841a7b00 con 0x7f04841a7680 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.157+0000 7f04895ff640 1 -- --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f04841a7c70 con 0x7f04840719c0 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.158+0000 7f04827fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04841a7680 0x7f04841ac6f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.158+0000 7f04827fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04841a7680 0x7f04841ac6f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:39646/0 (socket says 192.168.123.107:39646) 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.158+0000 7f04827fc640 1 -- 192.168.123.107:0/4098428249 learned_addr learned my addr 192.168.123.107:0/4098428249 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.158+0000 7f0482ffd640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f04840719c0 0x7f04841a7140 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.158+0000 7f04827fc640 1 -- 192.168.123.107:0/4098428249 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f04840719c0 msgr2=0x7f04841a7140 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.158+0000 7f04827fc640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 
conn(0x7f04840719c0 0x7f04841a7140 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.158+0000 7f04827fc640 1 -- 192.168.123.107:0/4098428249 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f047c009d00 con 0x7f04841a7680 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.158+0000 7f04827fc640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04841a7680 0x7f04841ac6f0 secure :-1 s=READY pgs=275 cs=0 l=1 rev1=1 crypto rx=0x7f047c009fd0 tx=0x7f047c009300 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:19.158 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.158+0000 7f0463fff640 1 -- 192.168.123.107:0/4098428249 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f047c009470 con 0x7f04841a7680 2026-03-09T20:45:19.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.159+0000 7f04895ff640 1 -- 192.168.123.107:0/4098428249 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f04841acc30 con 0x7f04841a7680 2026-03-09T20:45:19.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.159+0000 7f0463fff640 1 -- 192.168.123.107:0/4098428249 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f047c0048f0 con 0x7f04841a7680 2026-03-09T20:45:19.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.159+0000 7f0463fff640 1 -- 192.168.123.107:0/4098428249 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f047c040a80 con 0x7f04841a7680 2026-03-09T20:45:19.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.159+0000 7f04895ff640 1 -- 
192.168.123.107:0/4098428249 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f04841ad130 con 0x7f04841a7680 2026-03-09T20:45:19.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.160+0000 7f0461ffb640 1 -- 192.168.123.107:0/4098428249 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f044c005350 con 0x7f04841a7680 2026-03-09T20:45:19.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.161+0000 7f0463fff640 1 -- 192.168.123.107:0/4098428249 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f047c040be0 con 0x7f04841a7680 2026-03-09T20:45:19.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.161+0000 7f0463fff640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0470076290 0x7f0470078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:19.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.161+0000 7f0463fff640 1 -- 192.168.123.107:0/4098428249 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f047c0bc890 con 0x7f04841a7680 2026-03-09T20:45:19.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.171+0000 7f0482ffd640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0470076290 0x7f0470078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:19.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.171+0000 7f0463fff640 1 -- 192.168.123.107:0/4098428249 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 
(secure 0 0 0) 0x7f047c085ec0 con 0x7f04841a7680 2026-03-09T20:45:19.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.171+0000 7f0482ffd640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0470076290 0x7f0470078750 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f0474007c40 tx=0x7f04740073d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:19.200 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:18 vm10.local ceph-mon[57011]: pgmap v89: 65 pgs: 65 active+clean; 454 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 3.0 KiB/s rd, 2.2 KiB/s wr, 8 op/s 2026-03-09T20:45:19.273 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.273+0000 7f0461ffb640 1 -- 192.168.123.107:0/4098428249 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}) v1 -- 0x7f044c002bf0 con 0x7f0470076290 2026-03-09T20:45:19.283 INFO:teuthology.orchestra.run.vm07.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:45:19.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.279+0000 7f0463fff640 1 -- 192.168.123.107:0/4098428249 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+89 (secure 0 0 0) 0x7f044c002bf0 con 0x7f0470076290 2026-03-09T20:45:19.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.283+0000 7f04895ff640 1 -- 192.168.123.107:0/4098428249 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0470076290 msgr2=0x7f0470078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:19.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.283+0000 
7f04895ff640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0470076290 0x7f0470078750 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f0474007c40 tx=0x7f04740073d0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.283+0000 7f04895ff640 1 -- 192.168.123.107:0/4098428249 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04841a7680 msgr2=0x7f04841ac6f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:19.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.283+0000 7f04895ff640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04841a7680 0x7f04841ac6f0 secure :-1 s=READY pgs=275 cs=0 l=1 rev1=1 crypto rx=0x7f047c009fd0 tx=0x7f047c009300 comp rx=0 tx=0).stop 2026-03-09T20:45:19.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.283+0000 7f04895ff640 1 -- 192.168.123.107:0/4098428249 shutdown_connections 2026-03-09T20:45:19.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.283+0000 7f04895ff640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0470076290 0x7f0470078750 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.283+0000 7f04895ff640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f04841a7680 0x7f04841ac6f0 unknown :-1 s=CLOSED pgs=275 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.284 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.284+0000 7f04895ff640 1 --2- 192.168.123.107:0/4098428249 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f04840719c0 0x7f04841a7140 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.284 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.284+0000 7f04895ff640 1 -- 192.168.123.107:0/4098428249 >> 192.168.123.107:0/4098428249 conn(0x7f048406d4f0 msgr2=0x7f0484070460 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:19.284 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.284+0000 7f04895ff640 1 -- 192.168.123.107:0/4098428249 shutdown_connections 2026-03-09T20:45:19.284 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.284+0000 7f04895ff640 1 -- 192.168.123.107:0/4098428249 wait complete. 2026-03-09T20:45:19.348 INFO:teuthology.task.sequential:In sequential, running task cephadm.shell... 2026-03-09T20:45:19.349 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local 2026-03-09T20:45:19.349 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! 
ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph fs dump; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done' 2026-03-09T20:45:19.585 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:45:19.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.957+0000 7fbbbb874640 1 -- 192.168.123.107:0/133379394 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb4073b40 msgr2=0x7fbbb4073fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:19.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.957+0000 7fbbbb874640 1 --2- 192.168.123.107:0/133379394 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb4073b40 0x7fbbb4073fa0 secure :-1 s=READY pgs=276 cs=0 l=1 rev1=1 crypto rx=0x7fbba40099b0 tx=0x7fbba402f220 comp rx=0 tx=0).stop 2026-03-09T20:45:19.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.958+0000 7fbbbb874640 1 -- 192.168.123.107:0/133379394 shutdown_connections 2026-03-09T20:45:19.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.958+0000 7fbbbb874640 1 --2- 192.168.123.107:0/133379394 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb4073b40 0x7fbbb4073fa0 unknown :-1 s=CLOSED pgs=276 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.958+0000 7fbbbb874640 1 --2- 192.168.123.107:0/133379394 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb40751a0 0x7fbbb4073600 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.958+0000 7fbbbb874640 1 -- 192.168.123.107:0/133379394 >> 192.168.123.107:0/133379394 conn(0x7fbbb40fbf80 msgr2=0x7fbbb40fe3c0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T20:45:19.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.959+0000 7fbbbb874640 1 -- 192.168.123.107:0/133379394 shutdown_connections 2026-03-09T20:45:19.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.959+0000 7fbbbb874640 1 -- 192.168.123.107:0/133379394 wait complete. 2026-03-09T20:45:19.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.959+0000 7fbbbb874640 1 Processor -- start 2026-03-09T20:45:19.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.959+0000 7fbbbb874640 1 -- start start 2026-03-09T20:45:19.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.960+0000 7fbbbb874640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb40751a0 0x7fbbb419e900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:19.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.960+0000 7fbbb95e9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb40751a0 0x7fbbb419e900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:19.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.960+0000 7fbbb95e9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb40751a0 0x7fbbb419e900 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:39658/0 (socket says 192.168.123.107:39658) 2026-03-09T20:45:19.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.960+0000 7fbbbb874640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb419ee40 0x7fbbb41a3eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:19.960 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.960+0000 7fbbbb874640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbbb419f2c0 con 0x7fbbb40751a0 2026-03-09T20:45:19.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.960+0000 7fbbbb874640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbbb419f430 con 0x7fbbb419ee40 2026-03-09T20:45:19.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.960+0000 7fbbb95e9640 1 -- 192.168.123.107:0/3052637405 learned_addr learned my addr 192.168.123.107:0/3052637405 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:19.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.960+0000 7fbbb8de8640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb419ee40 0x7fbbb41a3eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:19.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.961+0000 7fbbb8de8640 1 -- 192.168.123.107:0/3052637405 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb40751a0 msgr2=0x7fbbb419e900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:19.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.961+0000 7fbbb8de8640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb40751a0 0x7fbbb419e900 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:19.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.961+0000 7fbbb8de8640 1 -- 192.168.123.107:0/3052637405 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbba4009660 con 0x7fbbb419ee40 2026-03-09T20:45:19.961 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.961+0000 7fbbb95e9640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb40751a0 0x7fbbb419e900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T20:45:19.962 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.961+0000 7fbbb8de8640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb419ee40 0x7fbbb41a3eb0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fbba402f730 tx=0x7fbba4002bc0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:19.962 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.962+0000 7fbbaa7fc640 1 -- 192.168.123.107:0/3052637405 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbba403d070 con 0x7fbbb419ee40 2026-03-09T20:45:19.962 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.962+0000 7fbbbb874640 1 -- 192.168.123.107:0/3052637405 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbbb41a43f0 con 0x7fbbb419ee40 2026-03-09T20:45:19.962 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.962+0000 7fbbbb874640 1 -- 192.168.123.107:0/3052637405 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbbb41a4890 con 0x7fbbb419ee40 2026-03-09T20:45:19.963 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.962+0000 7fbbaa7fc640 1 -- 192.168.123.107:0/3052637405 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fbba4004510 con 0x7fbbb419ee40 2026-03-09T20:45:19.963 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.962+0000 7fbbaa7fc640 1 -- 192.168.123.107:0/3052637405 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map 
magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbba4038da0 con 0x7fbbb419ee40 2026-03-09T20:45:19.963 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.963+0000 7fbbaa7fc640 1 -- 192.168.123.107:0/3052637405 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fbba4049050 con 0x7fbbb419ee40 2026-03-09T20:45:19.964 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.964+0000 7fbbaa7fc640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb880761c0 0x7fbb88078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:19.964 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.964+0000 7fbbb95e9640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb880761c0 0x7fbb88078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:19.964 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.964+0000 7fbbaa7fc640 1 -- 192.168.123.107:0/3052637405 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fbba40bbed0 con 0x7fbbb419ee40 2026-03-09T20:45:19.964 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.964+0000 7fbbbb874640 1 -- 192.168.123.107:0/3052637405 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbb7c005350 con 0x7fbbb419ee40 2026-03-09T20:45:19.965 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.965+0000 7fbbb95e9640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb880761c0 0x7fbb88078680 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fbb9c0046d0 tx=0x7fbb9c004050 comp rx=0 
tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:19.968 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:19.968+0000 7fbbaa7fc640 1 -- 192.168.123.107:0/3052637405 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fbba4080e60 con 0x7fbbb419ee40 2026-03-09T20:45:20.139 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.135+0000 7fbbbb874640 1 -- 192.168.123.107:0/3052637405 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fbb7c002bf0 con 0x7fbb880761c0 2026-03-09T20:45:20.143 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.141+0000 7fbbaa7fc640 1 -- 192.168.123.107:0/3052637405 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7fbb7c002bf0 con 0x7fbb880761c0 2026-03-09T20:45:20.143 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 -- 192.168.123.107:0/3052637405 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb880761c0 msgr2=0x7fbb88078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.143 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb880761c0 0x7fbb88078680 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fbb9c0046d0 tx=0x7fbb9c004050 comp rx=0 tx=0).stop 2026-03-09T20:45:20.143 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 -- 192.168.123.107:0/3052637405 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb419ee40 msgr2=0x7fbbb41a3eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T20:45:20.143 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb419ee40 0x7fbbb41a3eb0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fbba402f730 tx=0x7fbba4002bc0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.144 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 -- 192.168.123.107:0/3052637405 shutdown_connections 2026-03-09T20:45:20.144 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbb880761c0 0x7fbb88078680 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.144 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbbb419ee40 0x7fbbb41a3eb0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.144 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 --2- 192.168.123.107:0/3052637405 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbbb40751a0 0x7fbbb419e900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.144 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 -- 192.168.123.107:0/3052637405 >> 192.168.123.107:0/3052637405 conn(0x7fbbb40fbf80 msgr2=0x7fbbb40fd530 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:20.144 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 -- 192.168.123.107:0/3052637405 shutdown_connections 2026-03-09T20:45:20.144 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.143+0000 7fbbbb874640 1 -- 192.168.123.107:0/3052637405 wait complete. 2026-03-09T20:45:20.159 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:45:20.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.215+0000 7f930d47c640 1 -- 192.168.123.107:0/2492527097 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f93080719c0 msgr2=0x7f9308071dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.215+0000 7f930d47c640 1 --2- 192.168.123.107:0/2492527097 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f93080719c0 0x7f9308071dc0 secure :-1 s=READY pgs=277 cs=0 l=1 rev1=1 crypto rx=0x7f92f00099b0 tx=0x7f92f002f240 comp rx=0 tx=0).stop 2026-03-09T20:45:20.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.215+0000 7f930d47c640 1 -- 192.168.123.107:0/2492527097 shutdown_connections 2026-03-09T20:45:20.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.215+0000 7f930d47c640 1 --2- 192.168.123.107:0/2492527097 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9308072390 0x7f930810c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.215+0000 7f930d47c640 1 --2- 192.168.123.107:0/2492527097 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f93080719c0 0x7f9308071dc0 unknown :-1 s=CLOSED pgs=277 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.215+0000 7f930d47c640 1 -- 192.168.123.107:0/2492527097 >> 192.168.123.107:0/2492527097 conn(0x7f930806d4f0 msgr2=0x7f930806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:20.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.215+0000 
7f930d47c640 1 -- 192.168.123.107:0/2492527097 shutdown_connections 2026-03-09T20:45:20.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.215+0000 7f930d47c640 1 -- 192.168.123.107:0/2492527097 wait complete. 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.216+0000 7f930d47c640 1 Processor -- start 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f930d47c640 1 -- start start 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f930d47c640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9308072390 0x7f93081a7090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f930d47c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f93081a75d0 0x7f93080773c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f930d47c640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f93081a7ae0 con 0x7f93081a75d0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f930d47c640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f93081a7c50 con 0x7f9308072390 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f93067fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f93081a75d0 0x7f93080773c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f93067fc640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f93081a75d0 0x7f93080773c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:39684/0 (socket says 192.168.123.107:39684) 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f93067fc640 1 -- 192.168.123.107:0/351528613 learned_addr learned my addr 192.168.123.107:0/351528613 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f9306ffd640 1 --2- 192.168.123.107:0/351528613 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9308072390 0x7f93081a7090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f93067fc640 1 -- 192.168.123.107:0/351528613 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9308072390 msgr2=0x7f93081a7090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f93067fc640 1 --2- 192.168.123.107:0/351528613 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9308072390 0x7f93081a7090 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f93067fc640 1 -- 192.168.123.107:0/351528613 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f92f0009660 con 0x7f93081a75d0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f93067fc640 1 --2- 192.168.123.107:0/351528613 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f93081a75d0 0x7f93080773c0 secure :-1 s=READY pgs=278 cs=0 l=1 rev1=1 crypto rx=0x7f93000049b0 tx=0x7f930000d4a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.217+0000 7f92fffff640 1 -- 192.168.123.107:0/351528613 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f930000dbb0 con 0x7f93081a75d0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.218+0000 7f930d47c640 1 -- 192.168.123.107:0/351528613 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f930810ed10 con 0x7f93081a75d0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.218+0000 7f930d47c640 1 -- 192.168.123.107:0/351528613 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f930810f260 con 0x7f93081a75d0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.218+0000 7f92fffff640 1 -- 192.168.123.107:0/351528613 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f930000f040 con 0x7f93081a75d0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.218+0000 7f92fffff640 1 -- 192.168.123.107:0/351528613 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9300013600 con 0x7f93081a75d0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.219+0000 7f930d47c640 1 -- 192.168.123.107:0/351528613 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f93081183e0 con 0x7f93081a75d0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.219+0000 
7f92fffff640 1 -- 192.168.123.107:0/351528613 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f9300004b40 con 0x7f93081a75d0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.220+0000 7f92fffff640 1 --2- 192.168.123.107:0/351528613 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f92e40761c0 0x7f92e4078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.220+0000 7f92fffff640 1 -- 192.168.123.107:0/351528613 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f93000971b0 con 0x7f93081a75d0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.220+0000 7f9306ffd640 1 --2- 192.168.123.107:0/351528613 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f92e40761c0 0x7f92e4078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.220+0000 7f9306ffd640 1 --2- 192.168.123.107:0/351528613 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f92e40761c0 0x7f92e4078680 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f93081a85f0 tx=0x7f92f003a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:20.229 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.223+0000 7f92fffff640 1 -- 192.168.123.107:0/351528613 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f9300060870 con 0x7f93081a75d0 2026-03-09T20:45:20.323 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.322+0000 7f930d47c640 1 -- 192.168.123.107:0/351528613 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f930810b590 con 0x7f92e40761c0 2026-03-09T20:45:20.323 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.323+0000 7f92fffff640 1 -- 192.168.123.107:0/351528613 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7f930810b590 con 0x7f92e40761c0 2026-03-09T20:45:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 -- 192.168.123.107:0/351528613 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f92e40761c0 msgr2=0x7f92e4078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 --2- 192.168.123.107:0/351528613 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f92e40761c0 0x7f92e4078680 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f93081a85f0 tx=0x7f92f003a040 comp rx=0 tx=0).stop 2026-03-09T20:45:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 -- 192.168.123.107:0/351528613 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f93081a75d0 msgr2=0x7f93080773c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 --2- 192.168.123.107:0/351528613 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f93081a75d0 0x7f93080773c0 secure :-1 s=READY pgs=278 cs=0 l=1 rev1=1 crypto rx=0x7f93000049b0 tx=0x7f930000d4a0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.329 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 -- 192.168.123.107:0/351528613 shutdown_connections 2026-03-09T20:45:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 --2- 192.168.123.107:0/351528613 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f92e40761c0 0x7f92e4078680 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 --2- 192.168.123.107:0/351528613 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f93081a75d0 0x7f93080773c0 unknown :-1 s=CLOSED pgs=278 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 --2- 192.168.123.107:0/351528613 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9308072390 0x7f93081a7090 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 -- 192.168.123.107:0/351528613 >> 192.168.123.107:0/351528613 conn(0x7f930806d4f0 msgr2=0x7f9308070400 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 -- 192.168.123.107:0/351528613 shutdown_connections 2026-03-09T20:45:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.329+0000 7f92fdffb640 1 -- 192.168.123.107:0/351528613 wait complete. 
2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: pgmap v90: 65 pgs: 65 active+clean; 454 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.0 KiB/s rd, 426 B/s wr, 4 op/s 2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: from='client.14544 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:20 vm07.local ceph-mon[49120]: Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:45:20.387 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.387+0000 7f122e589640 1 -- 192.168.123.107:0/1556492926 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1228072440 msgr2=0x7f12280771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.387 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.387+0000 7f122e589640 1 --2- 192.168.123.107:0/1556492926 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1228072440 0x7f12280771b0 secure :-1 s=READY pgs=279 cs=0 l=1 rev1=1 crypto rx=0x7f122000b1a0 tx=0x7f1220031770 comp rx=0 tx=0).stop 2026-03-09T20:45:20.387 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.387+0000 7f122e589640 1 -- 192.168.123.107:0/1556492926 shutdown_connections 2026-03-09T20:45:20.388 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.387+0000 7f122e589640 1 --2- 192.168.123.107:0/1556492926 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1228072440 0x7f12280771b0 unknown :-1 s=CLOSED pgs=279 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.388 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.387+0000 7f122e589640 1 --2- 192.168.123.107:0/1556492926 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1228071a70 0x7f1228071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.388 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.387+0000 7f122e589640 1 -- 192.168.123.107:0/1556492926 >> 192.168.123.107:0/1556492926 conn(0x7f122806d4f0 msgr2=0x7f122806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:20.388 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f122e589640 1 -- 192.168.123.107:0/1556492926 shutdown_connections 2026-03-09T20:45:20.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f122e589640 1 -- 192.168.123.107:0/1556492926 wait complete. 2026-03-09T20:45:20.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f122e589640 1 Processor -- start 2026-03-09T20:45:20.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f122e589640 1 -- start start 2026-03-09T20:45:20.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f122e589640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1228071a70 0x7f12280840d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f122e589640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1228082720 0x7f1228082ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f122e589640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1228084610 con 0x7f1228071a70 2026-03-09T20:45:20.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f122e589640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f12280830e0 con 0x7f1228082720 2026-03-09T20:45:20.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f1227fff640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1228071a70 0x7f12280840d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f1227fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1228071a70 0x7f12280840d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:39688/0 (socket says 192.168.123.107:39688) 2026-03-09T20:45:20.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.388+0000 7f1227fff640 1 -- 192.168.123.107:0/619477718 learned_addr learned my addr 192.168.123.107:0/619477718 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:20.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.389+0000 7f12277fe640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1228082720 0x7f1228082ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.389+0000 7f1227fff640 1 -- 192.168.123.107:0/619477718 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1228082720 msgr2=0x7f1228082ba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.389+0000 7f1227fff640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1228082720 0x7f1228082ba0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.389+0000 7f1227fff640 1 -- 
192.168.123.107:0/619477718 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1220009d00 con 0x7f1228071a70 2026-03-09T20:45:20.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.389+0000 7f1227fff640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1228071a70 0x7f12280840d0 secure :-1 s=READY pgs=280 cs=0 l=1 rev1=1 crypto rx=0x7f1218007c70 tx=0x7f121800d490 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:20.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.390+0000 7f12257fa640 1 -- 192.168.123.107:0/619477718 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f121800f040 con 0x7f1228071a70 2026-03-09T20:45:20.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.390+0000 7f122e589640 1 -- 192.168.123.107:0/619477718 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1228083390 con 0x7f1228071a70 2026-03-09T20:45:20.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.390+0000 7f122e589640 1 -- 192.168.123.107:0/619477718 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f122812ef70 con 0x7f1228071a70 2026-03-09T20:45:20.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.391+0000 7f12257fa640 1 -- 192.168.123.107:0/619477718 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1218007e90 con 0x7f1228071a70 2026-03-09T20:45:20.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.391+0000 7f12257fa640 1 -- 192.168.123.107:0/619477718 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1218014690 con 0x7f1228071a70 2026-03-09T20:45:20.392 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.392+0000 7f122e589640 1 -- 192.168.123.107:0/619477718 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f11f4005350 con 0x7f1228071a70 2026-03-09T20:45:20.392 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.392+0000 7f12257fa640 1 -- 192.168.123.107:0/619477718 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f12180040b0 con 0x7f1228071a70 2026-03-09T20:45:20.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.393+0000 7f12257fa640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1208076290 0x7f1208078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.393+0000 7f12277fe640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1208076290 0x7f1208078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.393+0000 7f12277fe640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1208076290 0x7f1208078750 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f1220002790 tx=0x7f1220009b80 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:20.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.394+0000 7f12257fa640 1 -- 192.168.123.107:0/619477718 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f1218097760 con 0x7f1228071a70 2026-03-09T20:45:20.395 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.395+0000 7f12257fa640 1 -- 192.168.123.107:0/619477718 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f1218060d90 con 0x7f1228071a70 2026-03-09T20:45:20.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.498+0000 7f122e589640 1 -- 192.168.123.107:0/619477718 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f11f4002bf0 con 0x7f1208076290 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (99s) 11s ago 2m 23.6M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (2m) 11s ago 2m 8514k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (116s) 13s ago 116s 8652k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (2m) 11s ago 2m 7620k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 8dda9981b08b 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (115s) 13s ago 115s 7616k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 eba80e79586f 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (98s) 11s ago 2m 78.3M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (17s) 11s ago 17s 18.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:45:20.505 
INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (19s) 11s ago 19s 19.8M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (16s) 13s ago 16s 16.6M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (18s) 13s ago 18s 17.3M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:9283,8765,8443 running (2m) 11s ago 2m 542M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 7a35a71cbc43 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (112s) 13s ago 111s 485M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 91b60c6e69dc 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (2m) 11s ago 2m 53.1M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 f3e88bdaa0dd 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (110s) 13s ago 110s 46.8M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 4e5d7d18c660 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (2m) 11s ago 2m 14.3M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (112s) 13s ago 112s 14.7M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (91s) 11s ago 91s 66.7M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 482878bd7721 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (81s) 11s ago 81s 68.5M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 15564e5032c9 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (71s) 11s ago 70s 47.2M 4096M 18.2.7-1055-gab47f43c 
b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (62s) 13s ago 62s 67.1M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (53s) 13s ago 53s 44.6M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (44s) 13s ago 44s 63.9M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (93s) 11s ago 2m 37.0M - 2.43.0 a07b618ecd1d 08a586cd1392 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.503+0000 7f12257fa640 1 -- 192.168.123.107:0/619477718 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3624 (secure 0 0 0) 0x7f11f4002bf0 con 0x7f1208076290 2026-03-09T20:45:20.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.505+0000 7f122e589640 1 -- 192.168.123.107:0/619477718 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1208076290 msgr2=0x7f1208078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.505+0000 7f122e589640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1208076290 0x7f1208078750 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f1220002790 tx=0x7f1220009b80 comp rx=0 tx=0).stop 2026-03-09T20:45:20.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.505+0000 7f122e589640 1 -- 192.168.123.107:0/619477718 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1228071a70 msgr2=0x7f12280840d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.506 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.505+0000 7f122e589640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1228071a70 0x7f12280840d0 secure :-1 s=READY pgs=280 cs=0 l=1 rev1=1 crypto rx=0x7f1218007c70 tx=0x7f121800d490 comp rx=0 tx=0).stop 2026-03-09T20:45:20.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.506+0000 7f122e589640 1 -- 192.168.123.107:0/619477718 shutdown_connections 2026-03-09T20:45:20.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.506+0000 7f122e589640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1208076290 0x7f1208078750 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.506+0000 7f122e589640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1228082720 0x7f1228082ba0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.506+0000 7f122e589640 1 --2- 192.168.123.107:0/619477718 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1228071a70 0x7f12280840d0 unknown :-1 s=CLOSED pgs=280 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.506+0000 7f122e589640 1 -- 192.168.123.107:0/619477718 >> 192.168.123.107:0/619477718 conn(0x7f122806d4f0 msgr2=0x7f1228073150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:20.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.506+0000 7f122e589640 1 -- 192.168.123.107:0/619477718 shutdown_connections 2026-03-09T20:45:20.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.506+0000 7f122e589640 1 -- 
192.168.123.107:0/619477718 wait complete. 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: pgmap v90: 65 pgs: 65 active+clean; 454 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.0 KiB/s rd, 426 B/s wr, 4 op/s 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: from='client.14544 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: 
from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:45:20.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:20 vm10.local ceph-mon[57011]: Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:45:20.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.581+0000 7f01f8f4a640 1 -- 192.168.123.107:0/1288335796 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01f4073b40 msgr2=0x7f01f4073fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.581+0000 7f01f8f4a640 1 --2- 192.168.123.107:0/1288335796 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01f4073b40 0x7f01f4073fa0 secure :-1 s=READY pgs=281 cs=0 l=1 rev1=1 crypto rx=0x7f01e401cae0 tx=0x7f01e40403c0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.581+0000 7f01f8f4a640 1 -- 192.168.123.107:0/1288335796 shutdown_connections 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.581+0000 7f01f8f4a640 1 --2- 192.168.123.107:0/1288335796 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01f4073b40 0x7f01f4073fa0 unknown :-1 s=CLOSED pgs=281 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.581+0000 7f01f8f4a640 1 --2- 192.168.123.107:0/1288335796 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01f40751a0 0x7f01f4073600 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.586 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.581+0000 7f01f8f4a640 1 -- 192.168.123.107:0/1288335796 >> 192.168.123.107:0/1288335796 conn(0x7f01f40fbdb0 msgr2=0x7f01f40fe1f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.581+0000 7f01f8f4a640 1 -- 192.168.123.107:0/1288335796 shutdown_connections 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.581+0000 7f01f8f4a640 1 -- 192.168.123.107:0/1288335796 wait complete. 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.582+0000 7f01f8f4a640 1 Processor -- start 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.582+0000 7f01f8f4a640 1 -- start start 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.582+0000 7f01f8f4a640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01f4073b40 0x7f01f4195e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.582+0000 7f01f8f4a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01f40751a0 0x7f01f4196370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.582+0000 7f01f8f4a640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f01f4196940 con 0x7f01f40751a0 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.582+0000 7f01f8f4a640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f01f4196ab0 con 0x7f01f4073b40 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.582+0000 7f01f2ffd640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01f40751a0 0x7f01f4196370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.582+0000 7f01f2ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01f40751a0 0x7f01f4196370 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:39702/0 (socket says 192.168.123.107:39702) 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.582+0000 7f01f2ffd640 1 -- 192.168.123.107:0/1499679466 learned_addr learned my addr 192.168.123.107:0/1499679466 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.582+0000 7f01f37fe640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01f4073b40 0x7f01f4195e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.583+0000 7f01f2ffd640 1 -- 192.168.123.107:0/1499679466 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01f4073b40 msgr2=0x7f01f4195e30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.583+0000 7f01f2ffd640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01f4073b40 0x7f01f4195e30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.583+0000 7f01f2ffd640 1 -- 
192.168.123.107:0/1499679466 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f01e401c790 con 0x7f01f40751a0 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.583+0000 7f01f2ffd640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01f40751a0 0x7f01f4196370 secure :-1 s=READY pgs=282 cs=0 l=1 rev1=1 crypto rx=0x7f01e4002910 tx=0x7f01e4002940 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.583+0000 7f01f0ff9640 1 -- 192.168.123.107:0/1499679466 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f01e404e070 con 0x7f01f40751a0 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.583+0000 7f01f8f4a640 1 -- 192.168.123.107:0/1499679466 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f01f419b4f0 con 0x7f01f40751a0 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.583+0000 7f01f8f4a640 1 -- 192.168.123.107:0/1499679466 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f01f419ba60 con 0x7f01f40751a0 2026-03-09T20:45:20.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.583+0000 7f01f0ff9640 1 -- 192.168.123.107:0/1499679466 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f01e40043d0 con 0x7f01f40751a0 2026-03-09T20:45:20.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.587+0000 7f01f0ff9640 1 -- 192.168.123.107:0/1499679466 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f01e4042780 con 0x7f01f40751a0 2026-03-09T20:45:20.590 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.587+0000 7f01f0ff9640 1 -- 192.168.123.107:0/1499679466 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f01e405a050 con 0x7f01f40751a0 2026-03-09T20:45:20.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.587+0000 7f01f0ff9640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f01c4076290 0x7f01c4078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.587+0000 7f01f0ff9640 1 -- 192.168.123.107:0/1499679466 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f01e4002a60 con 0x7f01f40751a0 2026-03-09T20:45:20.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.588+0000 7f01f8f4a640 1 -- 192.168.123.107:0/1499679466 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f01c0005350 con 0x7f01f40751a0 2026-03-09T20:45:20.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.590+0000 7f01f37fe640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f01c4076290 0x7f01c4078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.590+0000 7f01f37fe640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f01c4076290 0x7f01c4078750 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f01e00046e0 tx=0x7f01e00091c0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:45:20.596 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.592+0000 7f01f0ff9640 1 -- 192.168.123.107:0/1499679466 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f01e4097e80 con 0x7f01f40751a0 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 2 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 2 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 6 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 14 2026-03-09T20:45:20.806 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:45:20.806 
INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:45:20.807 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.804+0000 7f01f8f4a640 1 -- 192.168.123.107:0/1499679466 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f01c0005e10 con 0x7f01f40751a0 2026-03-09T20:45:20.807 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.805+0000 7f01f0ff9640 1 -- 192.168.123.107:0/1499679466 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f01e4097820 con 0x7f01f40751a0 2026-03-09T20:45:20.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 -- 192.168.123.107:0/1499679466 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f01c4076290 msgr2=0x7f01c4078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f01c4076290 0x7f01c4078750 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f01e00046e0 tx=0x7f01e00091c0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 -- 192.168.123.107:0/1499679466 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01f40751a0 msgr2=0x7f01f4196370 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01f40751a0 0x7f01f4196370 secure :-1 s=READY pgs=282 cs=0 l=1 rev1=1 crypto rx=0x7f01e4002910 tx=0x7f01e4002940 comp rx=0 tx=0).stop 2026-03-09T20:45:20.809 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 -- 192.168.123.107:0/1499679466 shutdown_connections 2026-03-09T20:45:20.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f01c4076290 0x7f01c4078750 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f01f40751a0 0x7f01f4196370 unknown :-1 s=CLOSED pgs=282 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 --2- 192.168.123.107:0/1499679466 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f01f4073b40 0x7f01f4195e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 -- 192.168.123.107:0/1499679466 >> 192.168.123.107:0/1499679466 conn(0x7f01f40fbdb0 msgr2=0x7f01f40fd980 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:20.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 -- 192.168.123.107:0/1499679466 shutdown_connections 2026-03-09T20:45:20.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.809+0000 7f01de7fc640 1 -- 192.168.123.107:0/1499679466 wait complete. 
2026-03-09T20:45:20.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.882+0000 7fbcb1aed640 1 -- 192.168.123.107:0/3898842450 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcac1019f0 msgr2=0x7fbcac101e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.883 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.882+0000 7fbcb1aed640 1 --2- 192.168.123.107:0/3898842450 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcac1019f0 0x7fbcac101e70 secure :-1 s=READY pgs=283 cs=0 l=1 rev1=1 crypto rx=0x7fbc9c009a00 tx=0x7fbc9c02f290 comp rx=0 tx=0).stop 2026-03-09T20:45:20.883 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.882+0000 7fbcb1aed640 1 -- 192.168.123.107:0/3898842450 shutdown_connections 2026-03-09T20:45:20.883 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.882+0000 7fbcb1aed640 1 --2- 192.168.123.107:0/3898842450 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcac1019f0 0x7fbcac101e70 unknown :-1 s=CLOSED pgs=283 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.883 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.882+0000 7fbcb1aed640 1 --2- 192.168.123.107:0/3898842450 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcac1007f0 0x7fbcac100bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.883 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.882+0000 7fbcb1aed640 1 -- 192.168.123.107:0/3898842450 >> 192.168.123.107:0/3898842450 conn(0x7fbcac0fbf80 msgr2=0x7fbcac0fe3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.884+0000 7fbcb1aed640 1 -- 192.168.123.107:0/3898842450 shutdown_connections 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.884+0000 7fbcb1aed640 1 -- 192.168.123.107:0/3898842450 
wait complete. 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.885+0000 7fbcb1aed640 1 Processor -- start 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.885+0000 7fbcb1aed640 1 -- start start 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.885+0000 7fbcb1aed640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcac1007f0 0x7fbcac195f30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.885+0000 7fbcab7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcac1007f0 0x7fbcac195f30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.885+0000 7fbcab7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcac1007f0 0x7fbcac195f30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:39722/0 (socket says 192.168.123.107:39722) 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.885+0000 7fbcab7fe640 1 -- 192.168.123.107:0/661992301 learned_addr learned my addr 192.168.123.107:0/661992301 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.885+0000 7fbcb1aed640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcac1019f0 0x7fbcac196470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.885+0000 7fbcb1aed640 1 -- 
192.168.123.107:0/661992301 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbcac196a40 con 0x7fbcac1007f0 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.885+0000 7fbcb1aed640 1 -- 192.168.123.107:0/661992301 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbcac196bb0 con 0x7fbcac1019f0 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.885+0000 7fbcaaffd640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcac1019f0 0x7fbcac196470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.886+0000 7fbcab7fe640 1 -- 192.168.123.107:0/661992301 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcac1019f0 msgr2=0x7fbcac196470 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.886+0000 7fbcab7fe640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcac1019f0 0x7fbcac196470 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:20.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.886+0000 7fbcab7fe640 1 -- 192.168.123.107:0/661992301 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbc9c009660 con 0x7fbcac1007f0 2026-03-09T20:45:20.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.886+0000 7fbcab7fe640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcac1007f0 0x7fbcac195f30 secure :-1 s=READY pgs=284 cs=0 l=1 rev1=1 crypto rx=0x7fbc9800ece0 
tx=0x7fbc9800c6a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:20.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.887+0000 7fbca8ff9640 1 -- 192.168.123.107:0/661992301 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbc98009800 con 0x7fbcac1007f0 2026-03-09T20:45:20.888 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.887+0000 7fbcb1aed640 1 -- 192.168.123.107:0/661992301 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbcac19b650 con 0x7fbcac1007f0 2026-03-09T20:45:20.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.889+0000 7fbcb1aed640 1 -- 192.168.123.107:0/661992301 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbcac19baf0 con 0x7fbcac1007f0 2026-03-09T20:45:20.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.891+0000 7fbca8ff9640 1 -- 192.168.123.107:0/661992301 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fbc9800eea0 con 0x7fbcac1007f0 2026-03-09T20:45:20.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.891+0000 7fbca8ff9640 1 -- 192.168.123.107:0/661992301 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbc98010640 con 0x7fbcac1007f0 2026-03-09T20:45:20.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.891+0000 7fbca8ff9640 1 -- 192.168.123.107:0/661992301 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fbc98010860 con 0x7fbcac1007f0 2026-03-09T20:45:20.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.891+0000 7fbca8ff9640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc7c076360 0x7fbc7c078820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:20.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.892+0000 7fbcaaffd640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc7c076360 0x7fbc7c078820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:20.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.892+0000 7fbca8ff9640 1 -- 192.168.123.107:0/661992301 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fbc98014070 con 0x7fbcac1007f0 2026-03-09T20:45:20.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.892+0000 7fbcaaffd640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc7c076360 0x7fbc7c078820 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7fbcac197450 tx=0x7fbc9c0023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:20.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.893+0000 7fbc8e7fc640 1 -- 192.168.123.107:0/661992301 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbc74005350 con 0x7fbcac1007f0 2026-03-09T20:45:20.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:20.897+0000 7fbca8ff9640 1 -- 192.168.123.107:0/661992301 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fbc98061940 con 0x7fbcac1007f0 2026-03-09T20:45:21.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.018+0000 7fbc8e7fc640 1 -- 192.168.123.107:0/661992301 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 
-- 0x7fbc740058d0 con 0x7fbcac1007f0 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.022+0000 7fbca8ff9640 1 -- 192.168.123.107:0/661992301 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1867 (secure 0 0 0) 0x7fbc980612e0 con 0x7fbcac1007f0 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:45:21.023 
INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:45:21.023 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:21.024 
INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:45:21.024 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:45:21.026 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.026+0000 7fbc8e7fc640 1 -- 192.168.123.107:0/661992301 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc7c076360 msgr2=0x7fbc7c078820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:21.026 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.026+0000 7fbc8e7fc640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc7c076360 0x7fbc7c078820 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7fbcac197450 tx=0x7fbc9c0023d0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.026 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.026+0000 7fbc8e7fc640 1 -- 192.168.123.107:0/661992301 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcac1007f0 msgr2=0x7fbcac195f30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:21.026 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.026+0000 
7fbc8e7fc640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcac1007f0 0x7fbcac195f30 secure :-1 s=READY pgs=284 cs=0 l=1 rev1=1 crypto rx=0x7fbc9800ece0 tx=0x7fbc9800c6a0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.027+0000 7fbc8e7fc640 1 -- 192.168.123.107:0/661992301 shutdown_connections 2026-03-09T20:45:21.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.027+0000 7fbc8e7fc640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fbc7c076360 0x7fbc7c078820 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.027+0000 7fbc8e7fc640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcac1019f0 0x7fbcac196470 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.027+0000 7fbc8e7fc640 1 --2- 192.168.123.107:0/661992301 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcac1007f0 0x7fbcac195f30 unknown :-1 s=CLOSED pgs=284 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.027+0000 7fbc8e7fc640 1 -- 192.168.123.107:0/661992301 >> 192.168.123.107:0/661992301 conn(0x7fbcac0fbf80 msgr2=0x7fbcac0fdab0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:21.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.027+0000 7fbc8e7fc640 1 -- 192.168.123.107:0/661992301 shutdown_connections 2026-03-09T20:45:21.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.027+0000 7fbc8e7fc640 1 -- 192.168.123.107:0/661992301 wait complete. 
2026-03-09T20:45:21.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.098+0000 7f69ef591640 1 -- 192.168.123.107:0/758550076 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f69e8071a50 msgr2=0x7f69e8071e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:21.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.098+0000 7f69ef591640 1 --2- 192.168.123.107:0/758550076 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f69e8071a50 0x7f69e8071e50 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f69dc007920 tx=0x7f69dc02ffe0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 -- 192.168.123.107:0/758550076 shutdown_connections 2026-03-09T20:45:21.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 --2- 192.168.123.107:0/758550076 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f69e8072420 0x7f69e8077190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 --2- 192.168.123.107:0/758550076 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f69e8071a50 0x7f69e8071e50 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.100 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 -- 192.168.123.107:0/758550076 >> 192.168.123.107:0/758550076 conn(0x7f69e806d4f0 msgr2=0x7f69e806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:21.100 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 -- 192.168.123.107:0/758550076 shutdown_connections 2026-03-09T20:45:21.100 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 -- 192.168.123.107:0/758550076 wait 
complete. 2026-03-09T20:45:21.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 Processor -- start 2026-03-09T20:45:21.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 -- start start 2026-03-09T20:45:21.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f69e8071a50 0x7f69e81319f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:21.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f69e8072420 0x7f69e8131f30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f69e8133430 con 0x7f69e8072420 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.100+0000 7f69ef591640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f69e81335a0 con 0x7f69e8071a50 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69edd8e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f69e8072420 0x7f69e8131f30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69edd8e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f69e8072420 0x7f69e8131f30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I 
am v2:192.168.123.107:39724/0 (socket says 192.168.123.107:39724) 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69edd8e640 1 -- 192.168.123.107:0/2288866032 learned_addr learned my addr 192.168.123.107:0/2288866032 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69edd8e640 1 -- 192.168.123.107:0/2288866032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f69e8071a50 msgr2=0x7f69e81319f0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69edd8e640 1 --2- 192.168.123.107:0/2288866032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f69e8071a50 0x7f69e81319f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69edd8e640 1 -- 192.168.123.107:0/2288866032 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f69dc0075d0 con 0x7f69e8072420 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69edd8e640 1 --2- 192.168.123.107:0/2288866032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f69e8072420 0x7f69e8131f30 secure :-1 s=READY pgs=285 cs=0 l=1 rev1=1 crypto rx=0x7f69d400b6d0 tx=0x7f69d400bba0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69d37fe640 1 -- 192.168.123.107:0/2288866032 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f69d400be60 con 0x7f69e8072420 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69d37fe640 1 -- 
192.168.123.107:0/2288866032 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f69d40027a0 con 0x7f69e8072420 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69ef591640 1 -- 192.168.123.107:0/2288866032 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f69e8132590 con 0x7f69e8072420 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69ef591640 1 -- 192.168.123.107:0/2288866032 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f69e807fda0 con 0x7f69e8072420 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.101+0000 7f69d37fe640 1 -- 192.168.123.107:0/2288866032 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f69d4004430 con 0x7f69e8072420 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.103+0000 7f69d37fe640 1 -- 192.168.123.107:0/2288866032 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f69d400c820 con 0x7f69e8072420 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.103+0000 7f69d37fe640 1 --2- 192.168.123.107:0/2288866032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f69bc0761c0 0x7f69bc078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.103+0000 7f69d37fe640 1 -- 192.168.123.107:0/2288866032 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f69d4096e60 con 0x7f69e8072420 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.103+0000 7f69ee58f640 1 --2- 192.168.123.107:0/2288866032 >> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f69bc0761c0 0x7f69bc078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:21.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.104+0000 7f69ef591640 1 -- 192.168.123.107:0/2288866032 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f69b8005350 con 0x7f69e8072420 2026-03-09T20:45:21.106 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.106+0000 7f69ee58f640 1 --2- 192.168.123.107:0/2288866032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f69bc0761c0 0x7f69bc078680 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f69dc0304f0 tx=0x7f69dc0023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:21.107 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.107+0000 7f69d37fe640 1 -- 192.168.123.107:0/2288866032 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f69d4060480 con 0x7f69e8072420 2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.232+0000 7f69ef591640 1 -- 192.168.123.107:0/2288866032 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f69b8002bf0 con 0x7f69bc0761c0 2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.233+0000 7f69d37fe640 1 -- 192.168.123.107:0/2288866032 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7f69b8002bf0 con 0x7f69bc0761c0 2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stdout:{ 
2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [], 2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "", 2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:45:21.233 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.235+0000 7f69ef591640 1 -- 192.168.123.107:0/2288866032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f69bc0761c0 msgr2=0x7f69bc078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.235+0000 7f69ef591640 1 --2- 192.168.123.107:0/2288866032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f69bc0761c0 0x7f69bc078680 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f69dc0304f0 tx=0x7f69dc0023d0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.235+0000 7f69ef591640 1 -- 192.168.123.107:0/2288866032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f69e8072420 msgr2=0x7f69e8131f30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.235+0000 7f69ef591640 1 --2- 192.168.123.107:0/2288866032 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f69e8072420 0x7f69e8131f30 secure :-1 s=READY pgs=285 cs=0 l=1 rev1=1 crypto rx=0x7f69d400b6d0 tx=0x7f69d400bba0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.236+0000 7f69ef591640 1 -- 192.168.123.107:0/2288866032 shutdown_connections 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.236+0000 7f69ef591640 1 --2- 192.168.123.107:0/2288866032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f69bc0761c0 0x7f69bc078680 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.236+0000 7f69ef591640 1 --2- 192.168.123.107:0/2288866032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f69e8072420 0x7f69e8131f30 unknown :-1 s=CLOSED pgs=285 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.236+0000 7f69ef591640 1 --2- 192.168.123.107:0/2288866032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f69e8071a50 0x7f69e81319f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.236+0000 7f69ef591640 1 -- 192.168.123.107:0/2288866032 >> 192.168.123.107:0/2288866032 conn(0x7f69e806d4f0 msgr2=0x7f69e80756d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.236+0000 7f69ef591640 1 -- 192.168.123.107:0/2288866032 shutdown_connections 2026-03-09T20:45:21.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.236+0000 7f69ef591640 1 -- 192.168.123.107:0/2288866032 wait complete. 
2026-03-09T20:45:21.303 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:21 vm07.local ceph-mon[49120]: from='client.24349 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:21.304 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:21 vm07.local ceph-mon[49120]: from='client.14552 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:21.304 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:21 vm07.local ceph-mon[49120]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:21.304 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:21 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/1499679466' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:45:21.304 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:21 vm07.local ceph-mon[49120]: from='client.? 
192.168.123.107:0/661992301' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:45:21.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.303+0000 7fe486fb2640 1 -- 192.168.123.107:0/1616102822 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe480076df0 msgr2=0x7fe480077250 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:21.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.303+0000 7fe486fb2640 1 --2- 192.168.123.107:0/1616102822 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe480076df0 0x7fe480077250 secure :-1 s=READY pgs=286 cs=0 l=1 rev1=1 crypto rx=0x7fe468009a00 tx=0x7fe46802f280 comp rx=0 tx=0).stop 2026-03-09T20:45:21.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.307+0000 7fe486fb2640 1 -- 192.168.123.107:0/1616102822 shutdown_connections 2026-03-09T20:45:21.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.307+0000 7fe486fb2640 1 --2- 192.168.123.107:0/1616102822 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe480076df0 0x7fe480077250 unknown :-1 s=CLOSED pgs=286 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.307+0000 7fe486fb2640 1 --2- 192.168.123.107:0/1616102822 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe480075ba0 0x7fe480075fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.307+0000 7fe486fb2640 1 -- 192.168.123.107:0/1616102822 >> 192.168.123.107:0/1616102822 conn(0x7fe4800fe060 msgr2=0x7fe480100480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:21.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.311+0000 7fe486fb2640 1 -- 192.168.123.107:0/1616102822 shutdown_connections 2026-03-09T20:45:21.310 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.311+0000 7fe486fb2640 1 -- 192.168.123.107:0/1616102822 wait complete. 2026-03-09T20:45:21.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.312+0000 7fe486fb2640 1 Processor -- start 2026-03-09T20:45:21.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.312+0000 7fe486fb2640 1 -- start start 2026-03-09T20:45:21.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.313+0000 7fe486fb2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe480075ba0 0x7fe480071610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:21.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.313+0000 7fe486fb2640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe480076df0 0x7fe480071b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:21.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.313+0000 7fe486fb2640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe480073050 con 0x7fe480075ba0 2026-03-09T20:45:21.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.313+0000 7fe486fb2640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe4800731c0 con 0x7fe480076df0 2026-03-09T20:45:21.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.313+0000 7fe485fb0640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe480075ba0 0x7fe480071610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:21.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.313+0000 7fe485fb0640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe480075ba0 0x7fe480071610 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:39740/0 (socket says 192.168.123.107:39740) 2026-03-09T20:45:21.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.313+0000 7fe485fb0640 1 -- 192.168.123.107:0/13747742 learned_addr learned my addr 192.168.123.107:0/13747742 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:21.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.314+0000 7fe4857af640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe480076df0 0x7fe480071b50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:21.314 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.314+0000 7fe485fb0640 1 -- 192.168.123.107:0/13747742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe480076df0 msgr2=0x7fe480071b50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:21.314 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.314+0000 7fe485fb0640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe480076df0 0x7fe480071b50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.314 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.314+0000 7fe485fb0640 1 -- 192.168.123.107:0/13747742 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe468009660 con 0x7fe480075ba0 2026-03-09T20:45:21.314 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.314+0000 7fe485fb0640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe480075ba0 0x7fe480071610 secure :-1 s=READY pgs=287 cs=0 l=1 rev1=1 crypto 
rx=0x7fe47000e990 tx=0x7fe47000ee60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:21.314 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.315+0000 7fe476ffd640 1 -- 192.168.123.107:0/13747742 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe47000cd30 con 0x7fe480075ba0 2026-03-09T20:45:21.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.315+0000 7fe476ffd640 1 -- 192.168.123.107:0/13747742 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fe47000ce90 con 0x7fe480075ba0 2026-03-09T20:45:21.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.315+0000 7fe486fb2640 1 -- 192.168.123.107:0/13747742 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe4800721b0 con 0x7fe480075ba0 2026-03-09T20:45:21.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.315+0000 7fe486fb2640 1 -- 192.168.123.107:0/13747742 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe4801acec0 con 0x7fe480075ba0 2026-03-09T20:45:21.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.315+0000 7fe476ffd640 1 -- 192.168.123.107:0/13747742 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe470010640 con 0x7fe480075ba0 2026-03-09T20:45:21.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.316+0000 7fe486fb2640 1 -- 192.168.123.107:0/13747742 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe448005350 con 0x7fe480075ba0 2026-03-09T20:45:21.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.318+0000 7fe476ffd640 1 -- 192.168.123.107:0/13747742 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fe470002900 con 
0x7fe480075ba0 2026-03-09T20:45:21.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.318+0000 7fe476ffd640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe458076290 0x7fe458078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:21.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.318+0000 7fe476ffd640 1 -- 192.168.123.107:0/13747742 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fe470014070 con 0x7fe480075ba0 2026-03-09T20:45:21.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.319+0000 7fe4857af640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe458076290 0x7fe458078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:21.320 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.320+0000 7fe4857af640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe458076290 0x7fe458078750 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7fe468002c80 tx=0x7fe468005c70 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:21.320 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.321+0000 7fe476ffd640 1 -- 192.168.123.107:0/13747742 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fe470061160 con 0x7fe480075ba0 2026-03-09T20:45:21.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.486+0000 7fe486fb2640 1 -- 192.168.123.107:0/13747742 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "health", 
"detail": "detail"} v 0) v1 -- 0x7fe448005600 con 0x7fe480075ba0 2026-03-09T20:45:21.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.486+0000 7fe476ffd640 1 -- 192.168.123.107:0/13747742 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7fe470060f80 con 0x7fe480075ba0 2026-03-09T20:45:21.486 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:45:21.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 -- 192.168.123.107:0/13747742 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe458076290 msgr2=0x7fe458078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:21.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe458076290 0x7fe458078750 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7fe468002c80 tx=0x7fe468005c70 comp rx=0 tx=0).stop 2026-03-09T20:45:21.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 -- 192.168.123.107:0/13747742 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe480075ba0 msgr2=0x7fe480071610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:21.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe480075ba0 0x7fe480071610 secure :-1 s=READY pgs=287 cs=0 l=1 rev1=1 crypto rx=0x7fe47000e990 tx=0x7fe47000ee60 comp rx=0 tx=0).stop 2026-03-09T20:45:21.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 -- 192.168.123.107:0/13747742 shutdown_connections 2026-03-09T20:45:21.489 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe458076290 0x7fe458078750 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe480076df0 0x7fe480071b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 --2- 192.168.123.107:0/13747742 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe480075ba0 0x7fe480071610 unknown :-1 s=CLOSED pgs=287 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:21.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 -- 192.168.123.107:0/13747742 >> 192.168.123.107:0/13747742 conn(0x7fe4800fe060 msgr2=0x7fe4800ff930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:21.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 -- 192.168.123.107:0/13747742 shutdown_connections 2026-03-09T20:45:21.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:21.489+0000 7fe486fb2640 1 -- 192.168.123.107:0/13747742 wait complete. 
2026-03-09T20:45:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:21 vm10.local ceph-mon[57011]: from='client.24349 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:21 vm10.local ceph-mon[57011]: from='client.14552 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:21 vm10.local ceph-mon[57011]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:21 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/1499679466' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:45:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:21 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/661992301' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:45:22.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:22 vm10.local ceph-mon[57011]: pgmap v91: 65 pgs: 65 active+clean; 459 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.8 KiB/s rd, 853 B/s wr, 5 op/s 2026-03-09T20:45:22.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:22 vm10.local ceph-mon[57011]: from='client.14566 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:22.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:22 vm10.local ceph-mon[57011]: from='client.? 
192.168.123.107:0/13747742' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:45:22.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:22 vm07.local ceph-mon[49120]: pgmap v91: 65 pgs: 65 active+clean; 459 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.8 KiB/s rd, 853 B/s wr, 5 op/s 2026-03-09T20:45:22.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:22 vm07.local ceph-mon[49120]: from='client.14566 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:22.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:22 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/13747742' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:45:24.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:24 vm10.local ceph-mon[57011]: pgmap v92: 65 pgs: 65 active+clean; 459 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.8 KiB/s rd, 938 B/s wr, 5 op/s 2026-03-09T20:45:24.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:24 vm07.local ceph-mon[49120]: pgmap v92: 65 pgs: 65 active+clean; 459 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.8 KiB/s rd, 938 B/s wr, 5 op/s 2026-03-09T20:45:26.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:26 vm10.local ceph-mon[57011]: pgmap v93: 65 pgs: 65 active+clean; 459 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.7 KiB/s rd, 853 B/s wr, 5 op/s 2026-03-09T20:45:26.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:26 vm07.local ceph-mon[49120]: pgmap v93: 65 pgs: 65 active+clean; 459 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.7 KiB/s rd, 853 B/s wr, 5 op/s 2026-03-09T20:45:28.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:28 vm07.local ceph-mon[49120]: pgmap v94: 65 pgs: 65 active+clean; 459 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.6 KiB/s rd, 767 B/s wr, 5 op/s 2026-03-09T20:45:28.787 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:28 vm10.local ceph-mon[57011]: pgmap v94: 65 pgs: 65 active+clean; 459 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.6 KiB/s rd, 767 B/s wr, 5 op/s 2026-03-09T20:45:30.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:30 vm07.local ceph-mon[49120]: pgmap v95: 65 pgs: 65 active+clean; 459 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-09T20:45:30.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:30 vm10.local ceph-mon[57011]: pgmap v95: 65 pgs: 65 active+clean; 459 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 597 B/s wr, 3 op/s 2026-03-09T20:45:33.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:32 vm10.local ceph-mon[57011]: pgmap v96: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 5 op/s 2026-03-09T20:45:33.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:32 vm07.local ceph-mon[49120]: pgmap v96: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 5 op/s 2026-03-09T20:45:34.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:34 vm10.local ceph-mon[57011]: pgmap v97: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.4 KiB/s rd, 597 B/s wr, 4 op/s 2026-03-09T20:45:34.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:34 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:35.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:34 vm07.local ceph-mon[49120]: pgmap v97: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.4 KiB/s rd, 597 B/s wr, 4 op/s 2026-03-09T20:45:35.057 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:34 vm07.local ceph-mon[49120]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:37 vm10.local ceph-mon[57011]: pgmap v98: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.4 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T20:45:37.316 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:37 vm07.local ceph-mon[49120]: pgmap v98: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.4 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T20:45:38.348 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:38 vm10.local ceph-mon[57011]: pgmap v99: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.4 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T20:45:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:38 vm07.local ceph-mon[49120]: pgmap v99: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.4 KiB/s rd, 511 B/s wr, 4 op/s 2026-03-09T20:45:40.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:40 vm10.local ceph-mon[57011]: pgmap v100: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.0 KiB/s rd, 426 B/s wr, 3 op/s 2026-03-09T20:45:40.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:40 vm07.local ceph-mon[49120]: pgmap v100: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.0 KiB/s rd, 426 B/s wr, 3 op/s 2026-03-09T20:45:42.331 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:42 vm10.local ceph-mon[57011]: pgmap v101: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.4 KiB/s rd, 426 B/s wr, 4 op/s 2026-03-09T20:45:42.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:42 vm07.local ceph-mon[49120]: pgmap v101: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.4 KiB/s rd, 426 B/s wr, 4 op/s 
2026-03-09T20:45:44.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:44 vm07.local ceph-mon[49120]: pgmap v102: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:44.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:44 vm10.local ceph-mon[57011]: pgmap v102: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:46.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:46 vm07.local ceph-mon[49120]: pgmap v103: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:46 vm10.local ceph-mon[57011]: pgmap v103: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:48 vm10.local ceph-mon[57011]: pgmap v104: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:48.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:48 vm07.local ceph-mon[49120]: pgmap v104: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:49 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:49.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:49 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:50 vm10.local ceph-mon[57011]: pgmap v105: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 
GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:45:50.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:50 vm07.local ceph-mon[49120]: pgmap v105: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:45:51.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.576+0000 7febeb7c8640 1 -- 192.168.123.107:0/4233623742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febe41019f0 msgr2=0x7febe4101e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:51.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.576+0000 7febeb7c8640 1 --2- 192.168.123.107:0/4233623742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febe41019f0 0x7febe4101e70 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7febd40099b0 tx=0x7febd402f240 comp rx=0 tx=0).stop 2026-03-09T20:45:51.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.578+0000 7febeb7c8640 1 -- 192.168.123.107:0/4233623742 shutdown_connections 2026-03-09T20:45:51.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.578+0000 7febeb7c8640 1 --2- 192.168.123.107:0/4233623742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febe41019f0 0x7febe4101e70 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.578+0000 7febeb7c8640 1 --2- 192.168.123.107:0/4233623742 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febe41007f0 0x7febe4100bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.578+0000 7febeb7c8640 1 -- 192.168.123.107:0/4233623742 >> 192.168.123.107:0/4233623742 conn(0x7febe40fbf80 msgr2=0x7febe40fe3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:51.578 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.578+0000 7febeb7c8640 1 -- 192.168.123.107:0/4233623742 shutdown_connections 2026-03-09T20:45:51.578 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.578+0000 7febeb7c8640 1 -- 192.168.123.107:0/4233623742 wait complete. 2026-03-09T20:45:51.578 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.578+0000 7febeb7c8640 1 Processor -- start 2026-03-09T20:45:51.578 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.578+0000 7febeb7c8640 1 -- start start 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febeb7c8640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febe41007f0 0x7febe419a450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febeb7c8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febe41019f0 0x7febe419a990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febeb7c8640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7febe419af60 con 0x7febe41019f0 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febeb7c8640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7febe419b0d0 con 0x7febe41007f0 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febe953d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febe41007f0 0x7febe419a450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:51.580 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febe8d3c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febe41019f0 0x7febe419a990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febe8d3c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febe41019f0 0x7febe419a990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:57286/0 (socket says 192.168.123.107:57286) 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febe953d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febe41007f0 0x7febe419a450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:44894/0 (socket says 192.168.123.107:44894) 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febe8d3c640 1 -- 192.168.123.107:0/3300182156 learned_addr learned my addr 192.168.123.107:0/3300182156 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febe8d3c640 1 -- 192.168.123.107:0/3300182156 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febe41007f0 msgr2=0x7febe419a450 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febe8d3c640 1 --2- 192.168.123.107:0/3300182156 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febe41007f0 0x7febe419a450 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.579+0000 7febe8d3c640 1 -- 192.168.123.107:0/3300182156 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7febd4009660 con 0x7febe41019f0 2026-03-09T20:45:51.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.580+0000 7febe8d3c640 1 --2- 192.168.123.107:0/3300182156 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febe41019f0 0x7febe419a990 secure :-1 s=READY pgs=288 cs=0 l=1 rev1=1 crypto rx=0x7febd4002910 tx=0x7febd4002940 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.582+0000 7febd27fc640 1 -- 192.168.123.107:0/3300182156 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7febd403d070 con 0x7febe41019f0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.582+0000 7febeb7c8640 1 -- 192.168.123.107:0/3300182156 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7febe419fb10 con 0x7febe41019f0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.582+0000 7febeb7c8640 1 -- 192.168.123.107:0/3300182156 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7febe41a0000 con 0x7febe41019f0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.582+0000 7febd27fc640 1 -- 192.168.123.107:0/3300182156 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7febd402fcb0 con 0x7febe41019f0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.582+0000 7febd27fc640 1 -- 192.168.123.107:0/3300182156 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 
0 0) 0x7febd40418b0 con 0x7febe41019f0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.582+0000 7febeb7c8640 1 -- 192.168.123.107:0/3300182156 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7febac005350 con 0x7febe41019f0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.584+0000 7febd27fc640 1 -- 192.168.123.107:0/3300182156 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7febd4049050 con 0x7febe41019f0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.584+0000 7febd27fc640 1 --2- 192.168.123.107:0/3300182156 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7febb80761c0 0x7febb8078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.584+0000 7febd27fc640 1 -- 192.168.123.107:0/3300182156 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7febd40bc270 con 0x7febe41019f0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.584+0000 7febe953d640 1 --2- 192.168.123.107:0/3300182156 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7febb80761c0 0x7febb8078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.585+0000 7febe953d640 1 --2- 192.168.123.107:0/3300182156 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7febb80761c0 0x7febb8078680 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7febd8005fd0 tx=0x7febd8004380 comp rx=0 tx=0).ready entity=mgr.14225 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:51.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.586+0000 7febd27fc640 1 -- 192.168.123.107:0/3300182156 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7febd4085880 con 0x7febe41019f0 2026-03-09T20:45:51.712 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.712+0000 7febeb7c8640 1 -- 192.168.123.107:0/3300182156 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7febac002bf0 con 0x7febb80761c0 2026-03-09T20:45:51.713 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.713+0000 7febd27fc640 1 -- 192.168.123.107:0/3300182156 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7febac002bf0 con 0x7febb80761c0 2026-03-09T20:45:51.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.716+0000 7febb3fff640 1 -- 192.168.123.107:0/3300182156 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7febb80761c0 msgr2=0x7febb8078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:51.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.716+0000 7febb3fff640 1 --2- 192.168.123.107:0/3300182156 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7febb80761c0 0x7febb8078680 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7febd8005fd0 tx=0x7febd8004380 comp rx=0 tx=0).stop 2026-03-09T20:45:51.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.716+0000 7febb3fff640 1 -- 192.168.123.107:0/3300182156 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febe41019f0 msgr2=0x7febe419a990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:51.716 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.716+0000 7febb3fff640 1 --2- 192.168.123.107:0/3300182156 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febe41019f0 0x7febe419a990 secure :-1 s=READY pgs=288 cs=0 l=1 rev1=1 crypto rx=0x7febd4002910 tx=0x7febd4002940 comp rx=0 tx=0).stop 2026-03-09T20:45:51.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.717+0000 7febb3fff640 1 -- 192.168.123.107:0/3300182156 shutdown_connections 2026-03-09T20:45:51.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.717+0000 7febb3fff640 1 --2- 192.168.123.107:0/3300182156 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7febb80761c0 0x7febb8078680 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.717+0000 7febb3fff640 1 --2- 192.168.123.107:0/3300182156 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febe41019f0 0x7febe419a990 unknown :-1 s=CLOSED pgs=288 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.717+0000 7febb3fff640 1 --2- 192.168.123.107:0/3300182156 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febe41007f0 0x7febe419a450 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.717+0000 7febb3fff640 1 -- 192.168.123.107:0/3300182156 >> 192.168.123.107:0/3300182156 conn(0x7febe40fbf80 msgr2=0x7febe40fda60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:51.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.721+0000 7febb3fff640 1 -- 192.168.123.107:0/3300182156 shutdown_connections 2026-03-09T20:45:51.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.721+0000 7febb3fff640 1 -- 
192.168.123.107:0/3300182156 wait complete. 2026-03-09T20:45:51.729 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:45:51.781 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.781+0000 7f121e0cc640 1 -- 192.168.123.107:0/1182385859 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1218071a70 msgr2=0x7f1218071e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:51.781 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.781+0000 7f121e0cc640 1 --2- 192.168.123.107:0/1182385859 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1218071a70 0x7f1218071e70 secure :-1 s=READY pgs=289 cs=0 l=1 rev1=1 crypto rx=0x7f1208009a00 tx=0x7f120802f290 comp rx=0 tx=0).stop 2026-03-09T20:45:51.781 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.781+0000 7f121e0cc640 1 -- 192.168.123.107:0/1182385859 shutdown_connections 2026-03-09T20:45:51.781 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.781+0000 7f121e0cc640 1 --2- 192.168.123.107:0/1182385859 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1218072440 0x7f12180771b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.781 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.781+0000 7f121e0cc640 1 --2- 192.168.123.107:0/1182385859 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1218071a70 0x7f1218071e70 unknown :-1 s=CLOSED pgs=289 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.781 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.781+0000 7f121e0cc640 1 -- 192.168.123.107:0/1182385859 >> 192.168.123.107:0/1182385859 conn(0x7f121806d4f0 msgr2=0x7f121806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:51.782 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.781+0000 7f121e0cc640 1 -- 192.168.123.107:0/1182385859 shutdown_connections 2026-03-09T20:45:51.783 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.781+0000 7f121e0cc640 1 -- 192.168.123.107:0/1182385859 wait complete. 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f121e0cc640 1 Processor -- start 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f121e0cc640 1 -- start start 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f121e0cc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1218072440 0x7f12180840c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f121e0cc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1218082710 0x7f1218082b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f121e0cc640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1218084600 con 0x7f1218072440 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f121e0cc640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f12180830d0 con 0x7f1218082710 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f1216ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1218082710 0x7f1218082b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f1216ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1218082710 0x7f1218082b90 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:44912/0 (socket says 192.168.123.107:44912) 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f1216ffd640 1 -- 192.168.123.107:0/983636032 learned_addr learned my addr 192.168.123.107:0/983636032 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f12177fe640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1218072440 0x7f12180840c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f1216ffd640 1 -- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1218072440 msgr2=0x7f12180840c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f1216ffd640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1218072440 0x7f12180840c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f1216ffd640 1 -- 192.168.123.107:0/983636032 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1208009660 con 0x7f1218082710 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.782+0000 7f12177fe640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1218072440 0x7f12180840c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.783+0000 7f1216ffd640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1218082710 0x7f1218082b90 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f121000efc0 tx=0x7f121000c490 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:51.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.783+0000 7f1214ff9640 1 -- 192.168.123.107:0/983636032 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1210009280 con 0x7f1218082710 2026-03-09T20:45:51.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.784+0000 7f121e0cc640 1 -- 192.168.123.107:0/983636032 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1218083350 con 0x7f1218082710 2026-03-09T20:45:51.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.784+0000 7f121e0cc640 1 -- 192.168.123.107:0/983636032 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f12181b5bc0 con 0x7f1218082710 2026-03-09T20:45:51.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.784+0000 7f1214ff9640 1 -- 192.168.123.107:0/983636032 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f121000f040 con 0x7f1218082710 2026-03-09T20:45:51.786 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.784+0000 7f1214ff9640 1 -- 192.168.123.107:0/983636032 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1210004910 con 0x7f1218082710 2026-03-09T20:45:51.786 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.785+0000 7f1214ff9640 1 -- 192.168.123.107:0/983636032 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 
98480+0+0 (secure 0 0 0) 0x7f1210007500 con 0x7f1218082710 2026-03-09T20:45:51.786 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.786+0000 7f1214ff9640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f11f8076290 0x7f11f8078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:51.786 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.786+0000 7f12177fe640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f11f8076290 0x7f11f8078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:51.786 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.786+0000 7f1214ff9640 1 -- 192.168.123.107:0/983636032 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f1210098480 con 0x7f1218082710 2026-03-09T20:45:51.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.787+0000 7f12177fe640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f11f8076290 0x7f11f8078750 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f1208002410 tx=0x7f1208002f70 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:51.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.787+0000 7f121e0cc640 1 -- 192.168.123.107:0/983636032 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f11e4005350 con 0x7f1218082710 2026-03-09T20:45:51.790 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.790+0000 7f1214ff9640 1 -- 192.168.123.107:0/983636032 <== mon.1 v2:192.168.123.110:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f1210061ab0 con 0x7f1218082710 2026-03-09T20:45:51.908 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.907+0000 7f121e0cc640 1 -- 192.168.123.107:0/983636032 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f11e4002bf0 con 0x7f11f8076290 2026-03-09T20:45:51.914 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.912+0000 7f1214ff9640 1 -- 192.168.123.107:0/983636032 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7f11e4002bf0 con 0x7f11f8076290 2026-03-09T20:45:51.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.914+0000 7f11f67fc640 1 -- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f11f8076290 msgr2=0x7f11f8078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:51.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.914+0000 7f11f67fc640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f11f8076290 0x7f11f8078750 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f1208002410 tx=0x7f1208002f70 comp rx=0 tx=0).stop 2026-03-09T20:45:51.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.915+0000 7f11f67fc640 1 -- 192.168.123.107:0/983636032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1218082710 msgr2=0x7f1218082b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:51.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.915+0000 7f11f67fc640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1218082710 0x7f1218082b90 secure :-1 s=READY pgs=61 cs=0 
l=1 rev1=1 crypto rx=0x7f121000efc0 tx=0x7f121000c490 comp rx=0 tx=0).stop 2026-03-09T20:45:51.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.915+0000 7f11f67fc640 1 -- 192.168.123.107:0/983636032 shutdown_connections 2026-03-09T20:45:51.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.915+0000 7f11f67fc640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f11f8076290 0x7f11f8078750 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.915+0000 7f11f67fc640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1218082710 0x7f1218082b90 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.915+0000 7f11f67fc640 1 --2- 192.168.123.107:0/983636032 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1218072440 0x7f12180840c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.915+0000 7f11f67fc640 1 -- 192.168.123.107:0/983636032 >> 192.168.123.107:0/983636032 conn(0x7f121806d4f0 msgr2=0x7f12180753f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:51.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.915+0000 7f11f67fc640 1 -- 192.168.123.107:0/983636032 shutdown_connections 2026-03-09T20:45:51.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.915+0000 7f11f67fc640 1 -- 192.168.123.107:0/983636032 wait complete. 
2026-03-09T20:45:51.987 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.987+0000 7f0c498aa640 1 -- 192.168.123.107:0/2310481874 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c440719c0 msgr2=0x7f0c44071dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:51.988 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.987+0000 7f0c498aa640 1 --2- 192.168.123.107:0/2310481874 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c440719c0 0x7f0c44071dc0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f0c340099b0 tx=0x7f0c3402f240 comp rx=0 tx=0).stop 2026-03-09T20:45:51.988 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.987+0000 7f0c498aa640 1 -- 192.168.123.107:0/2310481874 shutdown_connections 2026-03-09T20:45:51.988 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.987+0000 7f0c498aa640 1 --2- 192.168.123.107:0/2310481874 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c44072390 0x7f0c4410c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.988 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.987+0000 7f0c498aa640 1 --2- 192.168.123.107:0/2310481874 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c440719c0 0x7f0c44071dc0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.988 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.987+0000 7f0c498aa640 1 -- 192.168.123.107:0/2310481874 >> 192.168.123.107:0/2310481874 conn(0x7f0c4406d4f0 msgr2=0x7f0c4406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:51.988 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.987+0000 7f0c498aa640 1 -- 192.168.123.107:0/2310481874 shutdown_connections 2026-03-09T20:45:51.988 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.988+0000 7f0c498aa640 1 -- 192.168.123.107:0/2310481874 
wait complete. 2026-03-09T20:45:51.988 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.988+0000 7f0c498aa640 1 Processor -- start 2026-03-09T20:45:51.993 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c498aa640 1 -- start start 2026-03-09T20:45:51.993 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c498aa640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c440719c0 0x7f0c441a73d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:51.993 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c498aa640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c44072390 0x7f0c441a7910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:51.993 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c498aa640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0c441a7ee0 con 0x7f0c44072390 2026-03-09T20:45:51.993 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c498aa640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0c441a8050 con 0x7f0c440719c0 2026-03-09T20:45:51.993 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c42ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c440719c0 0x7f0c441a73d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:51.993 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c42ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c440719c0 0x7f0c441a73d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 
says I am v2:192.168.123.107:44922/0 (socket says 192.168.123.107:44922) 2026-03-09T20:45:51.993 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c42ffd640 1 -- 192.168.123.107:0/2831121157 learned_addr learned my addr 192.168.123.107:0/2831121157 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:51.993 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c3a5ff640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c44072390 0x7f0c441a7910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:51.994 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c42ffd640 1 -- 192.168.123.107:0/2831121157 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c44072390 msgr2=0x7f0c441a7910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:51.994 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c42ffd640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c44072390 0x7f0c441a7910 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:51.994 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.992+0000 7f0c42ffd640 1 -- 192.168.123.107:0/2831121157 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0c34009660 con 0x7f0c440719c0 2026-03-09T20:45:51.994 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.993+0000 7f0c42ffd640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c440719c0 0x7f0c441a73d0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f0c34002fd0 tx=0x7f0c34002bf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:45:51.994 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.993+0000 7f0c488a8640 1 -- 192.168.123.107:0/2831121157 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0c3403d070 con 0x7f0c440719c0 2026-03-09T20:45:51.994 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.993+0000 7f0c498aa640 1 -- 192.168.123.107:0/2831121157 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0c4410ee50 con 0x7f0c440719c0 2026-03-09T20:45:51.994 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.993+0000 7f0c498aa640 1 -- 192.168.123.107:0/2831121157 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0c4410f340 con 0x7f0c440719c0 2026-03-09T20:45:51.994 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.993+0000 7f0c488a8640 1 -- 192.168.123.107:0/2831121157 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0c34004510 con 0x7f0c440719c0 2026-03-09T20:45:51.994 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.993+0000 7f0c488a8640 1 -- 192.168.123.107:0/2831121157 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0c34038ba0 con 0x7f0c440719c0 2026-03-09T20:45:51.997 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.997+0000 7f0c488a8640 1 -- 192.168.123.107:0/2831121157 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f0c34004050 con 0x7f0c440719c0 2026-03-09T20:45:51.997 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.997+0000 7f0c3affd640 1 -- 192.168.123.107:0/2831121157 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0c441183e0 con 0x7f0c440719c0 2026-03-09T20:45:51.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.998+0000 
7f0c488a8640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0c14076170 0x7f0c14078630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:51.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.998+0000 7f0c488a8640 1 -- 192.168.123.107:0/2831121157 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f0c340bc5a0 con 0x7f0c440719c0 2026-03-09T20:45:51.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.998+0000 7f0c3a5ff640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0c14076170 0x7f0c14078630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:51.998 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:51.998+0000 7f0c3a5ff640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0c14076170 0x7f0c14078630 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f0c441a88f0 tx=0x7f0c30009290 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:52.003 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.000+0000 7f0c488a8640 1 -- 192.168.123.107:0/2831121157 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f0c34085b30 con 0x7f0c440719c0 2026-03-09T20:45:52.150 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.150+0000 7f0c3affd640 1 -- 192.168.123.107:0/2831121157 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f0c44061df0 con 0x7f0c14076170 
2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (2m) 43s ago 2m 23.6M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (3m) 43s ago 3m 8514k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (2m) 44s ago 2m 8652k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (3m) 43s ago 3m 7620k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 8dda9981b08b 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (2m) 44s ago 2m 7616k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 eba80e79586f 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (2m) 43s ago 2m 78.3M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (48s) 43s ago 48s 18.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (50s) 43s ago 50s 19.8M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (48s) 44s ago 48s 16.6M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (49s) 44s ago 49s 17.3M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:9283,8765,8443 running (3m) 43s ago 3m 542M - 18.2.7-1055-gab47f43c 
b6fe7eb6a9d0 7a35a71cbc43 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (2m) 44s ago 2m 485M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 91b60c6e69dc 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (3m) 43s ago 3m 53.1M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 f3e88bdaa0dd 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (2m) 44s ago 2m 46.8M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 4e5d7d18c660 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (2m) 43s ago 2m 14.3M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (2m) 44s ago 2m 14.7M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (2m) 43s ago 2m 66.7M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 482878bd7721 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (112s) 43s ago 112s 68.5M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 15564e5032c9 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (102s) 43s ago 102s 47.2M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (93s) 44s ago 93s 67.1M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (85s) 44s ago 85s 44.6M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (76s) 44s ago 76s 63.9M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (2m) 43s ago 2m 37.0M - 2.43.0 a07b618ecd1d 08a586cd1392 
2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.158+0000 7f0c488a8640 1 -- 192.168.123.107:0/2831121157 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3624 (secure 0 0 0) 0x7f0c44061df0 con 0x7f0c14076170 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.161+0000 7f0c498aa640 1 -- 192.168.123.107:0/2831121157 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0c14076170 msgr2=0x7f0c14078630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.161+0000 7f0c498aa640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f0c14076170 0x7f0c14078630 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f0c441a88f0 tx=0x7f0c30009290 comp rx=0 tx=0).stop 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.161+0000 7f0c498aa640 1 -- 192.168.123.107:0/2831121157 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c440719c0 msgr2=0x7f0c441a73d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.161+0000 7f0c498aa640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c440719c0 0x7f0c441a73d0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f0c34002fd0 tx=0x7f0c34002bf0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.161+0000 7f0c498aa640 1 -- 192.168.123.107:0/2831121157 shutdown_connections 2026-03-09T20:45:52.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.161+0000 7f0c498aa640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] 
conn(0x7f0c14076170 0x7f0c14078630 secure :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f0c441a88f0 tx=0x7f0c30009290 comp rx=0 tx=0).stop 2026-03-09T20:45:52.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.161+0000 7f0c498aa640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c44072390 0x7f0c441a7910 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.161+0000 7f0c498aa640 1 --2- 192.168.123.107:0/2831121157 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c440719c0 0x7f0c441a73d0 secure :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f0c34002fd0 tx=0x7f0c34002bf0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.161+0000 7f0c498aa640 1 -- 192.168.123.107:0/2831121157 >> 192.168.123.107:0/2831121157 conn(0x7f0c4406d4f0 msgr2=0x7f0c44070890 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:52.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.164+0000 7f0c498aa640 1 -- 192.168.123.107:0/2831121157 shutdown_connections 2026-03-09T20:45:52.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.164+0000 7f0c498aa640 1 -- 192.168.123.107:0/2831121157 wait complete. 
2026-03-09T20:45:52.224 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.223+0000 7f18e1d51640 1 -- 192.168.123.107:0/1900857293 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18dc071a50 msgr2=0x7f18dc071e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.224 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.223+0000 7f18e1d51640 1 --2- 192.168.123.107:0/1900857293 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18dc071a50 0x7f18dc071e50 secure :-1 s=READY pgs=290 cs=0 l=1 rev1=1 crypto rx=0x7f18cc009a00 tx=0x7f18cc02f270 comp rx=0 tx=0).stop 2026-03-09T20:45:52.224 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.223+0000 7f18e1d51640 1 -- 192.168.123.107:0/1900857293 shutdown_connections 2026-03-09T20:45:52.224 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.223+0000 7f18e1d51640 1 --2- 192.168.123.107:0/1900857293 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18dc072420 0x7f18dc077190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.224 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.223+0000 7f18e1d51640 1 --2- 192.168.123.107:0/1900857293 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18dc071a50 0x7f18dc071e50 unknown :-1 s=CLOSED pgs=290 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.224 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.223+0000 7f18e1d51640 1 -- 192.168.123.107:0/1900857293 >> 192.168.123.107:0/1900857293 conn(0x7f18dc06d4f0 msgr2=0x7f18dc06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18e1d51640 1 -- 192.168.123.107:0/1900857293 shutdown_connections 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18e1d51640 1 -- 192.168.123.107:0/1900857293 
wait complete. 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18e1d51640 1 Processor -- start 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18e1d51640 1 -- start start 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18e1d51640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18dc072420 0x7f18dc084070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18e1d51640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18dc0826c0 0x7f18dc082b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18e1d51640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f18dc0845b0 con 0x7f18dc0826c0 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18e1d51640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f18dc083080 con 0x7f18dc072420 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18dbfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18dc0826c0 0x7f18dc082b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18dbfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18dc0826c0 0x7f18dc082b40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:57342/0 (socket says 192.168.123.107:57342) 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.224+0000 7f18dbfff640 1 -- 192.168.123.107:0/1535183990 learned_addr learned my addr 192.168.123.107:0/1535183990 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.225+0000 7f18dbfff640 1 -- 192.168.123.107:0/1535183990 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18dc072420 msgr2=0x7f18dc084070 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.225+0000 7f18dbfff640 1 --2- 192.168.123.107:0/1535183990 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18dc072420 0x7f18dc084070 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.225+0000 7f18dbfff640 1 -- 192.168.123.107:0/1535183990 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f18cc009660 con 0x7f18dc0826c0 2026-03-09T20:45:52.225 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.225+0000 7f18dbfff640 1 --2- 192.168.123.107:0/1535183990 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18dc0826c0 0x7f18dc082b40 secure :-1 s=READY pgs=291 cs=0 l=1 rev1=1 crypto rx=0x7f18d4004a00 tx=0x7f18d4004a30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:52.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.226+0000 7f18d9ffb640 1 -- 192.168.123.107:0/1535183990 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f18d40090d0 con 0x7f18dc0826c0 2026-03-09T20:45:52.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.228+0000 7f18e1d51640 1 -- 
192.168.123.107:0/1535183990 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f18dc083360 con 0x7f18dc0826c0 2026-03-09T20:45:52.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.228+0000 7f18e1d51640 1 -- 192.168.123.107:0/1535183990 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f18dc1b5bc0 con 0x7f18dc0826c0 2026-03-09T20:45:52.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.230+0000 7f18d9ffb640 1 -- 192.168.123.107:0/1535183990 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f18d400f040 con 0x7f18dc0826c0 2026-03-09T20:45:52.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.230+0000 7f18d9ffb640 1 -- 192.168.123.107:0/1535183990 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f18d4013690 con 0x7f18dc0826c0 2026-03-09T20:45:52.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.231+0000 7f18d9ffb640 1 -- 192.168.123.107:0/1535183990 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f18d401d430 con 0x7f18dc0826c0 2026-03-09T20:45:52.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.231+0000 7f18d9ffb640 1 --2- 192.168.123.107:0/1535183990 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f18c8076390 0x7f18c8078850 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.231+0000 7f18e0d4f640 1 --2- 192.168.123.107:0/1535183990 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f18c8076390 0x7f18c8078850 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:52.231 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.231+0000 7f18d9ffb640 1 -- 192.168.123.107:0/1535183990 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f18d4098250 con 0x7f18dc0826c0 2026-03-09T20:45:52.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.232+0000 7f18e0d4f640 1 --2- 192.168.123.107:0/1535183990 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f18c8076390 0x7f18c8078850 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f18cc02f780 tx=0x7f18cc0023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:52.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.232+0000 7f18e1d51640 1 -- 192.168.123.107:0/1535183990 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f18dc07b150 con 0x7f18dc0826c0 2026-03-09T20:45:52.239 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.237+0000 7f18d9ffb640 1 -- 192.168.123.107:0/1535183990 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f18d40617d0 con 0x7f18dc0826c0 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 2 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 2 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: }, 
2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 6 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:45:52.390 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 14 2026-03-09T20:45:52.391 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:45:52.391 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:45:52.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.383+0000 7f18e1d51640 1 -- 192.168.123.107:0/1535183990 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f18a40051c0 con 0x7f18dc0826c0 2026-03-09T20:45:52.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.387+0000 7f18d9ffb640 1 -- 192.168.123.107:0/1535183990 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f18d4018290 con 0x7f18dc0826c0 2026-03-09T20:45:52.394 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.392+0000 7f18e1d51640 1 -- 192.168.123.107:0/1535183990 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f18c8076390 msgr2=0x7f18c8078850 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.394 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.392+0000 
7f18e1d51640 1 --2- 192.168.123.107:0/1535183990 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f18c8076390 0x7f18c8078850 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f18cc02f780 tx=0x7f18cc0023d0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.394 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.392+0000 7f18e1d51640 1 -- 192.168.123.107:0/1535183990 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18dc0826c0 msgr2=0x7f18dc082b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.394 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.392+0000 7f18e1d51640 1 --2- 192.168.123.107:0/1535183990 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18dc0826c0 0x7f18dc082b40 secure :-1 s=READY pgs=291 cs=0 l=1 rev1=1 crypto rx=0x7f18d4004a00 tx=0x7f18d4004a30 comp rx=0 tx=0).stop 2026-03-09T20:45:52.397 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.394+0000 7f18e1d51640 1 -- 192.168.123.107:0/1535183990 shutdown_connections 2026-03-09T20:45:52.397 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.394+0000 7f18e1d51640 1 --2- 192.168.123.107:0/1535183990 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f18c8076390 0x7f18c8078850 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.397 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.394+0000 7f18e1d51640 1 --2- 192.168.123.107:0/1535183990 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18dc0826c0 0x7f18dc082b40 unknown :-1 s=CLOSED pgs=291 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.397 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.394+0000 7f18e1d51640 1 --2- 192.168.123.107:0/1535183990 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18dc072420 0x7f18dc084070 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.397 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.394+0000 7f18e1d51640 1 -- 192.168.123.107:0/1535183990 >> 192.168.123.107:0/1535183990 conn(0x7f18dc06d4f0 msgr2=0x7f18dc0753c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:52.397 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.397+0000 7f18e1d51640 1 -- 192.168.123.107:0/1535183990 shutdown_connections 2026-03-09T20:45:52.397 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.397+0000 7f18e1d51640 1 -- 192.168.123.107:0/1535183990 wait complete. 2026-03-09T20:45:52.431 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:52 vm07.local ceph-mon[49120]: pgmap v106: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:52.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.480+0000 7f08e88a8640 1 -- 192.168.123.107:0/2665423893 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f08e40719a0 msgr2=0x7f08e4071da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.480+0000 7f08e88a8640 1 --2- 192.168.123.107:0/2665423893 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f08e40719a0 0x7f08e4071da0 secure :-1 s=READY pgs=292 cs=0 l=1 rev1=1 crypto rx=0x7f08d40099b0 tx=0x7f08d402f240 comp rx=0 tx=0).stop 2026-03-09T20:45:52.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.480+0000 7f08e88a8640 1 -- 192.168.123.107:0/2665423893 shutdown_connections 2026-03-09T20:45:52.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.480+0000 7f08e88a8640 1 --2- 192.168.123.107:0/2665423893 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f08e4072370 0x7f08e410c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.480 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.480+0000 7f08e88a8640 1 --2- 192.168.123.107:0/2665423893 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f08e40719a0 0x7f08e4071da0 unknown :-1 s=CLOSED pgs=292 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.480+0000 7f08e88a8640 1 -- 192.168.123.107:0/2665423893 >> 192.168.123.107:0/2665423893 conn(0x7f08e406d4f0 msgr2=0x7f08e406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:52.485 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.484+0000 7f08e88a8640 1 -- 192.168.123.107:0/2665423893 shutdown_connections 2026-03-09T20:45:52.485 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.484+0000 7f08e88a8640 1 -- 192.168.123.107:0/2665423893 wait complete. 2026-03-09T20:45:52.485 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.484+0000 7f08e88a8640 1 Processor -- start 2026-03-09T20:45:52.485 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.484+0000 7f08e88a8640 1 -- start start 2026-03-09T20:45:52.485 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.484+0000 7f08e88a8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f08e40719a0 0x7f08e41159e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.485 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.484+0000 7f08e88a8640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f08e4072370 0x7f08e4115f20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.485 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.484+0000 7f08e88a8640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f08e4117420 con 0x7f08e40719a0 2026-03-09T20:45:52.485 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.484+0000 7f08e88a8640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f08e4117590 con 0x7f08e4072370 2026-03-09T20:45:52.485 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.485+0000 7f08e2ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f08e4072370 0x7f08e4115f20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:52.485 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.485+0000 7f08e2ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f08e4072370 0x7f08e4115f20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:44930/0 (socket says 192.168.123.107:44930) 2026-03-09T20:45:52.485 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.485+0000 7f08e2ffd640 1 -- 192.168.123.107:0/4129552735 learned_addr learned my addr 192.168.123.107:0/4129552735 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:52.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.485+0000 7f08e37fe640 1 --2- 192.168.123.107:0/4129552735 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f08e40719a0 0x7f08e41159e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:52.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.485+0000 7f08e2ffd640 1 -- 192.168.123.107:0/4129552735 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f08e40719a0 msgr2=0x7f08e41159e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.485+0000 7f08e2ffd640 1 --2- 
192.168.123.107:0/4129552735 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f08e40719a0 0x7f08e41159e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.485+0000 7f08e2ffd640 1 -- 192.168.123.107:0/4129552735 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f08d4009660 con 0x7f08e4072370 2026-03-09T20:45:52.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.487+0000 7f08e2ffd640 1 --2- 192.168.123.107:0/4129552735 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f08e4072370 0x7f08e4115f20 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f08dc0049b0 tx=0x7f08dc00d4a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:52.488 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.487+0000 7f08e0ff9640 1 -- 192.168.123.107:0/4129552735 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f08dc00dbb0 con 0x7f08e4072370 2026-03-09T20:45:52.488 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.487+0000 7f08e0ff9640 1 -- 192.168.123.107:0/4129552735 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f08dc00f040 con 0x7f08e4072370 2026-03-09T20:45:52.488 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.487+0000 7f08e0ff9640 1 -- 192.168.123.107:0/4129552735 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f08dc013600 con 0x7f08e4072370 2026-03-09T20:45:52.488 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.487+0000 7f08e88a8640 1 -- 192.168.123.107:0/4129552735 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f08e4116580 con 0x7f08e4072370 2026-03-09T20:45:52.488 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.487+0000 7f08e88a8640 1 -- 192.168.123.107:0/4129552735 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f08e41b58d0 con 0x7f08e4072370 2026-03-09T20:45:52.488 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.488+0000 7f08e88a8640 1 -- 192.168.123.107:0/4129552735 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f08a8005350 con 0x7f08e4072370 2026-03-09T20:45:52.491 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.489+0000 7f08e0ff9640 1 -- 192.168.123.107:0/4129552735 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f08dc01a020 con 0x7f08e4072370 2026-03-09T20:45:52.492 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.490+0000 7f08e0ff9640 1 --2- 192.168.123.107:0/4129552735 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f08b80761c0 0x7f08b8078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.492 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.490+0000 7f08e0ff9640 1 -- 192.168.123.107:0/4129552735 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f08dc022080 con 0x7f08e4072370 2026-03-09T20:45:52.492 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.491+0000 7f08e37fe640 1 --2- 192.168.123.107:0/4129552735 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f08b80761c0 0x7f08b8078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:52.492 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.491+0000 7f08e0ff9640 1 -- 192.168.123.107:0/4129552735 <== mon.1 v2:192.168.123.110:3300/0 6 
==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f08dc060600 con 0x7f08e4072370 2026-03-09T20:45:52.496 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.496+0000 7f08e37fe640 1 --2- 192.168.123.107:0/4129552735 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f08b80761c0 0x7f08b8078680 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7f08d4002410 tx=0x7f08d403a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:52.661 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.659+0000 7f08e88a8640 1 -- 192.168.123.107:0/4129552735 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f08a80058d0 con 0x7f08e4072370 2026-03-09T20:45:52.663 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 
2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:45:52.664 
INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.661+0000 7f08e0ff9640 1 -- 192.168.123.107:0/4129552735 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1867 (secure 0 0 0) 0x7f08dc05ffa0 con 0x7f08e4072370 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.664+0000 7f08e88a8640 1 -- 192.168.123.107:0/4129552735 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f08b80761c0 msgr2=0x7f08b8078680 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.664+0000 7f08e88a8640 1 --2- 192.168.123.107:0/4129552735 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f08b80761c0 0x7f08b8078680 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7f08d4002410 tx=0x7f08d403a040 comp rx=0 tx=0).stop 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.664+0000 7f08e88a8640 1 -- 192.168.123.107:0/4129552735 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f08e4072370 msgr2=0x7f08e4115f20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.664+0000 7f08e88a8640 1 --2- 192.168.123.107:0/4129552735 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f08e4072370 0x7f08e4115f20 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f08dc0049b0 tx=0x7f08dc00d4a0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.664+0000 7f08e88a8640 1 -- 192.168.123.107:0/4129552735 shutdown_connections 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.664+0000 7f08e88a8640 1 --2- 192.168.123.107:0/4129552735 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f08b80761c0 0x7f08b8078680 unknown :-1 s=CLOSED pgs=131 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.664+0000 7f08e88a8640 1 --2- 192.168.123.107:0/4129552735 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f08e4072370 0x7f08e4115f20 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.664+0000 7f08e88a8640 1 --2- 
192.168.123.107:0/4129552735 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f08e40719a0 0x7f08e41159e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.664+0000 7f08e88a8640 1 -- 192.168.123.107:0/4129552735 >> 192.168.123.107:0/4129552735 conn(0x7f08e406d4f0 msgr2=0x7f08e410a880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:52.665 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.665+0000 7f08e88a8640 1 -- 192.168.123.107:0/4129552735 shutdown_connections 2026-03-09T20:45:52.665 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.665+0000 7f08e88a8640 1 -- 192.168.123.107:0/4129552735 wait complete. 2026-03-09T20:45:52.742 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:52 vm07.local ceph-mon[49120]: from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:52.742 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:52 vm07.local ceph-mon[49120]: from='client.24373 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:52.742 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:52 vm07.local ceph-mon[49120]: from='client.? 
192.168.123.107:0/1535183990' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:45:52.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.737+0000 7f500c988640 1 -- 192.168.123.107:0/2555755269 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5008072420 msgr2=0x7f5008077190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.737+0000 7f500c988640 1 --2- 192.168.123.107:0/2555755269 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5008072420 0x7f5008077190 secure :-1 s=READY pgs=293 cs=0 l=1 rev1=1 crypto rx=0x7f5000008030 tx=0x7f5000030dc0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.737+0000 7f500c988640 1 -- 192.168.123.107:0/2555755269 shutdown_connections 2026-03-09T20:45:52.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.737+0000 7f500c988640 1 --2- 192.168.123.107:0/2555755269 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5008072420 0x7f5008077190 unknown :-1 s=CLOSED pgs=293 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.737+0000 7f500c988640 1 --2- 192.168.123.107:0/2555755269 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5008071a50 0x7f5008071e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.737+0000 7f500c988640 1 -- 192.168.123.107:0/2555755269 >> 192.168.123.107:0/2555755269 conn(0x7f500806d4f0 msgr2=0x7f500806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:52.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.742+0000 7f500c988640 1 -- 192.168.123.107:0/2555755269 shutdown_connections 2026-03-09T20:45:52.743 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.742+0000 7f500c988640 1 -- 192.168.123.107:0/2555755269 wait complete. 2026-03-09T20:45:52.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.742+0000 7f500c988640 1 Processor -- start 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.742+0000 7f500c988640 1 -- start start 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f500c988640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5008071a50 0x7f50081b86d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f500c988640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5008072420 0x7f50081b8c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f500c988640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f50081ba0c0 con 0x7f5008072420 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f500c988640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f50081ba230 con 0x7f5008071a50 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f50077fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5008071a50 0x7f50081b86d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f5006ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5008072420 0x7f50081b8c10 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f5006ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5008072420 0x7f50081b8c10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:57370/0 (socket says 192.168.123.107:57370) 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f5006ffd640 1 -- 192.168.123.107:0/3949336328 learned_addr learned my addr 192.168.123.107:0/3949336328 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f5006ffd640 1 -- 192.168.123.107:0/3949336328 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5008071a50 msgr2=0x7f50081b86d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f5006ffd640 1 --2- 192.168.123.107:0/3949336328 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5008071a50 0x7f50081b86d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.744+0000 7f5006ffd640 1 -- 192.168.123.107:0/3949336328 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5000007ce0 con 0x7f5008072420 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.745+0000 7f5006ffd640 1 --2- 192.168.123.107:0/3949336328 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5008072420 0x7f50081b8c10 secure :-1 s=READY pgs=294 cs=0 l=1 rev1=1 crypto rx=0x7f5000004900 
tx=0x7f5000031b60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:52.745 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.745+0000 7f5004ff9640 1 -- 192.168.123.107:0/3949336328 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5000004aa0 con 0x7f5008072420 2026-03-09T20:45:52.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.745+0000 7f5004ff9640 1 -- 192.168.123.107:0/3949336328 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5000004050 con 0x7f5008072420 2026-03-09T20:45:52.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.745+0000 7f5004ff9640 1 -- 192.168.123.107:0/3949336328 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f50000427c0 con 0x7f5008072420 2026-03-09T20:45:52.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.746+0000 7f500c988640 1 -- 192.168.123.107:0/3949336328 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f50081b9240 con 0x7f5008072420 2026-03-09T20:45:52.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.746+0000 7f500c988640 1 -- 192.168.123.107:0/3949336328 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f500807fc80 con 0x7f5008072420 2026-03-09T20:45:52.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.747+0000 7f4fe67fc640 1 -- 192.168.123.107:0/3949336328 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5008071e50 con 0x7f5008072420 2026-03-09T20:45:52.754 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.751+0000 7f5004ff9640 1 -- 192.168.123.107:0/3949336328 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f500003aba0 con 
0x7f5008072420 2026-03-09T20:45:52.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.751+0000 7f5004ff9640 1 --2- 192.168.123.107:0/3949336328 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4fdc076170 0x7f4fdc078630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.751+0000 7f5004ff9640 1 -- 192.168.123.107:0/3949336328 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f50000bd880 con 0x7f5008072420 2026-03-09T20:45:52.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.751+0000 7f5004ff9640 1 -- 192.168.123.107:0/3949336328 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f50000eb8c0 con 0x7f5008072420 2026-03-09T20:45:52.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.752+0000 7f50077fe640 1 --2- 192.168.123.107:0/3949336328 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4fdc076170 0x7f4fdc078630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:52.755 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.752+0000 7f50077fe640 1 --2- 192.168.123.107:0/3949336328 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4fdc076170 0x7f4fdc078630 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7f4ff80059c0 tx=0x7f4ff800a380 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:52.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:52 vm10.local ceph-mon[57011]: pgmap v106: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 
2026-03-09T20:45:52.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:52 vm10.local ceph-mon[57011]: from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:52.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:52 vm10.local ceph-mon[57011]: from='client.24373 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:52.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:52 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/1535183990' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:45:52.875 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.874+0000 7f4fe67fc640 1 -- 192.168.123.107:0/3949336328 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f50080611c0 con 0x7f4fdc076170 2026-03-09T20:45:52.876 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.877+0000 7f5004ff9640 1 -- 192.168.123.107:0/3949336328 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7f50080611c0 con 0x7f4fdc076170 2026-03-09T20:45:52.877 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:45:52.877 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T20:45:52.877 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:45:52.877 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:45:52.877 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [], 2026-03-09T20:45:52.877 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "", 2026-03-09T20:45:52.877 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Doing 
first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-09T20:45:52.877 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:45:52.877 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:45:52.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.880+0000 7f500c988640 1 -- 192.168.123.107:0/3949336328 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4fdc076170 msgr2=0x7f4fdc078630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.880+0000 7f500c988640 1 --2- 192.168.123.107:0/3949336328 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4fdc076170 0x7f4fdc078630 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7f4ff80059c0 tx=0x7f4ff800a380 comp rx=0 tx=0).stop 2026-03-09T20:45:52.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.880+0000 7f500c988640 1 -- 192.168.123.107:0/3949336328 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5008072420 msgr2=0x7f50081b8c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.880+0000 7f500c988640 1 --2- 192.168.123.107:0/3949336328 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5008072420 0x7f50081b8c10 secure :-1 s=READY pgs=294 cs=0 l=1 rev1=1 crypto rx=0x7f5000004900 tx=0x7f5000031b60 comp rx=0 tx=0).stop 2026-03-09T20:45:52.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.880+0000 7f500c988640 1 -- 192.168.123.107:0/3949336328 shutdown_connections 2026-03-09T20:45:52.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.880+0000 7f500c988640 1 --2- 192.168.123.107:0/3949336328 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f4fdc076170 0x7f4fdc078630 unknown :-1 s=CLOSED 
pgs=132 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.880+0000 7f500c988640 1 --2- 192.168.123.107:0/3949336328 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5008072420 0x7f50081b8c10 unknown :-1 s=CLOSED pgs=294 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.880+0000 7f500c988640 1 --2- 192.168.123.107:0/3949336328 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5008071a50 0x7f50081b86d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.880+0000 7f500c988640 1 -- 192.168.123.107:0/3949336328 >> 192.168.123.107:0/3949336328 conn(0x7f500806d4f0 msgr2=0x7f50080753a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:52.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.881+0000 7f500c988640 1 -- 192.168.123.107:0/3949336328 shutdown_connections 2026-03-09T20:45:52.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.881+0000 7f500c988640 1 -- 192.168.123.107:0/3949336328 wait complete. 
2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.955+0000 7f94c007d640 1 -- 192.168.123.107:0/3423371516 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f94b8072440 msgr2=0x7f94b80771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.955+0000 7f94c007d640 1 --2- 192.168.123.107:0/3423371516 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f94b8072440 0x7f94b80771b0 secure :-1 s=READY pgs=295 cs=0 l=1 rev1=1 crypto rx=0x7f94a8009a00 tx=0x7f94a802f290 comp rx=0 tx=0).stop 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.955+0000 7f94c007d640 1 -- 192.168.123.107:0/3423371516 shutdown_connections 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.955+0000 7f94c007d640 1 --2- 192.168.123.107:0/3423371516 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f94b8072440 0x7f94b80771b0 unknown :-1 s=CLOSED pgs=295 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.955+0000 7f94c007d640 1 --2- 192.168.123.107:0/3423371516 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f94b8071a70 0x7f94b8071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.955+0000 7f94c007d640 1 -- 192.168.123.107:0/3423371516 >> 192.168.123.107:0/3423371516 conn(0x7f94b806d4f0 msgr2=0x7f94b806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.955+0000 7f94c007d640 1 -- 192.168.123.107:0/3423371516 shutdown_connections 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.955+0000 7f94c007d640 1 -- 192.168.123.107:0/3423371516 
wait complete. 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.955+0000 7f94c007d640 1 Processor -- start 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.955+0000 7f94c007d640 1 -- start start 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.956+0000 7f94c007d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f94b8071a70 0x7f94b80840c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.956+0000 7f94c007d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f94b8072440 0x7f94b8082780 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.956+0000 7f94c007d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f94b8082cc0 con 0x7f94b8071a70 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.956+0000 7f94c007d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f94b8082e30 con 0x7f94b8072440 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.957+0000 7f94bddf2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f94b8071a70 0x7f94b80840c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.957+0000 7f94bddf2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f94b8071a70 0x7f94b80840c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:57394/0 (socket says 192.168.123.107:57394) 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.957+0000 7f94bddf2640 1 -- 192.168.123.107:0/2327358412 learned_addr learned my addr 192.168.123.107:0/2327358412 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.957+0000 7f94bddf2640 1 -- 192.168.123.107:0/2327358412 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f94b8072440 msgr2=0x7f94b8082780 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.957+0000 7f94bddf2640 1 --2- 192.168.123.107:0/2327358412 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f94b8072440 0x7f94b8082780 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.957+0000 7f94bddf2640 1 -- 192.168.123.107:0/2327358412 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f94a8009660 con 0x7f94b8071a70 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.957+0000 7f94bddf2640 1 --2- 192.168.123.107:0/2327358412 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f94b8071a70 0x7f94b80840c0 secure :-1 s=READY pgs=296 cs=0 l=1 rev1=1 crypto rx=0x7f94b4009f40 tx=0x7f94b400c6a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.958+0000 7f94aeffd640 1 -- 192.168.123.107:0/2327358412 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f94b400ce80 con 0x7f94b8071a70 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.958+0000 7f94c007d640 1 -- 
192.168.123.107:0/2327358412 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f94b80830b0 con 0x7f94b8071a70 2026-03-09T20:45:52.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.958+0000 7f94c007d640 1 -- 192.168.123.107:0/2327358412 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f94b8083600 con 0x7f94b8071a70 2026-03-09T20:45:52.966 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.959+0000 7f94aeffd640 1 -- 192.168.123.107:0/2327358412 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f94b4004590 con 0x7f94b8071a70 2026-03-09T20:45:52.966 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.959+0000 7f94aeffd640 1 -- 192.168.123.107:0/2327358412 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f94b4055650 con 0x7f94b8071a70 2026-03-09T20:45:52.966 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.959+0000 7f94aeffd640 1 -- 192.168.123.107:0/2327358412 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f94b40557f0 con 0x7f94b8071a70 2026-03-09T20:45:52.966 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.962+0000 7f94aeffd640 1 --2- 192.168.123.107:0/2327358412 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f948c076360 0x7f948c078820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:45:52.966 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.962+0000 7f94c007d640 1 -- 192.168.123.107:0/2327358412 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f94b807a810 con 0x7f94b8071a70 2026-03-09T20:45:52.966 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.962+0000 7f94bd5f1640 1 --2- 
192.168.123.107:0/2327358412 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f948c076360 0x7f948c078820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:45:52.966 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.963+0000 7f94bd5f1640 1 --2- 192.168.123.107:0/2327358412 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f948c076360 0x7f948c078820 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7f94a802f7a0 tx=0x7f94a80023d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:45:52.966 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.963+0000 7f94aeffd640 1 -- 192.168.123.107:0/2327358412 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f94b4059070 con 0x7f94b8071a70 2026-03-09T20:45:52.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:52.966+0000 7f94aeffd640 1 -- 192.168.123.107:0/2327358412 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f94b40a7450 con 0x7f94b8071a70 2026-03-09T20:45:53.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.161+0000 7f94c007d640 1 -- 192.168.123.107:0/2327358412 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f94b8083240 con 0x7f94b8071a70 2026-03-09T20:45:53.162 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:45:53.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.162+0000 7f94aeffd640 1 -- 192.168.123.107:0/2327358412 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f94b8083240 con 0x7f94b8071a70 
2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 -- 192.168.123.107:0/2327358412 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f948c076360 msgr2=0x7f948c078820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 --2- 192.168.123.107:0/2327358412 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f948c076360 0x7f948c078820 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7f94a802f7a0 tx=0x7f94a80023d0 comp rx=0 tx=0).stop 2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 -- 192.168.123.107:0/2327358412 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f94b8071a70 msgr2=0x7f94b80840c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 --2- 192.168.123.107:0/2327358412 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f94b8071a70 0x7f94b80840c0 secure :-1 s=READY pgs=296 cs=0 l=1 rev1=1 crypto rx=0x7f94b4009f40 tx=0x7f94b400c6a0 comp rx=0 tx=0).stop 2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 -- 192.168.123.107:0/2327358412 shutdown_connections 2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 --2- 192.168.123.107:0/2327358412 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f948c076360 0x7f948c078820 unknown :-1 s=CLOSED pgs=133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 --2- 192.168.123.107:0/2327358412 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f94b8072440 0x7f94b8082780 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 --2- 192.168.123.107:0/2327358412 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f94b8071a70 0x7f94b80840c0 unknown :-1 s=CLOSED pgs=296 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 -- 192.168.123.107:0/2327358412 >> 192.168.123.107:0/2327358412 conn(0x7f94b806d4f0 msgr2=0x7f94b8075440 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 -- 192.168.123.107:0/2327358412 shutdown_connections 2026-03-09T20:45:53.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:45:53.164+0000 7f94acff9640 1 -- 192.168.123.107:0/2327358412 wait complete. 2026-03-09T20:45:53.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:53 vm10.local ceph-mon[57011]: from='client.24377 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:53.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:53 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/4129552735' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:45:53.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:53 vm10.local ceph-mon[57011]: from='client.14594 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:53.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:53 vm10.local ceph-mon[57011]: from='client.? 
192.168.123.107:0/2327358412' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:45:53.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:53 vm07.local ceph-mon[49120]: from='client.24377 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:53.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:53 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/4129552735' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:45:53.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:53 vm07.local ceph-mon[49120]: from='client.14594 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:45:53.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:53 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/2327358412' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:45:54.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:54 vm10.local ceph-mon[57011]: pgmap v107: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:54.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:54 vm07.local ceph-mon[49120]: pgmap v107: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:56 vm10.local ceph-mon[57011]: pgmap v108: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:56.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:56 vm07.local ceph-mon[49120]: pgmap v108: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:59.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:45:59 vm10.local 
ceph-mon[57011]: pgmap v109: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:45:59.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:45:59 vm07.local ceph-mon[49120]: pgmap v109: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:00.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:00 vm10.local ceph-mon[57011]: pgmap v110: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:46:00.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:00 vm07.local ceph-mon[49120]: pgmap v110: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:46:03.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:02 vm10.local ceph-mon[57011]: pgmap v111: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:03.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:02 vm07.local ceph-mon[49120]: pgmap v111: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:04 vm10.local ceph-mon[57011]: pgmap v112: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:05.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:04 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:05.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:04 vm07.local ceph-mon[49120]: pgmap v112: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:05.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:46:04 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:07.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:06 vm10.local ceph-mon[57011]: pgmap v113: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:07.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:06 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:46:07.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:06 vm10.local ceph-mon[57011]: Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-09T20:46:07.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:06 vm10.local ceph-mon[57011]: Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T20:46:07.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:06 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:46:07.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:06 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:46:07.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:06 vm10.local ceph-mon[57011]: Upgrade: Need to upgrade myself (mgr.vm07.xjrvch) 2026-03-09T20:46:07.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:06 vm07.local ceph-mon[49120]: pgmap v113: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:07.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:06 vm07.local 
ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:46:07.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:06 vm07.local ceph-mon[49120]: Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-09T20:46:07.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:06 vm07.local ceph-mon[49120]: Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T20:46:07.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:06 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:46:07.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:06 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:46:07.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:06 vm07.local ceph-mon[49120]: Upgrade: Need to upgrade myself (mgr.vm07.xjrvch) 2026-03-09T20:46:08.187 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:07 vm10.local ceph-mon[57011]: Upgrade: Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc on vm10 2026-03-09T20:46:08.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:07 vm07.local ceph-mon[49120]: Upgrade: Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc on vm10 2026-03-09T20:46:09.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:08 vm10.local ceph-mon[57011]: pgmap v114: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:09.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:08 vm07.local ceph-mon[49120]: pgmap v114: 65 
pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:10.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:10 vm10.local ceph-mon[57011]: pgmap v115: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:46:10.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:10 vm07.local ceph-mon[49120]: pgmap v115: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:46:12.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:12 vm10.local ceph-mon[57011]: pgmap v116: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:12.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:12 vm07.local ceph-mon[49120]: pgmap v116: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:14.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:14 vm10.local ceph-mon[57011]: pgmap v117: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:14.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:14 vm07.local ceph-mon[49120]: pgmap v117: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:15.760 INFO:tasks.workunit.client.1.vm10.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr: 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr:You are in 'detached HEAD' state. 
You can look around, make experimental 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr: 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr:do so (now or later) by using -c with the switch command. Example: 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr: 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr: git switch -c 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr: 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr:Or undo this operation with: 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr: 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr: git switch - 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr: 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr: 2026-03-09T20:46:15.761 INFO:tasks.workunit.client.1.vm10.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T20:46:15.765 DEBUG:teuthology.orchestra.run.vm10:> cd -- /home/ubuntu/cephtest/clone.client.1/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.1 2026-03-09T20:46:15.820 INFO:tasks.workunit.client.1.vm10.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T20:46:15.831 INFO:tasks.workunit.client.1.vm10.stdout:make[1]: Entering directory 
'/home/ubuntu/cephtest/clone.client.1/qa/workunits/direct_io' 2026-03-09T20:46:15.831 INFO:tasks.workunit.client.1.vm10.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T20:46:15.865 INFO:tasks.workunit.client.1.vm10.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T20:46:15.898 INFO:tasks.workunit.client.1.vm10.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T20:46:15.928 INFO:tasks.workunit.client.1.vm10.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/direct_io' 2026-03-09T20:46:15.930 INFO:tasks.workunit.client.1.vm10.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/fs' 2026-03-09T20:46:15.930 INFO:tasks.workunit.client.1.vm10.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T20:46:15.962 INFO:tasks.workunit.client.1.vm10.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/fs' 2026-03-09T20:46:15.966 DEBUG:teuthology.orchestra.run.vm10:> set -ex 2026-03-09T20:46:15.966 DEBUG:teuthology.orchestra.run.vm10:> dd if=/home/ubuntu/cephtest/workunits.list.client.1 of=/dev/stdout 2026-03-09T20:46:16.022 INFO:tasks.workunit:Running workunits matching suites/fsstress.sh on client.1... 2026-03-09T20:46:16.023 INFO:tasks.workunit:Running workunit suites/fsstress.sh... 
2026-03-09T20:46:16.023 DEBUG:teuthology.orchestra.run.vm10:workunit test suites/fsstress.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh 2026-03-09T20:46:16.091 INFO:tasks.workunit.client.1.vm10.stderr:+ mkdir -p fsstress 2026-03-09T20:46:16.093 INFO:tasks.workunit.client.1.vm10.stderr:+ pushd fsstress 2026-03-09T20:46:16.094 INFO:tasks.workunit.client.1.vm10.stdout:~/cephtest/mnt.1/client.1/tmp/fsstress ~/cephtest/mnt.1/client.1/tmp 2026-03-09T20:46:16.094 INFO:tasks.workunit.client.1.vm10.stderr:+ wget -q -O ltp-full.tgz http://download.ceph.com/qa/ltp-full-20091231.tgz 2026-03-09T20:46:17.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:16 vm10.local ceph-mon[57011]: pgmap v118: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:17.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:16 vm07.local ceph-mon[49120]: pgmap v118: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:46:19.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:18 vm10.local ceph-mon[57011]: pgmap v119: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 85 B/s wr, 4 op/s 2026-03-09T20:46:19.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:18 vm07.local ceph-mon[49120]: pgmap v119: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 85 B/s wr, 4 op/s 
2026-03-09T20:46:19.319 INFO:tasks.workunit.client.1.vm10.stderr:+ tar xzf ltp-full.tgz 2026-03-09T20:46:19.971 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:19 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:20.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:19 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:21.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:20 vm10.local ceph-mon[57011]: pgmap v120: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 85 B/s wr, 3 op/s 2026-03-09T20:46:21.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:20 vm07.local ceph-mon[49120]: pgmap v120: 65 pgs: 65 active+clean; 462 KiB data, 173 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 85 B/s wr, 3 op/s 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr: 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr: 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr: 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr: git switch -c 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr: 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr:Or undo this operation with: 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr: 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr: git switch - 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr: 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr: 2026-03-09T20:46:22.023 INFO:tasks.workunit.client.0.vm07.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T20:46:22.029 DEBUG:teuthology.orchestra.run.vm07:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-09T20:46:22.113 INFO:tasks.workunit.client.0.vm07.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T20:46:22.115 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T20:46:22.115 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T20:46:22.258 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T20:46:22.302 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T20:46:22.334 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T20:46:22.336 
INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T20:46:22.336 INFO:tasks.workunit.client.0.vm07.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T20:46:22.378 INFO:tasks.workunit.client.0.vm07.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T20:46:22.383 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:46:22.383 DEBUG:teuthology.orchestra.run.vm07:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-09T20:46:22.408 INFO:tasks.workunit:Running workunits matching suites/fsstress.sh on client.0... 2026-03-09T20:46:22.409 INFO:tasks.workunit:Running workunit suites/fsstress.sh... 2026-03-09T20:46:22.409 DEBUG:teuthology.orchestra.run.vm07:workunit test suites/fsstress.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh 2026-03-09T20:46:22.482 INFO:tasks.workunit.client.0.vm07.stderr:+ mkdir -p fsstress 2026-03-09T20:46:22.486 INFO:tasks.workunit.client.0.vm07.stderr:+ pushd fsstress 2026-03-09T20:46:22.487 INFO:tasks.workunit.client.0.vm07.stdout:~/cephtest/mnt.0/client.0/tmp/fsstress ~/cephtest/mnt.0/client.0/tmp 2026-03-09T20:46:22.487 INFO:tasks.workunit.client.0.vm07.stderr:+ wget -q -O ltp-full.tgz http://download.ceph.com/qa/ltp-full-20091231.tgz 2026-03-09T20:46:22.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:22 vm07.local ceph-mon[49120]: pgmap v121: 
65 pgs: 65 active+clean; 917 KiB data, 178 MiB used, 120 GiB / 120 GiB avail; 3.7 KiB/s rd, 44 KiB/s wr, 14 op/s 2026-03-09T20:46:22.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:22 vm10.local ceph-mon[57011]: pgmap v121: 65 pgs: 65 active+clean; 917 KiB data, 178 MiB used, 120 GiB / 120 GiB avail; 3.7 KiB/s rd, 44 KiB/s wr, 14 op/s 2026-03-09T20:46:23.243 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.241+0000 7eff2cc25640 1 -- 192.168.123.107:0/2375424958 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7eff28072420 msgr2=0x7eff28077190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.241+0000 7eff2cc25640 1 --2- 192.168.123.107:0/2375424958 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7eff28072420 0x7eff28077190 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7eff2000b950 tx=0x7eff2002f440 comp rx=0 tx=0).stop 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.241+0000 7eff2cc25640 1 -- 192.168.123.107:0/2375424958 shutdown_connections 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.241+0000 7eff2cc25640 1 --2- 192.168.123.107:0/2375424958 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7eff28072420 0x7eff28077190 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.241+0000 7eff2cc25640 1 --2- 192.168.123.107:0/2375424958 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff28071a50 0x7eff28071e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.241+0000 7eff2cc25640 1 -- 192.168.123.107:0/2375424958 >> 192.168.123.107:0/2375424958 conn(0x7eff2806d4f0 msgr2=0x7eff2806f930 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.241+0000 7eff2cc25640 1 -- 192.168.123.107:0/2375424958 shutdown_connections 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.241+0000 7eff2cc25640 1 -- 192.168.123.107:0/2375424958 wait complete. 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.241+0000 7eff2cc25640 1 Processor -- start 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.241+0000 7eff2cc25640 1 -- start start 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff2cc25640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7eff28071a50 0x7eff28084040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff2cc25640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff28082690 0x7eff28082b10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff2cc25640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7eff28084580 con 0x7eff28082690 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff2cc25640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7eff28083050 con 0x7eff28071a50 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff277fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7eff28071a50 0x7eff28084040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff277fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7eff28071a50 0x7eff28084040 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:39192/0 (socket says 192.168.123.107:39192) 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff277fe640 1 -- 192.168.123.107:0/3163022892 learned_addr learned my addr 192.168.123.107:0/3163022892 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff26ffd640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff28082690 0x7eff28082b10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff277fe640 1 -- 192.168.123.107:0/3163022892 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff28082690 msgr2=0x7eff28082b10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff277fe640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff28082690 0x7eff28082b10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.242+0000 7eff277fe640 1 -- 192.168.123.107:0/3163022892 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7eff20009d00 con 0x7eff28071a50 2026-03-09T20:46:23.244 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.243+0000 7eff277fe640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7eff28071a50 0x7eff28084040 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7eff1800eae0 tx=0x7eff1800ebe0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.243+0000 7eff24ff9640 1 -- 192.168.123.107:0/3163022892 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7eff1800cdb0 con 0x7eff28071a50 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.243+0000 7eff2cc25640 1 -- 192.168.123.107:0/3163022892 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7eff28083330 con 0x7eff28071a50 2026-03-09T20:46:23.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.243+0000 7eff2cc25640 1 -- 192.168.123.107:0/3163022892 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7eff281b5bc0 con 0x7eff28071a50 2026-03-09T20:46:23.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.244+0000 7eff24ff9640 1 -- 192.168.123.107:0/3163022892 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7eff18004590 con 0x7eff28071a50 2026-03-09T20:46:23.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.244+0000 7eff24ff9640 1 -- 192.168.123.107:0/3163022892 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7eff180165f0 con 0x7eff28071a50 2026-03-09T20:46:23.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.244+0000 7eff2cc25640 1 -- 192.168.123.107:0/3163022892 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 
0x7eff28072420 con 0x7eff28071a50 2026-03-09T20:46:23.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.245+0000 7eff24ff9640 1 -- 192.168.123.107:0/3163022892 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7eff18002950 con 0x7eff28071a50 2026-03-09T20:46:23.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.245+0000 7eff24ff9640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7eff08076260 0x7eff08078720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:23.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.245+0000 7eff24ff9640 1 -- 192.168.123.107:0/3163022892 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7eff1800ee40 con 0x7eff28071a50 2026-03-09T20:46:23.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.245+0000 7eff26ffd640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7eff08076260 0x7eff08078720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:23.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.246+0000 7eff26ffd640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7eff08076260 0x7eff08078720 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7eff28083dc0 tx=0x7eff2003a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:23.248 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.247+0000 7eff24ff9640 1 -- 192.168.123.107:0/3163022892 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 
72+0+186382 (secure 0 0 0) 0x7eff18060e40 con 0x7eff28071a50 2026-03-09T20:46:23.356 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.353+0000 7eff2cc25640 1 -- 192.168.123.107:0/3163022892 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7eff28075bd0 con 0x7eff08076260 2026-03-09T20:46:23.356 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.354+0000 7eff24ff9640 1 -- 192.168.123.107:0/3163022892 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7eff28075bd0 con 0x7eff08076260 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 -- 192.168.123.107:0/3163022892 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7eff08076260 msgr2=0x7eff08078720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7eff08076260 0x7eff08078720 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7eff28083dc0 tx=0x7eff2003a040 comp rx=0 tx=0).stop 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 -- 192.168.123.107:0/3163022892 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7eff28071a50 msgr2=0x7eff28084040 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7eff28071a50 0x7eff28084040 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7eff1800eae0 tx=0x7eff1800ebe0 comp rx=0 
tx=0).stop 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 -- 192.168.123.107:0/3163022892 shutdown_connections 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7eff08076260 0x7eff08078720 unknown :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff28082690 0x7eff28082b10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 --2- 192.168.123.107:0/3163022892 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7eff28071a50 0x7eff28084040 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 -- 192.168.123.107:0/3163022892 >> 192.168.123.107:0/3163022892 conn(0x7eff2806d4f0 msgr2=0x7eff28073130 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 -- 192.168.123.107:0/3163022892 shutdown_connections 2026-03-09T20:46:23.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:23.357+0000 7eff2cc25640 1 -- 192.168.123.107:0/3163022892 wait complete. 
2026-03-09T20:46:23.365 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:46:24.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.018+0000 7f7b60b91640 1 -- 192.168.123.107:0/2454589569 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7b5c072440 msgr2=0x7f7b5c0771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.018+0000 7f7b60b91640 1 --2- 192.168.123.107:0/2454589569 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7b5c072440 0x7f7b5c0771b0 secure :-1 s=READY pgs=297 cs=0 l=1 rev1=1 crypto rx=0x7f7b5400b3e0 tx=0x7f7b5402f730 comp rx=0 tx=0).stop 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.018+0000 7f7b60b91640 1 -- 192.168.123.107:0/2454589569 shutdown_connections 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.018+0000 7f7b60b91640 1 --2- 192.168.123.107:0/2454589569 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7b5c072440 0x7f7b5c0771b0 unknown :-1 s=CLOSED pgs=297 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.018+0000 7f7b60b91640 1 --2- 192.168.123.107:0/2454589569 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7b5c071a70 0x7f7b5c071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.018+0000 7f7b60b91640 1 -- 192.168.123.107:0/2454589569 >> 192.168.123.107:0/2454589569 conn(0x7f7b5c06d4f0 msgr2=0x7f7b5c06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.018+0000 7f7b60b91640 1 -- 192.168.123.107:0/2454589569 shutdown_connections 2026-03-09T20:46:24.020 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.018+0000 7f7b60b91640 1 -- 192.168.123.107:0/2454589569 wait complete. 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.019+0000 7f7b60b91640 1 Processor -- start 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.019+0000 7f7b60b91640 1 -- start start 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.019+0000 7f7b60b91640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7b5c071a70 0x7f7b5c084180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.019+0000 7f7b60b91640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7b5c0827d0 0x7f7b5c082c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.019+0000 7f7b60b91640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7b5c083190 con 0x7f7b5c0827d0 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.019+0000 7f7b60b91640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7b5c083300 con 0x7f7b5c071a70 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.019+0000 7f7b59d74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7b5c0827d0 0x7f7b5c082c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.019+0000 7f7b59d74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7b5c0827d0 0x7f7b5c082c50 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:43480/0 (socket says 192.168.123.107:43480) 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.019+0000 7f7b59d74640 1 -- 192.168.123.107:0/3879929390 learned_addr learned my addr 192.168.123.107:0/3879929390 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:24.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.020+0000 7f7b5a575640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7b5c071a70 0x7f7b5c084180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:24.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.020+0000 7f7b5a575640 1 -- 192.168.123.107:0/3879929390 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7b5c0827d0 msgr2=0x7f7b5c082c50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:24.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.020+0000 7f7b5a575640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7b5c0827d0 0x7f7b5c082c50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:24.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.020+0000 7f7b5a575640 1 -- 192.168.123.107:0/3879929390 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7b54009d00 con 0x7f7b5c071a70 2026-03-09T20:46:24.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.020+0000 7f7b5a575640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7b5c071a70 0x7f7b5c084180 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto 
rx=0x7f7b4c00b4d0 tx=0x7f7b4c00b9a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:24.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.090+0000 7f7b4b7fe640 1 -- 192.168.123.107:0/3879929390 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7b4c004300 con 0x7f7b5c071a70 2026-03-09T20:46:24.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.090+0000 7f7b4b7fe640 1 -- 192.168.123.107:0/3879929390 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7b4c004460 con 0x7f7b5c071a70 2026-03-09T20:46:24.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.090+0000 7f7b4b7fe640 1 -- 192.168.123.107:0/3879929390 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7b4c010c20 con 0x7f7b5c071a70 2026-03-09T20:46:24.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.090+0000 7f7b60b91640 1 -- 192.168.123.107:0/3879929390 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7b5c0835e0 con 0x7f7b5c071a70 2026-03-09T20:46:24.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.090+0000 7f7b60b91640 1 -- 192.168.123.107:0/3879929390 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7b5c1b5c10 con 0x7f7b5c071a70 2026-03-09T20:46:24.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.091+0000 7f7b60b91640 1 -- 192.168.123.107:0/3879929390 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7b5c072440 con 0x7f7b5c071a70 2026-03-09T20:46:24.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.093+0000 7f7b4b7fe640 1 -- 192.168.123.107:0/3879929390 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 
0x7f7b4c0027a0 con 0x7f7b5c071a70 2026-03-09T20:46:24.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.093+0000 7f7b4b7fe640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7b2c075f60 0x7f7b2c078420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:24.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.093+0000 7f7b4b7fe640 1 -- 192.168.123.107:0/3879929390 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f7b4c097700 con 0x7f7b5c071a70 2026-03-09T20:46:24.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.094+0000 7f7b59d74640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7b2c075f60 0x7f7b2c078420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:24.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.094+0000 7f7b59d74640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7b2c075f60 0x7f7b2c078420 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f7b5c083f00 tx=0x7f7b54007580 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:24.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.094+0000 7f7b4b7fe640 1 -- 192.168.123.107:0/3879929390 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f7b4c060c90 con 0x7f7b5c071a70 2026-03-09T20:46:24.222 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.221+0000 7f7b60b91640 1 -- 192.168.123.107:0/3879929390 --> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f7b5c076240 con 0x7f7b2c075f60 2026-03-09T20:46:24.224 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.223+0000 7f7b4b7fe640 1 -- 192.168.123.107:0/3879929390 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7f7b5c076240 con 0x7f7b2c075f60 2026-03-09T20:46:24.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.228+0000 7f7b497fa640 1 -- 192.168.123.107:0/3879929390 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7b2c075f60 msgr2=0x7f7b2c078420 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:24.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.228+0000 7f7b497fa640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7b2c075f60 0x7f7b2c078420 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f7b5c083f00 tx=0x7f7b54007580 comp rx=0 tx=0).stop 2026-03-09T20:46:24.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.228+0000 7f7b497fa640 1 -- 192.168.123.107:0/3879929390 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7b5c071a70 msgr2=0x7f7b5c084180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:24.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.228+0000 7f7b497fa640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7b5c071a70 0x7f7b5c084180 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f7b4c00b4d0 tx=0x7f7b4c00b9a0 comp rx=0 tx=0).stop 2026-03-09T20:46:24.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.229+0000 7f7b497fa640 1 -- 192.168.123.107:0/3879929390 shutdown_connections 2026-03-09T20:46:24.230 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.229+0000 7f7b497fa640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f7b2c075f60 0x7f7b2c078420 unknown :-1 s=CLOSED pgs=135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:24.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.229+0000 7f7b497fa640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7b5c0827d0 0x7f7b5c082c50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:24.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.229+0000 7f7b497fa640 1 --2- 192.168.123.107:0/3879929390 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f7b5c071a70 0x7f7b5c084180 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:24.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.229+0000 7f7b497fa640 1 -- 192.168.123.107:0/3879929390 >> 192.168.123.107:0/3879929390 conn(0x7f7b5c06d4f0 msgr2=0x7f7b5c0704c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:24.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.229+0000 7f7b497fa640 1 -- 192.168.123.107:0/3879929390 shutdown_connections 2026-03-09T20:46:24.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.229+0000 7f7b497fa640 1 -- 192.168.123.107:0/3879929390 wait complete. 
2026-03-09T20:46:24.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.331+0000 7f6207294640 1 -- 192.168.123.107:0/3734234264 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6200072440 msgr2=0x7f62000771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:24.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.331+0000 7f6207294640 1 --2- 192.168.123.107:0/3734234264 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6200072440 0x7f62000771b0 secure :-1 s=READY pgs=298 cs=0 l=1 rev1=1 crypto rx=0x7f61f800caa0 tx=0x7f61f8031710 comp rx=0 tx=0).stop 2026-03-09T20:46:24.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.331+0000 7f6207294640 1 -- 192.168.123.107:0/3734234264 shutdown_connections 2026-03-09T20:46:24.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.331+0000 7f6207294640 1 --2- 192.168.123.107:0/3734234264 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6200072440 0x7f62000771b0 unknown :-1 s=CLOSED pgs=298 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:24.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.331+0000 7f6207294640 1 --2- 192.168.123.107:0/3734234264 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6200071a70 0x7f6200071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:24.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.331+0000 7f6207294640 1 -- 192.168.123.107:0/3734234264 >> 192.168.123.107:0/3734234264 conn(0x7f620006d4f0 msgr2=0x7f620006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:24.336 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6207294640 1 -- 192.168.123.107:0/3734234264 shutdown_connections 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6207294640 1 -- 192.168.123.107:0/3734234264 
wait complete. 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6207294640 1 Processor -- start 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6207294640 1 -- start start 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6207294640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6200071a70 0x7f6200084110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6207294640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6200072440 0x7f6200082800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6207294640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6200084650 con 0x7f6200072440 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6207294640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6200082d70 con 0x7f6200071a70 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6205009640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6200071a70 0x7f6200084110 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6205009640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6200071a70 0x7f6200084110 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 
says I am v2:192.168.123.107:39228/0 (socket says 192.168.123.107:39228) 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.335+0000 7f6205009640 1 -- 192.168.123.107:0/2007117248 learned_addr learned my addr 192.168.123.107:0/2007117248 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.336+0000 7f6205009640 1 -- 192.168.123.107:0/2007117248 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6200072440 msgr2=0x7f6200082800 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.336+0000 7f6205009640 1 --2- 192.168.123.107:0/2007117248 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6200072440 0x7f6200082800 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:24.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.336+0000 7f6205009640 1 -- 192.168.123.107:0/2007117248 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f61f8009d00 con 0x7f6200071a70 2026-03-09T20:46:24.338 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:24.336+0000 7f6205009640 1 --2- 192.168.123.107:0/2007117248 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6200071a70 0x7f6200084110 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f61fc00ca30 tx=0x7f61fc00cf00 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:24.345 INFO:tasks.workunit.client.0.vm07.stderr:+ tar xzf ltp-full.tgz 2026-03-09T20:46:25.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.345+0000 7f61f67fc640 1 -- 192.168.123.107:0/2007117248 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f61fc004430 con 0x7f6200071a70 
2026-03-09T20:46:25.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.346+0000 7f6207294640 1 -- 192.168.123.107:0/2007117248 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6200083050 con 0x7f6200071a70 2026-03-09T20:46:25.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.346+0000 7f6207294640 1 -- 192.168.123.107:0/2007117248 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f62000835a0 con 0x7f6200071a70 2026-03-09T20:46:25.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.346+0000 7f61f67fc640 1 -- 192.168.123.107:0/2007117248 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f61fc004590 con 0x7f6200071a70 2026-03-09T20:46:25.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.346+0000 7f61f67fc640 1 -- 192.168.123.107:0/2007117248 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f61fc00f660 con 0x7f6200071a70 2026-03-09T20:46:25.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.346+0000 7f6207294640 1 -- 192.168.123.107:0/2007117248 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6200079e80 con 0x7f6200071a70 2026-03-09T20:46:25.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.347+0000 7f61f67fc640 1 -- 192.168.123.107:0/2007117248 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f61fc00f7c0 con 0x7f6200071a70 2026-03-09T20:46:25.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.348+0000 7f61f67fc640 1 --2- 192.168.123.107:0/2007117248 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f61ec076220 0x7f61ec0786e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:25.349 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.348+0000 7f61f67fc640 1 -- 192.168.123.107:0/2007117248 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f61fc098d60 con 0x7f6200071a70 2026-03-09T20:46:25.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.349+0000 7f6204808640 1 --2- 192.168.123.107:0/2007117248 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f61ec076220 0x7f61ec0786e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:25.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.351+0000 7f6204808640 1 --2- 192.168.123.107:0/2007117248 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f61ec076220 0x7f61ec0786e0 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f61f80048f0 tx=0x7f61f8002750 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:25.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.351+0000 7f61f67fc640 1 -- 192.168.123.107:0/2007117248 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f61fc062370 con 0x7f6200071a70 2026-03-09T20:46:25.475 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:25 vm10.local ceph-mon[57011]: pgmap v122: 65 pgs: 65 active+clean; 13 MiB data, 242 MiB used, 120 GiB / 120 GiB avail; 3.7 KiB/s rd, 1.1 MiB/s wr, 50 op/s 2026-03-09T20:46:25.475 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:25 vm10.local ceph-mon[57011]: from='client.24391 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:25.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.499+0000 7f6207294640 1 -- 
192.168.123.107:0/2007117248 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f62000767d0 con 0x7f61ec076220 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.504+0000 7f61f67fc640 1 -- 192.168.123.107:0/2007117248 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3624 (secure 0 0 0) 0x7f62000767d0 con 0x7f61ec076220 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (2m) 76s ago 3m 23.6M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (3m) 76s ago 3m 8514k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (3m) 78s ago 3m 8652k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (3m) 76s ago 3m 7620k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 8dda9981b08b 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (3m) 78s ago 3m 7616k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 eba80e79586f 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (2m) 76s ago 3m 78.3M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (82s) 76s ago 82s 18.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (84s) 76s ago 84s 19.8M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 
2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (81s) 78s ago 81s 16.6M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (83s) 78s ago 83s 17.3M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:9283,8765,8443 running (4m) 76s ago 4m 542M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 7a35a71cbc43 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (2m) 78s ago 2m 485M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 91b60c6e69dc 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (4m) 76s ago 4m 53.1M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 f3e88bdaa0dd 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (2m) 78s ago 2m 46.8M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 4e5d7d18c660 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (3m) 76s ago 3m 14.3M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (2m) 78s ago 2m 14.7M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (2m) 76s ago 2m 66.7M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 482878bd7721 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (2m) 76s ago 2m 68.5M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 15564e5032c9 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (2m) 76s ago 2m 47.2M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (2m) 78s ago 2m 67.1M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 
c4d7e2279ba1 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (118s) 78s ago 118s 44.6M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (109s) 78s ago 109s 63.9M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:46:25.505 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (2m) 76s ago 3m 37.0M - 2.43.0 a07b618ecd1d 08a586cd1392 2026-03-09T20:46:25.507 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.506+0000 7f6207294640 1 -- 192.168.123.107:0/2007117248 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f61ec076220 msgr2=0x7f61ec0786e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:25.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.506+0000 7f6207294640 1 --2- 192.168.123.107:0/2007117248 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f61ec076220 0x7f61ec0786e0 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f61f80048f0 tx=0x7f61f8002750 comp rx=0 tx=0).stop 2026-03-09T20:46:25.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.506+0000 7f6207294640 1 -- 192.168.123.107:0/2007117248 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6200071a70 msgr2=0x7f6200084110 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:25.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.506+0000 7f6207294640 1 --2- 192.168.123.107:0/2007117248 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6200071a70 0x7f6200084110 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f61fc00ca30 tx=0x7f61fc00cf00 comp rx=0 tx=0).stop 2026-03-09T20:46:25.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.507+0000 7f6207294640 1 -- 192.168.123.107:0/2007117248 shutdown_connections 
2026-03-09T20:46:25.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.507+0000 7f6207294640 1 --2- 192.168.123.107:0/2007117248 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f61ec076220 0x7f61ec0786e0 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.507+0000 7f6207294640 1 --2- 192.168.123.107:0/2007117248 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6200072440 0x7f6200082800 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.507+0000 7f6207294640 1 --2- 192.168.123.107:0/2007117248 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6200071a70 0x7f6200084110 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.507+0000 7f6207294640 1 -- 192.168.123.107:0/2007117248 >> 192.168.123.107:0/2007117248 conn(0x7f620006d4f0 msgr2=0x7f620006f7b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:25.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.507+0000 7f6207294640 1 -- 192.168.123.107:0/2007117248 shutdown_connections 2026-03-09T20:46:25.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.508+0000 7f6207294640 1 -- 192.168.123.107:0/2007117248 wait complete. 
2026-03-09T20:46:25.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.597+0000 7f65fb577640 1 -- 192.168.123.107:0/448473264 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f65fc072420 msgr2=0x7f65fc077190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:25.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.597+0000 7f65fb577640 1 --2- 192.168.123.107:0/448473264 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f65fc072420 0x7f65fc077190 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f65f4010d50 tx=0x7f65f4033ce0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.597+0000 7f65fb577640 1 -- 192.168.123.107:0/448473264 shutdown_connections 2026-03-09T20:46:25.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.597+0000 7f65fb577640 1 --2- 192.168.123.107:0/448473264 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f65fc072420 0x7f65fc077190 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.597+0000 7f65fb577640 1 --2- 192.168.123.107:0/448473264 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65fc071a50 0x7f65fc071e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.597+0000 7f65fb577640 1 -- 192.168.123.107:0/448473264 >> 192.168.123.107:0/448473264 conn(0x7f65fc06d4f0 msgr2=0x7f65fc06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.597+0000 7f65fb577640 1 -- 192.168.123.107:0/448473264 shutdown_connections 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.597+0000 7f65fb577640 1 -- 192.168.123.107:0/448473264 wait 
complete. 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.598+0000 7f65fb577640 1 Processor -- start 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fb577640 1 -- start start 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fb577640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f65fc071a50 0x7f65fc084030 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fb577640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65fc082680 0x7f65fc082b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fb577640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f65fc084570 con 0x7f65fc082680 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fb577640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f65fc083040 con 0x7f65fc071a50 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fa575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f65fc071a50 0x7f65fc084030 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fa575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f65fc071a50 0x7f65fc084030 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I 
am v2:192.168.123.107:39244/0 (socket says 192.168.123.107:39244) 2026-03-09T20:46:25.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fa575640 1 -- 192.168.123.107:0/3970619894 learned_addr learned my addr 192.168.123.107:0/3970619894 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:25.600 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:25 vm07.local ceph-mon[49120]: pgmap v122: 65 pgs: 65 active+clean; 13 MiB data, 242 MiB used, 120 GiB / 120 GiB avail; 3.7 KiB/s rd, 1.1 MiB/s wr, 50 op/s 2026-03-09T20:46:25.600 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:25 vm07.local ceph-mon[49120]: from='client.24391 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:25.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fa575640 1 -- 192.168.123.107:0/3970619894 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65fc082680 msgr2=0x7f65fc082b00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:25.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fa575640 1 --2- 192.168.123.107:0/3970619894 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65fc082680 0x7f65fc082b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.599+0000 7f65fa575640 1 -- 192.168.123.107:0/3970619894 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f65f4010a00 con 0x7f65fc071a50 2026-03-09T20:46:25.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.600+0000 7f65fa575640 1 --2- 192.168.123.107:0/3970619894 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f65fc071a50 0x7f65fc084030 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f65ec007ae0 
tx=0x7f65ec00bee0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:25.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.601+0000 7f65eb7fe640 1 -- 192.168.123.107:0/3970619894 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f65ec010040 con 0x7f65fc071a50 2026-03-09T20:46:25.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.601+0000 7f65eb7fe640 1 -- 192.168.123.107:0/3970619894 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f65ec00b2e0 con 0x7f65fc071a50 2026-03-09T20:46:25.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.601+0000 7f65fb577640 1 -- 192.168.123.107:0/3970619894 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f65fc0832c0 con 0x7f65fc071a50 2026-03-09T20:46:25.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.601+0000 7f65fb577640 1 -- 192.168.123.107:0/3970619894 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f65fc1b5bc0 con 0x7f65fc071a50 2026-03-09T20:46:25.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.601+0000 7f65eb7fe640 1 -- 192.168.123.107:0/3970619894 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f65ec002b50 con 0x7f65fc071a50 2026-03-09T20:46:25.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.602+0000 7f65fb577640 1 -- 192.168.123.107:0/3970619894 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f65c8005350 con 0x7f65fc071a50 2026-03-09T20:46:25.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.602+0000 7f65eb7fe640 1 -- 192.168.123.107:0/3970619894 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f65ec013050 con 
0x7f65fc071a50 2026-03-09T20:46:25.604 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.603+0000 7f65eb7fe640 1 --2- 192.168.123.107:0/3970619894 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65dc0761f0 0x7f65dc0786b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:25.604 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.603+0000 7f65eb7fe640 1 -- 192.168.123.107:0/3970619894 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f65ec0970c0 con 0x7f65fc071a50 2026-03-09T20:46:25.604 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.603+0000 7f65f9d74640 1 --2- 192.168.123.107:0/3970619894 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65dc0761f0 0x7f65dc0786b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:25.604 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.604+0000 7f65f9d74640 1 --2- 192.168.123.107:0/3970619894 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65dc0761f0 0x7f65dc0786b0 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7f65f4004bf0 tx=0x7f65f4004b40 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:25.606 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.605+0000 7f65eb7fe640 1 -- 192.168.123.107:0/3970619894 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f65ec0606f0 con 0x7f65fc071a50 2026-03-09T20:46:25.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.783+0000 7f65fb577640 1 -- 192.168.123.107:0/3970619894 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": 
"versions"} v 0) v1 -- 0x7f65c80058d0 con 0x7f65fc071a50 2026-03-09T20:46:25.785 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.784+0000 7f65eb7fe640 1 -- 192.168.123.107:0/3970619894 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f65ec060090 con 0x7f65fc071a50 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 2 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 2 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 6 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 14 2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout: } 
2026-03-09T20:46:25.787 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:46:25.792 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.791+0000 7f65e97fa640 1 -- 192.168.123.107:0/3970619894 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65dc0761f0 msgr2=0x7f65dc0786b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:25.792 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.791+0000 7f65e97fa640 1 --2- 192.168.123.107:0/3970619894 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65dc0761f0 0x7f65dc0786b0 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7f65f4004bf0 tx=0x7f65f4004b40 comp rx=0 tx=0).stop 2026-03-09T20:46:25.792 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.791+0000 7f65e97fa640 1 -- 192.168.123.107:0/3970619894 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f65fc071a50 msgr2=0x7f65fc084030 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:25.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.791+0000 7f65e97fa640 1 --2- 192.168.123.107:0/3970619894 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f65fc071a50 0x7f65fc084030 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f65ec007ae0 tx=0x7f65ec00bee0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.792+0000 7f65e97fa640 1 -- 192.168.123.107:0/3970619894 shutdown_connections 2026-03-09T20:46:25.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.792+0000 7f65e97fa640 1 --2- 192.168.123.107:0/3970619894 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f65dc0761f0 0x7f65dc0786b0 unknown :-1 s=CLOSED pgs=137 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.792+0000 7f65e97fa640 1 --2- 
192.168.123.107:0/3970619894 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65fc082680 0x7f65fc082b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.792+0000 7f65e97fa640 1 --2- 192.168.123.107:0/3970619894 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f65fc071a50 0x7f65fc084030 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.792+0000 7f65e97fa640 1 -- 192.168.123.107:0/3970619894 >> 192.168.123.107:0/3970619894 conn(0x7f65fc06d4f0 msgr2=0x7f65fc070420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:25.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.792+0000 7f65e97fa640 1 -- 192.168.123.107:0/3970619894 shutdown_connections 2026-03-09T20:46:25.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.792+0000 7f65e97fa640 1 -- 192.168.123.107:0/3970619894 wait complete. 
2026-03-09T20:46:25.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.888+0000 7f189ca4a640 1 -- 192.168.123.107:0/4257458130 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1898071a50 msgr2=0x7f1898071e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:25.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.888+0000 7f189ca4a640 1 --2- 192.168.123.107:0/4257458130 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1898071a50 0x7f1898071e50 secure :-1 s=READY pgs=299 cs=0 l=1 rev1=1 crypto rx=0x7f1888009a00 tx=0x7f188802f270 comp rx=0 tx=0).stop 2026-03-09T20:46:25.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.888+0000 7f189ca4a640 1 -- 192.168.123.107:0/4257458130 shutdown_connections 2026-03-09T20:46:25.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.888+0000 7f189ca4a640 1 --2- 192.168.123.107:0/4257458130 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1898072420 0x7f1898077190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.888+0000 7f189ca4a640 1 --2- 192.168.123.107:0/4257458130 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1898071a50 0x7f1898071e50 unknown :-1 s=CLOSED pgs=299 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.888+0000 7f189ca4a640 1 -- 192.168.123.107:0/4257458130 >> 192.168.123.107:0/4257458130 conn(0x7f189806d4f0 msgr2=0x7f189806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:25.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.889+0000 7f189ca4a640 1 -- 192.168.123.107:0/4257458130 shutdown_connections 2026-03-09T20:46:25.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.889+0000 7f189ca4a640 1 -- 192.168.123.107:0/4257458130 
wait complete. 2026-03-09T20:46:25.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.889+0000 7f189ca4a640 1 Processor -- start 2026-03-09T20:46:25.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.889+0000 7f189ca4a640 1 -- start start 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.889+0000 7f189ca4a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1898072420 0x7f1898131a20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.889+0000 7f189ca4a640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1898131f60 0x7f18981323e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.889+0000 7f189ca4a640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f18981333d0 con 0x7f1898072420 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.889+0000 7f189ca4a640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1898133540 con 0x7f1898131f60 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.890+0000 7f1896ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1898131f60 0x7f18981323e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.890+0000 7f1896ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1898131f60 0x7f18981323e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 
says I am v2:192.168.123.107:39252/0 (socket says 192.168.123.107:39252) 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.890+0000 7f1896ffd640 1 -- 192.168.123.107:0/3257478711 learned_addr learned my addr 192.168.123.107:0/3257478711 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.890+0000 7f18977fe640 1 --2- 192.168.123.107:0/3257478711 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1898072420 0x7f1898131a20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.890+0000 7f18977fe640 1 -- 192.168.123.107:0/3257478711 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1898131f60 msgr2=0x7f18981323e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.890+0000 7f18977fe640 1 --2- 192.168.123.107:0/3257478711 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1898131f60 0x7f18981323e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.890+0000 7f18977fe640 1 -- 192.168.123.107:0/3257478711 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1888009660 con 0x7f1898072420 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.891+0000 7f18977fe640 1 --2- 192.168.123.107:0/3257478711 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1898072420 0x7f1898131a20 secure :-1 s=READY pgs=300 cs=0 l=1 rev1=1 crypto rx=0x7f188802f780 tx=0x7f1888002c90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T20:46:25.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.891+0000 7f1894ff9640 1 -- 192.168.123.107:0/3257478711 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f188803d070 con 0x7f1898072420 2026-03-09T20:46:25.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.892+0000 7f189ca4a640 1 -- 192.168.123.107:0/3257478711 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f189807fae0 con 0x7f1898072420 2026-03-09T20:46:25.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.892+0000 7f189ca4a640 1 -- 192.168.123.107:0/3257478711 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f189807ffa0 con 0x7f1898072420 2026-03-09T20:46:25.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.892+0000 7f1894ff9640 1 -- 192.168.123.107:0/3257478711 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1888004530 con 0x7f1898072420 2026-03-09T20:46:25.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.892+0000 7f1894ff9640 1 -- 192.168.123.107:0/3257478711 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1888038e10 con 0x7f1898072420 2026-03-09T20:46:25.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.895+0000 7f1894ff9640 1 -- 192.168.123.107:0/3257478711 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f1888031110 con 0x7f1898072420 2026-03-09T20:46:25.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.896+0000 7f1894ff9640 1 --2- 192.168.123.107:0/3257478711 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1868076290 0x7f1868078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:25.897 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.896+0000 7f1894ff9640 1 -- 192.168.123.107:0/3257478711 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f18880bcfd0 con 0x7f1898072420 2026-03-09T20:46:25.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.896+0000 7f189ca4a640 1 -- 192.168.123.107:0/3257478711 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1864005350 con 0x7f1898072420 2026-03-09T20:46:25.898 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.897+0000 7f1896ffd640 1 --2- 192.168.123.107:0/3257478711 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1868076290 0x7f1868078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:25.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.900+0000 7f1896ffd640 1 --2- 192.168.123.107:0/3257478711 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1868076290 0x7f1868078750 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f1898133150 tx=0x7f189000b040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:25.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:25.900+0000 7f1894ff9640 1 -- 192.168.123.107:0/3257478711 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f18880865e0 con 0x7f1898072420 2026-03-09T20:46:26.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.073+0000 7f189ca4a640 1 -- 192.168.123.107:0/3257478711 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f1864005e10 con 0x7f1898072420 
2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.074+0000 7f1894ff9640 1 -- 192.168.123.107:0/3257478711 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1867 (secure 0 0 0) 0x7f1888085f80 con 0x7f1898072420 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 
2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state 
up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:46:26.076 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:46:26.077 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:46:26.077 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:46:26.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.079+0000 7f18767fc640 1 -- 192.168.123.107:0/3257478711 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1868076290 msgr2=0x7f1868078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:26.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.079+0000 7f18767fc640 1 --2- 192.168.123.107:0/3257478711 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1868076290 0x7f1868078750 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f1898133150 tx=0x7f189000b040 comp rx=0 tx=0).stop 2026-03-09T20:46:26.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.079+0000 7f18767fc640 1 -- 192.168.123.107:0/3257478711 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1898072420 msgr2=0x7f1898131a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:26.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.079+0000 7f18767fc640 1 --2- 192.168.123.107:0/3257478711 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1898072420 0x7f1898131a20 secure :-1 s=READY pgs=300 cs=0 l=1 rev1=1 crypto rx=0x7f188802f780 tx=0x7f1888002c90 comp rx=0 tx=0).stop 2026-03-09T20:46:26.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.081+0000 7f18767fc640 1 -- 192.168.123.107:0/3257478711 shutdown_connections 2026-03-09T20:46:26.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.081+0000 7f18767fc640 1 --2- 192.168.123.107:0/3257478711 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f1868076290 0x7f1868078750 unknown :-1 s=CLOSED pgs=138 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.081+0000 7f18767fc640 1 --2- 192.168.123.107:0/3257478711 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1898131f60 0x7f18981323e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.081+0000 7f18767fc640 1 --2- 192.168.123.107:0/3257478711 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1898072420 0x7f1898131a20 unknown :-1 s=CLOSED pgs=300 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.081+0000 7f18767fc640 1 -- 192.168.123.107:0/3257478711 >> 192.168.123.107:0/3257478711 conn(0x7f189806d4f0 msgr2=0x7f1898075440 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:26.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.081+0000 7f18767fc640 1 -- 192.168.123.107:0/3257478711 shutdown_connections 2026-03-09T20:46:26.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.081+0000 7f18767fc640 1 -- 192.168.123.107:0/3257478711 wait complete. 
2026-03-09T20:46:26.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.157+0000 7fc51bf96640 1 -- 192.168.123.107:0/4160271544 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc514072440 msgr2=0x7fc5140771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:26.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.157+0000 7fc51bf96640 1 --2- 192.168.123.107:0/4160271544 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc514072440 0x7fc5140771b0 secure :-1 s=READY pgs=301 cs=0 l=1 rev1=1 crypto rx=0x7fc50c00caa0 tx=0x7fc50c0305a0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.158+0000 7fc51bf96640 1 -- 192.168.123.107:0/4160271544 shutdown_connections 2026-03-09T20:46:26.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.158+0000 7fc51bf96640 1 --2- 192.168.123.107:0/4160271544 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc514072440 0x7fc5140771b0 unknown :-1 s=CLOSED pgs=301 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.158+0000 7fc51bf96640 1 --2- 192.168.123.107:0/4160271544 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc514071a70 0x7fc514071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.158+0000 7fc51bf96640 1 -- 192.168.123.107:0/4160271544 >> 192.168.123.107:0/4160271544 conn(0x7fc51406d4f0 msgr2=0x7fc51406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:26.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.158+0000 7fc51bf96640 1 -- 192.168.123.107:0/4160271544 shutdown_connections 2026-03-09T20:46:26.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.158+0000 7fc51bf96640 1 -- 192.168.123.107:0/4160271544 
wait complete. 2026-03-09T20:46:26.160 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.158+0000 7fc51bf96640 1 Processor -- start 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.158+0000 7fc51bf96640 1 -- start start 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc51bf96640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc514071a70 0x7fc5140840e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc51bf96640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc514082730 0x7fc514082bb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc51bf96640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc514084620 con 0x7fc514082730 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc51bf96640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc5140830f0 con 0x7fc514071a70 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc519d0b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc514071a70 0x7fc5140840e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc51950a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc514082730 0x7fc514082bb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc51950a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc514082730 0x7fc514082bb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:43540/0 (socket says 192.168.123.107:43540) 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc51950a640 1 -- 192.168.123.107:0/811975979 learned_addr learned my addr 192.168.123.107:0/811975979 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc51950a640 1 -- 192.168.123.107:0/811975979 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc514071a70 msgr2=0x7fc5140840e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc51950a640 1 --2- 192.168.123.107:0/811975979 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc514071a70 0x7fc5140840e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.159+0000 7fc51950a640 1 -- 192.168.123.107:0/811975979 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc50c009d00 con 0x7fc514082730 2026-03-09T20:46:26.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.160+0000 7fc51950a640 1 --2- 192.168.123.107:0/811975979 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc514082730 0x7fc514082bb0 secure :-1 s=READY pgs=302 cs=0 l=1 rev1=1 crypto rx=0x7fc50c0094f0 tx=0x7fc50c009520 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:26.161 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.160+0000 7fc50affd640 1 -- 192.168.123.107:0/811975979 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc50c007db0 con 0x7fc514082730 2026-03-09T20:46:26.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.161+0000 7fc51bf96640 1 -- 192.168.123.107:0/811975979 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc514083370 con 0x7fc514082730 2026-03-09T20:46:26.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.161+0000 7fc51bf96640 1 -- 192.168.123.107:0/811975979 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc5141b5bc0 con 0x7fc514082730 2026-03-09T20:46:26.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.161+0000 7fc50affd640 1 -- 192.168.123.107:0/811975979 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc50c033070 con 0x7fc514082730 2026-03-09T20:46:26.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.161+0000 7fc50affd640 1 -- 192.168.123.107:0/811975979 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc50c00c280 con 0x7fc514082730 2026-03-09T20:46:26.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.163+0000 7fc50affd640 1 -- 192.168.123.107:0/811975979 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fc50c00c3e0 con 0x7fc514082730 2026-03-09T20:46:26.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.163+0000 7fc50affd640 1 --2- 192.168.123.107:0/811975979 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc4f4076290 0x7fc4f4078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:26.165 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.164+0000 7fc50affd640 1 -- 192.168.123.107:0/811975979 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fc50c0bd1f0 con 0x7fc514082730 2026-03-09T20:46:26.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.164+0000 7fc519d0b640 1 --2- 192.168.123.107:0/811975979 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc4f4076290 0x7fc4f4078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:26.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.164+0000 7fc51bf96640 1 -- 192.168.123.107:0/811975979 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc4e4005350 con 0x7fc514082730 2026-03-09T20:46:26.166 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.166+0000 7fc519d0b640 1 --2- 192.168.123.107:0/811975979 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc4f4076290 0x7fc4f4078750 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7fc5100059c0 tx=0x7fc51000e890 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:26.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.167+0000 7fc50affd640 1 -- 192.168.123.107:0/811975979 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fc50c086820 con 0x7fc514082730 2026-03-09T20:46:26.347 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.345+0000 7fc51bf96640 1 -- 192.168.123.107:0/811975979 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 
0x7fc4e4002bf0 con 0x7fc4f4076290 2026-03-09T20:46:26.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.348+0000 7fc50affd640 1 -- 192.168.123.107:0/811975979 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7fc4e4002bf0 con 0x7fc4f4076290 2026-03-09T20:46:26.349 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:46:26.349 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:46:26.349 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:46:26.349 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:46:26.349 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [], 2026-03-09T20:46:26.349 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "0/23 daemons upgraded", 2026-03-09T20:46:26.349 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc image on host vm10", 2026-03-09T20:46:26.349 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:46:26.349 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:46:26.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.351+0000 7fc51bf96640 1 -- 192.168.123.107:0/811975979 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc4f4076290 msgr2=0x7fc4f4078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:26.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.351+0000 7fc51bf96640 1 --2- 192.168.123.107:0/811975979 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc4f4076290 0x7fc4f4078750 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7fc5100059c0 tx=0x7fc51000e890 comp 
rx=0 tx=0).stop 2026-03-09T20:46:26.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.351+0000 7fc51bf96640 1 -- 192.168.123.107:0/811975979 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc514082730 msgr2=0x7fc514082bb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:26.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.351+0000 7fc51bf96640 1 --2- 192.168.123.107:0/811975979 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc514082730 0x7fc514082bb0 secure :-1 s=READY pgs=302 cs=0 l=1 rev1=1 crypto rx=0x7fc50c0094f0 tx=0x7fc50c009520 comp rx=0 tx=0).stop 2026-03-09T20:46:26.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.352+0000 7fc51bf96640 1 -- 192.168.123.107:0/811975979 shutdown_connections 2026-03-09T20:46:26.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.352+0000 7fc51bf96640 1 --2- 192.168.123.107:0/811975979 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fc4f4076290 0x7fc4f4078750 unknown :-1 s=CLOSED pgs=139 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.352+0000 7fc51bf96640 1 --2- 192.168.123.107:0/811975979 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc514082730 0x7fc514082bb0 unknown :-1 s=CLOSED pgs=302 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.352+0000 7fc51bf96640 1 --2- 192.168.123.107:0/811975979 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc514071a70 0x7fc5140840e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.352+0000 7fc51bf96640 1 -- 192.168.123.107:0/811975979 >> 192.168.123.107:0/811975979 conn(0x7fc51406d4f0 msgr2=0x7fc5140704d0 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:26.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.352+0000 7fc51bf96640 1 -- 192.168.123.107:0/811975979 shutdown_connections 2026-03-09T20:46:26.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.352+0000 7fc51bf96640 1 -- 192.168.123.107:0/811975979 wait complete. 2026-03-09T20:46:26.426 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.424+0000 7fdd7f593640 1 -- 192.168.123.107:0/2982095773 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdd78071a70 msgr2=0x7fdd78071e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:26.426 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.424+0000 7fdd7f593640 1 --2- 192.168.123.107:0/2982095773 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdd78071a70 0x7fdd78071e70 secure :-1 s=READY pgs=303 cs=0 l=1 rev1=1 crypto rx=0x7fdd74007920 tx=0x7fdd7402ffc0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.426 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.424+0000 7fdd7f593640 1 -- 192.168.123.107:0/2982095773 shutdown_connections 2026-03-09T20:46:26.426 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.424+0000 7fdd7f593640 1 --2- 192.168.123.107:0/2982095773 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdd78072440 0x7fdd780771b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.426 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.424+0000 7fdd7f593640 1 --2- 192.168.123.107:0/2982095773 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdd78071a70 0x7fdd78071e70 unknown :-1 s=CLOSED pgs=303 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.426 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.424+0000 7fdd7f593640 1 -- 192.168.123.107:0/2982095773 >> 192.168.123.107:0/2982095773 conn(0x7fdd7806d4f0 
msgr2=0x7fdd7806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:26.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7f593640 1 -- 192.168.123.107:0/2982095773 shutdown_connections 2026-03-09T20:46:26.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7f593640 1 -- 192.168.123.107:0/2982095773 wait complete. 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7f593640 1 Processor -- start 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7f593640 1 -- start start 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7f593640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdd78072440 0x7fdd78084140 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7f593640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdd78082790 0x7fdd78082c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7f593640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdd78083150 con 0x7fdd78072440 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7f593640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdd780832c0 con 0x7fdd78082790 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7d308640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdd78072440 0x7fdd78084140 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7d308640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdd78072440 0x7fdd78084140 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:43550/0 (socket says 192.168.123.107:43550) 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7d308640 1 -- 192.168.123.107:0/658950647 learned_addr learned my addr 192.168.123.107:0/658950647 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.426+0000 7fdd7cb07640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdd78082790 0x7fdd78082c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.427+0000 7fdd7d308640 1 -- 192.168.123.107:0/658950647 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdd78082790 msgr2=0x7fdd78082c10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.427+0000 7fdd7d308640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdd78082790 0x7fdd78082c10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.427+0000 7fdd7d308640 1 -- 192.168.123.107:0/658950647 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdd70009c40 con 0x7fdd78072440 
2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.427+0000 7fdd7d308640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdd78072440 0x7fdd78084140 secure :-1 s=READY pgs=304 cs=0 l=1 rev1=1 crypto rx=0x7fdd74004870 tx=0x7fdd74009f60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:26.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.427+0000 7fdd6e7fc640 1 -- 192.168.123.107:0/658950647 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdd74042a50 con 0x7fdd78072440 2026-03-09T20:46:26.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.428+0000 7fdd6e7fc640 1 -- 192.168.123.107:0/658950647 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fdd74030ce0 con 0x7fdd78072440 2026-03-09T20:46:26.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.428+0000 7fdd7f593640 1 -- 192.168.123.107:0/658950647 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdd740075d0 con 0x7fdd78072440 2026-03-09T20:46:26.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.428+0000 7fdd7f593640 1 -- 192.168.123.107:0/658950647 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdd781b5bc0 con 0x7fdd78072440 2026-03-09T20:46:26.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.428+0000 7fdd6e7fc640 1 -- 192.168.123.107:0/658950647 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdd7404b8c0 con 0x7fdd78072440 2026-03-09T20:46:26.430 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.429+0000 7fdd6e7fc640 1 -- 192.168.123.107:0/658950647 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fdd74030820 con 
0x7fdd78072440 2026-03-09T20:46:26.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.430+0000 7fdd6e7fc640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fdd54076260 0x7fdd54078720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:26.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.431+0000 7fdd6e7fc640 1 -- 192.168.123.107:0/658950647 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fdd740c49c0 con 0x7fdd78072440 2026-03-09T20:46:26.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.431+0000 7fdd7cb07640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fdd54076260 0x7fdd54078720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:26.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.429+0000 7fdd7f593640 1 -- 192.168.123.107:0/658950647 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdd48005350 con 0x7fdd78072440 2026-03-09T20:46:26.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.432+0000 7fdd7cb07640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fdd54076260 0x7fdd54078720 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7fdd78083ec0 tx=0x7fdd70009920 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:26.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.437+0000 7fdd6e7fc640 1 -- 192.168.123.107:0/658950647 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 
==== 72+0+186382 (secure 0 0 0) 0x7fdd7408dfe0 con 0x7fdd78072440 2026-03-09T20:46:26.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.601+0000 7fdd7f593640 1 -- 192.168.123.107:0/658950647 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fdd48005600 con 0x7fdd78072440 2026-03-09T20:46:27.003 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:46:27.003 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:26.999+0000 7fdd6e7fc640 1 -- 192.168.123.107:0/658950647 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7fdd7408d980 con 0x7fdd78072440 2026-03-09T20:46:27.003 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 -- 192.168.123.107:0/658950647 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fdd54076260 msgr2=0x7fdd54078720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:27.003 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fdd54076260 0x7fdd54078720 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7fdd78083ec0 tx=0x7fdd70009920 comp rx=0 tx=0).stop 2026-03-09T20:46:27.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 -- 192.168.123.107:0/658950647 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdd78072440 msgr2=0x7fdd78084140 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:27.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdd78072440 0x7fdd78084140 secure :-1 s=READY pgs=304 cs=0 l=1 
rev1=1 crypto rx=0x7fdd74004870 tx=0x7fdd74009f60 comp rx=0 tx=0).stop 2026-03-09T20:46:27.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 -- 192.168.123.107:0/658950647 shutdown_connections 2026-03-09T20:46:27.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fdd54076260 0x7fdd54078720 unknown :-1 s=CLOSED pgs=140 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:27.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdd78082790 0x7fdd78082c10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:27.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 --2- 192.168.123.107:0/658950647 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdd78072440 0x7fdd78084140 unknown :-1 s=CLOSED pgs=304 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:27.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 -- 192.168.123.107:0/658950647 >> 192.168.123.107:0/658950647 conn(0x7fdd7806d4f0 msgr2=0x7fdd78075430 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:27.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 -- 192.168.123.107:0/658950647 shutdown_connections 2026-03-09T20:46:27.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:27.002+0000 7fdd43fff640 1 -- 192.168.123.107:0/658950647 wait complete. 
2026-03-09T20:46:27.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:27 vm10.local ceph-mon[57011]: from='client.24393 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:27.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:27 vm10.local ceph-mon[57011]: pgmap v123: 65 pgs: 65 active+clean; 19 MiB data, 300 MiB used, 120 GiB / 120 GiB avail; 343 KiB/s rd, 1.6 MiB/s wr, 96 op/s 2026-03-09T20:46:27.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:27 vm10.local ceph-mon[57011]: from='client.24395 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:27.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:27 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/3970619894' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:46:27.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:27 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/3257478711' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:46:27.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:27 vm07.local ceph-mon[49120]: from='client.24393 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:27.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:27 vm07.local ceph-mon[49120]: pgmap v123: 65 pgs: 65 active+clean; 19 MiB data, 300 MiB used, 120 GiB / 120 GiB avail; 343 KiB/s rd, 1.6 MiB/s wr, 96 op/s 2026-03-09T20:46:27.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:27 vm07.local ceph-mon[49120]: from='client.24395 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:27.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:27 vm07.local ceph-mon[49120]: from='client.? 
192.168.123.107:0/3970619894' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:46:27.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:27 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/3257478711' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:46:28.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:28 vm07.local ceph-mon[49120]: from='client.14618 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:28 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/658950647' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:46:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:28 vm07.local ceph-mon[49120]: pgmap v124: 65 pgs: 65 active+clean; 37 MiB data, 435 MiB used, 120 GiB / 120 GiB avail; 684 KiB/s rd, 3.2 MiB/s wr, 234 op/s 2026-03-09T20:46:28.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:28 vm10.local ceph-mon[57011]: from='client.14618 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:28.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:28 vm10.local ceph-mon[57011]: from='client.? 
192.168.123.107:0/658950647' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:46:28.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:28 vm10.local ceph-mon[57011]: pgmap v124: 65 pgs: 65 active+clean; 37 MiB data, 435 MiB used, 120 GiB / 120 GiB avail; 684 KiB/s rd, 3.2 MiB/s wr, 234 op/s 2026-03-09T20:46:31.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:30 vm07.local ceph-mon[49120]: pgmap v125: 65 pgs: 65 active+clean; 38 MiB data, 447 MiB used, 120 GiB / 120 GiB avail; 684 KiB/s rd, 3.3 MiB/s wr, 260 op/s 2026-03-09T20:46:31.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:30 vm10.local ceph-mon[57011]: pgmap v125: 65 pgs: 65 active+clean; 38 MiB data, 447 MiB used, 120 GiB / 120 GiB avail; 684 KiB/s rd, 3.3 MiB/s wr, 260 op/s 2026-03-09T20:46:33.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:32 vm10.local ceph-mon[57011]: pgmap v126: 65 pgs: 65 active+clean; 52 MiB data, 483 MiB used, 120 GiB / 120 GiB avail; 1.2 MiB/s rd, 4.5 MiB/s wr, 351 op/s 2026-03-09T20:46:33.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:32 vm07.local ceph-mon[49120]: pgmap v126: 65 pgs: 65 active+clean; 52 MiB data, 483 MiB used, 120 GiB / 120 GiB avail; 1.2 MiB/s rd, 4.5 MiB/s wr, 351 op/s 2026-03-09T20:46:34.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:34 vm07.local ceph-mon[49120]: pgmap v127: 65 pgs: 65 active+clean; 60 MiB data, 594 MiB used, 119 GiB / 120 GiB avail; 1.2 MiB/s rd, 5.2 MiB/s wr, 450 op/s 2026-03-09T20:46:34.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:34 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:35.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:34 vm10.local ceph-mon[57011]: pgmap v127: 65 pgs: 65 active+clean; 60 MiB data, 594 MiB used, 119 GiB / 120 GiB avail; 1.2 MiB/s rd, 5.2 MiB/s wr, 
450 op/s 2026-03-09T20:46:35.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:34 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:36.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:36 vm07.local ceph-mon[49120]: pgmap v128: 65 pgs: 65 active+clean; 70 MiB data, 698 MiB used, 119 GiB / 120 GiB avail; 1.6 MiB/s rd, 5.0 MiB/s wr, 492 op/s 2026-03-09T20:46:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:36 vm10.local ceph-mon[57011]: pgmap v128: 65 pgs: 65 active+clean; 70 MiB data, 698 MiB used, 119 GiB / 120 GiB avail; 1.6 MiB/s rd, 5.0 MiB/s wr, 492 op/s 2026-03-09T20:46:39.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:39 vm10.local ceph-mon[57011]: pgmap v129: 65 pgs: 65 active+clean; 80 MiB data, 845 MiB used, 119 GiB / 120 GiB avail; 1.3 MiB/s rd, 5.5 MiB/s wr, 589 op/s 2026-03-09T20:46:39.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:39 vm07.local ceph-mon[49120]: pgmap v129: 65 pgs: 65 active+clean; 80 MiB data, 845 MiB used, 119 GiB / 120 GiB avail; 1.3 MiB/s rd, 5.5 MiB/s wr, 589 op/s 2026-03-09T20:46:40.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:40 vm10.local ceph-mon[57011]: pgmap v130: 65 pgs: 65 active+clean; 86 MiB data, 861 MiB used, 119 GiB / 120 GiB avail; 1.3 MiB/s rd, 4.4 MiB/s wr, 477 op/s 2026-03-09T20:46:40.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:40 vm07.local ceph-mon[49120]: pgmap v130: 65 pgs: 65 active+clean; 86 MiB data, 861 MiB used, 119 GiB / 120 GiB avail; 1.3 MiB/s rd, 4.4 MiB/s wr, 477 op/s 2026-03-09T20:46:42.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:42 vm10.local ceph-mon[57011]: pgmap v131: 65 pgs: 65 active+clean; 103 MiB data, 927 MiB used, 119 GiB / 120 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 546 op/s 2026-03-09T20:46:42.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:42 
vm07.local ceph-mon[49120]: pgmap v131: 65 pgs: 65 active+clean; 103 MiB data, 927 MiB used, 119 GiB / 120 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 546 op/s 2026-03-09T20:46:45.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:44 vm07.local ceph-mon[49120]: pgmap v132: 65 pgs: 65 active+clean; 107 MiB data, 1.0 GiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.9 MiB/s wr, 579 op/s 2026-03-09T20:46:45.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:44 vm10.local ceph-mon[57011]: pgmap v132: 65 pgs: 65 active+clean; 107 MiB data, 1.0 GiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.9 MiB/s wr, 579 op/s 2026-03-09T20:46:46.755 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:46 vm07.local ceph-mon[49120]: pgmap v133: 65 pgs: 65 active+clean; 108 MiB data, 1.0 GiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 541 op/s 2026-03-09T20:46:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:46 vm10.local ceph-mon[57011]: pgmap v133: 65 pgs: 65 active+clean; 108 MiB data, 1.0 GiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 541 op/s 2026-03-09T20:46:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:48 vm10.local ceph-mon[57011]: pgmap v134: 65 pgs: 65 active+clean; 121 MiB data, 1.1 GiB used, 119 GiB / 120 GiB avail; 2.1 MiB/s rd, 4.6 MiB/s wr, 612 op/s 2026-03-09T20:46:48.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:48 vm07.local ceph-mon[49120]: pgmap v134: 65 pgs: 65 active+clean; 121 MiB data, 1.1 GiB used, 119 GiB / 120 GiB avail; 2.1 MiB/s rd, 4.6 MiB/s wr, 612 op/s 2026-03-09T20:46:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:49 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:49.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:49 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:50.820 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:50 vm10.local ceph-mon[57011]: pgmap v135: 65 pgs: 65 active+clean; 123 MiB data, 1.1 GiB used, 119 GiB / 120 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 494 op/s 2026-03-09T20:46:50.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:50 vm07.local ceph-mon[49120]: pgmap v135: 65 pgs: 65 active+clean; 123 MiB data, 1.1 GiB used, 119 GiB / 120 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 494 op/s 2026-03-09T20:46:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:52 vm10.local ceph-mon[57011]: pgmap v136: 65 pgs: 65 active+clean; 136 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 540 op/s 2026-03-09T20:46:53.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:52 vm07.local ceph-mon[49120]: pgmap v136: 65 pgs: 65 active+clean; 136 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 540 op/s 2026-03-09T20:46:55.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:54 vm10.local ceph-mon[57011]: pgmap v137: 65 pgs: 65 active+clean; 152 MiB data, 1.4 GiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 4.5 MiB/s wr, 546 op/s 2026-03-09T20:46:55.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:54 vm07.local ceph-mon[49120]: pgmap v137: 65 pgs: 65 active+clean; 152 MiB data, 1.4 GiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 4.5 MiB/s wr, 546 op/s 2026-03-09T20:46:56.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:56 vm07.local ceph-mon[49120]: pgmap v138: 65 pgs: 65 active+clean; 159 MiB data, 1.4 GiB used, 119 GiB / 120 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 478 op/s 2026-03-09T20:46:56.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:56 vm10.local ceph-mon[57011]: pgmap v138: 65 pgs: 65 active+clean; 159 MiB data, 1.4 GiB used, 119 GiB / 120 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 478 op/s 
2026-03-09T20:46:57.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.097+0000 7fa8d1f57640 1 -- 192.168.123.107:0/1772406458 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa8cc072440 msgr2=0x7fa8cc0771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.097+0000 7fa8d1f57640 1 --2- 192.168.123.107:0/1772406458 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa8cc072440 0x7fa8cc0771b0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fa8c400b0a0 tx=0x7fa8c402f4c0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.097+0000 7fa8d1f57640 1 -- 192.168.123.107:0/1772406458 shutdown_connections 2026-03-09T20:46:57.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.097+0000 7fa8d1f57640 1 --2- 192.168.123.107:0/1772406458 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa8cc072440 0x7fa8cc0771b0 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.097+0000 7fa8d1f57640 1 --2- 192.168.123.107:0/1772406458 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8cc071a70 0x7fa8cc071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.097+0000 7fa8d1f57640 1 -- 192.168.123.107:0/1772406458 >> 192.168.123.107:0/1772406458 conn(0x7fa8cc06d4f0 msgr2=0x7fa8cc06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.097+0000 7fa8d1f57640 1 -- 192.168.123.107:0/1772406458 shutdown_connections 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.097+0000 7fa8d1f57640 1 -- 192.168.123.107:0/1772406458 
wait complete. 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.097+0000 7fa8d1f57640 1 Processor -- start 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.097+0000 7fa8d1f57640 1 -- start start 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8d1f57640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa8cc071a70 0x7fa8cc084070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8d1f57640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8cc0826c0 0x7fa8cc082b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8d1f57640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa8cc0845b0 con 0x7fa8cc0826c0 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8d1f57640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa8cc083080 con 0x7fa8cc071a70 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8cb7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa8cc071a70 0x7fa8cc084070 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8caffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8cc0826c0 0x7fa8cc082b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8cb7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa8cc071a70 0x7fa8cc084070 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:38496/0 (socket says 192.168.123.107:38496) 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8cb7fe640 1 -- 192.168.123.107:0/432920699 learned_addr learned my addr 192.168.123.107:0/432920699 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8cb7fe640 1 -- 192.168.123.107:0/432920699 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8cc0826c0 msgr2=0x7fa8cc082b40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8cb7fe640 1 --2- 192.168.123.107:0/432920699 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8cc0826c0 0x7fa8cc082b40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.098+0000 7fa8cb7fe640 1 -- 192.168.123.107:0/432920699 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa8bc009590 con 0x7fa8cc071a70 2026-03-09T20:46:57.100 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.099+0000 7fa8cb7fe640 1 --2- 192.168.123.107:0/432920699 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa8cc071a70 0x7fa8cc084070 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fa8bc002760 tx=0x7fa8bc002c30 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:57.100 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.099+0000 7fa8c8ff9640 1 -- 192.168.123.107:0/432920699 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa8bc00ecf0 con 0x7fa8cc071a70 2026-03-09T20:46:57.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.100+0000 7fa8d1f57640 1 -- 192.168.123.107:0/432920699 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa8c4009d00 con 0x7fa8cc071a70 2026-03-09T20:46:57.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.100+0000 7fa8d1f57640 1 -- 192.168.123.107:0/432920699 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa8cc1b5bc0 con 0x7fa8cc071a70 2026-03-09T20:46:57.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.100+0000 7fa8c8ff9640 1 -- 192.168.123.107:0/432920699 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fa8bc002e90 con 0x7fa8cc071a70 2026-03-09T20:46:57.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.100+0000 7fa8c8ff9640 1 -- 192.168.123.107:0/432920699 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa8bc00f660 con 0x7fa8cc071a70 2026-03-09T20:46:57.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.102+0000 7fa8c8ff9640 1 -- 192.168.123.107:0/432920699 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fa8bc016020 con 0x7fa8cc071a70 2026-03-09T20:46:57.103 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.103+0000 7fa8c8ff9640 1 --2- 192.168.123.107:0/432920699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa898076290 0x7fa898078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.104 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.103+0000 7fa8caffd640 1 --2- 192.168.123.107:0/432920699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa898076290 0x7fa898078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:57.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.103+0000 7fa8c8ff9640 1 -- 192.168.123.107:0/432920699 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fa8bc098c20 con 0x7fa8cc071a70 2026-03-09T20:46:57.104 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.104+0000 7fa8d1f57640 1 -- 192.168.123.107:0/432920699 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa894005350 con 0x7fa8cc071a70 2026-03-09T20:46:57.105 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.104+0000 7fa8caffd640 1 --2- 192.168.123.107:0/432920699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa898076290 0x7fa898078750 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7fa8c402f9d0 tx=0x7fa8c4002750 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:57.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.107+0000 7fa8c8ff9640 1 -- 192.168.123.107:0/432920699 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fa8bc061d20 con 0x7fa8cc071a70 2026-03-09T20:46:57.263 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.262+0000 7fa8d1f57640 1 -- 192.168.123.107:0/432920699 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 
0x7fa894002bf0 con 0x7fa898076290 2026-03-09T20:46:57.266 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.265+0000 7fa8c8ff9640 1 -- 192.168.123.107:0/432920699 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7fa894002bf0 con 0x7fa898076290 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.270+0000 7fa8d1f57640 1 -- 192.168.123.107:0/432920699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa898076290 msgr2=0x7fa898078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.270+0000 7fa8d1f57640 1 --2- 192.168.123.107:0/432920699 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa898076290 0x7fa898078750 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7fa8c402f9d0 tx=0x7fa8c4002750 comp rx=0 tx=0).stop 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.270+0000 7fa8d1f57640 1 -- 192.168.123.107:0/432920699 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa8cc071a70 msgr2=0x7fa8cc084070 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.270+0000 7fa8d1f57640 1 --2- 192.168.123.107:0/432920699 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa8cc071a70 0x7fa8cc084070 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fa8bc002760 tx=0x7fa8bc002c30 comp rx=0 tx=0).stop 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.271+0000 7fa8d1f57640 1 -- 192.168.123.107:0/432920699 shutdown_connections 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.271+0000 7fa8d1f57640 1 --2- 192.168.123.107:0/432920699 >> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fa898076290 0x7fa898078750 unknown :-1 s=CLOSED pgs=141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.271+0000 7fa8d1f57640 1 --2- 192.168.123.107:0/432920699 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa8cc0826c0 0x7fa8cc082b40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.271+0000 7fa8d1f57640 1 --2- 192.168.123.107:0/432920699 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa8cc071a70 0x7fa8cc084070 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.271+0000 7fa8d1f57640 1 -- 192.168.123.107:0/432920699 >> 192.168.123.107:0/432920699 conn(0x7fa8cc06d4f0 msgr2=0x7fa8cc070440 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.271+0000 7fa8d1f57640 1 -- 192.168.123.107:0/432920699 shutdown_connections 2026-03-09T20:46:57.272 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.271+0000 7fa8d1f57640 1 -- 192.168.123.107:0/432920699 wait complete. 
2026-03-09T20:46:57.285 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:46:57.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.361+0000 7f8472ffb640 1 -- 192.168.123.107:0/1005381388 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f846c072440 msgr2=0x7f846c0771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.361+0000 7f8472ffb640 1 --2- 192.168.123.107:0/1005381388 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f846c072440 0x7f846c0771b0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f846400b600 tx=0x7f8464030670 comp rx=0 tx=0).stop 2026-03-09T20:46:57.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.361+0000 7f8472ffb640 1 -- 192.168.123.107:0/1005381388 shutdown_connections 2026-03-09T20:46:57.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.361+0000 7f8472ffb640 1 --2- 192.168.123.107:0/1005381388 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f846c072440 0x7f846c0771b0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.361+0000 7f8472ffb640 1 --2- 192.168.123.107:0/1005381388 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f846c071a70 0x7f846c071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.361+0000 7f8472ffb640 1 -- 192.168.123.107:0/1005381388 >> 192.168.123.107:0/1005381388 conn(0x7f846c06d4f0 msgr2=0x7f846c06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.361+0000 7f8472ffb640 1 -- 192.168.123.107:0/1005381388 shutdown_connections 2026-03-09T20:46:57.363 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.361+0000 7f8472ffb640 1 -- 192.168.123.107:0/1005381388 wait complete. 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.361+0000 7f8472ffb640 1 Processor -- start 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.361+0000 7f8472ffb640 1 -- start start 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.362+0000 7f8472ffb640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f846c071a70 0x7f846c084070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.362+0000 7f8472ffb640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f846c0826c0 0x7f846c082b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.362+0000 7f8472ffb640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f846c0845b0 con 0x7f846c071a70 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.362+0000 7f8472ffb640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f846c083080 con 0x7f846c0826c0 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.362+0000 7f8470d70640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f846c071a70 0x7f846c084070 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.362+0000 7f8470d70640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f846c071a70 0x7f846c084070 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:49564/0 (socket says 192.168.123.107:49564) 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.362+0000 7f8470d70640 1 -- 192.168.123.107:0/2833373741 learned_addr learned my addr 192.168.123.107:0/2833373741 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:57.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.362+0000 7f846bfff640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f846c0826c0 0x7f846c082b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:57.364 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.363+0000 7f8470d70640 1 -- 192.168.123.107:0/2833373741 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f846c0826c0 msgr2=0x7f846c082b40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.364 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.363+0000 7f8470d70640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f846c0826c0 0x7f846c082b40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.364 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.363+0000 7f8470d70640 1 -- 192.168.123.107:0/2833373741 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8464009d00 con 0x7f846c071a70 2026-03-09T20:46:57.364 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.363+0000 7f8470d70640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f846c071a70 0x7f846c084070 secure :-1 s=READY pgs=305 cs=0 l=1 rev1=1 crypto 
rx=0x7f845c0077b0 tx=0x7f845c007c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:57.364 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.363+0000 7f8469ffb640 1 -- 192.168.123.107:0/2833373741 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f845c004110 con 0x7f846c071a70 2026-03-09T20:46:57.365 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.363+0000 7f8472ffb640 1 -- 192.168.123.107:0/2833373741 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f846c083360 con 0x7f846c071a70 2026-03-09T20:46:57.365 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.363+0000 7f8472ffb640 1 -- 192.168.123.107:0/2833373741 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f846c1b5bc0 con 0x7f846c071a70 2026-03-09T20:46:57.365 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.364+0000 7f8469ffb640 1 -- 192.168.123.107:0/2833373741 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f845c0026e0 con 0x7f846c071a70 2026-03-09T20:46:57.365 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.364+0000 7f8469ffb640 1 -- 192.168.123.107:0/2833373741 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f845c00d940 con 0x7f846c071a70 2026-03-09T20:46:57.366 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.365+0000 7f8472ffb640 1 -- 192.168.123.107:0/2833373741 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f846c07a810 con 0x7f846c071a70 2026-03-09T20:46:57.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.366+0000 7f8469ffb640 1 -- 192.168.123.107:0/2833373741 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 
0x7f845c002850 con 0x7f846c071a70 2026-03-09T20:46:57.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.366+0000 7f8469ffb640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f84440761c0 0x7f8444078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.366+0000 7f8469ffb640 1 -- 192.168.123.107:0/2833373741 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f845c097740 con 0x7f846c071a70 2026-03-09T20:46:57.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.367+0000 7f846bfff640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f84440761c0 0x7f8444078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:57.368 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.367+0000 7f846bfff640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f84440761c0 0x7f8444078680 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f8464030b80 tx=0x7f8464002750 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:57.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.368+0000 7f8469ffb640 1 -- 192.168.123.107:0/2833373741 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f845c060cc0 con 0x7f846c071a70 2026-03-09T20:46:57.511 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.509+0000 7f8472ffb640 1 -- 192.168.123.107:0/2833373741 --> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f846c0761d0 con 0x7f84440761c0 2026-03-09T20:46:57.511 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.510+0000 7f8469ffb640 1 -- 192.168.123.107:0/2833373741 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7f846c0761d0 con 0x7f84440761c0 2026-03-09T20:46:57.514 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.513+0000 7f843f7fe640 1 -- 192.168.123.107:0/2833373741 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f84440761c0 msgr2=0x7f8444078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.514 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.513+0000 7f843f7fe640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f84440761c0 0x7f8444078680 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f8464030b80 tx=0x7f8464002750 comp rx=0 tx=0).stop 2026-03-09T20:46:57.516 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.515+0000 7f843f7fe640 1 -- 192.168.123.107:0/2833373741 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f846c071a70 msgr2=0x7f846c084070 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.516 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.515+0000 7f843f7fe640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f846c071a70 0x7f846c084070 secure :-1 s=READY pgs=305 cs=0 l=1 rev1=1 crypto rx=0x7f845c0077b0 tx=0x7f845c007c80 comp rx=0 tx=0).stop 2026-03-09T20:46:57.517 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.516+0000 7f843f7fe640 1 -- 192.168.123.107:0/2833373741 shutdown_connections 2026-03-09T20:46:57.517 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.516+0000 7f843f7fe640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f84440761c0 0x7f8444078680 unknown :-1 s=CLOSED pgs=142 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.517 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.516+0000 7f843f7fe640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f846c0826c0 0x7f846c082b40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.517 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.516+0000 7f843f7fe640 1 --2- 192.168.123.107:0/2833373741 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f846c071a70 0x7f846c084070 unknown :-1 s=CLOSED pgs=305 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.517 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.516+0000 7f843f7fe640 1 -- 192.168.123.107:0/2833373741 >> 192.168.123.107:0/2833373741 conn(0x7f846c06d4f0 msgr2=0x7f846c070440 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:57.517 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.516+0000 7f843f7fe640 1 -- 192.168.123.107:0/2833373741 shutdown_connections 2026-03-09T20:46:57.517 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.517+0000 7f843f7fe640 1 -- 192.168.123.107:0/2833373741 wait complete. 
2026-03-09T20:46:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.611+0000 7f18187c6640 1 -- 192.168.123.107:0/974193518 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1810072440 msgr2=0x7f18100771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.611+0000 7f18187c6640 1 --2- 192.168.123.107:0/974193518 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1810072440 0x7f18100771b0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f180800b0a0 tx=0x7f180802f4c0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.611+0000 7f18187c6640 1 -- 192.168.123.107:0/974193518 shutdown_connections 2026-03-09T20:46:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.611+0000 7f18187c6640 1 --2- 192.168.123.107:0/974193518 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1810072440 0x7f18100771b0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.611+0000 7f18187c6640 1 --2- 192.168.123.107:0/974193518 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1810071a70 0x7f1810071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.611+0000 7f18187c6640 1 -- 192.168.123.107:0/974193518 >> 192.168.123.107:0/974193518 conn(0x7f181006d4f0 msgr2=0x7f181006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.612+0000 7f18187c6640 1 -- 192.168.123.107:0/974193518 shutdown_connections 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.612+0000 7f18187c6640 1 -- 192.168.123.107:0/974193518 wait 
complete. 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.613+0000 7f18187c6640 1 Processor -- start 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.613+0000 7f18187c6640 1 -- start start 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.614+0000 7f18187c6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1810071a70 0x7f1810083950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.614+0000 7f18187c6640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1810072440 0x7f1810083e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.614+0000 7f18187c6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1810084460 con 0x7f1810071a70 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.614+0000 7f18187c6640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f18100845d0 con 0x7f1810072440 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.614+0000 7f181653b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1810071a70 0x7f1810083950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.614+0000 7f181653b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1810071a70 0x7f1810083950 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I 
am v2:192.168.123.107:48932/0 (socket says 192.168.123.107:48932) 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.614+0000 7f181653b640 1 -- 192.168.123.107:0/3301739495 learned_addr learned my addr 192.168.123.107:0/3301739495 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:57.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.614+0000 7f1815d3a640 1 --2- 192.168.123.107:0/3301739495 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1810072440 0x7f1810083e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:57.616 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.615+0000 7f1815d3a640 1 -- 192.168.123.107:0/3301739495 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1810071a70 msgr2=0x7f1810083950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.616 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.615+0000 7f1815d3a640 1 --2- 192.168.123.107:0/3301739495 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1810071a70 0x7f1810083950 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.616 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.615+0000 7f1815d3a640 1 -- 192.168.123.107:0/3301739495 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f180c009590 con 0x7f1810072440 2026-03-09T20:46:57.616 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.615+0000 7f1815d3a640 1 --2- 192.168.123.107:0/3301739495 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1810072440 0x7f1810083e90 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f180802f9d0 tx=0x7f1808002c90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:46:57.616 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.615+0000 7f18077fe640 1 -- 192.168.123.107:0/3301739495 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1808004960 con 0x7f1810072440 2026-03-09T20:46:57.617 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.615+0000 7f18187c6640 1 -- 192.168.123.107:0/3301739495 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1808009d00 con 0x7f1810072440 2026-03-09T20:46:57.617 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.615+0000 7f18187c6640 1 -- 192.168.123.107:0/3301739495 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f181007c720 con 0x7f1810072440 2026-03-09T20:46:57.617 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.616+0000 7f18077fe640 1 -- 192.168.123.107:0/3301739495 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1808002e80 con 0x7f1810072440 2026-03-09T20:46:57.617 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.616+0000 7f18077fe640 1 -- 192.168.123.107:0/3301739495 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1808040e40 con 0x7f1810072440 2026-03-09T20:46:57.618 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.617+0000 7f18187c6640 1 -- 192.168.123.107:0/3301739495 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f181007a810 con 0x7f1810072440 2026-03-09T20:46:57.619 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.618+0000 7f18077fe640 1 -- 192.168.123.107:0/3301739495 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f1808040650 con 0x7f1810072440 2026-03-09T20:46:57.619 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.618+0000 
7f18077fe640 1 --2- 192.168.123.107:0/3301739495 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17f00761c0 0x7f17f0078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.619 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.619+0000 7f181653b640 1 --2- 192.168.123.107:0/3301739495 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17f00761c0 0x7f17f0078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:57.620 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.619+0000 7f18077fe640 1 -- 192.168.123.107:0/3301739495 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f18080bd370 con 0x7f1810072440 2026-03-09T20:46:57.620 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.619+0000 7f181653b640 1 --2- 192.168.123.107:0/3301739495 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17f00761c0 0x7f17f0078680 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f180c009920 tx=0x7f180c010040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:57.622 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.621+0000 7f18077fe640 1 -- 192.168.123.107:0/3301739495 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f1808086a30 con 0x7f1810072440 2026-03-09T20:46:57.779 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.778+0000 7f18187c6640 1 -- 192.168.123.107:0/3301739495 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f181007c360 con 0x7f17f00761c0 
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (3m) 109s ago 4m 23.6M - 0.25.0 c8568f914cd2 aa3206f6f5cb
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (4m) 109s ago 4m 8514k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (3m) 110s ago 3m 8652k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (4m) 109s ago 4m 7620k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 8dda9981b08b
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (3m) 110s ago 3m 7616k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 eba80e79586f
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (3m) 109s ago 3m 78.3M - 9.4.7 954c08fa6188 74cf2e7ee6ad
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (114s) 109s ago 114s 18.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (116s) 109s ago 116s 19.8M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (113s) 110s ago 113s 16.6M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (115s) 110s ago 115s 17.3M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:9283,8765,8443 running (4m) 109s ago 4m 542M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 7a35a71cbc43
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (3m) 110s ago 3m 485M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 91b60c6e69dc
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (4m) 109s ago 4m 53.1M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 f3e88bdaa0dd
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (3m) 110s ago 3m 46.8M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 4e5d7d18c660
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (4m) 109s ago 4m 14.3M - 1.5.0 0da6a335fe13 d6fac1f8a1d0
2026-03-09T20:46:57.790 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (3m) 110s ago 3m 14.7M - 1.5.0 0da6a335fe13 9716a97e7ed1
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (3m) 109s ago 3m 66.7M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 482878bd7721
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (2m) 109s ago 2m 68.5M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 15564e5032c9
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (2m) 109s ago 2m 47.2M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (2m) 110s ago 2m 67.1M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (2m) 110s ago 2m 44.6M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (2m) 110s ago 2m 63.9M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (3m) 109s ago 3m 37.0M - 2.43.0 a07b618ecd1d 08a586cd1392
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.787+0000 7f18077fe640 1 -- 192.168.123.107:0/3301739495 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3624 (secure 0 0 0) 0x7f181007c360 con 0x7f17f00761c0
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.789+0000 7f18187c6640 1 -- 192.168.123.107:0/3301739495 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17f00761c0 msgr2=0x7f17f0078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.789+0000 7f18187c6640 1 --2- 192.168.123.107:0/3301739495 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17f00761c0 0x7f17f0078680 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f180c009920 tx=0x7f180c010040 comp rx=0 tx=0).stop
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.789+0000 7f18187c6640 1 -- 192.168.123.107:0/3301739495 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1810072440 msgr2=0x7f1810083e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.789+0000 7f18187c6640 1 --2- 192.168.123.107:0/3301739495 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1810072440 0x7f1810083e90 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f180802f9d0 tx=0x7f1808002c90 comp rx=0 tx=0).stop
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.789+0000 7f18187c6640 1 -- 192.168.123.107:0/3301739495 shutdown_connections
2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.789+0000 7f18187c6640 1 --2- 192.168.123.107:0/3301739495 >> 
[v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f17f00761c0 0x7f17f0078680 unknown :-1 s=CLOSED pgs=143 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.789+0000 7f18187c6640 1 --2- 192.168.123.107:0/3301739495 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1810072440 0x7f1810083e90 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.789+0000 7f18187c6640 1 --2- 192.168.123.107:0/3301739495 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1810071a70 0x7f1810083950 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.789+0000 7f18187c6640 1 -- 192.168.123.107:0/3301739495 >> 192.168.123.107:0/3301739495 conn(0x7f181006d4f0 msgr2=0x7f181006f700 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.790+0000 7f18187c6640 1 -- 192.168.123.107:0/3301739495 shutdown_connections 2026-03-09T20:46:57.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.790+0000 7f18187c6640 1 -- 192.168.123.107:0/3301739495 wait complete. 
2026-03-09T20:46:57.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.891+0000 7f7912746640 1 -- 192.168.123.107:0/1833339060 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f790c072390 msgr2=0x7f790c10c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.891+0000 7f7912746640 1 --2- 192.168.123.107:0/1833339060 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f790c072390 0x7f790c10c590 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f790400b0a0 tx=0x7f790402f4c0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.891+0000 7f7912746640 1 -- 192.168.123.107:0/1833339060 shutdown_connections 2026-03-09T20:46:57.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.891+0000 7f7912746640 1 --2- 192.168.123.107:0/1833339060 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f790c072390 0x7f790c10c590 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.891+0000 7f7912746640 1 --2- 192.168.123.107:0/1833339060 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f790c0719c0 0x7f790c071dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.891+0000 7f7912746640 1 -- 192.168.123.107:0/1833339060 >> 192.168.123.107:0/1833339060 conn(0x7f790c06d4f0 msgr2=0x7f790c06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:57.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.891+0000 7f7912746640 1 -- 192.168.123.107:0/1833339060 shutdown_connections 2026-03-09T20:46:57.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.891+0000 7f7912746640 1 -- 192.168.123.107:0/1833339060 
wait complete. 2026-03-09T20:46:57.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.892+0000 7f7912746640 1 Processor -- start 2026-03-09T20:46:57.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.892+0000 7f7912746640 1 -- start start 2026-03-09T20:46:57.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.892+0000 7f7912746640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f790c115a50 0x7f790c115e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.892+0000 7f7912746640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f790c1163b0 0x7f790c1a4390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.892+0000 7f7912746640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f790c116830 con 0x7f790c115a50 2026-03-09T20:46:57.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.892+0000 7f7912746640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f790c1173c0 con 0x7f790c1163b0 2026-03-09T20:46:57.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.892+0000 7f790b7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f790c1163b0 0x7f790c1a4390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:57.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.892+0000 7f790bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f790c115a50 0x7f790c115e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T20:46:57.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.892+0000 7f790b7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f790c1163b0 0x7f790c1a4390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:55374/0 (socket says 192.168.123.107:55374) 2026-03-09T20:46:57.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.892+0000 7f790b7fe640 1 -- 192.168.123.107:0/3843908811 learned_addr learned my addr 192.168.123.107:0/3843908811 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:57.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.893+0000 7f790b7fe640 1 -- 192.168.123.107:0/3843908811 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f790c115a50 msgr2=0x7f790c115e70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:57.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.893+0000 7f790b7fe640 1 --2- 192.168.123.107:0/3843908811 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f790c115a50 0x7f790c115e70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:57.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.893+0000 7f790b7fe640 1 -- 192.168.123.107:0/3843908811 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7904009d00 con 0x7f790c1163b0 2026-03-09T20:46:57.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.895+0000 7f790b7fe640 1 --2- 192.168.123.107:0/3843908811 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f790c1163b0 0x7f790c1a4390 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f790402f9d0 tx=0x7f7904009370 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:57.896 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.896+0000 7f79097fa640 1 -- 192.168.123.107:0/3843908811 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7904002c70 con 0x7f790c1163b0 2026-03-09T20:46:57.898 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.896+0000 7f7912746640 1 -- 192.168.123.107:0/3843908811 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f790c1a48d0 con 0x7f790c1163b0 2026-03-09T20:46:57.898 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.896+0000 7f7912746640 1 -- 192.168.123.107:0/3843908811 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f790c1a4e20 con 0x7f790c1163b0 2026-03-09T20:46:57.898 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.897+0000 7f79097fa640 1 -- 192.168.123.107:0/3843908811 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7904002dd0 con 0x7f790c1163b0 2026-03-09T20:46:57.898 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.897+0000 7f79097fa640 1 -- 192.168.123.107:0/3843908811 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7904040970 con 0x7f790c1163b0 2026-03-09T20:46:57.901 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.900+0000 7f79097fa640 1 -- 192.168.123.107:0/3843908811 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f790404a430 con 0x7f790c1163b0 2026-03-09T20:46:57.901 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.900+0000 7f79097fa640 1 --2- 192.168.123.107:0/3843908811 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78f80762c0 0x7f78f8078780 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:57.901 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.900+0000 7f79097fa640 1 -- 192.168.123.107:0/3843908811 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f79040bd270 con 0x7f790c1163b0 2026-03-09T20:46:57.902 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.901+0000 7f7912746640 1 -- 192.168.123.107:0/3843908811 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f790c118dc0 con 0x7f790c1163b0 2026-03-09T20:46:57.903 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.902+0000 7f790bfff640 1 --2- 192.168.123.107:0/3843908811 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78f80762c0 0x7f78f8078780 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:57.904 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.903+0000 7f790bfff640 1 --2- 192.168.123.107:0/3843908811 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78f80762c0 0x7f78f8078780 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f790c071840 tx=0x7f78f4009210 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:57.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:57.904+0000 7f79097fa640 1 -- 192.168.123.107:0/3843908811 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f79040868a0 con 0x7f790c1163b0 2026-03-09T20:46:58.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.075+0000 7f7912746640 1 -- 192.168.123.107:0/3843908811 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f790c072390 con 0x7f790c1163b0 
2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.079+0000 7f79097fa640 1 -- 192.168.123.107:0/3843908811 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f7904086240 con 0x7f790c1163b0 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 2 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 2 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 6 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 14 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:46:58.081 INFO:teuthology.orchestra.run.vm07.stdout:} 
2026-03-09T20:46:58.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.082+0000 7f78eaffd640 1 -- 192.168.123.107:0/3843908811 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78f80762c0 msgr2=0x7f78f8078780 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:58.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.082+0000 7f78eaffd640 1 --2- 192.168.123.107:0/3843908811 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78f80762c0 0x7f78f8078780 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f790c071840 tx=0x7f78f4009210 comp rx=0 tx=0).stop 2026-03-09T20:46:58.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.082+0000 7f78eaffd640 1 -- 192.168.123.107:0/3843908811 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f790c1163b0 msgr2=0x7f790c1a4390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:58.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.082+0000 7f78eaffd640 1 --2- 192.168.123.107:0/3843908811 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f790c1163b0 0x7f790c1a4390 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f790402f9d0 tx=0x7f7904009370 comp rx=0 tx=0).stop 2026-03-09T20:46:58.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.082+0000 7f78eaffd640 1 -- 192.168.123.107:0/3843908811 shutdown_connections 2026-03-09T20:46:58.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.082+0000 7f78eaffd640 1 --2- 192.168.123.107:0/3843908811 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f78f80762c0 0x7f78f8078780 unknown :-1 s=CLOSED pgs=144 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:58.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.082+0000 7f78eaffd640 1 --2- 192.168.123.107:0/3843908811 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f790c1163b0 0x7f790c1a4390 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:58.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.083+0000 7f78eaffd640 1 --2- 192.168.123.107:0/3843908811 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f790c115a50 0x7f790c115e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:58.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.083+0000 7f78eaffd640 1 -- 192.168.123.107:0/3843908811 >> 192.168.123.107:0/3843908811 conn(0x7f790c06d4f0 msgr2=0x7f790c070470 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:58.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.083+0000 7f78eaffd640 1 -- 192.168.123.107:0/3843908811 shutdown_connections 2026-03-09T20:46:58.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.083+0000 7f78eaffd640 1 -- 192.168.123.107:0/3843908811 wait complete. 
2026-03-09T20:46:58.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 -- 192.168.123.107:0/2006215187 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2858072440 msgr2=0x7f28580771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:58.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 --2- 192.168.123.107:0/2006215187 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2858072440 0x7f28580771b0 secure :-1 s=READY pgs=306 cs=0 l=1 rev1=1 crypto rx=0x7f2850009040 tx=0x7f2850033d20 comp rx=0 tx=0).stop 2026-03-09T20:46:58.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 -- 192.168.123.107:0/2006215187 shutdown_connections 2026-03-09T20:46:58.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 --2- 192.168.123.107:0/2006215187 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2858072440 0x7f28580771b0 unknown :-1 s=CLOSED pgs=306 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:58.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 --2- 192.168.123.107:0/2006215187 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2858071a70 0x7f2858071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:58.162 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 -- 192.168.123.107:0/2006215187 >> 192.168.123.107:0/2006215187 conn(0x7f285806d4f0 msgr2=0x7f285806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 -- 192.168.123.107:0/2006215187 shutdown_connections 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 -- 192.168.123.107:0/2006215187 
wait complete. 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 Processor -- start 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 -- start start 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2858071a70 0x7f2858081b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28580801f0 0x7f2858080670 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2858082070 con 0x7f28580801f0 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.161+0000 7f285ce12640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2858080bb0 con 0x7f2858071a70 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.162+0000 7f2855d74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28580801f0 0x7f2858080670 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.162+0000 7f2855d74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28580801f0 0x7f2858080670 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:48976/0 (socket says 192.168.123.107:48976) 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.162+0000 7f2855d74640 1 -- 192.168.123.107:0/3231224374 learned_addr learned my addr 192.168.123.107:0/3231224374 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.162+0000 7f2856575640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2858071a70 0x7f2858081b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.162+0000 7f2855d74640 1 -- 192.168.123.107:0/3231224374 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2858071a70 msgr2=0x7f2858081b30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.162+0000 7f2855d74640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2858071a70 0x7f2858081b30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:46:58.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.162+0000 7f2855d74640 1 -- 192.168.123.107:0/3231224374 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2850008cf0 con 0x7f28580801f0 2026-03-09T20:46:58.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.162+0000 7f2855d74640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28580801f0 0x7f2858080670 secure :-1 s=READY pgs=307 cs=0 l=1 rev1=1 crypto rx=0x7f2850034770 tx=0x7f28500347a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T20:46:58.164 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.163+0000 7f28477fe640 1 -- 192.168.123.107:0/3231224374 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2850013070 con 0x7f28580801f0 2026-03-09T20:46:58.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.163+0000 7f285ce12640 1 -- 192.168.123.107:0/3231224374 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2858080e00 con 0x7f28580801f0 2026-03-09T20:46:58.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.163+0000 7f285ce12640 1 -- 192.168.123.107:0/3231224374 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f28581c6fd0 con 0x7f28580801f0 2026-03-09T20:46:58.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.164+0000 7f28477fe640 1 -- 192.168.123.107:0/3231224374 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f28500133e0 con 0x7f28580801f0 2026-03-09T20:46:58.165 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.164+0000 7f28477fe640 1 -- 192.168.123.107:0/3231224374 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f285003dbd0 con 0x7f28580801f0 2026-03-09T20:46:58.166 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.165+0000 7f28477fe640 1 -- 192.168.123.107:0/3231224374 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f285003c3d0 con 0x7f28580801f0 2026-03-09T20:46:58.167 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.166+0000 7f28477fe640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f283c0761c0 0x7f283c078680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:46:58.167 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.166+0000 7f28477fe640 1 -- 192.168.123.107:0/3231224374 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f28500bdad0 con 0x7f28580801f0 2026-03-09T20:46:58.168 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.166+0000 7f2856575640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f283c0761c0 0x7f283c078680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:46:58.168 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.167+0000 7f285ce12640 1 -- 192.168.123.107:0/3231224374 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2824005350 con 0x7f28580801f0 2026-03-09T20:46:58.168 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.168+0000 7f2856575640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f283c0761c0 0x7f283c078680 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7f2848002970 tx=0x7f2848006f50 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:46:58.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.170+0000 7f28477fe640 1 -- 192.168.123.107:0/3231224374 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f2850086fd0 con 0x7f28580801f0 2026-03-09T20:46:58.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.311+0000 7f285ce12640 1 -- 192.168.123.107:0/3231224374 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f2824005e10 con 0x7f28580801f0 
2026-03-09T20:46:58.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.312+0000 7f28477fe640 1 -- 192.168.123.107:0/3231224374 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1867 (secure 0 0 0) 0x7f285004b070 con 0x7f28580801f0 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {}
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291}
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:failed
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:damaged
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:stopped
2026-03-09T20:46:58.314 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3]
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:balancer
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:46:58.315 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11
2026-03-09T20:46:58.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.316+0000 7f28457fa640 1 -- 192.168.123.107:0/3231224374 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f283c0761c0 msgr2=0x7f283c078680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:58.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.316+0000 7f28457fa640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f283c0761c0 0x7f283c078680 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7f2848002970 tx=0x7f2848006f50 comp rx=0 tx=0).stop
2026-03-09T20:46:58.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.316+0000 7f28457fa640 1 -- 192.168.123.107:0/3231224374 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28580801f0 msgr2=0x7f2858080670 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:58.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.316+0000 7f28457fa640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28580801f0 0x7f2858080670 secure :-1 s=READY pgs=307 cs=0 l=1 rev1=1 crypto rx=0x7f2850034770 tx=0x7f28500347a0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.317+0000 7f28457fa640 1 -- 192.168.123.107:0/3231224374 shutdown_connections
2026-03-09T20:46:58.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.317+0000 7f28457fa640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f283c0761c0 0x7f283c078680 unknown :-1 s=CLOSED pgs=145 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.317+0000 7f28457fa640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f28580801f0 0x7f2858080670 unknown :-1 s=CLOSED pgs=307 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.317+0000 7f28457fa640 1 --2- 192.168.123.107:0/3231224374 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2858071a70 0x7f2858081b30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.317+0000 7f28457fa640 1 -- 192.168.123.107:0/3231224374 >> 192.168.123.107:0/3231224374 conn(0x7f285806d4f0 msgr2=0x7f285806fe80 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:46:58.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.317+0000 7f28457fa640 1 -- 192.168.123.107:0/3231224374 shutdown_connections
2026-03-09T20:46:58.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.317+0000 7f28457fa640 1 -- 192.168.123.107:0/3231224374 wait complete.
2026-03-09T20:46:58.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.426+0000 7fe663fff640 1 -- 192.168.123.107:0/1208573872 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe664072420 msgr2=0x7fe664077190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.426+0000 7fe663fff640 1 --2- 192.168.123.107:0/1208573872 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe664072420 0x7fe664077190 secure :-1 s=READY pgs=308 cs=0 l=1 rev1=1 crypto rx=0x7fe65c010d50 tx=0x7fe65c033ce0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.426+0000 7fe663fff640 1 -- 192.168.123.107:0/1208573872 shutdown_connections
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.426+0000 7fe663fff640 1 --2- 192.168.123.107:0/1208573872 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe664072420 0x7fe664077190 unknown :-1 s=CLOSED pgs=308 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.426+0000 7fe663fff640 1 --2- 192.168.123.107:0/1208573872 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe664071a50 0x7fe664071e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.426+0000 7fe663fff640 1 -- 192.168.123.107:0/1208573872 >> 192.168.123.107:0/1208573872 conn(0x7fe66406d4f0 msgr2=0x7fe66406f930 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.426+0000 7fe663fff640 1 -- 192.168.123.107:0/1208573872 shutdown_connections
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.426+0000 7fe663fff640 1 -- 192.168.123.107:0/1208573872 wait complete.
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.427+0000 7fe663fff640 1 Processor -- start
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.427+0000 7fe663fff640 1 -- start start
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.427+0000 7fe663fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe664071a50 0x7fe664083d90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.427+0000 7fe663fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe6640823e0 0x7fe664082860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.427+0000 7fe663fff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe6640842d0 con 0x7fe664071a50
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.427+0000 7fe663fff640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe664082da0 con 0x7fe6640823e0
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.427+0000 7fe6627fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe6640823e0 0x7fe664082860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.427+0000 7fe6627fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe6640823e0 0x7fe664082860 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:55404/0 (socket says 192.168.123.107:55404)
2026-03-09T20:46:58.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.427+0000 7fe6627fc640 1 -- 192.168.123.107:0/1083726171 learned_addr learned my addr 192.168.123.107:0/1083726171 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:46:58.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.428+0000 7fe662ffd640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe664071a50 0x7fe664083d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:46:58.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.428+0000 7fe6627fc640 1 -- 192.168.123.107:0/1083726171 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe664071a50 msgr2=0x7fe664083d90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:58.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.428+0000 7fe6627fc640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe664071a50 0x7fe664083d90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.428+0000 7fe6627fc640 1 -- 192.168.123.107:0/1083726171 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe65c010a00 con 0x7fe6640823e0
2026-03-09T20:46:58.429 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.429+0000 7fe6627fc640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe6640823e0 0x7fe664082860 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7fe65c034770 tx=0x7fe65c0347a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:46:58.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.483+0000 7fe643fff640 1 -- 192.168.123.107:0/1083726171 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe65c002e80 con 0x7fe6640823e0
2026-03-09T20:46:58.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.483+0000 7fe663fff640 1 -- 192.168.123.107:0/1083726171 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe664082fc0 con 0x7fe6640823e0
2026-03-09T20:46:58.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.484+0000 7fe663fff640 1 -- 192.168.123.107:0/1083726171 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe6641b5bc0 con 0x7fe6640823e0
2026-03-09T20:46:58.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.484+0000 7fe643fff640 1 -- 192.168.123.107:0/1083726171 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fe65c0348e0 con 0x7fe6640823e0
2026-03-09T20:46:58.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.484+0000 7fe663fff640 1 -- 192.168.123.107:0/1083726171 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe664072420 con 0x7fe6640823e0
2026-03-09T20:46:58.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.485+0000 7fe643fff640 1 -- 192.168.123.107:0/1083726171 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe65c03db40 con 0x7fe6640823e0
2026-03-09T20:46:58.488 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.487+0000 7fe643fff640 1 -- 192.168.123.107:0/1083726171 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7fe65c043020 con 0x7fe6640823e0
2026-03-09T20:46:58.488 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.487+0000 7fe643fff640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe63c076290 0x7fe63c078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:46:58.488 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.487+0000 7fe643fff640 1 -- 192.168.123.107:0/1083726171 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fe65c0be480 con 0x7fe6640823e0
2026-03-09T20:46:58.497 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.488+0000 7fe662ffd640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe63c076290 0x7fe63c078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:46:58.497 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.488+0000 7fe643fff640 1 -- 192.168.123.107:0/1083726171 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7fe65c087a00 con 0x7fe6640823e0
2026-03-09T20:46:58.497 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.489+0000 7fe662ffd640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe63c076290 0x7fe63c078750 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7fe654005fd0 tx=0x7fe654005950 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:46:58.661 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.660+0000 7fe663fff640 1 -- 192.168.123.107:0/1083726171 --> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fe6640762d0 con 0x7fe63c076290
2026-03-09T20:46:58.662 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.661+0000 7fe643fff640 1 -- 192.168.123.107:0/1083726171 <== mgr.14225 v2:192.168.123.107:6800/4233182156 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7fe6640762d0 con 0x7fe63c076290
2026-03-09T20:46:58.662 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T20:46:58.662 INFO:teuthology.orchestra.run.vm07.stdout:    "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-09T20:46:58.662 INFO:teuthology.orchestra.run.vm07.stdout:    "in_progress": true,
2026-03-09T20:46:58.662 INFO:teuthology.orchestra.run.vm07.stdout:    "which": "Upgrading all daemon types on all hosts",
2026-03-09T20:46:58.662 INFO:teuthology.orchestra.run.vm07.stdout:    "services_complete": [],
2026-03-09T20:46:58.662 INFO:teuthology.orchestra.run.vm07.stdout:    "progress": "0/23 daemons upgraded",
2026-03-09T20:46:58.662 INFO:teuthology.orchestra.run.vm07.stdout:    "message": "Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc image on host vm10",
2026-03-09T20:46:58.662 INFO:teuthology.orchestra.run.vm07.stdout:    "is_paused": false
2026-03-09T20:46:58.662 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 -- 192.168.123.107:0/1083726171 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe63c076290 msgr2=0x7fe63c078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe63c076290 0x7fe63c078750 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7fe654005fd0 tx=0x7fe654005950 comp rx=0 tx=0).stop
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 -- 192.168.123.107:0/1083726171 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe6640823e0 msgr2=0x7fe664082860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe6640823e0 0x7fe664082860 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7fe65c034770 tx=0x7fe65c0347a0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 -- 192.168.123.107:0/1083726171 shutdown_connections
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7fe63c076290 0x7fe63c078750 unknown :-1 s=CLOSED pgs=146 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe6640823e0 0x7fe664082860 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 --2- 192.168.123.107:0/1083726171 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe664071a50 0x7fe664083d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 -- 192.168.123.107:0/1083726171 >> 192.168.123.107:0/1083726171 conn(0x7fe66406d4f0 msgr2=0x7fe664070070 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 -- 192.168.123.107:0/1083726171 shutdown_connections
2026-03-09T20:46:58.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.665+0000 7fe641ffb640 1 -- 192.168.123.107:0/1083726171 wait complete.
2026-03-09T20:46:58.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.745+0000 7f96119f6640 1 -- 192.168.123.107:0/1092086308 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f960c072440 msgr2=0x7f960c0771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:58.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.745+0000 7f96119f6640 1 --2- 192.168.123.107:0/1092086308 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f960c072440 0x7f960c0771b0 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f960400caa0 tx=0x7f96040305a0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.745+0000 7f96119f6640 1 -- 192.168.123.107:0/1092086308 shutdown_connections
2026-03-09T20:46:58.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.745+0000 7f96119f6640 1 --2- 192.168.123.107:0/1092086308 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f960c072440 0x7f960c0771b0 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.745+0000 7f96119f6640 1 --2- 192.168.123.107:0/1092086308 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f960c071a70 0x7f960c071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.745+0000 7f96119f6640 1 -- 192.168.123.107:0/1092086308 >> 192.168.123.107:0/1092086308 conn(0x7f960c06d4f0 msgr2=0x7f960c06f930 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:46:58.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.746+0000 7f96119f6640 1 -- 192.168.123.107:0/1092086308 shutdown_connections
2026-03-09T20:46:58.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.746+0000 7f96119f6640 1 -- 192.168.123.107:0/1092086308 wait complete.
2026-03-09T20:46:58.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.746+0000 7f96119f6640 1 Processor -- start
2026-03-09T20:46:58.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.746+0000 7f96119f6640 1 -- start start
2026-03-09T20:46:58.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.746+0000 7f96119f6640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f960c071a70 0x7f960c131a20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:46:58.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.746+0000 7f96119f6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f960c131f60 0x7f960c1323e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:46:58.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.746+0000 7f96119f6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f960c1333d0 con 0x7f960c131f60
2026-03-09T20:46:58.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.746+0000 7f96119f6640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f960c133540 con 0x7f960c071a70
2026-03-09T20:46:58.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.747+0000 7f960affd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f960c071a70 0x7f960c131a20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:46:58.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.747+0000 7f960a7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f960c131f60 0x7f960c1323e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:46:58.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.747+0000 7f960a7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f960c131f60 0x7f960c1323e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:49016/0 (socket says 192.168.123.107:49016)
2026-03-09T20:46:58.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.747+0000 7f960a7fc640 1 -- 192.168.123.107:0/2239073360 learned_addr learned my addr 192.168.123.107:0/2239073360 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:46:58.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.747+0000 7f960affd640 1 -- 192.168.123.107:0/2239073360 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f960c131f60 msgr2=0x7f960c1323e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:58.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.747+0000 7f960affd640 1 --2- 192.168.123.107:0/2239073360 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f960c131f60 0x7f960c1323e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.747+0000 7f960affd640 1 -- 192.168.123.107:0/2239073360 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9604009d00 con 0x7f960c071a70
2026-03-09T20:46:58.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.748+0000 7f960affd640 1 --2- 192.168.123.107:0/2239073360 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f960c071a70 0x7f960c131a20 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f95fc00d6b0 tx=0x7f95fc00db80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:46:58.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.748+0000 7f96109f4640 1 -- 192.168.123.107:0/2239073360 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f95fc004280 con 0x7f960c071a70
2026-03-09T20:46:58.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.748+0000 7f96119f6640 1 -- 192.168.123.107:0/2239073360 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f960c07fad0 con 0x7f960c071a70
2026-03-09T20:46:58.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.748+0000 7f96119f6640 1 -- 192.168.123.107:0/2239073360 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f960c07ffa0 con 0x7f960c071a70
2026-03-09T20:46:58.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.749+0000 7f96109f4640 1 -- 192.168.123.107:0/2239073360 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f95fc004d60 con 0x7f960c071a70
2026-03-09T20:46:58.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.749+0000 7f96109f4640 1 -- 192.168.123.107:0/2239073360 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f95fc005020 con 0x7f960c071a70
2026-03-09T20:46:58.751 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.749+0000 7f96119f6640 1 -- 192.168.123.107:0/2239073360 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f95d8005350 con 0x7f960c071a70
2026-03-09T20:46:58.752 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.751+0000 7f96109f4640 1 -- 192.168.123.107:0/2239073360 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 19) v1 ==== 98480+0+0 (secure 0 0 0) 0x7f95fc00b840 con 0x7f960c071a70
2026-03-09T20:46:58.753 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.752+0000 7f96109f4640 1 --2- 192.168.123.107:0/2239073360 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f95ec076290 0x7f95ec078750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:46:58.753 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.752+0000 7f96109f4640 1 -- 192.168.123.107:0/2239073360 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(42..42 src has 1..42) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f95fc0976b0 con 0x7f960c071a70
2026-03-09T20:46:58.753 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.752+0000 7f960a7fc640 1 --2- 192.168.123.107:0/2239073360 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f95ec076290 0x7f95ec078750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:46:58.754 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.753+0000 7f960a7fc640 1 --2- 192.168.123.107:0/2239073360 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f95ec076290 0x7f95ec078750 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7f9604002790 tx=0x7f960403a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:46:58.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.755+0000 7f96109f4640 1 -- 192.168.123.107:0/2239073360 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f95fc060ce0 con 0x7f960c071a70
2026-03-09T20:46:58.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.922+0000 7f96119f6640 1 -- 192.168.123.107:0/2239073360 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f95d80051c0 con 0x7f960c071a70
2026-03-09T20:46:58.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.923+0000 7f96109f4640 1 -- 192.168.123.107:0/2239073360 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f95fc060680 con 0x7f960c071a70
2026-03-09T20:46:58.925 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.936+0000 7f96119f6640 1 -- 192.168.123.107:0/2239073360 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f95ec076290 msgr2=0x7f95ec078750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.936+0000 7f96119f6640 1 --2- 192.168.123.107:0/2239073360 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f95ec076290 0x7f95ec078750 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7f9604002790 tx=0x7f960403a040 comp rx=0 tx=0).stop
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.936+0000 7f96119f6640 1 -- 192.168.123.107:0/2239073360 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f960c071a70 msgr2=0x7f960c131a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.936+0000 7f96119f6640 1 --2- 192.168.123.107:0/2239073360 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f960c071a70 0x7f960c131a20 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f95fc00d6b0 tx=0x7f95fc00db80 comp rx=0 tx=0).stop
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.937+0000 7f96119f6640 1 -- 192.168.123.107:0/2239073360 shutdown_connections
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.937+0000 7f96119f6640 1 --2- 192.168.123.107:0/2239073360 >> [v2:192.168.123.107:6800/4233182156,v1:192.168.123.107:6801/4233182156] conn(0x7f95ec076290 0x7f95ec078750 unknown :-1 s=CLOSED pgs=147 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.937+0000 7f96119f6640 1 --2- 192.168.123.107:0/2239073360 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f960c131f60 0x7f960c1323e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.937+0000 7f96119f6640 1 --2- 192.168.123.107:0/2239073360 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f960c071a70 0x7f960c131a20 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.937+0000 7f96119f6640 1 -- 192.168.123.107:0/2239073360 >> 192.168.123.107:0/2239073360 conn(0x7f960c06d4f0 msgr2=0x7f960c0751d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.937+0000 7f96119f6640 1 -- 192.168.123.107:0/2239073360 shutdown_connections
2026-03-09T20:46:58.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:46:58.937+0000 7f96119f6640 1 -- 192.168.123.107:0/2239073360 wait complete.
2026-03-09T20:46:58.939 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:58 vm07.local ceph-mon[49120]: pgmap v139: 65 pgs: 65 active+clean; 173 MiB data, 1.5 GiB used, 119 GiB / 120 GiB avail; 3.0 MiB/s rd, 5.7 MiB/s wr, 528 op/s 2026-03-09T20:46:58.939 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:58 vm07.local ceph-mon[49120]: from='client.24413 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:58.939 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:58 vm07.local ceph-mon[49120]: from='client.14626 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:58.939 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:58 vm07.local ceph-mon[49120]: from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:58.939 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:46:58 vm07.local ceph-mon[49120]: from='client.? 
192.168.123.107:0/3843908811' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:46:59.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:58 vm10.local ceph-mon[57011]: pgmap v139: 65 pgs: 65 active+clean; 173 MiB data, 1.5 GiB used, 119 GiB / 120 GiB avail; 3.0 MiB/s rd, 5.7 MiB/s wr, 528 op/s 2026-03-09T20:46:59.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:58 vm10.local ceph-mon[57011]: from='client.24413 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:59.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:58 vm10.local ceph-mon[57011]: from='client.14626 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:59.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:58 vm10.local ceph-mon[57011]: from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:46:59.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:46:58 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/3843908811' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:47:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:00 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/3231224374' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:47:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:00 vm10.local ceph-mon[57011]: from='client.24431 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:47:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:00 vm10.local ceph-mon[57011]: from='client.? 
192.168.123.107:0/2239073360' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:47:00.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:00 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/3231224374' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:47:00.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:00 vm07.local ceph-mon[49120]: from='client.24431 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:47:00.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:00 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/2239073360' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:47:01.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:01 vm07.local ceph-mon[49120]: pgmap v140: 65 pgs: 65 active+clean; 177 MiB data, 1.5 GiB used, 118 GiB / 120 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 400 op/s 2026-03-09T20:47:01.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:01 vm10.local ceph-mon[57011]: pgmap v140: 65 pgs: 65 active+clean; 177 MiB data, 1.5 GiB used, 118 GiB / 120 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 400 op/s 2026-03-09T20:47:02.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:02 vm07.local ceph-mon[49120]: pgmap v141: 65 pgs: 65 active+clean; 180 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 422 op/s 2026-03-09T20:47:03.039 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:02 vm10.local ceph-mon[57011]: pgmap v141: 65 pgs: 65 active+clean; 180 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 422 op/s 2026-03-09T20:47:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:04 vm07.local ceph-mon[49120]: pgmap v142: 65 pgs: 65 active+clean; 184 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 407 op/s 2026-03-09T20:47:05.134 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:04 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:05.292 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:04 vm10.local ceph-mon[57011]: pgmap v142: 65 pgs: 65 active+clean; 184 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 407 op/s 2026-03-09T20:47:05.292 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:04 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:05.864 INFO:tasks.workunit.client.0.vm07.stderr:+ pushd ltp-full-20091231/testcases/kernel/fs/fsstress 2026-03-09T20:47:05.868 INFO:tasks.workunit.client.0.vm07.stdout:~/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress ~/cephtest/mnt.0/client.0/tmp/fsstress ~/cephtest/mnt.0/client.0/tmp 2026-03-09T20:47:05.868 INFO:tasks.workunit.client.0.vm07.stderr:+ make 2026-03-09T20:47:06.317 INFO:tasks.workunit.client.0.vm07.stdout:cc -DNO_XFS -I/home/ubuntu/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress -D_LARGEFILE64_SOURCE -D_GNU_SOURCE -I../../../../include -I../../../../include -L../../../../lib fsstress.c -o fsstress 2026-03-09T20:47:06.749 INFO:tasks.workunit.client.0.vm07.stderr:++ readlink -f fsstress 2026-03-09T20:47:06.751 INFO:tasks.workunit.client.0.vm07.stderr:+ BIN=/home/ubuntu/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress 2026-03-09T20:47:06.751 INFO:tasks.workunit.client.0.vm07.stderr:+ popd 2026-03-09T20:47:06.752 INFO:tasks.workunit.client.0.vm07.stdout:~/cephtest/mnt.0/client.0/tmp/fsstress ~/cephtest/mnt.0/client.0/tmp 2026-03-09T20:47:06.752 INFO:tasks.workunit.client.0.vm07.stderr:+ popd 2026-03-09T20:47:06.753 
INFO:tasks.workunit.client.0.vm07.stdout:~/cephtest/mnt.0/client.0/tmp 2026-03-09T20:47:06.753 INFO:tasks.workunit.client.0.vm07.stderr:++ mktemp -d -p . 2026-03-09T20:47:06.759 INFO:tasks.workunit.client.0.vm07.stderr:+ T=./tmp.plnCPV9zAr 2026-03-09T20:47:06.759 INFO:tasks.workunit.client.0.vm07.stderr:+ /home/ubuntu/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress -d ./tmp.plnCPV9zAr -l 1 -n 1000 -p 10 -v 2026-03-09T20:47:06.766 INFO:tasks.workunit.client.0.vm07.stdout:seed = 1772389115 2026-03-09T20:47:06.773 INFO:tasks.workunit.client.0.vm07.stdout:0/0: write - no filename 2026-03-09T20:47:06.773 INFO:tasks.workunit.client.0.vm07.stdout:0/1: dwrite - no filename 2026-03-09T20:47:06.774 INFO:tasks.workunit.client.0.vm07.stdout:7/0: dread - no filename 2026-03-09T20:47:06.781 INFO:tasks.workunit.client.0.vm07.stdout:5/0: fsync - no filename 2026-03-09T20:47:06.781 INFO:tasks.workunit.client.0.vm07.stdout:7/1: getdents . 0 2026-03-09T20:47:06.781 INFO:tasks.workunit.client.0.vm07.stdout:0/2: creat f0 x:0 0 0 2026-03-09T20:47:06.784 INFO:tasks.workunit.client.0.vm07.stdout:6/0: symlink l0 0 2026-03-09T20:47:06.784 INFO:tasks.workunit.client.0.vm07.stdout:6/1: read - no filename 2026-03-09T20:47:06.786 INFO:tasks.workunit.client.0.vm07.stdout:8/0: chown . 
49111 1 2026-03-09T20:47:06.786 INFO:tasks.workunit.client.0.vm07.stdout:8/1: write - no filename 2026-03-09T20:47:06.786 INFO:tasks.workunit.client.0.vm07.stdout:8/2: fsync - no filename 2026-03-09T20:47:06.787 INFO:tasks.workunit.client.0.vm07.stdout:0/3: mkdir d1 0 2026-03-09T20:47:06.788 INFO:tasks.workunit.client.0.vm07.stdout:0/4: stat f0 0 2026-03-09T20:47:06.788 INFO:tasks.workunit.client.0.vm07.stdout:0/5: dread - f0 zero size 2026-03-09T20:47:06.794 INFO:tasks.workunit.client.0.vm07.stdout:6/2: unlink l0 0 2026-03-09T20:47:06.794 INFO:tasks.workunit.client.0.vm07.stdout:6/3: dwrite - no filename 2026-03-09T20:47:06.796 INFO:tasks.workunit.client.0.vm07.stdout:5/1: symlink l0 0 2026-03-09T20:47:06.796 INFO:tasks.workunit.client.0.vm07.stdout:5/2: dwrite - no filename 2026-03-09T20:47:06.796 INFO:tasks.workunit.client.0.vm07.stdout:5/3: dread - no filename 2026-03-09T20:47:06.796 INFO:tasks.workunit.client.0.vm07.stdout:5/4: dwrite - no filename 2026-03-09T20:47:06.797 INFO:tasks.workunit.client.0.vm07.stdout:7/2: mknod c0 0 2026-03-09T20:47:06.797 INFO:tasks.workunit.client.0.vm07.stdout:9/0: mknod c0 0 2026-03-09T20:47:06.797 INFO:tasks.workunit.client.0.vm07.stdout:9/1: dread - no filename 2026-03-09T20:47:06.800 INFO:tasks.workunit.client.0.vm07.stdout:5/5: symlink l1 0 2026-03-09T20:47:06.800 INFO:tasks.workunit.client.0.vm07.stdout:7/3: symlink l1 0 2026-03-09T20:47:06.800 INFO:tasks.workunit.client.0.vm07.stdout:7/4: write - no filename 2026-03-09T20:47:06.800 INFO:tasks.workunit.client.0.vm07.stdout:7/5: dread - no filename 2026-03-09T20:47:06.800 INFO:tasks.workunit.client.0.vm07.stdout:7/6: dread - no filename 2026-03-09T20:47:06.800 INFO:tasks.workunit.client.0.vm07.stdout:7/7: write - no filename 2026-03-09T20:47:06.800 INFO:tasks.workunit.client.0.vm07.stdout:7/8: dwrite - no filename 2026-03-09T20:47:06.800 INFO:tasks.workunit.client.0.vm07.stdout:9/2: mknod c1 0 2026-03-09T20:47:06.800 INFO:tasks.workunit.client.0.vm07.stdout:9/3: truncate - 
no filename 2026-03-09T20:47:06.801 INFO:tasks.workunit.client.0.vm07.stdout:9/4: stat c0 0 2026-03-09T20:47:06.801 INFO:tasks.workunit.client.0.vm07.stdout:9/5: dwrite - no filename 2026-03-09T20:47:06.801 INFO:tasks.workunit.client.0.vm07.stdout:6/4: symlink l1 0 2026-03-09T20:47:06.803 INFO:tasks.workunit.client.0.vm07.stdout:8/3: symlink l0 0 2026-03-09T20:47:06.803 INFO:tasks.workunit.client.0.vm07.stdout:8/4: dwrite - no filename 2026-03-09T20:47:06.803 INFO:tasks.workunit.client.0.vm07.stdout:8/5: chown l0 1168541176 1 2026-03-09T20:47:06.803 INFO:tasks.workunit.client.0.vm07.stdout:8/6: write - no filename 2026-03-09T20:47:06.806 INFO:tasks.workunit.client.0.vm07.stdout:5/6: mkdir d2 0 2026-03-09T20:47:06.806 INFO:tasks.workunit.client.0.vm07.stdout:9/6: creat f2 x:0 0 0 2026-03-09T20:47:06.806 INFO:tasks.workunit.client.0.vm07.stdout:2/0: rename - no filename 2026-03-09T20:47:06.812 INFO:tasks.workunit.client.0.vm07.stdout:4/0: creat f0 x:0 0 0 2026-03-09T20:47:06.812 INFO:tasks.workunit.client.0.vm07.stdout:4/1: rmdir - no directory 2026-03-09T20:47:06.812 INFO:tasks.workunit.client.0.vm07.stdout:6/5: creat f2 x:0 0 0 2026-03-09T20:47:06.812 INFO:tasks.workunit.client.0.vm07.stdout:8/7: mkdir d1 0 2026-03-09T20:47:06.812 INFO:tasks.workunit.client.0.vm07.stdout:6/6: chown f2 241968447 1 2026-03-09T20:47:06.813 INFO:tasks.workunit.client.0.vm07.stdout:4/2: dread - f0 zero size 2026-03-09T20:47:06.813 INFO:tasks.workunit.client.0.vm07.stdout:4/3: read - f0 zero size 2026-03-09T20:47:06.813 INFO:tasks.workunit.client.0.vm07.stdout:4/4: write f0 [550652,12356] 0 2026-03-09T20:47:06.817 INFO:tasks.workunit.client.0.vm07.stdout:9/7: symlink l3 0 2026-03-09T20:47:06.817 INFO:tasks.workunit.client.0.vm07.stdout:3/0: getdents . 
0 2026-03-09T20:47:06.820 INFO:tasks.workunit.client.0.vm07.stdout:6/7: creat f3 x:0 0 0 2026-03-09T20:47:06.827 INFO:tasks.workunit.client.0.vm07.stdout:6/8: dwrite f2 [0,4194304] 0 2026-03-09T20:47:06.954 INFO:tasks.workunit.client.0.vm07.stdout:8/8: rename l0 to d1/l2 0 2026-03-09T20:47:06.954 INFO:tasks.workunit.client.0.vm07.stdout:8/9: dwrite - no filename 2026-03-09T20:47:06.958 INFO:tasks.workunit.client.0.vm07.stdout:4/5: creat f1 x:0 0 0 2026-03-09T20:47:06.959 INFO:tasks.workunit.client.0.vm07.stdout:2/1: creat f0 x:0 0 0 2026-03-09T20:47:06.960 INFO:tasks.workunit.client.0.vm07.stdout:2/2: write f0 [703578,12381] 0 2026-03-09T20:47:06.964 INFO:tasks.workunit.client.0.vm07.stdout:2/3: dwrite f0 [0,4194304] 0 2026-03-09T20:47:06.967 INFO:tasks.workunit.client.0.vm07.stdout:2/4: write f0 [928093,47225] 0 2026-03-09T20:47:06.969 INFO:tasks.workunit.client.0.vm07.stdout:9/8: unlink c1 0 2026-03-09T20:47:06.975 INFO:tasks.workunit.client.0.vm07.stdout:9/9: dwrite f2 [0,4194304] 0 2026-03-09T20:47:07.003 INFO:tasks.workunit.client.0.vm07.stdout:6/9: rename f2 to f4 0 2026-03-09T20:47:07.004 INFO:tasks.workunit.client.0.vm07.stdout:6/10: chown l1 513438 1 2026-03-09T20:47:07.005 INFO:tasks.workunit.client.0.vm07.stdout:5/7: rmdir d2 0 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/8: chown l1 44324588 1 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/9: fdatasync - no filename 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/10: dread - no filename 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/11: dwrite - no filename 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/12: fsync - no filename 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/13: dwrite - no filename 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/14: write - no filename 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/15: dwrite - no filename 
2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/16: dread - no filename 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/17: dwrite - no filename 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/18: readlink l0 0 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/19: readlink l0 0 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/20: read - no filename 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/21: dwrite - no filename 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/22: chown l1 62476 1 2026-03-09T20:47:07.006 INFO:tasks.workunit.client.0.vm07.stdout:5/23: dread - no filename 2026-03-09T20:47:07.008 INFO:tasks.workunit.client.0.vm07.stdout:4/6: mkdir d2 0 2026-03-09T20:47:07.009 INFO:tasks.workunit.client.0.vm07.stdout:4/7: write f1 [237716,118600] 0 2026-03-09T20:47:07.012 INFO:tasks.workunit.client.0.vm07.stdout:1/0: mkdir d0 0 2026-03-09T20:47:07.012 INFO:tasks.workunit.client.0.vm07.stdout:1/1: truncate - no filename 2026-03-09T20:47:07.012 INFO:tasks.workunit.client.0.vm07.stdout:1/2: unlink - no file 2026-03-09T20:47:07.012 INFO:tasks.workunit.client.0.vm07.stdout:1/3: dwrite - no filename 2026-03-09T20:47:07.013 INFO:tasks.workunit.client.0.vm07.stdout:9/10: mkdir d4 0 2026-03-09T20:47:07.015 INFO:tasks.workunit.client.0.vm07.stdout:3/1: mknod c0 0 2026-03-09T20:47:07.015 INFO:tasks.workunit.client.0.vm07.stdout:3/2: dread - no filename 2026-03-09T20:47:07.018 INFO:tasks.workunit.client.0.vm07.stdout:4/8: dwrite f1 [0,4194304] 0 2026-03-09T20:47:07.018 INFO:tasks.workunit.client.0.vm07.stdout:4/9: readlink - no filename 2026-03-09T20:47:07.138 INFO:tasks.workunit.client.0.vm07.stdout:8/10: chown d1/l2 136843 1 2026-03-09T20:47:07.138 INFO:tasks.workunit.client.0.vm07.stdout:8/11: chown d1 4765225 1 2026-03-09T20:47:07.138 INFO:tasks.workunit.client.0.vm07.stdout:8/12: dread - no filename 2026-03-09T20:47:07.138 
INFO:tasks.workunit.client.0.vm07.stdout:8/13: truncate - no filename 2026-03-09T20:47:07.138 INFO:tasks.workunit.client.0.vm07.stdout:8/14: dread - no filename 2026-03-09T20:47:07.138 INFO:tasks.workunit.client.0.vm07.stdout:8/15: write - no filename 2026-03-09T20:47:07.143 INFO:tasks.workunit.client.0.vm07.stdout:2/5: write f0 [4277229,76452] 0 2026-03-09T20:47:07.147 INFO:tasks.workunit.client.0.vm07.stdout:2/6: dwrite f0 [4194304,4194304] 0 2026-03-09T20:47:07.154 INFO:tasks.workunit.client.0.vm07.stdout:2/7: dwrite f0 [4194304,4194304] 0 2026-03-09T20:47:07.189 INFO:tasks.workunit.client.0.vm07.stdout:5/24: mkdir d3 0 2026-03-09T20:47:07.190 INFO:tasks.workunit.client.0.vm07.stdout:5/25: dwrite - no filename 2026-03-09T20:47:07.192 INFO:tasks.workunit.client.0.vm07.stdout:3/3: mkdir d1 0 2026-03-09T20:47:07.194 INFO:tasks.workunit.client.0.vm07.stdout:8/16: rename d1/l2 to d1/l3 0 2026-03-09T20:47:07.196 INFO:tasks.workunit.client.0.vm07.stdout:9/11: creat d4/f5 x:0 0 0 2026-03-09T20:47:07.197 INFO:tasks.workunit.client.0.vm07.stdout:1/4: rmdir d0 0 2026-03-09T20:47:07.198 INFO:tasks.workunit.client.0.vm07.stdout:1/5: stat - no entries 2026-03-09T20:47:07.198 INFO:tasks.workunit.client.0.vm07.stdout:3/4: getdents d1 0 2026-03-09T20:47:07.199 INFO:tasks.workunit.client.0.vm07.stdout:3/5: rename d1 to d1/d2 22 2026-03-09T20:47:07.199 INFO:tasks.workunit.client.0.vm07.stdout:3/6: truncate - no filename 2026-03-09T20:47:07.199 INFO:tasks.workunit.client.0.vm07.stdout:3/7: dwrite - no filename 2026-03-09T20:47:07.199 INFO:tasks.workunit.client.0.vm07.stdout:3/8: dwrite - no filename 2026-03-09T20:47:07.200 INFO:tasks.workunit.client.0.vm07.stdout:9/12: creat d4/f6 x:0 0 0 2026-03-09T20:47:07.201 INFO:tasks.workunit.client.0.vm07.stdout:5/26: rmdir d3 0 2026-03-09T20:47:07.202 INFO:tasks.workunit.client.0.vm07.stdout:1/6: mknod c1 0 2026-03-09T20:47:07.203 INFO:tasks.workunit.client.0.vm07.stdout:9/13: mknod d4/c7 0 2026-03-09T20:47:07.204 
INFO:tasks.workunit.client.0.vm07.stdout:5/27: mknod c4 0 2026-03-09T20:47:07.204 INFO:tasks.workunit.client.0.vm07.stdout:5/28: dwrite - no filename 2026-03-09T20:47:07.204 INFO:tasks.workunit.client.0.vm07.stdout:5/29: write - no filename 2026-03-09T20:47:07.204 INFO:tasks.workunit.client.0.vm07.stdout:5/30: dread - no filename 2026-03-09T20:47:07.204 INFO:tasks.workunit.client.0.vm07.stdout:5/31: truncate - no filename 2026-03-09T20:47:07.204 INFO:tasks.workunit.client.0.vm07.stdout:5/32: rmdir - no directory 2026-03-09T20:47:07.204 INFO:tasks.workunit.client.0.vm07.stdout:5/33: dwrite - no filename 2026-03-09T20:47:07.205 INFO:tasks.workunit.client.0.vm07.stdout:1/7: creat f2 x:0 0 0 2026-03-09T20:47:07.205 INFO:tasks.workunit.client.0.vm07.stdout:3/9: creat d1/f3 x:0 0 0 2026-03-09T20:47:07.215 INFO:tasks.workunit.client.0.vm07.stdout:3/10: dwrite d1/f3 [0,4194304] 0 2026-03-09T20:47:07.223 INFO:tasks.workunit.client.0.vm07.stdout:9/14: mkdir d4/d8 0 2026-03-09T20:47:07.223 INFO:tasks.workunit.client.0.vm07.stdout:5/34: mkdir d5 0 2026-03-09T20:47:07.223 INFO:tasks.workunit.client.0.vm07.stdout:5/35: dread - no filename 2026-03-09T20:47:07.223 INFO:tasks.workunit.client.0.vm07.stdout:1/8: unlink c1 0 2026-03-09T20:47:07.225 INFO:tasks.workunit.client.0.vm07.stdout:9/15: symlink d4/l9 0 2026-03-09T20:47:07.227 INFO:tasks.workunit.client.0.vm07.stdout:5/36: rename l0 to d5/l6 0 2026-03-09T20:47:07.227 INFO:tasks.workunit.client.0.vm07.stdout:5/37: dread - no filename 2026-03-09T20:47:07.230 INFO:tasks.workunit.client.0.vm07.stdout:1/9: mkdir d3 0 2026-03-09T20:47:07.231 INFO:tasks.workunit.client.0.vm07.stdout:9/16: dwrite d4/f5 [0,4194304] 0 2026-03-09T20:47:07.236 INFO:tasks.workunit.client.0.vm07.stdout:9/17: creat d4/fa x:0 0 0 2026-03-09T20:47:07.254 INFO:tasks.workunit.client.0.vm07.stdout:1/10: creat d3/f4 x:0 0 0 2026-03-09T20:47:07.254 INFO:tasks.workunit.client.0.vm07.stdout:1/11: creat d3/f5 x:0 0 0 2026-03-09T20:47:07.254 
INFO:tasks.workunit.client.0.vm07.stdout:9/18: creat d4/d8/fb x:0 0 0 2026-03-09T20:47:07.254 INFO:tasks.workunit.client.0.vm07.stdout:1/12: dwrite f2 [0,4194304] 0 2026-03-09T20:47:07.254 INFO:tasks.workunit.client.0.vm07.stdout:1/13: dread - d3/f5 zero size 2026-03-09T20:47:07.255 INFO:tasks.workunit.client.0.vm07.stdout:9/19: mkdir d4/d8/dc 0 2026-03-09T20:47:07.256 INFO:tasks.workunit.client.0.vm07.stdout:1/14: symlink d3/l6 0 2026-03-09T20:47:07.258 INFO:tasks.workunit.client.0.vm07.stdout:1/15: dread f2 [0,4194304] 0 2026-03-09T20:47:07.278 INFO:tasks.workunit.client.0.vm07.stdout:1/16: creat d3/f7 x:0 0 0 2026-03-09T20:47:07.278 INFO:tasks.workunit.client.0.vm07.stdout:1/17: write d3/f4 [658971,62780] 0 2026-03-09T20:47:07.278 INFO:tasks.workunit.client.0.vm07.stdout:1/18: write d3/f4 [370355,58902] 0 2026-03-09T20:47:07.278 INFO:tasks.workunit.client.0.vm07.stdout:1/19: chown d3/l6 34601 1 2026-03-09T20:47:07.278 INFO:tasks.workunit.client.0.vm07.stdout:1/20: dread - d3/f5 zero size 2026-03-09T20:47:07.278 INFO:tasks.workunit.client.0.vm07.stdout:1/21: write d3/f4 [1336547,4797] 0 2026-03-09T20:47:07.278 INFO:tasks.workunit.client.0.vm07.stdout:1/22: creat d3/f8 x:0 0 0 2026-03-09T20:47:07.278 INFO:tasks.workunit.client.0.vm07.stdout:1/23: dread - d3/f8 zero size 2026-03-09T20:47:07.579 INFO:tasks.workunit.client.0.vm07.stdout:1/24: dread d3/f4 [0,4194304] 0 2026-03-09T20:47:07.581 INFO:tasks.workunit.client.0.vm07.stdout:1/25: creat d3/f9 x:0 0 0 2026-03-09T20:47:07.615 INFO:tasks.workunit.client.0.vm07.stdout:4/10: write f1 [4786051,95040] 0 2026-03-09T20:47:07.620 INFO:tasks.workunit.client.0.vm07.stdout:4/11: write f1 [3434112,91844] 0 2026-03-09T20:47:07.620 INFO:tasks.workunit.client.0.vm07.stdout:4/12: dwrite f1 [0,4194304] 0 2026-03-09T20:47:07.632 INFO:tasks.workunit.client.0.vm07.stdout:4/13: link f0 d2/f3 0 2026-03-09T20:47:07.646 INFO:tasks.workunit.client.0.vm07.stdout:4/14: link f1 d2/f4 0 2026-03-09T20:47:07.646 
INFO:tasks.workunit.client.0.vm07.stdout:4/15: read f0 [321477,130023] 0 2026-03-09T20:47:07.646 INFO:tasks.workunit.client.0.vm07.stdout:4/16: chown d2/f3 365 1 2026-03-09T20:47:07.646 INFO:tasks.workunit.client.0.vm07.stdout:4/17: creat d2/f5 x:0 0 0 2026-03-09T20:47:07.646 INFO:tasks.workunit.client.0.vm07.stdout:5/38: getdents d5 0 2026-03-09T20:47:07.646 INFO:tasks.workunit.client.0.vm07.stdout:5/39: rmdir d5 39 2026-03-09T20:47:07.646 INFO:tasks.workunit.client.0.vm07.stdout:5/40: write - no filename 2026-03-09T20:47:07.646 INFO:tasks.workunit.client.0.vm07.stdout:5/41: chown c4 126 1 2026-03-09T20:47:07.646 INFO:tasks.workunit.client.0.vm07.stdout:5/42: getdents d5 0 2026-03-09T20:47:07.649 INFO:tasks.workunit.client.0.vm07.stdout:5/43: link c4 d5/c7 0 2026-03-09T20:47:07.649 INFO:tasks.workunit.client.0.vm07.stdout:5/44: dwrite - no filename 2026-03-09T20:47:07.649 INFO:tasks.workunit.client.0.vm07.stdout:5/45: write - no filename 2026-03-09T20:47:07.734 INFO:tasks.workunit.client.0.vm07.stdout:4/18: truncate f0 1173091 0 2026-03-09T20:47:07.736 INFO:tasks.workunit.client.0.vm07.stdout:4/19: rename d2/f4 to d2/f6 0 2026-03-09T20:47:07.737 INFO:tasks.workunit.client.0.vm07.stdout:4/20: write d2/f6 [5153031,57182] 0 2026-03-09T20:47:07.762 INFO:tasks.workunit.client.0.vm07.stdout:4/21: dread f0 [0,4194304] 0 2026-03-09T20:47:07.763 INFO:tasks.workunit.client.0.vm07.stdout:4/22: write d2/f6 [4936813,85513] 0 2026-03-09T20:47:07.767 INFO:tasks.workunit.client.0.vm07.stdout:4/23: rmdir d2 39 2026-03-09T20:47:07.768 INFO:tasks.workunit.client.0.vm07.stdout:4/24: creat d2/f7 x:0 0 0 2026-03-09T20:47:07.770 INFO:tasks.workunit.client.0.vm07.stdout:4/25: mknod d2/c8 0 2026-03-09T20:47:07.771 INFO:tasks.workunit.client.0.vm07.stdout:4/26: creat d2/f9 x:0 0 0 2026-03-09T20:47:07.975 INFO:tasks.workunit.client.0.vm07.stdout:3/11: sync 2026-03-09T20:47:07.975 INFO:tasks.workunit.client.0.vm07.stdout:1/26: sync 2026-03-09T20:47:07.975 
INFO:tasks.workunit.client.0.vm07.stdout:8/17: sync 2026-03-09T20:47:07.975 INFO:tasks.workunit.client.0.vm07.stdout:7/9: sync 2026-03-09T20:47:07.975 INFO:tasks.workunit.client.0.vm07.stdout:6/11: sync 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:5/46: sync 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:9/20: sync 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:2/8: sync 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:8/18: truncate - no filename 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:8/19: write - no filename 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:8/20: dwrite - no filename 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:8/21: dwrite - no filename 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:8/22: write - no filename 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:8/23: dread - no filename 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:5/47: chown l1 2 1 2026-03-09T20:47:07.976 INFO:tasks.workunit.client.0.vm07.stdout:0/6: sync 2026-03-09T20:47:07.978 INFO:tasks.workunit.client.0.vm07.stdout:7/10: stat l1 0 2026-03-09T20:47:07.978 INFO:tasks.workunit.client.0.vm07.stdout:7/11: write - no filename 2026-03-09T20:47:07.978 INFO:tasks.workunit.client.0.vm07.stdout:7/12: truncate - no filename 2026-03-09T20:47:07.979 INFO:tasks.workunit.client.0.vm07.stdout:6/12: truncate f3 192233 0 2026-03-09T20:47:07.979 INFO:tasks.workunit.client.0.vm07.stdout:0/7: chown f0 10361 1 2026-03-09T20:47:07.980 INFO:tasks.workunit.client.0.vm07.stdout:3/12: read d1/f3 [903079,28465] 0 2026-03-09T20:47:07.985 INFO:tasks.workunit.client.0.vm07.stdout:1/27: dread d3/f4 [0,4194304] 0 2026-03-09T20:47:07.987 INFO:tasks.workunit.client.0.vm07.stdout:2/9: dread f0 [0,4194304] 0 2026-03-09T20:47:07.987 INFO:tasks.workunit.client.0.vm07.stdout:6/13: dread f4 [0,4194304] 0 2026-03-09T20:47:07.987 
INFO:tasks.workunit.client.0.vm07.stdout:6/14: fdatasync f3 0 2026-03-09T20:47:07.992 INFO:tasks.workunit.client.0.vm07.stdout:2/10: dread f0 [0,4194304] 0 2026-03-09T20:47:07.993 INFO:tasks.workunit.client.0.vm07.stdout:4/27: write f0 [2145569,62210] 0 2026-03-09T20:47:07.994 INFO:tasks.workunit.client.0.vm07.stdout:4/28: chown d2/f5 1715974 1 2026-03-09T20:47:07.996 INFO:tasks.workunit.client.0.vm07.stdout:4/29: truncate d2/f5 707811 0 2026-03-09T20:47:07.997 INFO:tasks.workunit.client.0.vm07.stdout:5/48: rename d5/l6 to d5/l8 0 2026-03-09T20:47:08.012 INFO:tasks.workunit.client.0.vm07.stdout:9/21: sync 2026-03-09T20:47:08.012 INFO:tasks.workunit.client.0.vm07.stdout:8/24: symlink d1/l4 0 2026-03-09T20:47:08.013 INFO:tasks.workunit.client.0.vm07.stdout:8/25: chown d1 0 1 2026-03-09T20:47:08.013 INFO:tasks.workunit.client.0.vm07.stdout:8/26: write - no filename 2026-03-09T20:47:08.014 INFO:tasks.workunit.client.0.vm07.stdout:9/22: write f2 [2687916,24024] 0 2026-03-09T20:47:08.023 INFO:tasks.workunit.client.0.vm07.stdout:4/30: dread d2/f6 [4194304,4194304] 0 2026-03-09T20:47:08.037 INFO:tasks.workunit.client.0.vm07.stdout:4/31: dwrite d2/f7 [0,4194304] 0 2026-03-09T20:47:08.038 INFO:tasks.workunit.client.0.vm07.stdout:4/32: write d2/f5 [224522,103373] 0 2026-03-09T20:47:08.039 INFO:tasks.workunit.client.0.vm07.stdout:4/33: chown d2/f6 31875 1 2026-03-09T20:47:08.051 INFO:tasks.workunit.client.0.vm07.stdout:3/13: rmdir d1 39 2026-03-09T20:47:08.051 INFO:tasks.workunit.client.0.vm07.stdout:6/15: rename f4 to f5 0 2026-03-09T20:47:08.052 INFO:tasks.workunit.client.0.vm07.stdout:5/49: creat d5/f9 x:0 0 0 2026-03-09T20:47:08.052 INFO:tasks.workunit.client.0.vm07.stdout:4/34: dread f1 [0,4194304] 0 2026-03-09T20:47:08.056 INFO:tasks.workunit.client.0.vm07.stdout:4/35: write d2/f5 [631032,36359] 0 2026-03-09T20:47:08.058 INFO:tasks.workunit.client.0.vm07.stdout:4/36: write d2/f3 [166937,33031] 0 2026-03-09T20:47:08.263 INFO:tasks.workunit.client.0.vm07.stdout:8/27: mknod 
d1/c5 0 2026-03-09T20:47:08.264 INFO:tasks.workunit.client.0.vm07.stdout:9/23: creat d4/d8/fd x:0 0 0 2026-03-09T20:47:08.264 INFO:tasks.workunit.client.0.vm07.stdout:8/28: stat d1/l4 0 2026-03-09T20:47:08.264 INFO:tasks.workunit.client.0.vm07.stdout:0/8: mkdir d1/d2 0 2026-03-09T20:47:08.264 INFO:tasks.workunit.client.0.vm07.stdout:3/14: stat d1/f3 0 2026-03-09T20:47:08.265 INFO:tasks.workunit.client.0.vm07.stdout:0/9: write f0 [809021,106104] 0 2026-03-09T20:47:08.265 INFO:tasks.workunit.client.0.vm07.stdout:3/15: chown d1/f3 843467 1 2026-03-09T20:47:08.278 INFO:tasks.workunit.client.0.vm07.stdout:1/28: link d3/f9 d3/fa 0 2026-03-09T20:47:08.278 INFO:tasks.workunit.client.0.vm07.stdout:2/11: unlink f0 0 2026-03-09T20:47:08.278 INFO:tasks.workunit.client.0.vm07.stdout:2/12: stat - no entries 2026-03-09T20:47:08.279 INFO:tasks.workunit.client.0.vm07.stdout:5/50: chown c4 29248 1 2026-03-09T20:47:08.280 INFO:tasks.workunit.client.0.vm07.stdout:5/51: write d5/f9 [960841,110442] 0 2026-03-09T20:47:08.283 INFO:tasks.workunit.client.0.vm07.stdout:9/24: symlink d4/d8/le 0 2026-03-09T20:47:08.284 INFO:tasks.workunit.client.0.vm07.stdout:1/29: dwrite d3/f5 [0,4194304] 0 2026-03-09T20:47:08.284 INFO:tasks.workunit.client.0.vm07.stdout:9/25: truncate d4/f6 802750 0 2026-03-09T20:47:08.285 INFO:tasks.workunit.client.0.vm07.stdout:9/26: readlink l3 0 2026-03-09T20:47:08.285 INFO:tasks.workunit.client.0.vm07.stdout:3/16: creat d1/f4 x:0 0 0 2026-03-09T20:47:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:08 vm10.local ceph-mon[57011]: pgmap v143: 65 pgs: 65 active+clean; 193 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 352 op/s 2026-03-09T20:47:08.290 INFO:tasks.workunit.client.0.vm07.stdout:9/27: write d4/d8/fd [140925,131020] 0 2026-03-09T20:47:08.290 INFO:tasks.workunit.client.0.vm07.stdout:9/28: chown d4/d8/le 121274605 1 2026-03-09T20:47:08.290 INFO:tasks.workunit.client.0.vm07.stdout:2/13: creat f1 x:0 0 0 
2026-03-09T20:47:08.290 INFO:tasks.workunit.client.0.vm07.stdout:6/16: link f5 f6 0 2026-03-09T20:47:08.291 INFO:tasks.workunit.client.0.vm07.stdout:2/14: dread - f1 zero size 2026-03-09T20:47:08.291 INFO:tasks.workunit.client.0.vm07.stdout:6/17: truncate f3 415847 0 2026-03-09T20:47:08.295 INFO:tasks.workunit.client.0.vm07.stdout:7/13: getdents . 0 2026-03-09T20:47:08.299 INFO:tasks.workunit.client.0.vm07.stdout:0/10: mknod d1/d2/c3 0 2026-03-09T20:47:08.299 INFO:tasks.workunit.client.0.vm07.stdout:0/11: rename d1 to d1/d2/d4 22 2026-03-09T20:47:08.300 INFO:tasks.workunit.client.0.vm07.stdout:0/12: write f0 [395926,21780] 0 2026-03-09T20:47:08.301 INFO:tasks.workunit.client.0.vm07.stdout:0/13: write f0 [720156,20880] 0 2026-03-09T20:47:08.304 INFO:tasks.workunit.client.0.vm07.stdout:3/17: mkdir d1/d5 0 2026-03-09T20:47:08.307 INFO:tasks.workunit.client.0.vm07.stdout:2/15: mkdir d2 0 2026-03-09T20:47:08.313 INFO:tasks.workunit.client.0.vm07.stdout:5/52: dread d5/f9 [0,4194304] 0 2026-03-09T20:47:08.320 INFO:tasks.workunit.client.0.vm07.stdout:6/18: write f6 [1646001,30662] 0 2026-03-09T20:47:08.322 INFO:tasks.workunit.client.0.vm07.stdout:6/19: dread f5 [0,4194304] 0 2026-03-09T20:47:08.325 INFO:tasks.workunit.client.0.vm07.stdout:6/20: dread f6 [0,4194304] 0 2026-03-09T20:47:08.331 INFO:tasks.workunit.client.0.vm07.stdout:9/29: unlink d4/c7 0 2026-03-09T20:47:08.338 INFO:tasks.workunit.client.0.vm07.stdout:0/14: dread f0 [0,4194304] 0 2026-03-09T20:47:08.338 INFO:tasks.workunit.client.0.vm07.stdout:0/15: rename d1 to d1/d5 22 2026-03-09T20:47:08.349 INFO:tasks.workunit.client.0.vm07.stdout:7/14: mknod c2 0 2026-03-09T20:47:08.349 INFO:tasks.workunit.client.0.vm07.stdout:7/15: truncate - no filename 2026-03-09T20:47:08.363 INFO:tasks.workunit.client.0.vm07.stdout:3/18: symlink d1/l6 0 2026-03-09T20:47:08.367 INFO:tasks.workunit.client.0.vm07.stdout:5/53: write d5/f9 [806031,39619] 0 2026-03-09T20:47:08.367 INFO:tasks.workunit.client.0.vm07.stdout:5/54: stat d5/c7 0 
2026-03-09T20:47:08.373 INFO:tasks.workunit.client.0.vm07.stdout:5/55: dwrite d5/f9 [0,4194304] 0 2026-03-09T20:47:08.377 INFO:tasks.workunit.client.0.vm07.stdout:6/21: unlink f3 0 2026-03-09T20:47:08.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:08 vm07.local ceph-mon[49120]: pgmap v143: 65 pgs: 65 active+clean; 193 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 352 op/s 2026-03-09T20:47:08.407 INFO:tasks.workunit.client.0.vm07.stdout:3/19: symlink d1/l7 0 2026-03-09T20:47:08.408 INFO:tasks.workunit.client.0.vm07.stdout:2/16: link f1 d2/f3 0 2026-03-09T20:47:08.408 INFO:tasks.workunit.client.0.vm07.stdout:5/56: symlink d5/la 0 2026-03-09T20:47:08.408 INFO:tasks.workunit.client.0.vm07.stdout:3/20: chown d1/f3 338 1 2026-03-09T20:47:08.409 INFO:tasks.workunit.client.0.vm07.stdout:9/30: link d4/d8/fd d4/d8/dc/ff 0 2026-03-09T20:47:08.409 INFO:tasks.workunit.client.0.vm07.stdout:9/31: chown d4/l9 8721192 1 2026-03-09T20:47:08.410 INFO:tasks.workunit.client.0.vm07.stdout:7/16: mkdir d3 0 2026-03-09T20:47:08.411 INFO:tasks.workunit.client.0.vm07.stdout:3/21: dread d1/f3 [0,4194304] 0 2026-03-09T20:47:08.411 INFO:tasks.workunit.client.0.vm07.stdout:3/22: dread - d1/f4 zero size 2026-03-09T20:47:08.415 INFO:tasks.workunit.client.0.vm07.stdout:6/22: dread f6 [0,4194304] 0 2026-03-09T20:47:08.421 INFO:tasks.workunit.client.0.vm07.stdout:2/17: unlink d2/f3 0 2026-03-09T20:47:08.421 INFO:tasks.workunit.client.0.vm07.stdout:3/23: symlink d1/l8 0 2026-03-09T20:47:08.422 INFO:tasks.workunit.client.0.vm07.stdout:3/24: dread - d1/f4 zero size 2026-03-09T20:47:08.422 INFO:tasks.workunit.client.0.vm07.stdout:6/23: write f6 [591329,13744] 0 2026-03-09T20:47:08.423 INFO:tasks.workunit.client.0.vm07.stdout:3/25: chown d1/f3 27257 1 2026-03-09T20:47:08.424 INFO:tasks.workunit.client.0.vm07.stdout:6/24: write f5 [1335482,100673] 0 2026-03-09T20:47:08.430 INFO:tasks.workunit.client.0.vm07.stdout:5/57: unlink d5/l8 0 2026-03-09T20:47:08.431 
INFO:tasks.workunit.client.0.vm07.stdout:3/26: dwrite d1/f4 [0,4194304] 0 2026-03-09T20:47:08.435 INFO:tasks.workunit.client.0.vm07.stdout:9/32: link d4/d8/dc/ff d4/f10 0 2026-03-09T20:47:08.436 INFO:tasks.workunit.client.0.vm07.stdout:2/18: creat d2/f4 x:0 0 0 2026-03-09T20:47:08.436 INFO:tasks.workunit.client.0.vm07.stdout:9/33: stat d4/d8/fb 0 2026-03-09T20:47:08.436 INFO:tasks.workunit.client.0.vm07.stdout:2/19: fdatasync d2/f4 0 2026-03-09T20:47:08.437 INFO:tasks.workunit.client.0.vm07.stdout:2/20: dread - d2/f4 zero size 2026-03-09T20:47:08.440 INFO:tasks.workunit.client.0.vm07.stdout:9/34: dread d4/d8/fd [0,4194304] 0 2026-03-09T20:47:08.448 INFO:tasks.workunit.client.0.vm07.stdout:6/25: creat f7 x:0 0 0 2026-03-09T20:47:08.448 INFO:tasks.workunit.client.0.vm07.stdout:6/26: write f5 [1546817,126099] 0 2026-03-09T20:47:08.451 INFO:tasks.workunit.client.0.vm07.stdout:6/27: dread f6 [0,4194304] 0 2026-03-09T20:47:08.456 INFO:tasks.workunit.client.0.vm07.stdout:6/28: dwrite f7 [0,4194304] 0 2026-03-09T20:47:08.470 INFO:tasks.workunit.client.0.vm07.stdout:5/58: unlink d5/f9 0 2026-03-09T20:47:08.476 INFO:tasks.workunit.client.0.vm07.stdout:3/27: unlink d1/f3 0 2026-03-09T20:47:08.490 INFO:tasks.workunit.client.0.vm07.stdout:9/35: mkdir d4/d11 0 2026-03-09T20:47:08.513 INFO:tasks.workunit.client.0.vm07.stdout:5/59: symlink d5/lb 0 2026-03-09T20:47:08.513 INFO:tasks.workunit.client.0.vm07.stdout:5/60: readlink l1 0 2026-03-09T20:47:08.520 INFO:tasks.workunit.client.0.vm07.stdout:7/17: link l1 d3/l4 0 2026-03-09T20:47:08.525 INFO:tasks.workunit.client.0.vm07.stdout:2/21: dwrite f1 [0,4194304] 0 2026-03-09T20:47:08.536 INFO:tasks.workunit.client.0.vm07.stdout:8/29: getdents d1 0 2026-03-09T20:47:08.540 INFO:tasks.workunit.client.0.vm07.stdout:5/61: symlink d5/lc 0 2026-03-09T20:47:08.540 INFO:tasks.workunit.client.0.vm07.stdout:5/62: write - no filename 2026-03-09T20:47:08.540 INFO:tasks.workunit.client.0.vm07.stdout:5/63: write - no filename 2026-03-09T20:47:08.540 
INFO:tasks.workunit.client.0.vm07.stdout:5/64: dread - no filename 2026-03-09T20:47:08.542 INFO:tasks.workunit.client.0.vm07.stdout:3/28: mkdir d1/d5/d9 0 2026-03-09T20:47:08.542 INFO:tasks.workunit.client.0.vm07.stdout:3/29: fdatasync d1/f4 0 2026-03-09T20:47:08.546 INFO:tasks.workunit.client.0.vm07.stdout:1/30: rmdir d3 39 2026-03-09T20:47:08.550 INFO:tasks.workunit.client.0.vm07.stdout:7/18: chown c0 2590065 1 2026-03-09T20:47:08.550 INFO:tasks.workunit.client.0.vm07.stdout:7/19: write - no filename 2026-03-09T20:47:08.554 INFO:tasks.workunit.client.0.vm07.stdout:3/30: fsync d1/f4 0 2026-03-09T20:47:08.557 INFO:tasks.workunit.client.0.vm07.stdout:2/22: creat d2/f5 x:0 0 0 2026-03-09T20:47:08.558 INFO:tasks.workunit.client.0.vm07.stdout:2/23: chown d2/f5 185148 1 2026-03-09T20:47:08.558 INFO:tasks.workunit.client.0.vm07.stdout:2/24: write d2/f4 [241345,128208] 0 2026-03-09T20:47:08.563 INFO:tasks.workunit.client.0.vm07.stdout:4/37: truncate d2/f7 3727181 0 2026-03-09T20:47:08.565 INFO:tasks.workunit.client.0.vm07.stdout:5/65: creat d5/fd x:0 0 0 2026-03-09T20:47:08.571 INFO:tasks.workunit.client.0.vm07.stdout:1/31: dread - d3/f7 zero size 2026-03-09T20:47:08.571 INFO:tasks.workunit.client.0.vm07.stdout:1/32: chown d3/l6 28 1 2026-03-09T20:47:08.572 INFO:tasks.workunit.client.0.vm07.stdout:1/33: chown d3/f4 2 1 2026-03-09T20:47:08.572 INFO:tasks.workunit.client.0.vm07.stdout:1/34: chown d3 72 1 2026-03-09T20:47:08.587 INFO:tasks.workunit.client.0.vm07.stdout:0/16: truncate f0 770711 0 2026-03-09T20:47:08.589 INFO:tasks.workunit.client.0.vm07.stdout:2/25: unlink f1 0 2026-03-09T20:47:08.593 INFO:tasks.workunit.client.0.vm07.stdout:2/26: dwrite d2/f4 [0,4194304] 0 2026-03-09T20:47:08.604 INFO:tasks.workunit.client.0.vm07.stdout:4/38: creat d2/fa x:0 0 0 2026-03-09T20:47:08.612 INFO:tasks.workunit.client.0.vm07.stdout:9/36: getdents d4/d8/dc 0 2026-03-09T20:47:08.615 INFO:tasks.workunit.client.0.vm07.stdout:7/20: creat d3/f5 x:0 0 0 2026-03-09T20:47:08.619 
INFO:tasks.workunit.client.0.vm07.stdout:3/31: mknod d1/d5/d9/ca 0 2026-03-09T20:47:08.624 INFO:tasks.workunit.client.0.vm07.stdout:0/17: symlink d1/l6 0 2026-03-09T20:47:08.627 INFO:tasks.workunit.client.0.vm07.stdout:3/32: dread d1/f4 [0,4194304] 0 2026-03-09T20:47:08.630 INFO:tasks.workunit.client.0.vm07.stdout:3/33: dread d1/f4 [0,4194304] 0 2026-03-09T20:47:08.632 INFO:tasks.workunit.client.0.vm07.stdout:3/34: dread d1/f4 [0,4194304] 0 2026-03-09T20:47:08.650 INFO:tasks.workunit.client.0.vm07.stdout:3/35: readlink d1/l7 0 2026-03-09T20:47:08.650 INFO:tasks.workunit.client.0.vm07.stdout:3/36: dread d1/f4 [0,4194304] 0 2026-03-09T20:47:08.650 INFO:tasks.workunit.client.0.vm07.stdout:3/37: dwrite d1/f4 [4194304,4194304] 0 2026-03-09T20:47:08.650 INFO:tasks.workunit.client.0.vm07.stdout:2/27: symlink d2/l6 0 2026-03-09T20:47:08.655 INFO:tasks.workunit.client.0.vm07.stdout:4/39: mknod d2/cb 0 2026-03-09T20:47:08.655 INFO:tasks.workunit.client.0.vm07.stdout:4/40: chown d2/f6 56 1 2026-03-09T20:47:08.660 INFO:tasks.workunit.client.0.vm07.stdout:8/30: link d1/c5 d1/c6 0 2026-03-09T20:47:08.662 INFO:tasks.workunit.client.0.vm07.stdout:6/29: truncate f7 2509667 0 2026-03-09T20:47:08.665 INFO:tasks.workunit.client.0.vm07.stdout:9/37: symlink d4/d8/dc/l12 0 2026-03-09T20:47:08.668 INFO:tasks.workunit.client.0.vm07.stdout:9/38: dread f2 [0,4194304] 0 2026-03-09T20:47:08.668 INFO:tasks.workunit.client.0.vm07.stdout:9/39: chown d4/d8 0 1 2026-03-09T20:47:08.669 INFO:tasks.workunit.client.0.vm07.stdout:1/35: sync 2026-03-09T20:47:08.669 INFO:tasks.workunit.client.0.vm07.stdout:1/36: chown d3/f4 257193 1 2026-03-09T20:47:08.669 INFO:tasks.workunit.client.0.vm07.stdout:1/37: truncate d3/fa 810388 0 2026-03-09T20:47:08.671 INFO:tasks.workunit.client.0.vm07.stdout:2/28: fsync d2/f4 0 2026-03-09T20:47:08.672 INFO:tasks.workunit.client.0.vm07.stdout:1/38: dread d3/f5 [0,4194304] 0 2026-03-09T20:47:08.672 INFO:tasks.workunit.client.0.vm07.stdout:2/29: truncate d2/f5 760639 0 
2026-03-09T20:47:08.672 INFO:tasks.workunit.client.0.vm07.stdout:1/39: chown d3/f9 6512827 1 2026-03-09T20:47:08.674 INFO:tasks.workunit.client.0.vm07.stdout:2/30: dread d2/f5 [0,4194304] 0 2026-03-09T20:47:08.674 INFO:tasks.workunit.client.0.vm07.stdout:1/40: dread d3/f9 [0,4194304] 0 2026-03-09T20:47:08.674 INFO:tasks.workunit.client.0.vm07.stdout:2/31: readlink d2/l6 0 2026-03-09T20:47:08.674 INFO:tasks.workunit.client.0.vm07.stdout:2/32: chown d2/f4 3556660 1 2026-03-09T20:47:08.683 INFO:tasks.workunit.client.0.vm07.stdout:0/18: symlink d1/l7 0 2026-03-09T20:47:08.685 INFO:tasks.workunit.client.0.vm07.stdout:3/38: creat d1/d5/d9/fb x:0 0 0 2026-03-09T20:47:08.686 INFO:tasks.workunit.client.0.vm07.stdout:3/39: truncate d1/d5/d9/fb 638082 0 2026-03-09T20:47:08.688 INFO:tasks.workunit.client.0.vm07.stdout:3/40: dread d1/f4 [4194304,4194304] 0 2026-03-09T20:47:08.695 INFO:tasks.workunit.client.0.vm07.stdout:4/41: mknod d2/cc 0 2026-03-09T20:47:08.702 INFO:tasks.workunit.client.0.vm07.stdout:7/21: rmdir d3 39 2026-03-09T20:47:08.702 INFO:tasks.workunit.client.0.vm07.stdout:8/31: chown d1/c5 92 1 2026-03-09T20:47:08.703 INFO:tasks.workunit.client.0.vm07.stdout:3/41: dwrite d1/d5/d9/fb [0,4194304] 0 2026-03-09T20:47:08.704 INFO:tasks.workunit.client.0.vm07.stdout:6/30: mkdir d8 0 2026-03-09T20:47:08.707 INFO:tasks.workunit.client.0.vm07.stdout:4/42: dread f1 [0,4194304] 0 2026-03-09T20:47:08.712 INFO:tasks.workunit.client.0.vm07.stdout:5/66: link d5/la d5/le 0 2026-03-09T20:47:08.720 INFO:tasks.workunit.client.0.vm07.stdout:9/40: rename d4/f6 to d4/d11/f13 0 2026-03-09T20:47:08.721 INFO:tasks.workunit.client.0.vm07.stdout:1/41: rename d3 to d3/db 22 2026-03-09T20:47:08.721 INFO:tasks.workunit.client.0.vm07.stdout:4/43: rename d2 to d2/dd 22 2026-03-09T20:47:08.722 INFO:tasks.workunit.client.0.vm07.stdout:4/44: readlink - no filename 2026-03-09T20:47:08.722 INFO:tasks.workunit.client.0.vm07.stdout:4/45: truncate d2/f5 866283 0 2026-03-09T20:47:08.725 
INFO:tasks.workunit.client.0.vm07.stdout:9/41: dwrite d4/d8/fb [0,4194304] 0 2026-03-09T20:47:08.729 INFO:tasks.workunit.client.0.vm07.stdout:6/31: sync 2026-03-09T20:47:08.754 INFO:tasks.workunit.client.0.vm07.stdout:0/19: fdatasync f0 0 2026-03-09T20:47:08.775 INFO:tasks.workunit.client.0.vm07.stdout:7/22: dread - d3/f5 zero size 2026-03-09T20:47:08.780 INFO:tasks.workunit.client.0.vm07.stdout:3/42: creat d1/d5/d9/fc x:0 0 0 2026-03-09T20:47:08.782 INFO:tasks.workunit.client.0.vm07.stdout:5/67: mkdir d5/df 0 2026-03-09T20:47:08.782 INFO:tasks.workunit.client.0.vm07.stdout:5/68: fdatasync d5/fd 0 2026-03-09T20:47:08.786 INFO:tasks.workunit.client.0.vm07.stdout:5/69: dwrite d5/fd [0,4194304] 0 2026-03-09T20:47:08.787 INFO:tasks.workunit.client.0.vm07.stdout:3/43: sync 2026-03-09T20:47:08.788 INFO:tasks.workunit.client.0.vm07.stdout:5/70: write d5/fd [1118483,66241] 0 2026-03-09T20:47:08.792 INFO:tasks.workunit.client.0.vm07.stdout:5/71: dread d5/fd [0,4194304] 0 2026-03-09T20:47:08.792 INFO:tasks.workunit.client.0.vm07.stdout:5/72: truncate d5/fd 4402061 0 2026-03-09T20:47:08.810 INFO:tasks.workunit.client.0.vm07.stdout:1/42: rmdir d3 39 2026-03-09T20:47:08.812 INFO:tasks.workunit.client.0.vm07.stdout:4/46: rename d2/c8 to d2/ce 0 2026-03-09T20:47:08.820 INFO:tasks.workunit.client.0.vm07.stdout:9/42: creat d4/f14 x:0 0 0 2026-03-09T20:47:08.829 INFO:tasks.workunit.client.0.vm07.stdout:6/32: dwrite f6 [0,4194304] 0 2026-03-09T20:47:08.831 INFO:tasks.workunit.client.0.vm07.stdout:6/33: write f5 [3170967,130977] 0 2026-03-09T20:47:08.842 INFO:tasks.workunit.client.0.vm07.stdout:2/33: unlink d2/f5 0 2026-03-09T20:47:08.854 INFO:tasks.workunit.client.0.vm07.stdout:7/23: mknod d3/c6 0 2026-03-09T20:47:08.854 INFO:tasks.workunit.client.0.vm07.stdout:7/24: dread - d3/f5 zero size 2026-03-09T20:47:08.856 INFO:tasks.workunit.client.0.vm07.stdout:8/32: symlink d1/l7 0 2026-03-09T20:47:08.856 INFO:tasks.workunit.client.0.vm07.stdout:8/33: chown d1/c5 6452 1 
2026-03-09T20:47:08.856 INFO:tasks.workunit.client.0.vm07.stdout:8/34: truncate - no filename 2026-03-09T20:47:08.857 INFO:tasks.workunit.client.0.vm07.stdout:8/35: stat d1/c6 0 2026-03-09T20:47:08.865 INFO:tasks.workunit.client.0.vm07.stdout:8/36: sync 2026-03-09T20:47:08.866 INFO:tasks.workunit.client.0.vm07.stdout:3/44: rmdir d1/d5 39 2026-03-09T20:47:08.877 INFO:tasks.workunit.client.0.vm07.stdout:1/43: dwrite d3/f7 [0,4194304] 0 2026-03-09T20:47:08.882 INFO:tasks.workunit.client.0.vm07.stdout:4/47: mkdir d2/df 0 2026-03-09T20:47:08.900 INFO:tasks.workunit.client.0.vm07.stdout:7/25: mknod d3/c7 0 2026-03-09T20:47:08.900 INFO:tasks.workunit.client.0.vm07.stdout:7/26: write d3/f5 [382178,74988] 0 2026-03-09T20:47:08.907 INFO:tasks.workunit.client.0.vm07.stdout:3/45: write d1/d5/d9/fc [124295,123705] 0 2026-03-09T20:47:08.908 INFO:tasks.workunit.client.0.vm07.stdout:3/46: write d1/d5/d9/fb [2007015,116563] 0 2026-03-09T20:47:08.909 INFO:tasks.workunit.client.0.vm07.stdout:3/47: truncate d1/d5/d9/fc 520296 0 2026-03-09T20:47:08.918 INFO:tasks.workunit.client.0.vm07.stdout:5/73: creat d5/df/f10 x:0 0 0 2026-03-09T20:47:08.924 INFO:tasks.workunit.client.0.vm07.stdout:1/44: creat d3/fc x:0 0 0 2026-03-09T20:47:08.929 INFO:tasks.workunit.client.0.vm07.stdout:4/48: dread d2/f7 [0,4194304] 0 2026-03-09T20:47:08.930 INFO:tasks.workunit.client.0.vm07.stdout:4/49: read d2/f3 [1535266,49753] 0 2026-03-09T20:47:08.930 INFO:tasks.workunit.client.0.vm07.stdout:4/50: dread - d2/fa zero size 2026-03-09T20:47:08.930 INFO:tasks.workunit.client.0.vm07.stdout:4/51: chown d2/f5 0 1 2026-03-09T20:47:08.933 INFO:tasks.workunit.client.0.vm07.stdout:9/43: mkdir d4/d8/dc/d15 0 2026-03-09T20:47:08.936 INFO:tasks.workunit.client.0.vm07.stdout:6/34: creat d8/f9 x:0 0 0 2026-03-09T20:47:08.936 INFO:tasks.workunit.client.0.vm07.stdout:6/35: chown l1 127270690 1 2026-03-09T20:47:08.939 INFO:tasks.workunit.client.0.vm07.stdout:9/44: sync 2026-03-09T20:47:08.944 
INFO:tasks.workunit.client.0.vm07.stdout:2/34: truncate d2/f4 3377631 0 2026-03-09T20:47:08.949 INFO:tasks.workunit.client.0.vm07.stdout:8/37: mknod d1/c8 0 2026-03-09T20:47:08.960 INFO:tasks.workunit.client.0.vm07.stdout:3/48: unlink d1/f4 0 2026-03-09T20:47:08.962 INFO:tasks.workunit.client.0.vm07.stdout:5/74: rename d5/c7 to d5/df/c11 0 2026-03-09T20:47:08.963 INFO:tasks.workunit.client.0.vm07.stdout:5/75: truncate d5/df/f10 740131 0 2026-03-09T20:47:08.965 INFO:tasks.workunit.client.0.vm07.stdout:1/45: mknod d3/cd 0 2026-03-09T20:47:08.979 INFO:tasks.workunit.client.0.vm07.stdout:0/20: getdents d1/d2 0 2026-03-09T20:47:08.999 INFO:tasks.workunit.client.0.vm07.stdout:5/76: dwrite d5/fd [4194304,4194304] 0 2026-03-09T20:47:09.002 INFO:tasks.workunit.client.0.vm07.stdout:5/77: dread d5/fd [4194304,4194304] 0 2026-03-09T20:47:09.002 INFO:tasks.workunit.client.0.vm07.stdout:5/78: chown d5/lc 30736 1 2026-03-09T20:47:09.041 INFO:tasks.workunit.client.0.vm07.stdout:7/27: link c0 d3/c8 0 2026-03-09T20:47:09.060 INFO:tasks.workunit.client.0.vm07.stdout:6/36: write f7 [1474620,70308] 0 2026-03-09T20:47:09.060 INFO:tasks.workunit.client.0.vm07.stdout:9/45: write d4/d8/dc/ff [752621,28912] 0 2026-03-09T20:47:09.065 INFO:tasks.workunit.client.0.vm07.stdout:2/35: dwrite d2/f4 [0,4194304] 0 2026-03-09T20:47:09.077 INFO:tasks.workunit.client.0.vm07.stdout:5/79: creat d5/f12 x:0 0 0 2026-03-09T20:47:09.105 INFO:tasks.workunit.client.0.vm07.stdout:5/80: sync 2026-03-09T20:47:09.222 INFO:tasks.workunit.client.0.vm07.stdout:4/52: link d2/cc d2/df/c10 0 2026-03-09T20:47:09.225 INFO:tasks.workunit.client.0.vm07.stdout:7/28: rename d3/c7 to d3/c9 0 2026-03-09T20:47:09.227 INFO:tasks.workunit.client.0.vm07.stdout:3/49: creat d1/fd x:0 0 0 2026-03-09T20:47:09.227 INFO:tasks.workunit.client.0.vm07.stdout:9/46: mkdir d4/d16 0 2026-03-09T20:47:09.228 INFO:tasks.workunit.client.0.vm07.stdout:9/47: dread - d4/fa zero size 2026-03-09T20:47:09.230 
INFO:tasks.workunit.client.0.vm07.stdout:6/37: rename f6 to d8/fa 0 2026-03-09T20:47:09.232 INFO:tasks.workunit.client.0.vm07.stdout:6/38: dread f5 [0,4194304] 0 2026-03-09T20:47:09.243 INFO:tasks.workunit.client.0.vm07.stdout:2/36: creat d2/f7 x:0 0 0 2026-03-09T20:47:09.243 INFO:tasks.workunit.client.0.vm07.stdout:2/37: truncate d2/f7 957605 0 2026-03-09T20:47:09.244 INFO:tasks.workunit.client.0.vm07.stdout:2/38: readlink d2/l6 0 2026-03-09T20:47:09.245 INFO:tasks.workunit.client.0.vm07.stdout:5/81: mkdir d5/df/d13 0 2026-03-09T20:47:09.248 INFO:tasks.workunit.client.0.vm07.stdout:0/21: link d1/d2/c3 d1/d2/c8 0 2026-03-09T20:47:09.249 INFO:tasks.workunit.client.0.vm07.stdout:7/29: mkdir d3/da 0 2026-03-09T20:47:09.250 INFO:tasks.workunit.client.0.vm07.stdout:3/50: creat d1/d5/d9/fe x:0 0 0 2026-03-09T20:47:09.251 INFO:tasks.workunit.client.0.vm07.stdout:3/51: dread - d1/fd zero size 2026-03-09T20:47:09.254 INFO:tasks.workunit.client.0.vm07.stdout:6/39: mknod d8/cb 0 2026-03-09T20:47:09.256 INFO:tasks.workunit.client.0.vm07.stdout:2/39: mknod d2/c8 0 2026-03-09T20:47:09.262 INFO:tasks.workunit.client.0.vm07.stdout:8/38: link d1/l4 d1/l9 0 2026-03-09T20:47:09.265 INFO:tasks.workunit.client.0.vm07.stdout:3/52: unlink d1/d5/d9/fc 0 2026-03-09T20:47:09.265 INFO:tasks.workunit.client.0.vm07.stdout:3/53: fdatasync d1/d5/d9/fe 0 2026-03-09T20:47:09.267 INFO:tasks.workunit.client.0.vm07.stdout:9/48: link d4/d8/fb d4/f17 0 2026-03-09T20:47:09.271 INFO:tasks.workunit.client.0.vm07.stdout:2/40: symlink d2/l9 0 2026-03-09T20:47:09.271 INFO:tasks.workunit.client.0.vm07.stdout:6/40: dwrite d8/fa [0,4194304] 0 2026-03-09T20:47:09.274 INFO:tasks.workunit.client.0.vm07.stdout:5/82: rename c4 to d5/df/d13/c14 0 2026-03-09T20:47:09.278 INFO:tasks.workunit.client.0.vm07.stdout:7/30: mkdir d3/da/db 0 2026-03-09T20:47:09.282 INFO:tasks.workunit.client.0.vm07.stdout:3/54: creat d1/d5/d9/ff x:0 0 0 2026-03-09T20:47:09.286 INFO:tasks.workunit.client.0.vm07.stdout:2/41: creat d2/fa x:0 0 0 
2026-03-09T20:47:09.286 INFO:tasks.workunit.client.0.vm07.stdout:2/42: readlink d2/l6 0 2026-03-09T20:47:09.288 INFO:tasks.workunit.client.0.vm07.stdout:6/41: unlink d8/f9 0 2026-03-09T20:47:09.291 INFO:tasks.workunit.client.0.vm07.stdout:4/53: link d2/ce d2/c11 0 2026-03-09T20:47:09.292 INFO:tasks.workunit.client.0.vm07.stdout:0/22: link d1/l7 d1/l9 0 2026-03-09T20:47:09.302 INFO:tasks.workunit.client.0.vm07.stdout:7/31: unlink d3/f5 0 2026-03-09T20:47:09.302 INFO:tasks.workunit.client.0.vm07.stdout:8/39: symlink d1/la 0 2026-03-09T20:47:09.302 INFO:tasks.workunit.client.0.vm07.stdout:8/40: write - no filename 2026-03-09T20:47:09.302 INFO:tasks.workunit.client.0.vm07.stdout:8/41: dread - no filename 2026-03-09T20:47:09.302 INFO:tasks.workunit.client.0.vm07.stdout:8/42: write - no filename 2026-03-09T20:47:09.302 INFO:tasks.workunit.client.0.vm07.stdout:3/55: mkdir d1/d5/d10 0 2026-03-09T20:47:09.303 INFO:tasks.workunit.client.0.vm07.stdout:3/56: write d1/d5/d9/ff [48346,17027] 0 2026-03-09T20:47:09.305 INFO:tasks.workunit.client.0.vm07.stdout:2/43: mkdir d2/db 0 2026-03-09T20:47:09.307 INFO:tasks.workunit.client.0.vm07.stdout:6/42: mknod d8/cc 0 2026-03-09T20:47:09.307 INFO:tasks.workunit.client.0.vm07.stdout:6/43: write f5 [4994072,53572] 0 2026-03-09T20:47:09.314 INFO:tasks.workunit.client.0.vm07.stdout:0/23: rmdir d1/d2 39 2026-03-09T20:47:09.317 INFO:tasks.workunit.client.0.vm07.stdout:7/32: mknod d3/da/cc 0 2026-03-09T20:47:09.319 INFO:tasks.workunit.client.0.vm07.stdout:3/57: unlink d1/d5/d9/fb 0 2026-03-09T20:47:09.319 INFO:tasks.workunit.client.0.vm07.stdout:2/44: unlink d2/fa 0 2026-03-09T20:47:09.320 INFO:tasks.workunit.client.0.vm07.stdout:2/45: write d2/f7 [1903673,61622] 0 2026-03-09T20:47:09.321 INFO:tasks.workunit.client.0.vm07.stdout:6/44: rename d8/cb to d8/cd 0 2026-03-09T20:47:09.321 INFO:tasks.workunit.client.0.vm07.stdout:6/45: chown l1 0 1 2026-03-09T20:47:09.323 INFO:tasks.workunit.client.0.vm07.stdout:2/46: dwrite d2/f7 [0,4194304] 0 
2026-03-09T20:47:09.341 INFO:tasks.workunit.client.0.vm07.stdout:6/46: dread f7 [0,4194304] 0 2026-03-09T20:47:09.341 INFO:tasks.workunit.client.0.vm07.stdout:2/47: dread d2/f7 [0,4194304] 0 2026-03-09T20:47:09.341 INFO:tasks.workunit.client.0.vm07.stdout:4/54: chown d2/ce 16101 1 2026-03-09T20:47:09.341 INFO:tasks.workunit.client.0.vm07.stdout:9/49: getdents d4/d8/dc 0 2026-03-09T20:47:09.341 INFO:tasks.workunit.client.0.vm07.stdout:9/50: chown d4/f14 108674 1 2026-03-09T20:47:09.342 INFO:tasks.workunit.client.0.vm07.stdout:7/33: mknod d3/da/cd 0 2026-03-09T20:47:09.342 INFO:tasks.workunit.client.0.vm07.stdout:7/34: dread - no filename 2026-03-09T20:47:09.342 INFO:tasks.workunit.client.0.vm07.stdout:7/35: chown d3/da/cc 35004 1 2026-03-09T20:47:09.342 INFO:tasks.workunit.client.0.vm07.stdout:3/58: unlink d1/d5/d9/ff 0 2026-03-09T20:47:09.344 INFO:tasks.workunit.client.0.vm07.stdout:4/55: symlink d2/l12 0 2026-03-09T20:47:09.345 INFO:tasks.workunit.client.0.vm07.stdout:4/56: dread - d2/fa zero size 2026-03-09T20:47:09.350 INFO:tasks.workunit.client.0.vm07.stdout:8/43: creat d1/fb x:0 0 0 2026-03-09T20:47:09.351 INFO:tasks.workunit.client.0.vm07.stdout:8/44: write d1/fb [580153,118667] 0 2026-03-09T20:47:09.351 INFO:tasks.workunit.client.0.vm07.stdout:8/45: write d1/fb [1587473,109840] 0 2026-03-09T20:47:09.353 INFO:tasks.workunit.client.0.vm07.stdout:0/24: mknod d1/ca 0 2026-03-09T20:47:09.356 INFO:tasks.workunit.client.0.vm07.stdout:7/36: readlink d3/l4 0 2026-03-09T20:47:09.356 INFO:tasks.workunit.client.0.vm07.stdout:7/37: write - no filename 2026-03-09T20:47:09.356 INFO:tasks.workunit.client.0.vm07.stdout:7/38: dwrite - no filename 2026-03-09T20:47:09.363 INFO:tasks.workunit.client.0.vm07.stdout:3/59: unlink d1/fd 0 2026-03-09T20:47:09.363 INFO:tasks.workunit.client.0.vm07.stdout:3/60: write d1/d5/d9/fe [710192,20806] 0 2026-03-09T20:47:09.371 INFO:tasks.workunit.client.0.vm07.stdout:7/39: rmdir d3/da 39 2026-03-09T20:47:09.373 
INFO:tasks.workunit.client.0.vm07.stdout:3/61: mkdir d1/d5/d9/d11 0 2026-03-09T20:47:09.374 INFO:tasks.workunit.client.0.vm07.stdout:4/57: creat d2/df/f13 x:0 0 0 2026-03-09T20:47:09.376 INFO:tasks.workunit.client.0.vm07.stdout:8/46: mkdir d1/dc 0 2026-03-09T20:47:09.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:09 vm07.local ceph-mon[49120]: pgmap v144: 65 pgs: 65 active+clean; 214 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 397 op/s 2026-03-09T20:47:09.386 INFO:tasks.workunit.client.0.vm07.stdout:3/62: link d1/d5/d9/ca d1/c12 0 2026-03-09T20:47:09.390 INFO:tasks.workunit.client.0.vm07.stdout:3/63: getdents d1/d5/d9/d11 0 2026-03-09T20:47:09.390 INFO:tasks.workunit.client.0.vm07.stdout:3/64: readlink d1/l7 0 2026-03-09T20:47:09.392 INFO:tasks.workunit.client.0.vm07.stdout:3/65: truncate d1/d5/d9/fe 1149079 0 2026-03-09T20:47:09.396 INFO:tasks.workunit.client.1.vm10.stderr:+ pushd ltp-full-20091231/testcases/kernel/fs/fsstress 2026-03-09T20:47:09.403 INFO:tasks.workunit.client.1.vm10.stdout:~/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress ~/cephtest/mnt.1/client.1/tmp/fsstress ~/cephtest/mnt.1/client.1/tmp 2026-03-09T20:47:09.403 INFO:tasks.workunit.client.1.vm10.stderr:+ make 2026-03-09T20:47:09.406 INFO:tasks.workunit.client.0.vm07.stdout:0/25: sync 2026-03-09T20:47:09.406 INFO:tasks.workunit.client.0.vm07.stdout:6/47: sync 2026-03-09T20:47:09.409 INFO:tasks.workunit.client.0.vm07.stdout:0/26: dwrite f0 [0,4194304] 0 2026-03-09T20:47:09.414 INFO:tasks.workunit.client.0.vm07.stdout:1/46: truncate d3/f7 5705 0 2026-03-09T20:47:09.425 INFO:tasks.workunit.client.0.vm07.stdout:6/48: symlink d8/le 0 2026-03-09T20:47:09.428 INFO:tasks.workunit.client.0.vm07.stdout:6/49: creat d8/ff x:0 0 0 2026-03-09T20:47:09.428 INFO:tasks.workunit.client.0.vm07.stdout:6/50: write d8/ff [166449,49464] 0 2026-03-09T20:47:09.433 INFO:tasks.workunit.client.0.vm07.stdout:7/40: creat d3/da/fe x:0 0 0 
2026-03-09T20:47:09.437 INFO:tasks.workunit.client.0.vm07.stdout:7/41: write d3/da/fe [754435,77184] 0 2026-03-09T20:47:09.437 INFO:tasks.workunit.client.0.vm07.stdout:6/51: mknod d8/c10 0 2026-03-09T20:47:09.437 INFO:tasks.workunit.client.0.vm07.stdout:1/47: link d3/cd d3/ce 0 2026-03-09T20:47:09.438 INFO:tasks.workunit.client.0.vm07.stdout:1/48: dread d3/f9 [0,4194304] 0 2026-03-09T20:47:09.439 INFO:tasks.workunit.client.0.vm07.stdout:0/27: fdatasync f0 0 2026-03-09T20:47:09.440 INFO:tasks.workunit.client.0.vm07.stdout:0/28: chown d1/l6 11176094 1 2026-03-09T20:47:09.442 INFO:tasks.workunit.client.0.vm07.stdout:7/42: symlink d3/da/lf 0 2026-03-09T20:47:09.442 INFO:tasks.workunit.client.0.vm07.stdout:7/43: chown d3/da/cc 103568889 1 2026-03-09T20:47:09.443 INFO:tasks.workunit.client.0.vm07.stdout:6/52: rename d8/c10 to d8/c11 0 2026-03-09T20:47:09.446 INFO:tasks.workunit.client.0.vm07.stdout:6/53: dwrite f5 [0,4194304] 0 2026-03-09T20:47:09.446 INFO:tasks.workunit.client.0.vm07.stdout:6/54: stat f5 0 2026-03-09T20:47:09.448 INFO:tasks.workunit.client.0.vm07.stdout:6/55: write d8/ff [256057,92242] 0 2026-03-09T20:47:09.449 INFO:tasks.workunit.client.0.vm07.stdout:9/51: truncate d4/f17 74987 0 2026-03-09T20:47:09.450 INFO:tasks.workunit.client.0.vm07.stdout:9/52: dread - d4/fa zero size 2026-03-09T20:47:09.454 INFO:tasks.workunit.client.0.vm07.stdout:5/83: chown d5/df/d13/c14 97 1 2026-03-09T20:47:09.465 INFO:tasks.workunit.client.0.vm07.stdout:6/56: creat d8/f12 x:0 0 0 2026-03-09T20:47:09.475 INFO:tasks.workunit.client.0.vm07.stdout:5/84: unlink d5/fd 0 2026-03-09T20:47:09.480 INFO:tasks.workunit.client.0.vm07.stdout:9/53: creat d4/d8/dc/d15/f18 x:0 0 0 2026-03-09T20:47:09.481 INFO:tasks.workunit.client.0.vm07.stdout:2/48: rmdir d2 39 2026-03-09T20:47:09.482 INFO:tasks.workunit.client.0.vm07.stdout:5/85: creat d5/df/f15 x:0 0 0 2026-03-09T20:47:09.482 INFO:tasks.workunit.client.0.vm07.stdout:5/86: dread - d5/f12 zero size 2026-03-09T20:47:09.483 
INFO:tasks.workunit.client.0.vm07.stdout:5/87: chown d5/df/f10 292384098 1 2026-03-09T20:47:09.484 INFO:tasks.workunit.client.0.vm07.stdout:5/88: write d5/df/f15 [322060,76874] 0 2026-03-09T20:47:09.485 INFO:tasks.workunit.client.0.vm07.stdout:6/57: sync 2026-03-09T20:47:09.486 INFO:tasks.workunit.client.0.vm07.stdout:6/58: write d8/ff [715593,67947] 0 2026-03-09T20:47:09.491 INFO:tasks.workunit.client.0.vm07.stdout:6/59: fdatasync d8/fa 0 2026-03-09T20:47:09.492 INFO:tasks.workunit.client.0.vm07.stdout:7/44: rename d3/da/cd to d3/c10 0 2026-03-09T20:47:09.493 INFO:tasks.workunit.client.0.vm07.stdout:7/45: write d3/da/fe [555725,51438] 0 2026-03-09T20:47:09.494 INFO:tasks.workunit.client.0.vm07.stdout:7/46: read d3/da/fe [816490,39449] 0 2026-03-09T20:47:09.494 INFO:tasks.workunit.client.0.vm07.stdout:7/47: write d3/da/fe [1832411,81211] 0 2026-03-09T20:47:09.495 INFO:tasks.workunit.client.0.vm07.stdout:9/54: mkdir d4/d8/d19 0 2026-03-09T20:47:09.501 INFO:tasks.workunit.client.0.vm07.stdout:7/48: dwrite d3/da/fe [0,4194304] 0 2026-03-09T20:47:09.507 INFO:tasks.workunit.client.0.vm07.stdout:2/49: dread d2/f4 [0,4194304] 0 2026-03-09T20:47:09.510 INFO:tasks.workunit.client.0.vm07.stdout:6/60: mknod d8/c13 0 2026-03-09T20:47:09.514 INFO:tasks.workunit.client.0.vm07.stdout:9/55: creat d4/d11/f1a x:0 0 0 2026-03-09T20:47:09.515 INFO:tasks.workunit.client.0.vm07.stdout:7/49: creat d3/da/f11 x:0 0 0 2026-03-09T20:47:09.515 INFO:tasks.workunit.client.0.vm07.stdout:7/50: dread - d3/da/f11 zero size 2026-03-09T20:47:09.520 INFO:tasks.workunit.client.0.vm07.stdout:6/61: creat d8/f14 x:0 0 0 2026-03-09T20:47:09.524 INFO:tasks.workunit.client.0.vm07.stdout:6/62: dwrite d8/f12 [0,4194304] 0 2026-03-09T20:47:09.528 INFO:tasks.workunit.client.0.vm07.stdout:8/47: rmdir d1 39 2026-03-09T20:47:09.528 INFO:tasks.workunit.client.0.vm07.stdout:9/56: rename c0 to d4/d8/d19/c1b 0 2026-03-09T20:47:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:09 vm10.local ceph-mon[57011]: 
pgmap v144: 65 pgs: 65 active+clean; 214 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 397 op/s 2026-03-09T20:47:09.540 INFO:tasks.workunit.client.0.vm07.stdout:3/66: truncate d1/d5/d9/fe 942266 0 2026-03-09T20:47:09.540 INFO:tasks.workunit.client.0.vm07.stdout:1/49: rename d3/f7 to d3/ff 0 2026-03-09T20:47:09.542 INFO:tasks.workunit.client.0.vm07.stdout:4/58: dwrite f0 [0,4194304] 0 2026-03-09T20:47:09.544 INFO:tasks.workunit.client.0.vm07.stdout:4/59: read d2/f7 [3724522,107295] 0 2026-03-09T20:47:09.546 INFO:tasks.workunit.client.0.vm07.stdout:2/50: symlink d2/db/lc 0 2026-03-09T20:47:09.546 INFO:tasks.workunit.client.0.vm07.stdout:0/29: truncate f0 555372 0 2026-03-09T20:47:09.551 INFO:tasks.workunit.client.0.vm07.stdout:9/57: creat d4/d8/f1c x:0 0 0 2026-03-09T20:47:09.552 INFO:tasks.workunit.client.0.vm07.stdout:9/58: dread - d4/d8/dc/d15/f18 zero size 2026-03-09T20:47:09.552 INFO:tasks.workunit.client.0.vm07.stdout:9/59: fsync d4/d8/fd 0 2026-03-09T20:47:09.552 INFO:tasks.workunit.client.0.vm07.stdout:9/60: chown d4 272890 1 2026-03-09T20:47:09.553 INFO:tasks.workunit.client.0.vm07.stdout:9/61: chown d4/f5 2 1 2026-03-09T20:47:09.553 INFO:tasks.workunit.client.0.vm07.stdout:9/62: fdatasync d4/d11/f1a 0 2026-03-09T20:47:09.554 INFO:tasks.workunit.client.0.vm07.stdout:9/63: write d4/d8/dc/d15/f18 [484701,127061] 0 2026-03-09T20:47:09.562 INFO:tasks.workunit.client.0.vm07.stdout:7/51: creat d3/da/db/f12 x:0 0 0 2026-03-09T20:47:09.563 INFO:tasks.workunit.client.0.vm07.stdout:7/52: truncate d3/da/f11 624199 0 2026-03-09T20:47:09.567 INFO:tasks.workunit.client.0.vm07.stdout:7/53: dwrite d3/da/db/f12 [0,4194304] 0 2026-03-09T20:47:09.569 INFO:tasks.workunit.client.0.vm07.stdout:7/54: chown d3/da 265469218 1 2026-03-09T20:47:09.576 INFO:tasks.workunit.client.0.vm07.stdout:4/60: symlink d2/l14 0 2026-03-09T20:47:09.576 INFO:tasks.workunit.client.0.vm07.stdout:4/61: truncate d2/f3 5113901 0 2026-03-09T20:47:09.576 
INFO:tasks.workunit.client.0.vm07.stdout:4/62: chown d2/df 20441382 1 2026-03-09T20:47:09.577 INFO:tasks.workunit.client.0.vm07.stdout:2/51: creat d2/db/fd x:0 0 0 2026-03-09T20:47:09.578 INFO:tasks.workunit.client.0.vm07.stdout:2/52: chown d2/l6 1292255105 1 2026-03-09T20:47:09.580 INFO:tasks.workunit.client.0.vm07.stdout:4/63: dread d2/f6 [0,4194304] 0 2026-03-09T20:47:09.586 INFO:tasks.workunit.client.0.vm07.stdout:9/64: dread d4/d8/dc/d15/f18 [0,4194304] 0 2026-03-09T20:47:09.587 INFO:tasks.workunit.client.0.vm07.stdout:9/65: chown d4/d11/f1a 931 1 2026-03-09T20:47:09.593 INFO:tasks.workunit.client.0.vm07.stdout:7/55: fsync d3/da/fe 0 2026-03-09T20:47:09.599 INFO:tasks.workunit.client.1.vm10.stdout:cc -DNO_XFS -I/home/ubuntu/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress -D_LARGEFILE64_SOURCE -D_GNU_SOURCE -I../../../../include -I../../../../include -L../../../../lib fsstress.c -o fsstress 2026-03-09T20:47:09.603 INFO:tasks.workunit.client.0.vm07.stdout:2/53: unlink d2/l9 0 2026-03-09T20:47:09.603 INFO:tasks.workunit.client.0.vm07.stdout:4/64: mknod d2/df/c15 0 2026-03-09T20:47:09.603 INFO:tasks.workunit.client.0.vm07.stdout:0/30: rmdir d1/d2 39 2026-03-09T20:47:09.603 INFO:tasks.workunit.client.0.vm07.stdout:5/89: truncate d5/df/f15 135866 0 2026-03-09T20:47:09.603 INFO:tasks.workunit.client.0.vm07.stdout:8/48: creat d1/dc/fd x:0 0 0 2026-03-09T20:47:09.609 INFO:tasks.workunit.client.0.vm07.stdout:8/49: dwrite d1/dc/fd [0,4194304] 0 2026-03-09T20:47:09.611 INFO:tasks.workunit.client.0.vm07.stdout:9/66: sync 2026-03-09T20:47:09.611 INFO:tasks.workunit.client.0.vm07.stdout:5/90: sync 2026-03-09T20:47:09.619 INFO:tasks.workunit.client.0.vm07.stdout:7/56: symlink d3/da/db/l13 0 2026-03-09T20:47:09.622 INFO:tasks.workunit.client.0.vm07.stdout:7/57: dread d3/da/f11 [0,4194304] 0 2026-03-09T20:47:09.623 INFO:tasks.workunit.client.0.vm07.stdout:7/58: fdatasync d3/da/f11 0 2026-03-09T20:47:09.623 
INFO:tasks.workunit.client.0.vm07.stdout:7/59: stat d3/da/cc 0 2026-03-09T20:47:09.624 INFO:tasks.workunit.client.0.vm07.stdout:7/60: fdatasync d3/da/db/f12 0 2026-03-09T20:47:09.629 INFO:tasks.workunit.client.0.vm07.stdout:7/61: dwrite d3/da/f11 [0,4194304] 0 2026-03-09T20:47:09.634 INFO:tasks.workunit.client.0.vm07.stdout:2/54: creat d2/fe x:0 0 0 2026-03-09T20:47:09.640 INFO:tasks.workunit.client.0.vm07.stdout:4/65: creat d2/df/f16 x:0 0 0 2026-03-09T20:47:09.644 INFO:tasks.workunit.client.0.vm07.stdout:0/31: fdatasync f0 0 2026-03-09T20:47:09.648 INFO:tasks.workunit.client.0.vm07.stdout:6/63: dwrite f7 [0,4194304] 0 2026-03-09T20:47:09.649 INFO:tasks.workunit.client.0.vm07.stdout:6/64: write d8/f14 [574789,16501] 0 2026-03-09T20:47:09.649 INFO:tasks.workunit.client.0.vm07.stdout:6/65: write d8/f14 [1393518,51588] 0 2026-03-09T20:47:09.655 INFO:tasks.workunit.client.0.vm07.stdout:6/66: dread d8/f12 [0,4194304] 0 2026-03-09T20:47:09.657 INFO:tasks.workunit.client.0.vm07.stdout:6/67: chown f5 12 1 2026-03-09T20:47:09.660 INFO:tasks.workunit.client.0.vm07.stdout:8/50: creat d1/dc/fe x:0 0 0 2026-03-09T20:47:09.663 INFO:tasks.workunit.client.0.vm07.stdout:5/91: unlink d5/f12 0 2026-03-09T20:47:09.663 INFO:tasks.workunit.client.0.vm07.stdout:8/51: write d1/dc/fd [2748458,60750] 0 2026-03-09T20:47:09.663 INFO:tasks.workunit.client.0.vm07.stdout:9/67: mknod d4/d11/c1d 0 2026-03-09T20:47:09.674 INFO:tasks.workunit.client.0.vm07.stdout:7/62: mkdir d3/da/db/d14 0 2026-03-09T20:47:09.675 INFO:tasks.workunit.client.0.vm07.stdout:7/63: write d3/da/fe [4406713,37496] 0 2026-03-09T20:47:09.679 INFO:tasks.workunit.client.0.vm07.stdout:3/67: link d1/d5/d9/fe d1/d5/d10/f13 0 2026-03-09T20:47:09.685 INFO:tasks.workunit.client.0.vm07.stdout:1/50: rename d3/ff to d3/f10 0 2026-03-09T20:47:09.686 INFO:tasks.workunit.client.0.vm07.stdout:4/66: mkdir d2/df/d17 0 2026-03-09T20:47:09.686 INFO:tasks.workunit.client.0.vm07.stdout:4/67: fsync d2/f9 0 2026-03-09T20:47:09.692 
INFO:tasks.workunit.client.0.vm07.stdout:6/68: creat d8/f15 x:0 0 0 2026-03-09T20:47:09.694 INFO:tasks.workunit.client.0.vm07.stdout:9/68: rmdir d4/d8/dc 39 2026-03-09T20:47:09.695 INFO:tasks.workunit.client.0.vm07.stdout:6/69: dread f7 [0,4194304] 0 2026-03-09T20:47:09.698 INFO:tasks.workunit.client.0.vm07.stdout:5/92: mknod d5/df/d13/c16 0 2026-03-09T20:47:09.705 INFO:tasks.workunit.client.0.vm07.stdout:7/64: dwrite d3/da/f11 [0,4194304] 0 2026-03-09T20:47:09.706 INFO:tasks.workunit.client.0.vm07.stdout:7/65: fsync d3/da/db/f12 0 2026-03-09T20:47:09.717 INFO:tasks.workunit.client.0.vm07.stdout:3/68: dwrite d1/d5/d9/fe [0,4194304] 0 2026-03-09T20:47:09.722 INFO:tasks.workunit.client.0.vm07.stdout:1/51: dwrite d3/f5 [0,4194304] 0 2026-03-09T20:47:09.727 INFO:tasks.workunit.client.0.vm07.stdout:3/69: dwrite d1/d5/d10/f13 [0,4194304] 0 2026-03-09T20:47:09.728 INFO:tasks.workunit.client.0.vm07.stdout:3/70: write d1/d5/d10/f13 [3616763,97241] 0 2026-03-09T20:47:09.742 INFO:tasks.workunit.client.0.vm07.stdout:9/69: dread d4/d8/fb [0,4194304] 0 2026-03-09T20:47:09.743 INFO:tasks.workunit.client.0.vm07.stdout:9/70: chown d4/f10 456844411 1 2026-03-09T20:47:09.744 INFO:tasks.workunit.client.0.vm07.stdout:6/70: mkdir d8/d16 0 2026-03-09T20:47:09.744 INFO:tasks.workunit.client.0.vm07.stdout:6/71: readlink l1 0 2026-03-09T20:47:09.744 INFO:tasks.workunit.client.0.vm07.stdout:2/55: dwrite d2/f4 [0,4194304] 0 2026-03-09T20:47:09.746 INFO:tasks.workunit.client.0.vm07.stdout:5/93: stat d5/le 0 2026-03-09T20:47:09.752 INFO:tasks.workunit.client.0.vm07.stdout:6/72: dwrite f5 [0,4194304] 0 2026-03-09T20:47:09.756 INFO:tasks.workunit.client.0.vm07.stdout:7/66: unlink c0 0 2026-03-09T20:47:09.763 INFO:tasks.workunit.client.0.vm07.stdout:4/68: unlink d2/ce 0 2026-03-09T20:47:09.778 INFO:tasks.workunit.client.0.vm07.stdout:3/71: rename d1/l7 to d1/d5/d9/l14 0 2026-03-09T20:47:09.778 INFO:tasks.workunit.client.0.vm07.stdout:2/56: creat d2/ff x:0 0 0 2026-03-09T20:47:09.785 
INFO:tasks.workunit.client.0.vm07.stdout:7/67: mknod d3/da/db/c15 0 2026-03-09T20:47:09.785 INFO:tasks.workunit.client.0.vm07.stdout:7/68: truncate d3/da/db/f12 4762373 0 2026-03-09T20:47:09.789 INFO:tasks.workunit.client.0.vm07.stdout:7/69: dwrite d3/da/f11 [4194304,4194304] 0 2026-03-09T20:47:09.795 INFO:tasks.workunit.client.0.vm07.stdout:0/32: creat d1/d2/fb x:0 0 0 2026-03-09T20:47:09.795 INFO:tasks.workunit.client.0.vm07.stdout:0/33: stat d1/d2 0 2026-03-09T20:47:09.795 INFO:tasks.workunit.client.0.vm07.stdout:0/34: fdatasync d1/d2/fb 0 2026-03-09T20:47:09.795 INFO:tasks.workunit.client.0.vm07.stdout:0/35: dread f0 [0,4194304] 0 2026-03-09T20:47:09.797 INFO:tasks.workunit.client.0.vm07.stdout:4/69: symlink d2/df/l18 0 2026-03-09T20:47:09.797 INFO:tasks.workunit.client.0.vm07.stdout:4/70: write d2/f9 [483136,83493] 0 2026-03-09T20:47:09.803 INFO:tasks.workunit.client.0.vm07.stdout:1/52: link d3/f4 d3/f11 0 2026-03-09T20:47:09.807 INFO:tasks.workunit.client.0.vm07.stdout:9/71: rename d4/d8/le to d4/d11/l1e 0 2026-03-09T20:47:09.807 INFO:tasks.workunit.client.0.vm07.stdout:8/52: link d1/l7 d1/lf 0 2026-03-09T20:47:09.807 INFO:tasks.workunit.client.0.vm07.stdout:8/53: fdatasync d1/dc/fd 0 2026-03-09T20:47:09.817 INFO:tasks.workunit.client.0.vm07.stdout:6/73: creat d8/d16/f17 x:0 0 0 2026-03-09T20:47:09.823 INFO:tasks.workunit.client.0.vm07.stdout:4/71: creat d2/f19 x:0 0 0 2026-03-09T20:47:09.825 INFO:tasks.workunit.client.0.vm07.stdout:1/53: write f2 [238924,112072] 0 2026-03-09T20:47:09.827 INFO:tasks.workunit.client.0.vm07.stdout:1/54: dread d3/fa [0,4194304] 0 2026-03-09T20:47:09.831 INFO:tasks.workunit.client.0.vm07.stdout:1/55: readlink d3/l6 0 2026-03-09T20:47:09.834 INFO:tasks.workunit.client.0.vm07.stdout:3/72: write d1/d5/d10/f13 [2975737,2602] 0 2026-03-09T20:47:09.835 INFO:tasks.workunit.client.0.vm07.stdout:3/73: write d1/d5/d10/f13 [3857159,24633] 0 2026-03-09T20:47:09.835 INFO:tasks.workunit.client.0.vm07.stdout:3/74: chown d1/l8 253644120 1 
2026-03-09T20:47:09.838 INFO:tasks.workunit.client.0.vm07.stdout:3/75: dwrite d1/d5/d9/fe [0,4194304] 0 2026-03-09T20:47:09.856 INFO:tasks.workunit.client.0.vm07.stdout:6/74: rename d8/ff to d8/d16/f18 0 2026-03-09T20:47:09.858 INFO:tasks.workunit.client.0.vm07.stdout:7/70: mknod d3/da/db/d14/c16 0 2026-03-09T20:47:09.863 INFO:tasks.workunit.client.0.vm07.stdout:9/72: read d4/d8/dc/d15/f18 [235134,112152] 0 2026-03-09T20:47:09.864 INFO:tasks.workunit.client.0.vm07.stdout:8/54: symlink d1/l10 0 2026-03-09T20:47:09.866 INFO:tasks.workunit.client.0.vm07.stdout:8/55: dwrite d1/dc/fd [0,4194304] 0 2026-03-09T20:47:09.867 INFO:tasks.workunit.client.0.vm07.stdout:3/76: creat d1/d5/d9/f15 x:0 0 0 2026-03-09T20:47:09.882 INFO:tasks.workunit.client.0.vm07.stdout:6/75: creat d8/d16/f19 x:0 0 0 2026-03-09T20:47:09.882 INFO:tasks.workunit.client.0.vm07.stdout:6/76: fdatasync d8/d16/f19 0 2026-03-09T20:47:09.883 INFO:tasks.workunit.client.0.vm07.stdout:6/77: read - d8/d16/f17 zero size 2026-03-09T20:47:09.890 INFO:tasks.workunit.client.0.vm07.stdout:9/73: readlink d4/d11/l1e 0 2026-03-09T20:47:09.893 INFO:tasks.workunit.client.0.vm07.stdout:3/77: mknod d1/d5/d9/c16 0 2026-03-09T20:47:09.898 INFO:tasks.workunit.client.0.vm07.stdout:4/72: mknod d2/c1a 0 2026-03-09T20:47:09.902 INFO:tasks.workunit.client.0.vm07.stdout:0/36: getdents d1/d2 0 2026-03-09T20:47:09.905 INFO:tasks.workunit.client.0.vm07.stdout:2/57: truncate d2/f4 2066926 0 2026-03-09T20:47:09.906 INFO:tasks.workunit.client.0.vm07.stdout:9/74: chown d4/d8/d19/c1b 604774 1 2026-03-09T20:47:09.910 INFO:tasks.workunit.client.0.vm07.stdout:9/75: dwrite d4/f5 [0,4194304] 0 2026-03-09T20:47:09.911 INFO:tasks.workunit.client.0.vm07.stdout:9/76: chown d4/d8/dc/d15 26836 1 2026-03-09T20:47:09.926 INFO:tasks.workunit.client.0.vm07.stdout:1/56: write d3/f9 [137803,2971] 0 2026-03-09T20:47:09.936 INFO:tasks.workunit.client.0.vm07.stdout:1/57: dread d3/fa [0,4194304] 0 2026-03-09T20:47:09.936 
INFO:tasks.workunit.client.0.vm07.stdout:0/37: mkdir d1/d2/dc 0 2026-03-09T20:47:09.936 INFO:tasks.workunit.client.0.vm07.stdout:2/58: creat d2/f10 x:0 0 0 2026-03-09T20:47:09.936 INFO:tasks.workunit.client.0.vm07.stdout:8/56: creat d1/f11 x:0 0 0 2026-03-09T20:47:09.943 INFO:tasks.workunit.client.0.vm07.stdout:3/78: sync 2026-03-09T20:47:09.943 INFO:tasks.workunit.client.0.vm07.stdout:3/79: stat d1/d5/d9/f15 0 2026-03-09T20:47:09.943 INFO:tasks.workunit.client.0.vm07.stdout:8/57: dread d1/dc/fd [0,4194304] 0 2026-03-09T20:47:09.944 INFO:tasks.workunit.client.0.vm07.stdout:3/80: read - d1/d5/d9/f15 zero size 2026-03-09T20:47:09.945 INFO:tasks.workunit.client.0.vm07.stdout:8/58: write d1/f11 [623542,47147] 0 2026-03-09T20:47:09.950 INFO:tasks.workunit.client.0.vm07.stdout:6/78: link d8/cc d8/d16/c1a 0 2026-03-09T20:47:09.955 INFO:tasks.workunit.client.0.vm07.stdout:1/58: mkdir d3/d12 0 2026-03-09T20:47:09.956 INFO:tasks.workunit.client.0.vm07.stdout:5/94: dwrite d5/df/f15 [0,4194304] 0 2026-03-09T20:47:09.967 INFO:tasks.workunit.client.0.vm07.stdout:2/59: mkdir d2/d11 0 2026-03-09T20:47:09.967 INFO:tasks.workunit.client.0.vm07.stdout:2/60: truncate d2/f10 1038282 0 2026-03-09T20:47:09.968 INFO:tasks.workunit.client.0.vm07.stdout:2/61: chown d2 59434879 1 2026-03-09T20:47:09.968 INFO:tasks.workunit.client.0.vm07.stdout:2/62: truncate d2/f10 1165143 0 2026-03-09T20:47:09.969 INFO:tasks.workunit.client.0.vm07.stdout:9/77: mknod d4/d8/c1f 0 2026-03-09T20:47:09.972 INFO:tasks.workunit.client.0.vm07.stdout:3/81: creat d1/d5/f17 x:0 0 0 2026-03-09T20:47:09.976 INFO:tasks.workunit.client.0.vm07.stdout:8/59: readlink d1/l7 0 2026-03-09T20:47:09.980 INFO:tasks.workunit.client.0.vm07.stdout:3/82: dwrite d1/d5/d9/fe [4194304,4194304] 0 2026-03-09T20:47:09.980 INFO:tasks.workunit.client.0.vm07.stdout:3/83: readlink d1/l8 0 2026-03-09T20:47:09.990 INFO:tasks.workunit.client.0.vm07.stdout:7/71: read d3/da/fe [2639862,128791] 0 2026-03-09T20:47:09.991 
INFO:tasks.workunit.client.0.vm07.stdout:5/95: creat d5/df/d13/f17 x:0 0 0 2026-03-09T20:47:09.993 INFO:tasks.workunit.client.0.vm07.stdout:7/72: dread d3/da/db/f12 [0,4194304] 0 2026-03-09T20:47:09.994 INFO:tasks.workunit.client.0.vm07.stdout:0/38: symlink d1/ld 0 2026-03-09T20:47:09.996 INFO:tasks.workunit.client.0.vm07.stdout:2/63: mknod d2/c12 0 2026-03-09T20:47:09.997 INFO:tasks.workunit.client.0.vm07.stdout:9/78: symlink d4/l20 0 2026-03-09T20:47:10.004 INFO:tasks.workunit.client.0.vm07.stdout:8/60: symlink d1/dc/l12 0 2026-03-09T20:47:10.005 INFO:tasks.workunit.client.0.vm07.stdout:8/61: chown d1 2090 1 2026-03-09T20:47:10.005 INFO:tasks.workunit.client.0.vm07.stdout:8/62: chown d1/la 30678069 1 2026-03-09T20:47:10.009 INFO:tasks.workunit.client.0.vm07.stdout:8/63: dwrite d1/dc/fe [0,4194304] 0 2026-03-09T20:47:10.017 INFO:tasks.workunit.client.0.vm07.stdout:1/59: creat d3/d12/f13 x:0 0 0 2026-03-09T20:47:10.018 INFO:tasks.workunit.client.0.vm07.stdout:5/96: mknod d5/df/d13/c18 0 2026-03-09T20:47:10.020 INFO:tasks.workunit.client.0.vm07.stdout:7/73: rename d3/da/db/l13 to d3/da/db/d14/l17 0 2026-03-09T20:47:10.020 INFO:tasks.workunit.client.0.vm07.stdout:7/74: fdatasync d3/da/f11 0 2026-03-09T20:47:10.030 INFO:tasks.workunit.client.0.vm07.stdout:3/84: mknod d1/c18 0 2026-03-09T20:47:10.030 INFO:tasks.workunit.client.0.vm07.stdout:3/85: chown d1/c18 1412248728 1 2026-03-09T20:47:10.032 INFO:tasks.workunit.client.0.vm07.stdout:9/79: dwrite d4/d11/f13 [0,4194304] 0 2026-03-09T20:47:10.036 INFO:tasks.workunit.client.0.vm07.stdout:1/60: mkdir d3/d14 0 2026-03-09T20:47:10.039 INFO:tasks.workunit.client.0.vm07.stdout:5/97: mkdir d5/d19 0 2026-03-09T20:47:10.047 INFO:tasks.workunit.client.0.vm07.stdout:7/75: dread d3/da/db/f12 [0,4194304] 0 2026-03-09T20:47:10.048 INFO:tasks.workunit.client.0.vm07.stdout:2/64: creat d2/d11/f13 x:0 0 0 2026-03-09T20:47:10.048 INFO:tasks.workunit.client.0.vm07.stdout:2/65: dread - d2/fe zero size 2026-03-09T20:47:10.049 
INFO:tasks.workunit.client.0.vm07.stdout:2/66: rename d2/d11 to d2/d11/d14 22 2026-03-09T20:47:10.113 INFO:tasks.workunit.client.0.vm07.stdout:4/73: creat d2/df/d17/f1b x:0 0 0 2026-03-09T20:47:10.117 INFO:tasks.workunit.client.0.vm07.stdout:6/79: getdents d8 0 2026-03-09T20:47:10.118 INFO:tasks.workunit.client.0.vm07.stdout:1/61: rmdir d3/d12 39 2026-03-09T20:47:10.118 INFO:tasks.workunit.client.0.vm07.stdout:5/98: symlink d5/df/d13/l1a 0 2026-03-09T20:47:10.119 INFO:tasks.workunit.client.0.vm07.stdout:0/39: creat d1/d2/dc/fe x:0 0 0 2026-03-09T20:47:10.123 INFO:tasks.workunit.client.0.vm07.stdout:0/40: dwrite d1/d2/dc/fe [0,4194304] 0 2026-03-09T20:47:10.134 INFO:tasks.workunit.client.0.vm07.stdout:2/67: symlink d2/d11/l15 0 2026-03-09T20:47:10.142 INFO:tasks.workunit.client.0.vm07.stdout:2/68: write d2/f10 [209879,73710] 0 2026-03-09T20:47:10.142 INFO:tasks.workunit.client.0.vm07.stdout:2/69: fsync d2/ff 0 2026-03-09T20:47:10.143 INFO:tasks.workunit.client.0.vm07.stdout:2/70: write d2/d11/f13 [183119,91292] 0 2026-03-09T20:47:10.143 INFO:tasks.workunit.client.0.vm07.stdout:2/71: dread - d2/fe zero size 2026-03-09T20:47:10.143 INFO:tasks.workunit.client.0.vm07.stdout:8/64: creat d1/f13 x:0 0 0 2026-03-09T20:47:10.143 INFO:tasks.workunit.client.0.vm07.stdout:8/65: dwrite d1/f11 [0,4194304] 0 2026-03-09T20:47:10.146 INFO:tasks.workunit.client.0.vm07.stdout:6/80: symlink d8/l1b 0 2026-03-09T20:47:10.148 INFO:tasks.workunit.client.0.vm07.stdout:6/81: write f5 [3677422,74009] 0 2026-03-09T20:47:10.148 INFO:tasks.workunit.client.0.vm07.stdout:0/41: write d1/d2/fb [797725,81716] 0 2026-03-09T20:47:10.157 INFO:tasks.workunit.client.0.vm07.stdout:3/86: creat d1/f19 x:0 0 0 2026-03-09T20:47:10.157 INFO:tasks.workunit.client.0.vm07.stdout:3/87: readlink d1/l6 0 2026-03-09T20:47:10.158 INFO:tasks.workunit.client.0.vm07.stdout:2/72: creat d2/db/f16 x:0 0 0 2026-03-09T20:47:10.160 INFO:tasks.workunit.client.0.vm07.stdout:7/76: creat d3/f18 x:0 0 0 2026-03-09T20:47:10.160 
INFO:tasks.workunit.client.0.vm07.stdout:1/62: creat d3/d12/f15 x:0 0 0 2026-03-09T20:47:10.163 INFO:tasks.workunit.client.0.vm07.stdout:6/82: creat d8/f1c x:0 0 0 2026-03-09T20:47:10.165 INFO:tasks.workunit.client.0.vm07.stdout:3/88: rmdir d1/d5/d10 39 2026-03-09T20:47:10.170 INFO:tasks.workunit.client.0.vm07.stdout:3/89: dwrite d1/f19 [0,4194304] 0 2026-03-09T20:47:10.173 INFO:tasks.workunit.client.0.vm07.stdout:2/73: creat d2/f17 x:0 0 0 2026-03-09T20:47:10.173 INFO:tasks.workunit.client.0.vm07.stdout:2/74: stat d2/f17 0 2026-03-09T20:47:10.177 INFO:tasks.workunit.client.0.vm07.stdout:6/83: dwrite d8/d16/f17 [0,4194304] 0 2026-03-09T20:47:10.194 INFO:tasks.workunit.client.0.vm07.stdout:6/84: dread - d8/f1c zero size 2026-03-09T20:47:10.194 INFO:tasks.workunit.client.0.vm07.stdout:3/90: dwrite d1/f19 [0,4194304] 0 2026-03-09T20:47:10.194 INFO:tasks.workunit.client.0.vm07.stdout:6/85: dwrite d8/d16/f17 [0,4194304] 0 2026-03-09T20:47:10.194 INFO:tasks.workunit.client.0.vm07.stdout:6/86: dread - d8/f1c zero size 2026-03-09T20:47:10.194 INFO:tasks.workunit.client.0.vm07.stdout:6/87: write d8/f14 [1213006,44524] 0 2026-03-09T20:47:10.204 INFO:tasks.workunit.client.0.vm07.stdout:2/75: symlink d2/l18 0 2026-03-09T20:47:10.204 INFO:tasks.workunit.client.0.vm07.stdout:2/76: chown d2/fe 66870 1 2026-03-09T20:47:10.205 INFO:tasks.workunit.client.0.vm07.stdout:2/77: write d2/d11/f13 [1246120,9792] 0 2026-03-09T20:47:10.205 INFO:tasks.workunit.client.0.vm07.stdout:2/78: write d2/f10 [200809,77944] 0 2026-03-09T20:47:10.208 INFO:tasks.workunit.client.0.vm07.stdout:0/42: creat d1/d2/ff x:0 0 0 2026-03-09T20:47:10.210 INFO:tasks.workunit.client.0.vm07.stdout:6/88: mknod d8/c1d 0 2026-03-09T20:47:10.210 INFO:tasks.workunit.client.0.vm07.stdout:2/79: rename d2/l6 to d2/db/l19 0 2026-03-09T20:47:10.211 INFO:tasks.workunit.client.0.vm07.stdout:2/80: chown d2/d11/l15 5723253 1 2026-03-09T20:47:10.211 INFO:tasks.workunit.client.0.vm07.stdout:2/81: dread - d2/f17 zero size 
2026-03-09T20:47:10.211 INFO:tasks.workunit.client.0.vm07.stdout:2/82: chown d2/d11 5 1 2026-03-09T20:47:10.213 INFO:tasks.workunit.client.0.vm07.stdout:8/66: getdents d1/dc 0 2026-03-09T20:47:10.213 INFO:tasks.workunit.client.0.vm07.stdout:0/43: creat d1/d2/dc/f10 x:0 0 0 2026-03-09T20:47:10.213 INFO:tasks.workunit.client.0.vm07.stdout:3/91: creat d1/d5/d10/f1a x:0 0 0 2026-03-09T20:47:10.214 INFO:tasks.workunit.client.0.vm07.stdout:0/44: dread f0 [0,4194304] 0 2026-03-09T20:47:10.214 INFO:tasks.workunit.client.0.vm07.stdout:8/67: read d1/dc/fe [388015,105604] 0 2026-03-09T20:47:10.215 INFO:tasks.workunit.client.0.vm07.stdout:8/68: dread - d1/f13 zero size 2026-03-09T20:47:10.216 INFO:tasks.workunit.client.0.vm07.stdout:2/83: mknod d2/db/c1a 0 2026-03-09T20:47:10.219 INFO:tasks.workunit.client.0.vm07.stdout:8/69: write d1/f13 [308962,51828] 0 2026-03-09T20:47:10.229 INFO:tasks.workunit.client.0.vm07.stdout:3/92: dwrite d1/d5/d9/f15 [0,4194304] 0 2026-03-09T20:47:10.232 INFO:tasks.workunit.client.0.vm07.stdout:6/89: link d8/c1d d8/c1e 0 2026-03-09T20:47:10.234 INFO:tasks.workunit.client.0.vm07.stdout:0/45: creat d1/f11 x:0 0 0 2026-03-09T20:47:10.234 INFO:tasks.workunit.client.0.vm07.stdout:6/90: creat d8/d16/f1f x:0 0 0 2026-03-09T20:47:10.235 INFO:tasks.workunit.client.0.vm07.stdout:8/70: dwrite d1/dc/fe [4194304,4194304] 0 2026-03-09T20:47:10.237 INFO:tasks.workunit.client.0.vm07.stdout:2/84: dwrite d2/fe [0,4194304] 0 2026-03-09T20:47:10.241 INFO:tasks.workunit.client.0.vm07.stdout:6/91: link d8/d16/f18 d8/f20 0 2026-03-09T20:47:10.245 INFO:tasks.workunit.client.0.vm07.stdout:6/92: dwrite f5 [0,4194304] 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/85: creat d2/db/f1b x:0 0 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:6/93: write d8/fa [2104068,123052] 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:0/46: dwrite d1/f11 [0,4194304] 0 2026-03-09T20:47:10.294 
INFO:tasks.workunit.client.0.vm07.stdout:6/94: mknod d8/c21 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:6/95: write d8/d16/f19 [867976,110175] 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/86: unlink d2/d11/f13 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/87: chown d2/fe 3 1 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:0/47: dread d1/d2/dc/fe [0,4194304] 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/88: mkdir d2/db/d1c 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:0/48: dwrite d1/d2/ff [0,4194304] 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:0/49: dread d1/d2/dc/fe [0,4194304] 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:0/50: write d1/f11 [3510604,118247] 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/89: symlink d2/d11/l1d 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/90: dread - d2/db/fd zero size 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/91: truncate d2/db/f1b 318174 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/92: write d2/db/fd [678800,21237] 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/93: dread d2/f7 [0,4194304] 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/94: write d2/db/f1b [1273957,25249] 0 2026-03-09T20:47:10.294 INFO:tasks.workunit.client.0.vm07.stdout:2/95: write d2/db/fd [1563377,34125] 0 2026-03-09T20:47:10.295 INFO:tasks.workunit.client.0.vm07.stdout:0/51: creat d1/d2/dc/f12 x:0 0 0 2026-03-09T20:47:10.295 INFO:tasks.workunit.client.0.vm07.stdout:0/52: rename d1 to d1/d2/dc/d13 22 2026-03-09T20:47:10.302 INFO:tasks.workunit.client.0.vm07.stdout:0/53: creat d1/d2/f14 x:0 0 0 2026-03-09T20:47:10.303 INFO:tasks.workunit.client.0.vm07.stdout:0/54: write d1/d2/dc/f12 [52607,19184] 0 2026-03-09T20:47:10.304 
INFO:tasks.workunit.client.0.vm07.stdout:0/55: chown d1/d2/c3 2824 1 2026-03-09T20:47:10.642 INFO:tasks.workunit.client.0.vm07.stdout:6/96: sync 2026-03-09T20:47:10.643 INFO:tasks.workunit.client.0.vm07.stdout:6/97: mkdir d8/d16/d22 0 2026-03-09T20:47:10.644 INFO:tasks.workunit.client.0.vm07.stdout:6/98: dread - d8/d16/f1f zero size 2026-03-09T20:47:10.646 INFO:tasks.workunit.client.0.vm07.stdout:6/99: write d8/f20 [1584880,13759] 0 2026-03-09T20:47:10.655 INFO:tasks.workunit.client.0.vm07.stdout:6/100: dread d8/f12 [0,4194304] 0 2026-03-09T20:47:10.655 INFO:tasks.workunit.client.0.vm07.stdout:6/101: dread - d8/f15 zero size 2026-03-09T20:47:10.655 INFO:tasks.workunit.client.0.vm07.stdout:6/102: dread d8/fa [0,4194304] 0 2026-03-09T20:47:10.655 INFO:tasks.workunit.client.0.vm07.stdout:6/103: link d8/d16/f1f d8/d16/f23 0 2026-03-09T20:47:10.656 INFO:tasks.workunit.client.0.vm07.stdout:6/104: fdatasync d8/f1c 0 2026-03-09T20:47:10.660 INFO:tasks.workunit.client.0.vm07.stdout:6/105: mkdir d8/d16/d22/d24 0 2026-03-09T20:47:10.663 INFO:tasks.workunit.client.0.vm07.stdout:6/106: creat d8/d16/d22/d24/f25 x:0 0 0 2026-03-09T20:47:10.789 INFO:tasks.workunit.client.0.vm07.stdout:4/74: getdents d2/df/d17 0 2026-03-09T20:47:10.789 INFO:tasks.workunit.client.0.vm07.stdout:9/80: write f2 [1311975,50791] 0 2026-03-09T20:47:10.789 INFO:tasks.workunit.client.0.vm07.stdout:9/81: stat d4/f17 0 2026-03-09T20:47:10.793 INFO:tasks.workunit.client.0.vm07.stdout:9/82: dwrite d4/fa [0,4194304] 0 2026-03-09T20:47:10.793 INFO:tasks.workunit.client.0.vm07.stdout:9/83: read d4/d8/fb [65335,19304] 0 2026-03-09T20:47:10.794 INFO:tasks.workunit.client.0.vm07.stdout:9/84: chown d4/d8/fd 13372 1 2026-03-09T20:47:10.799 INFO:tasks.workunit.client.0.vm07.stdout:4/75: fsync d2/f5 0 2026-03-09T20:47:10.800 INFO:tasks.workunit.client.0.vm07.stdout:4/76: read f0 [1710006,101006] 0 2026-03-09T20:47:10.806 INFO:tasks.workunit.client.0.vm07.stdout:5/99: rmdir d5 39 2026-03-09T20:47:10.807 
INFO:tasks.workunit.client.0.vm07.stdout:9/85: creat d4/d8/dc/f21 x:0 0 0 2026-03-09T20:47:10.809 INFO:tasks.workunit.client.0.vm07.stdout:4/77: mknod d2/df/c1c 0 2026-03-09T20:47:10.813 INFO:tasks.workunit.client.0.vm07.stdout:5/100: symlink d5/l1b 0 2026-03-09T20:47:10.814 INFO:tasks.workunit.client.0.vm07.stdout:5/101: readlink d5/la 0 2026-03-09T20:47:10.816 INFO:tasks.workunit.client.0.vm07.stdout:4/78: dwrite f0 [4194304,4194304] 0 2026-03-09T20:47:10.820 INFO:tasks.workunit.client.0.vm07.stdout:7/77: fsync d3/f18 0 2026-03-09T20:47:10.825 INFO:tasks.workunit.client.0.vm07.stdout:5/102: rename l1 to d5/df/d13/l1c 0 2026-03-09T20:47:10.828 INFO:tasks.workunit.client.0.vm07.stdout:5/103: rename d5/df/d13/l1a to d5/df/l1d 0 2026-03-09T20:47:10.829 INFO:tasks.workunit.client.0.vm07.stdout:5/104: dread - d5/df/d13/f17 zero size 2026-03-09T20:47:10.830 INFO:tasks.workunit.client.0.vm07.stdout:4/79: getdents d2/df/d17 0 2026-03-09T20:47:10.831 INFO:tasks.workunit.client.0.vm07.stdout:7/78: mknod d3/da/c19 0 2026-03-09T20:47:10.832 INFO:tasks.workunit.client.0.vm07.stdout:7/79: write d3/f18 [519549,20247] 0 2026-03-09T20:47:10.834 INFO:tasks.workunit.client.0.vm07.stdout:4/80: rmdir d2/df/d17 39 2026-03-09T20:47:10.838 INFO:tasks.workunit.client.0.vm07.stdout:1/63: dwrite d3/f9 [0,4194304] 0 2026-03-09T20:47:10.838 INFO:tasks.workunit.client.0.vm07.stdout:1/64: dread - d3/f8 zero size 2026-03-09T20:47:10.839 INFO:tasks.workunit.client.0.vm07.stdout:1/65: dread - d3/f8 zero size 2026-03-09T20:47:10.839 INFO:tasks.workunit.client.0.vm07.stdout:1/66: chown d3/f5 94868269 1 2026-03-09T20:47:10.839 INFO:tasks.workunit.client.0.vm07.stdout:1/67: readlink d3/l6 0 2026-03-09T20:47:10.853 INFO:tasks.workunit.client.0.vm07.stdout:5/105: mknod d5/d19/c1e 0 2026-03-09T20:47:10.854 INFO:tasks.workunit.client.0.vm07.stdout:5/106: dread - d5/df/d13/f17 zero size 2026-03-09T20:47:10.858 INFO:tasks.workunit.client.0.vm07.stdout:5/107: dwrite d5/df/d13/f17 [0,4194304] 0 
2026-03-09T20:47:10.867 INFO:tasks.workunit.client.0.vm07.stdout:4/81: write d2/f3 [7931801,58952] 0 2026-03-09T20:47:10.867 INFO:tasks.workunit.client.0.vm07.stdout:4/82: fsync d2/df/f13 0 2026-03-09T20:47:10.867 INFO:tasks.workunit.client.0.vm07.stdout:4/83: fsync d2/df/f13 0 2026-03-09T20:47:10.867 INFO:tasks.workunit.client.0.vm07.stdout:5/108: dread d5/df/d13/f17 [0,4194304] 0 2026-03-09T20:47:10.869 INFO:tasks.workunit.client.0.vm07.stdout:5/109: dwrite d5/df/f10 [0,4194304] 0 2026-03-09T20:47:10.869 INFO:tasks.workunit.client.0.vm07.stdout:1/68: mkdir d3/d12/d16 0 2026-03-09T20:47:10.871 INFO:tasks.workunit.client.0.vm07.stdout:4/84: dwrite d2/df/f16 [0,4194304] 0 2026-03-09T20:47:10.873 INFO:tasks.workunit.client.0.vm07.stdout:5/110: chown d5/df/d13/c14 95890642 1 2026-03-09T20:47:10.879 INFO:tasks.workunit.client.0.vm07.stdout:1/69: dwrite d3/d12/f13 [0,4194304] 0 2026-03-09T20:47:10.892 INFO:tasks.workunit.client.0.vm07.stdout:5/111: dread d5/df/f10 [0,4194304] 0 2026-03-09T20:47:10.900 INFO:tasks.workunit.client.0.vm07.stdout:0/56: getdents d1 0 2026-03-09T20:47:10.906 INFO:tasks.workunit.client.0.vm07.stdout:0/57: dwrite d1/d2/fb [0,4194304] 0 2026-03-09T20:47:10.907 INFO:tasks.workunit.client.0.vm07.stdout:0/58: fdatasync d1/d2/f14 0 2026-03-09T20:47:10.908 INFO:tasks.workunit.client.0.vm07.stdout:8/71: write d1/dc/fd [3577834,99877] 0 2026-03-09T20:47:10.908 INFO:tasks.workunit.client.0.vm07.stdout:8/72: write d1/fb [315053,6648] 0 2026-03-09T20:47:10.909 INFO:tasks.workunit.client.0.vm07.stdout:8/73: write d1/fb [481114,88272] 0 2026-03-09T20:47:10.909 INFO:tasks.workunit.client.0.vm07.stdout:8/74: write d1/dc/fe [6818901,107447] 0 2026-03-09T20:47:10.915 INFO:tasks.workunit.client.0.vm07.stdout:7/80: getdents d3/da/db 0 2026-03-09T20:47:10.916 INFO:tasks.workunit.client.0.vm07.stdout:4/85: dread - d2/fa zero size 2026-03-09T20:47:10.920 INFO:tasks.workunit.client.0.vm07.stdout:1/70: truncate d3/f10 1019920 0 2026-03-09T20:47:10.922 
INFO:tasks.workunit.client.0.vm07.stdout:1/71: dread d3/f5 [0,4194304] 0 2026-03-09T20:47:10.926 INFO:tasks.workunit.client.0.vm07.stdout:3/93: truncate d1/d5/d10/f13 1242420 0 2026-03-09T20:47:10.930 INFO:tasks.workunit.client.0.vm07.stdout:8/75: mkdir d1/dc/d14 0 2026-03-09T20:47:10.931 INFO:tasks.workunit.client.0.vm07.stdout:7/81: readlink d3/l4 0 2026-03-09T20:47:10.932 INFO:tasks.workunit.client.0.vm07.stdout:2/96: rmdir d2/d11 39 2026-03-09T20:47:10.935 INFO:tasks.workunit.client.0.vm07.stdout:5/112: link d5/df/f15 d5/df/d13/f1f 0 2026-03-09T20:47:10.939 INFO:tasks.workunit.client.0.vm07.stdout:5/113: chown d5/df 272781290 1 2026-03-09T20:47:10.939 INFO:tasks.workunit.client.0.vm07.stdout:5/114: dread d5/df/d13/f1f [0,4194304] 0 2026-03-09T20:47:10.939 INFO:tasks.workunit.client.0.vm07.stdout:0/59: symlink d1/d2/l15 0 2026-03-09T20:47:10.939 INFO:tasks.workunit.client.0.vm07.stdout:3/94: creat d1/d5/d9/f1b x:0 0 0 2026-03-09T20:47:10.939 INFO:tasks.workunit.client.0.vm07.stdout:3/95: chown d1/f19 338168023 1 2026-03-09T20:47:10.940 INFO:tasks.workunit.client.0.vm07.stdout:7/82: creat d3/da/db/d14/f1a x:0 0 0 2026-03-09T20:47:10.941 INFO:tasks.workunit.client.0.vm07.stdout:7/83: stat d3/da 0 2026-03-09T20:47:10.942 INFO:tasks.workunit.client.0.vm07.stdout:4/86: unlink d2/l12 0 2026-03-09T20:47:10.947 INFO:tasks.workunit.client.0.vm07.stdout:5/115: creat d5/d19/f20 x:0 0 0 2026-03-09T20:47:10.948 INFO:tasks.workunit.client.0.vm07.stdout:0/60: readlink d1/l9 0 2026-03-09T20:47:10.949 INFO:tasks.workunit.client.0.vm07.stdout:0/61: readlink d1/d2/l15 0 2026-03-09T20:47:10.951 INFO:tasks.workunit.client.0.vm07.stdout:7/84: mknod d3/da/db/d14/c1b 0 2026-03-09T20:47:10.952 INFO:tasks.workunit.client.0.vm07.stdout:2/97: creat d2/d11/f1e x:0 0 0 2026-03-09T20:47:10.952 INFO:tasks.workunit.client.0.vm07.stdout:2/98: readlink d2/d11/l1d 0 2026-03-09T20:47:10.953 INFO:tasks.workunit.client.0.vm07.stdout:2/99: write d2/d11/f1e [274864,19552] 0 2026-03-09T20:47:10.955 
INFO:tasks.workunit.client.0.vm07.stdout:8/76: link d1/la d1/dc/d14/l15 0 2026-03-09T20:47:10.958 INFO:tasks.workunit.client.0.vm07.stdout:2/100: mknod d2/db/c1f 0 2026-03-09T20:47:10.962 INFO:tasks.workunit.client.0.vm07.stdout:4/87: creat d2/f1d x:0 0 0 2026-03-09T20:47:10.963 INFO:tasks.workunit.client.0.vm07.stdout:2/101: readlink d2/db/l19 0 2026-03-09T20:47:10.964 INFO:tasks.workunit.client.0.vm07.stdout:0/62: rename d1/d2/dc/fe to d1/d2/f16 0 2026-03-09T20:47:10.964 INFO:tasks.workunit.client.0.vm07.stdout:0/63: chown d1/d2/dc 6970586 1 2026-03-09T20:47:10.966 INFO:tasks.workunit.client.0.vm07.stdout:8/77: unlink d1/dc/d14/l15 0 2026-03-09T20:47:10.978 INFO:tasks.workunit.client.0.vm07.stdout:2/102: creat d2/d11/f20 x:0 0 0 2026-03-09T20:47:10.978 INFO:tasks.workunit.client.0.vm07.stdout:2/103: fsync d2/db/fd 0 2026-03-09T20:47:10.980 INFO:tasks.workunit.client.0.vm07.stdout:0/64: mkdir d1/d2/dc/d17 0 2026-03-09T20:47:10.982 INFO:tasks.workunit.client.0.vm07.stdout:7/85: link d3/c8 d3/da/db/c1c 0 2026-03-09T20:47:10.984 INFO:tasks.workunit.client.0.vm07.stdout:2/104: symlink d2/d11/l21 0 2026-03-09T20:47:10.990 INFO:tasks.workunit.client.0.vm07.stdout:0/65: chown d1/d2/c8 2389274 1 2026-03-09T20:47:10.991 INFO:tasks.workunit.client.0.vm07.stdout:7/86: mknod d3/da/c1d 0 2026-03-09T20:47:10.994 INFO:tasks.workunit.client.0.vm07.stdout:0/66: getdents d1/d2/dc/d17 0 2026-03-09T20:47:10.998 INFO:tasks.workunit.client.0.vm07.stdout:0/67: dwrite d1/d2/dc/f12 [0,4194304] 0 2026-03-09T20:47:11.014 INFO:tasks.workunit.client.0.vm07.stdout:0/68: unlink d1/d2/fb 0 2026-03-09T20:47:11.014 INFO:tasks.workunit.client.0.vm07.stdout:0/69: readlink d1/l7 0 2026-03-09T20:47:11.014 INFO:tasks.workunit.client.0.vm07.stdout:0/70: readlink d1/l7 0 2026-03-09T20:47:11.014 INFO:tasks.workunit.client.0.vm07.stdout:0/71: readlink d1/ld 0 2026-03-09T20:47:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:10 vm10.local ceph-mon[57011]: pgmap v145: 65 pgs: 65 active+clean; 
224 MiB data, 1.9 GiB used, 118 GiB / 120 GiB avail; 1.1 MiB/s rd, 4.5 MiB/s wr, 293 op/s 2026-03-09T20:47:11.047 INFO:tasks.workunit.client.0.vm07.stdout:4/88: dread d2/f5 [0,4194304] 0 2026-03-09T20:47:11.055 INFO:tasks.workunit.client.0.vm07.stdout:6/107: rmdir d8 39 2026-03-09T20:47:11.073 INFO:tasks.workunit.client.0.vm07.stdout:5/116: truncate d5/df/d13/f17 94611 0 2026-03-09T20:47:11.074 INFO:tasks.workunit.client.0.vm07.stdout:5/117: dread - d5/d19/f20 zero size 2026-03-09T20:47:11.075 INFO:tasks.workunit.client.0.vm07.stdout:5/118: read d5/df/f15 [3885720,76174] 0 2026-03-09T20:47:11.075 INFO:tasks.workunit.client.0.vm07.stdout:5/119: dread - d5/d19/f20 zero size 2026-03-09T20:47:11.089 INFO:tasks.workunit.client.0.vm07.stdout:3/96: truncate d1/f19 510644 0 2026-03-09T20:47:11.093 INFO:tasks.workunit.client.0.vm07.stdout:8/78: write d1/f11 [4303840,93129] 0 2026-03-09T20:47:11.093 INFO:tasks.workunit.client.0.vm07.stdout:2/105: rmdir d2 39 2026-03-09T20:47:11.095 INFO:tasks.workunit.client.0.vm07.stdout:7/87: write d3/da/fe [2272730,58459] 0 2026-03-09T20:47:11.099 INFO:tasks.workunit.client.0.vm07.stdout:0/72: truncate d1/f11 399999 0 2026-03-09T20:47:11.101 INFO:tasks.workunit.client.0.vm07.stdout:0/73: dread d1/d2/f16 [0,4194304] 0 2026-03-09T20:47:11.102 INFO:tasks.workunit.client.0.vm07.stdout:8/79: fdatasync d1/f11 0 2026-03-09T20:47:11.105 INFO:tasks.workunit.client.0.vm07.stdout:8/80: dread d1/f13 [0,4194304] 0 2026-03-09T20:47:11.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:10 vm07.local ceph-mon[49120]: pgmap v145: 65 pgs: 65 active+clean; 224 MiB data, 1.9 GiB used, 118 GiB / 120 GiB avail; 1.1 MiB/s rd, 4.5 MiB/s wr, 293 op/s 2026-03-09T20:47:11.145 INFO:tasks.workunit.client.0.vm07.stdout:1/72: write d3/f10 [1369923,111043] 0 2026-03-09T20:47:11.148 INFO:tasks.workunit.client.0.vm07.stdout:1/73: dwrite d3/fa [0,4194304] 0 2026-03-09T20:47:11.149 INFO:tasks.workunit.client.0.vm07.stdout:1/74: readlink d3/l6 0 
2026-03-09T20:47:11.154 INFO:tasks.workunit.client.0.vm07.stdout:1/75: dwrite d3/fa [0,4194304] 0 2026-03-09T20:47:11.170 INFO:tasks.workunit.client.1.vm10.stderr:++ readlink -f fsstress 2026-03-09T20:47:11.175 INFO:tasks.workunit.client.1.vm10.stderr:+ BIN=/home/ubuntu/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress 2026-03-09T20:47:11.175 INFO:tasks.workunit.client.1.vm10.stderr:+ popd 2026-03-09T20:47:11.176 INFO:tasks.workunit.client.1.vm10.stdout:~/cephtest/mnt.1/client.1/tmp/fsstress ~/cephtest/mnt.1/client.1/tmp 2026-03-09T20:47:11.176 INFO:tasks.workunit.client.1.vm10.stderr:+ popd 2026-03-09T20:47:11.178 INFO:tasks.workunit.client.1.vm10.stdout:~/cephtest/mnt.1/client.1/tmp 2026-03-09T20:47:11.178 INFO:tasks.workunit.client.1.vm10.stderr:++ mktemp -d -p . 2026-03-09T20:47:11.189 INFO:tasks.workunit.client.0.vm07.stdout:5/120: symlink d5/df/l21 0 2026-03-09T20:47:11.190 INFO:tasks.workunit.client.0.vm07.stdout:8/81: sync 2026-03-09T20:47:11.190 INFO:tasks.workunit.client.0.vm07.stdout:5/121: dread - d5/d19/f20 zero size 2026-03-09T20:47:11.190 INFO:tasks.workunit.client.0.vm07.stdout:8/82: fdatasync d1/dc/fd 0 2026-03-09T20:47:11.191 INFO:tasks.workunit.client.0.vm07.stdout:8/83: read d1/f13 [216503,118658] 0 2026-03-09T20:47:11.192 INFO:tasks.workunit.client.0.vm07.stdout:4/89: dread d2/f6 [0,4194304] 0 2026-03-09T20:47:11.192 INFO:tasks.workunit.client.0.vm07.stdout:3/97: creat d1/d5/d9/f1c x:0 0 0 2026-03-09T20:47:11.196 INFO:tasks.workunit.client.0.vm07.stdout:5/122: dwrite d5/d19/f20 [0,4194304] 0 2026-03-09T20:47:11.202 INFO:tasks.workunit.client.0.vm07.stdout:3/98: dwrite d1/d5/d9/f1b [0,4194304] 0 2026-03-09T20:47:11.218 INFO:tasks.workunit.client.1.vm10.stderr:+ T=./tmp.ts7NzC5V0t 2026-03-09T20:47:11.218 INFO:tasks.workunit.client.1.vm10.stderr:+ /home/ubuntu/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress -d ./tmp.ts7NzC5V0t -l 1 -n 1000 -p 10 -v 
2026-03-09T20:47:11.218 INFO:tasks.workunit.client.0.vm07.stdout:7/88: rmdir d3/da/db 39
2026-03-09T20:47:11.219 INFO:tasks.workunit.client.0.vm07.stdout:0/74: write d1/d2/f16 [1476002,94970] 0
2026-03-09T20:47:11.221 INFO:tasks.workunit.client.0.vm07.stdout:2/106: fsync d2/db/f16 0
2026-03-09T20:47:11.225 INFO:tasks.workunit.client.0.vm07.stdout:2/107: dread - d2/f17 zero size
2026-03-09T20:47:11.226 INFO:tasks.workunit.client.0.vm07.stdout:6/108: mkdir d8/d26 0
2026-03-09T20:47:11.227 INFO:tasks.workunit.client.0.vm07.stdout:7/89: dwrite d3/f18 [0,4194304] 0
2026-03-09T20:47:11.227 INFO:tasks.workunit.client.0.vm07.stdout:2/108: fsync d2/db/f16 0
2026-03-09T20:47:11.231 INFO:tasks.workunit.client.0.vm07.stdout:2/109: stat d2/d11/l21 0
2026-03-09T20:47:11.231 INFO:tasks.workunit.client.0.vm07.stdout:0/75: dwrite d1/d2/f14 [0,4194304] 0
2026-03-09T20:47:11.231 INFO:tasks.workunit.client.0.vm07.stdout:6/109: rename d8/d16 to d8/d16/d27 22
2026-03-09T20:47:11.238 INFO:tasks.workunit.client.0.vm07.stdout:7/90: dread d3/da/f11 [4194304,4194304] 0
2026-03-09T20:47:11.242 INFO:tasks.workunit.client.0.vm07.stdout:9/86: mkdir d4/d8/dc/d22 0
2026-03-09T20:47:11.243 INFO:tasks.workunit.client.0.vm07.stdout:8/84: mkdir d1/dc/d16 0
2026-03-09T20:47:11.243 INFO:tasks.workunit.client.0.vm07.stdout:9/87: read d4/d8/fb [20121,65046] 0
2026-03-09T20:47:11.244 INFO:tasks.workunit.client.0.vm07.stdout:2/110: dwrite d2/d11/f20 [0,4194304] 0
2026-03-09T20:47:11.253 INFO:tasks.workunit.client.0.vm07.stdout:7/91: dwrite d3/da/fe [0,4194304] 0
2026-03-09T20:47:11.253 INFO:tasks.workunit.client.0.vm07.stdout:9/88: dwrite d4/d8/dc/f21 [0,4194304] 0
2026-03-09T20:47:11.256 INFO:tasks.workunit.client.0.vm07.stdout:4/90: readlink d2/df/l18 0
2026-03-09T20:47:11.257 INFO:tasks.workunit.client.0.vm07.stdout:8/85: chown d1/c6 4316742 1
2026-03-09T20:47:11.257 INFO:tasks.workunit.client.0.vm07.stdout:4/91: dread - d2/df/f13 zero size
2026-03-09T20:47:11.259 INFO:tasks.workunit.client.0.vm07.stdout:4/92: dread d2/f5 [0,4194304] 0
2026-03-09T20:47:11.264 INFO:tasks.workunit.client.0.vm07.stdout:4/93: write d2/f1d [973884,74425] 0
2026-03-09T20:47:11.267 INFO:tasks.workunit.client.0.vm07.stdout:5/123: creat d5/df/f22 x:0 0 0
2026-03-09T20:47:11.268 INFO:tasks.workunit.client.0.vm07.stdout:3/99: symlink d1/d5/d10/l1d 0
2026-03-09T20:47:11.268 INFO:tasks.workunit.client.0.vm07.stdout:3/100: truncate d1/d5/d9/f1c 496498 0
2026-03-09T20:47:11.274 INFO:tasks.workunit.client.0.vm07.stdout:1/76: getdents d3/d14 0
2026-03-09T20:47:11.275 INFO:tasks.workunit.client.1.vm10.stdout:seed = 1772815195
2026-03-09T20:47:11.275 INFO:tasks.workunit.client.0.vm07.stdout:1/77: write d3/d12/f15 [569352,84340] 0
2026-03-09T20:47:11.277 INFO:tasks.workunit.client.0.vm07.stdout:6/110: rmdir d8/d16/d22 39
2026-03-09T20:47:11.278 INFO:tasks.workunit.client.0.vm07.stdout:6/111: truncate d8/f14 2091330 0
2026-03-09T20:47:11.281 INFO:tasks.workunit.client.0.vm07.stdout:0/76: symlink d1/d2/dc/l18 0
2026-03-09T20:47:11.281 INFO:tasks.workunit.client.0.vm07.stdout:0/77: chown d1/d2/dc 64 1
2026-03-09T20:47:11.281 INFO:tasks.workunit.client.0.vm07.stdout:0/78: stat d1/ca 0
2026-03-09T20:47:11.282 INFO:tasks.workunit.client.0.vm07.stdout:0/79: chown d1/d2/f14 3914689 1
2026-03-09T20:47:11.284 INFO:tasks.workunit.client.0.vm07.stdout:0/80: read d1/d2/f14 [3559610,13401] 0
2026-03-09T20:47:11.285 INFO:tasks.workunit.client.1.vm10.stdout:0/0: stat - no entries
2026-03-09T20:47:11.285 INFO:tasks.workunit.client.1.vm10.stdout:0/1: chown . 9 1
2026-03-09T20:47:11.285 INFO:tasks.workunit.client.1.vm10.stdout:0/2: link - no file
2026-03-09T20:47:11.285 INFO:tasks.workunit.client.1.vm10.stdout:0/3: read - no filename
2026-03-09T20:47:11.285 INFO:tasks.workunit.client.1.vm10.stdout:0/4: chown . 636943773 1
2026-03-09T20:47:11.285 INFO:tasks.workunit.client.1.vm10.stdout:0/5: dread - no filename
2026-03-09T20:47:11.285 INFO:tasks.workunit.client.0.vm07.stdout:9/89: unlink d4/l9 0
2026-03-09T20:47:11.286 INFO:tasks.workunit.client.0.vm07.stdout:8/86: mknod d1/dc/c17 0
2026-03-09T20:47:11.297 INFO:tasks.workunit.client.0.vm07.stdout:9/90: write d4/d8/fd [233767,96599] 0
2026-03-09T20:47:11.298 INFO:tasks.workunit.client.0.vm07.stdout:4/94: fsync d2/f19 0
2026-03-09T20:47:11.298 INFO:tasks.workunit.client.0.vm07.stdout:9/91: fsync d4/f14 0
2026-03-09T20:47:11.298 INFO:tasks.workunit.client.0.vm07.stdout:9/92: fsync d4/d11/f1a 0
2026-03-09T20:47:11.298 INFO:tasks.workunit.client.0.vm07.stdout:0/81: dwrite d1/d2/f16 [0,4194304] 0
2026-03-09T20:47:11.300 INFO:tasks.workunit.client.0.vm07.stdout:2/111: creat d2/db/d1c/f22 x:0 0 0
2026-03-09T20:47:11.300 INFO:tasks.workunit.client.0.vm07.stdout:6/112: dread - d8/d16/d22/d24/f25 zero size
2026-03-09T20:47:11.301 INFO:tasks.workunit.client.1.vm10.stdout:0/6: creat f0 x:0 0 0
2026-03-09T20:47:11.303 INFO:tasks.workunit.client.0.vm07.stdout:8/87: creat d1/dc/d14/f18 x:0 0 0
2026-03-09T20:47:11.306 INFO:tasks.workunit.client.1.vm10.stdout:0/7: rename f0 to f1 0
2026-03-09T20:47:11.310 INFO:tasks.workunit.client.0.vm07.stdout:3/101: rename d1/d5/d9/l14 to d1/d5/d9/l1e 0
2026-03-09T20:47:11.312 INFO:tasks.workunit.client.1.vm10.stdout:1/0: symlink l0 0
2026-03-09T20:47:11.313 INFO:tasks.workunit.client.1.vm10.stdout:1/1: chown l0 27582 1
2026-03-09T20:47:11.313 INFO:tasks.workunit.client.1.vm10.stdout:1/2: truncate - no filename
2026-03-09T20:47:11.313 INFO:tasks.workunit.client.1.vm10.stdout:1/3: dwrite - no filename
2026-03-09T20:47:11.313 INFO:tasks.workunit.client.1.vm10.stdout:1/4: fdatasync - no filename
2026-03-09T20:47:11.313 INFO:tasks.workunit.client.1.vm10.stdout:1/5: write - no filename
2026-03-09T20:47:11.315 INFO:tasks.workunit.client.0.vm07.stdout:0/82: dwrite d1/d2/dc/f10 [0,4194304] 0
2026-03-09T20:47:11.318 INFO:tasks.workunit.client.0.vm07.stdout:2/112: dwrite d2/db/f16 [0,4194304] 0
2026-03-09T20:47:11.335 INFO:tasks.workunit.client.0.vm07.stdout:1/78: rmdir d3/d12/d16 0
2026-03-09T20:47:11.337 INFO:tasks.workunit.client.0.vm07.stdout:8/88: creat d1/dc/d14/f19 x:0 0 0
2026-03-09T20:47:11.337 INFO:tasks.workunit.client.1.vm10.stdout:0/8: write f1 [268800,23828] 0
2026-03-09T20:47:11.337 INFO:tasks.workunit.client.1.vm10.stdout:3/0: readlink - no filename
2026-03-09T20:47:11.337 INFO:tasks.workunit.client.1.vm10.stdout:3/1: readlink - no filename
2026-03-09T20:47:11.337 INFO:tasks.workunit.client.1.vm10.stdout:3/2: write - no filename
2026-03-09T20:47:11.339 INFO:tasks.workunit.client.0.vm07.stdout:1/79: dread d3/f9 [0,4194304] 0
2026-03-09T20:47:11.341 INFO:tasks.workunit.client.0.vm07.stdout:4/95: mknod d2/df/d17/c1e 0
2026-03-09T20:47:11.342 INFO:tasks.workunit.client.1.vm10.stdout:1/6: rename l0 to l1 0
2026-03-09T20:47:11.350 INFO:tasks.workunit.client.0.vm07.stdout:0/83: dwrite d1/d2/f14 [0,4194304] 0
2026-03-09T20:47:11.354 INFO:tasks.workunit.client.0.vm07.stdout:6/113: mkdir d8/d26/d28 0
2026-03-09T20:47:11.354 INFO:tasks.workunit.client.1.vm10.stdout:2/0: symlink l0 0
2026-03-09T20:47:11.355 INFO:tasks.workunit.client.1.vm10.stdout:0/9: mkdir d2 0
2026-03-09T20:47:11.355 INFO:tasks.workunit.client.1.vm10.stdout:2/1: dwrite - no filename
2026-03-09T20:47:11.355 INFO:tasks.workunit.client.1.vm10.stdout:2/2: write - no filename
2026-03-09T20:47:11.355 INFO:tasks.workunit.client.1.vm10.stdout:2/3: truncate - no filename
2026-03-09T20:47:11.355 INFO:tasks.workunit.client.1.vm10.stdout:2/4: rmdir - no directory
2026-03-09T20:47:11.359 INFO:tasks.workunit.client.0.vm07.stdout:1/80: rename d3/d12/f15 to d3/d14/f17 0
2026-03-09T20:47:11.360 INFO:tasks.workunit.client.0.vm07.stdout:4/96: readlink d2/l14 0
2026-03-09T20:47:11.360 INFO:tasks.workunit.client.0.vm07.stdout:0/84: dwrite d1/d2/f14 [0,4194304] 0
2026-03-09T20:47:11.361 INFO:tasks.workunit.client.1.vm10.stdout:1/7: mkdir d2 0
2026-03-09T20:47:11.362 INFO:tasks.workunit.client.0.vm07.stdout:1/81: write d3/f10 [1064628,8163] 0
2026-03-09T20:47:11.363 INFO:tasks.workunit.client.0.vm07.stdout:1/82: chown d3/f9 6 1
2026-03-09T20:47:11.370 INFO:tasks.workunit.client.0.vm07.stdout:3/102: mkdir d1/d5/d9/d11/d1f 0
2026-03-09T20:47:11.372 INFO:tasks.workunit.client.1.vm10.stdout:3/3: symlink l0 0
2026-03-09T20:47:11.385 INFO:tasks.workunit.client.1.vm10.stdout:2/5: creat f1 x:0 0 0
2026-03-09T20:47:11.400 INFO:tasks.workunit.client.1.vm10.stdout:2/6: dread - f1 zero size
2026-03-09T20:47:11.401 INFO:tasks.workunit.client.0.vm07.stdout:8/89: symlink d1/dc/d16/l1a 0
2026-03-09T20:47:11.401 INFO:tasks.workunit.client.0.vm07.stdout:4/97: stat d2/c11 0
2026-03-09T20:47:11.401 INFO:tasks.workunit.client.0.vm07.stdout:1/83: symlink d3/d14/l18 0
2026-03-09T20:47:11.401 INFO:tasks.workunit.client.1.vm10.stdout:2/7: dread - f1 zero size
2026-03-09T20:47:11.401 INFO:tasks.workunit.client.1.vm10.stdout:5/0: dread - no filename
2026-03-09T20:47:11.401 INFO:tasks.workunit.client.1.vm10.stdout:5/1: write - no filename
2026-03-09T20:47:11.401 INFO:tasks.workunit.client.1.vm10.stdout:3/4: creat f1 x:0 0 0
2026-03-09T20:47:11.401 INFO:tasks.workunit.client.1.vm10.stdout:2/8: mknod c2 0
2026-03-09T20:47:11.401 INFO:tasks.workunit.client.1.vm10.stdout:2/9: fdatasync f1 0
2026-03-09T20:47:11.404 INFO:tasks.workunit.client.0.vm07.stdout:3/103: sync
2026-03-09T20:47:11.407 INFO:tasks.workunit.client.1.vm10.stdout:4/0: symlink l0 0
2026-03-09T20:47:11.407 INFO:tasks.workunit.client.0.vm07.stdout:2/113: link d2/db/c1f d2/db/c23 0
2026-03-09T20:47:11.408 INFO:tasks.workunit.client.0.vm07.stdout:2/114: chown d2/l18 245885591 1
2026-03-09T20:47:11.411 INFO:tasks.workunit.client.1.vm10.stdout:3/5: rename f1 to f2 0
2026-03-09T20:47:11.415 INFO:tasks.workunit.client.0.vm07.stdout:8/90: unlink d1/c6 0
2026-03-09T20:47:11.416 INFO:tasks.workunit.client.0.vm07.stdout:8/91: dread - d1/dc/d14/f19 zero size
2026-03-09T20:47:11.417 INFO:tasks.workunit.client.0.vm07.stdout:6/114: link d8/f14 d8/f29 0
2026-03-09T20:47:11.417 INFO:tasks.workunit.client.0.vm07.stdout:9/93: dread d4/d8/fd [0,4194304] 0
2026-03-09T20:47:11.419 INFO:tasks.workunit.client.0.vm07.stdout:9/94: write d4/d8/f1c [438497,22744] 0
2026-03-09T20:47:11.421 INFO:tasks.workunit.client.0.vm07.stdout:9/95: dread d4/f17 [0,4194304] 0
2026-03-09T20:47:11.423 INFO:tasks.workunit.client.0.vm07.stdout:8/92: dwrite d1/dc/d14/f18 [0,4194304] 0
2026-03-09T20:47:11.465 INFO:tasks.workunit.client.1.vm10.stdout:0/10: dread f1 [0,4194304] 0
2026-03-09T20:47:11.470 INFO:tasks.workunit.client.1.vm10.stdout:5/2: symlink l0 0
2026-03-09T20:47:11.474 INFO:tasks.workunit.client.1.vm10.stdout:9/0: dwrite - no filename
2026-03-09T20:47:11.474 INFO:tasks.workunit.client.1.vm10.stdout:9/1: rmdir - no directory
2026-03-09T20:47:11.474 INFO:tasks.workunit.client.1.vm10.stdout:9/2: read - no filename
2026-03-09T20:47:11.474 INFO:tasks.workunit.client.1.vm10.stdout:9/3: truncate - no filename
2026-03-09T20:47:11.474 INFO:tasks.workunit.client.1.vm10.stdout:4/1: mkdir d1 0
2026-03-09T20:47:11.475 INFO:tasks.workunit.client.1.vm10.stdout:4/2: write - no filename
2026-03-09T20:47:11.475 INFO:tasks.workunit.client.1.vm10.stdout:4/3: chown d1 0 1
2026-03-09T20:47:11.475 INFO:tasks.workunit.client.1.vm10.stdout:4/4: write - no filename
2026-03-09T20:47:11.486 INFO:tasks.workunit.client.0.vm07.stdout:1/84: dwrite d3/fa [0,4194304] 0
2026-03-09T20:47:11.487 INFO:tasks.workunit.client.1.vm10.stdout:3/6: fdatasync f2 0
2026-03-09T20:47:11.492 INFO:tasks.workunit.client.0.vm07.stdout:9/96: read d4/d8/f1c [55774,35566] 0
2026-03-09T20:47:11.499 INFO:tasks.workunit.client.0.vm07.stdout:9/97: dread d4/f17 [0,4194304] 0
2026-03-09T20:47:11.504 INFO:tasks.workunit.client.1.vm10.stdout:6/0: creat f0 x:0 0 0
2026-03-09T20:47:11.504 INFO:tasks.workunit.client.1.vm10.stdout:5/3: creat f1 x:0 0 0
2026-03-09T20:47:11.506 INFO:tasks.workunit.client.0.vm07.stdout:8/93: rmdir d1/dc/d16 39
2026-03-09T20:47:11.509 INFO:tasks.workunit.client.0.vm07.stdout:4/98: mkdir d2/d1f 0
2026-03-09T20:47:11.513 INFO:tasks.workunit.client.1.vm10.stdout:6/1: truncate f0 144330 0
2026-03-09T20:47:11.518 INFO:tasks.workunit.client.1.vm10.stdout:7/0: write - no filename
2026-03-09T20:47:11.518 INFO:tasks.workunit.client.1.vm10.stdout:7/1: chown . 14 1
2026-03-09T20:47:11.518 INFO:tasks.workunit.client.1.vm10.stdout:7/2: chown . 536 1
2026-03-09T20:47:11.518 INFO:tasks.workunit.client.1.vm10.stdout:7/3: write - no filename
2026-03-09T20:47:11.518 INFO:tasks.workunit.client.1.vm10.stdout:7/4: fdatasync - no filename
2026-03-09T20:47:11.518 INFO:tasks.workunit.client.1.vm10.stdout:7/5: rename - no filename
2026-03-09T20:47:11.520 INFO:tasks.workunit.client.0.vm07.stdout:4/99: dwrite d2/f1d [0,4194304] 0
2026-03-09T20:47:11.523 INFO:tasks.workunit.client.1.vm10.stdout:3/7: creat f3 x:0 0 0
2026-03-09T20:47:11.529 INFO:tasks.workunit.client.1.vm10.stdout:5/4: dwrite f1 [0,4194304] 0
2026-03-09T20:47:11.531 INFO:tasks.workunit.client.1.vm10.stdout:3/8: write f3 [1034623,129837] 0
2026-03-09T20:47:11.541 INFO:tasks.workunit.client.1.vm10.stdout:0/11: mknod d2/c3 0
2026-03-09T20:47:11.542 INFO:tasks.workunit.client.1.vm10.stdout:0/12: read f1 [104778,28488] 0
2026-03-09T20:47:11.545 INFO:tasks.workunit.client.0.vm07.stdout:1/85: creat d3/d14/f19 x:0 0 0
2026-03-09T20:47:11.553 INFO:tasks.workunit.client.0.vm07.stdout:9/98: mkdir d4/d11/d23 0
2026-03-09T20:47:11.553 INFO:tasks.workunit.client.0.vm07.stdout:9/99: readlink l3 0
2026-03-09T20:47:11.553 INFO:tasks.workunit.client.1.vm10.stdout:3/9: dwrite f2 [0,4194304] 0
2026-03-09T20:47:11.553 INFO:tasks.workunit.client.1.vm10.stdout:3/10: write f3 [571753,23694] 0
2026-03-09T20:47:11.559 INFO:tasks.workunit.client.1.vm10.stdout:8/0: write - no filename
2026-03-09T20:47:11.559 INFO:tasks.workunit.client.1.vm10.stdout:8/1: fdatasync - no filename
2026-03-09T20:47:11.559 INFO:tasks.workunit.client.1.vm10.stdout:8/2: truncate - no filename
2026-03-09T20:47:11.559 INFO:tasks.workunit.client.1.vm10.stdout:8/3: chown . 0 1
2026-03-09T20:47:11.559 INFO:tasks.workunit.client.1.vm10.stdout:8/4: dwrite - no filename
2026-03-09T20:47:11.560 INFO:tasks.workunit.client.1.vm10.stdout:8/5: rename - no filename
2026-03-09T20:47:11.560 INFO:tasks.workunit.client.1.vm10.stdout:8/6: dwrite - no filename
2026-03-09T20:47:11.560 INFO:tasks.workunit.client.1.vm10.stdout:8/7: rmdir - no directory
2026-03-09T20:47:11.567 INFO:tasks.workunit.client.1.vm10.stdout:9/4: mknod c0 0
2026-03-09T20:47:11.569 INFO:tasks.workunit.client.0.vm07.stdout:3/104: creat d1/d5/f20 x:0 0 0
2026-03-09T20:47:11.569 INFO:tasks.workunit.client.0.vm07.stdout:5/124: write d5/df/f15 [4184407,89419] 0
2026-03-09T20:47:11.576 INFO:tasks.workunit.client.1.vm10.stdout:4/5: mkdir d1/d2 0
2026-03-09T20:47:11.577 INFO:tasks.workunit.client.1.vm10.stdout:5/5: mkdir d2 0
2026-03-09T20:47:11.581 INFO:tasks.workunit.client.1.vm10.stdout:0/13: rename d2/c3 to d2/c4 0
2026-03-09T20:47:11.582 INFO:tasks.workunit.client.0.vm07.stdout:1/86: dwrite d3/d14/f17 [0,4194304] 0
2026-03-09T20:47:11.583 INFO:tasks.workunit.client.1.vm10.stdout:0/14: chown f1 13614059 1
2026-03-09T20:47:11.583 INFO:tasks.workunit.client.1.vm10.stdout:0/15: stat d2 0
2026-03-09T20:47:11.584 INFO:tasks.workunit.client.1.vm10.stdout:0/16: dread f1 [0,4194304] 0
2026-03-09T20:47:11.585 INFO:tasks.workunit.client.0.vm07.stdout:9/100: sync
2026-03-09T20:47:11.586 INFO:tasks.workunit.client.0.vm07.stdout:9/101: chown f2 162 1
2026-03-09T20:47:11.589 INFO:tasks.workunit.client.0.vm07.stdout:7/92: truncate d3/da/db/f12 231847 0
2026-03-09T20:47:11.592 INFO:tasks.workunit.client.0.vm07.stdout:7/93: truncate d3/da/db/d14/f1a 258217 0
2026-03-09T20:47:11.593 INFO:tasks.workunit.client.0.vm07.stdout:7/94: truncate d3/da/db/d14/f1a 690238 0
2026-03-09T20:47:11.596 INFO:tasks.workunit.client.0.vm07.stdout:2/115: getdents d2 0
2026-03-09T20:47:11.598 INFO:tasks.workunit.client.0.vm07.stdout:3/105: link d1/d5/d10/f1a d1/d5/d9/d11/f21 0
2026-03-09T20:47:11.602 INFO:tasks.workunit.client.0.vm07.stdout:1/87: mknod d3/d14/c1a 0
2026-03-09T20:47:11.606 INFO:tasks.workunit.client.1.vm10.stdout:7/6: creat f0 x:0 0 0
2026-03-09T20:47:11.611 INFO:tasks.workunit.client.1.vm10.stdout:7/7: stat f0 0
2026-03-09T20:47:11.611 INFO:tasks.workunit.client.0.vm07.stdout:7/95: unlink d3/da/c19 0
2026-03-09T20:47:11.611 INFO:tasks.workunit.client.1.vm10.stdout:2/10: fsync f1 0
2026-03-09T20:47:11.611 INFO:tasks.workunit.client.1.vm10.stdout:0/17: creat d2/f5 x:0 0 0
2026-03-09T20:47:11.615 INFO:tasks.workunit.client.1.vm10.stdout:1/8: rename l1 to d2/l3 0
2026-03-09T20:47:11.615 INFO:tasks.workunit.client.1.vm10.stdout:1/9: write - no filename
2026-03-09T20:47:11.616 INFO:tasks.workunit.client.0.vm07.stdout:1/88: dread d3/f5 [0,4194304] 0
2026-03-09T20:47:11.616 INFO:tasks.workunit.client.0.vm07.stdout:1/89: write d3/f10 [355645,90387] 0
2026-03-09T20:47:11.619 INFO:tasks.workunit.client.1.vm10.stdout:8/8: mkdir d0 0
2026-03-09T20:47:11.619 INFO:tasks.workunit.client.1.vm10.stdout:8/9: truncate - no filename
2026-03-09T20:47:11.622 INFO:tasks.workunit.client.1.vm10.stdout:3/11: rename f2 to f4 0
2026-03-09T20:47:11.624 INFO:tasks.workunit.client.0.vm07.stdout:4/100: rename d2/cc to d2/df/c20 0
2026-03-09T20:47:11.627 INFO:tasks.workunit.client.1.vm10.stdout:2/11: dwrite f1 [0,4194304] 0
2026-03-09T20:47:11.629 INFO:tasks.workunit.client.1.vm10.stdout:0/18: dwrite d2/f5 [0,4194304] 0
2026-03-09T20:47:11.630 INFO:tasks.workunit.client.0.vm07.stdout:4/101: dwrite d2/df/f16 [4194304,4194304] 0
2026-03-09T20:47:11.631 INFO:tasks.workunit.client.0.vm07.stdout:7/96: creat d3/da/db/f1e x:0 0 0
2026-03-09T20:47:11.631 INFO:tasks.workunit.client.0.vm07.stdout:7/97: chown d3/da/db/d14 0 1
2026-03-09T20:47:11.636 INFO:tasks.workunit.client.0.vm07.stdout:7/98: read - d3/da/db/f1e zero size
2026-03-09T20:47:11.643 INFO:tasks.workunit.client.0.vm07.stdout:3/106: unlink d1/d5/d10/f13 0
2026-03-09T20:47:11.647 INFO:tasks.workunit.client.0.vm07.stdout:3/107: dwrite d1/d5/f20 [0,4194304] 0
2026-03-09T20:47:11.655 INFO:tasks.workunit.client.0.vm07.stdout:1/90: mknod d3/d14/c1b 0
2026-03-09T20:47:11.658 INFO:tasks.workunit.client.0.vm07.stdout:5/125: rename d5/df/f10 to d5/f23 0
2026-03-09T20:47:11.665 INFO:tasks.workunit.client.0.vm07.stdout:7/99: mkdir d3/da/db/d14/d1f 0
2026-03-09T20:47:11.666 INFO:tasks.workunit.client.0.vm07.stdout:7/100: write d3/da/fe [3993689,71414] 0
2026-03-09T20:47:11.668 INFO:tasks.workunit.client.0.vm07.stdout:2/116: link d2/d11/l21 d2/d11/l24 0
2026-03-09T20:47:11.672 INFO:tasks.workunit.client.0.vm07.stdout:3/108: unlink d1/d5/f17 0
2026-03-09T20:47:11.676 INFO:tasks.workunit.client.0.vm07.stdout:5/126: write d5/f23 [2744740,82699] 0
2026-03-09T20:47:11.678 INFO:tasks.workunit.client.0.vm07.stdout:0/85: truncate d1/d2/f14 775281 0
2026-03-09T20:47:11.680 INFO:tasks.workunit.client.0.vm07.stdout:6/115: write f7 [3732986,117926] 0
2026-03-09T20:47:11.680 INFO:tasks.workunit.client.1.vm10.stdout:6/2: fsync f0 0
2026-03-09T20:47:11.682 INFO:tasks.workunit.client.1.vm10.stdout:6/3: chown f0 1 1
2026-03-09T20:47:11.682 INFO:tasks.workunit.client.0.vm07.stdout:7/101: unlink c2 0
2026-03-09T20:47:11.683 INFO:tasks.workunit.client.0.vm07.stdout:2/117: rmdir d2/db 39
2026-03-09T20:47:11.687 INFO:tasks.workunit.client.0.vm07.stdout:3/109: creat d1/d5/d10/f22 x:0 0 0
2026-03-09T20:47:11.689 INFO:tasks.workunit.client.0.vm07.stdout:1/91: symlink d3/l1c 0
2026-03-09T20:47:11.689 INFO:tasks.workunit.client.0.vm07.stdout:8/94: write d1/f13 [775116,78068] 0
2026-03-09T20:47:11.690 INFO:tasks.workunit.client.0.vm07.stdout:8/95: chown d1/l10 2008107 1
2026-03-09T20:47:11.693 INFO:tasks.workunit.client.0.vm07.stdout:5/127: creat d5/df/f24 x:0 0 0
2026-03-09T20:47:11.693 INFO:tasks.workunit.client.1.vm10.stdout:6/4: dwrite f0 [0,4194304] 0
2026-03-09T20:47:11.695 INFO:tasks.workunit.client.0.vm07.stdout:4/102: creat d2/f21 x:0 0 0
2026-03-09T20:47:11.695 INFO:tasks.workunit.client.0.vm07.stdout:4/103: write d2/f21 [462585,41881] 0
2026-03-09T20:47:11.705 INFO:tasks.workunit.client.0.vm07.stdout:6/116: mkdir d8/d26/d2a 0
2026-03-09T20:47:11.708 INFO:tasks.workunit.client.0.vm07.stdout:9/102: write d4/d8/fb [940606,105076] 0
2026-03-09T20:47:11.712 INFO:tasks.workunit.client.1.vm10.stdout:6/5: dread f0 [0,4194304] 0
2026-03-09T20:47:11.715 INFO:tasks.workunit.client.1.vm10.stdout:6/6: read f0 [2148811,83654] 0
2026-03-09T20:47:11.716 INFO:tasks.workunit.client.0.vm07.stdout:8/96: chown d1/la 6850 1
2026-03-09T20:47:11.717 INFO:tasks.workunit.client.0.vm07.stdout:5/128: creat d5/f25 x:0 0 0
2026-03-09T20:47:11.718 INFO:tasks.workunit.client.0.vm07.stdout:5/129: rename d5 to d5/df/d26 22
2026-03-09T20:47:11.721 INFO:tasks.workunit.client.0.vm07.stdout:6/117: stat d8/cc 0
2026-03-09T20:47:11.723 INFO:tasks.workunit.client.0.vm07.stdout:6/118: chown d8/c21 1 1
2026-03-09T20:47:11.723 INFO:tasks.workunit.client.0.vm07.stdout:6/119: chown l1 54 1
2026-03-09T20:47:11.723 INFO:tasks.workunit.client.0.vm07.stdout:6/120: readlink d8/le 0
2026-03-09T20:47:11.724 INFO:tasks.workunit.client.1.vm10.stdout:6/7: dread f0 [0,4194304] 0
2026-03-09T20:47:11.724 INFO:tasks.workunit.client.1.vm10.stdout:6/8: rmdir - no directory
2026-03-09T20:47:11.728 INFO:tasks.workunit.client.0.vm07.stdout:9/103: rmdir d4 39
2026-03-09T20:47:11.731 INFO:tasks.workunit.client.0.vm07.stdout:3/110: creat d1/d5/d9/d11/d1f/f23 x:0 0 0
2026-03-09T20:47:11.733 INFO:tasks.workunit.client.0.vm07.stdout:5/130: mknod d5/d19/c27 0
2026-03-09T20:47:11.735 INFO:tasks.workunit.client.0.vm07.stdout:4/104: symlink d2/l22 0
2026-03-09T20:47:11.740 INFO:tasks.workunit.client.0.vm07.stdout:6/121: unlink d8/d16/f19 0
2026-03-09T20:47:11.741 INFO:tasks.workunit.client.0.vm07.stdout:4/105: dwrite d2/f19 [0,4194304] 0
2026-03-09T20:47:11.745 INFO:tasks.workunit.client.0.vm07.stdout:2/118: symlink d2/db/d1c/l25 0
2026-03-09T20:47:11.746 INFO:tasks.workunit.client.0.vm07.stdout:9/104: chown d4 804 1
2026-03-09T20:47:11.747 INFO:tasks.workunit.client.0.vm07.stdout:3/111: fsync d1/d5/d9/d11/f21 0
2026-03-09T20:47:11.748 INFO:tasks.workunit.client.0.vm07.stdout:8/97: mknod d1/c1b 0
2026-03-09T20:47:11.751 INFO:tasks.workunit.client.0.vm07.stdout:5/131: rename d5/la to d5/df/d13/l28 0
2026-03-09T20:47:11.751 INFO:tasks.workunit.client.0.vm07.stdout:6/122: dwrite f7 [0,4194304] 0
2026-03-09T20:47:11.754 INFO:tasks.workunit.client.0.vm07.stdout:6/123: chown d8/d16 18038423 1
2026-03-09T20:47:11.757 INFO:tasks.workunit.client.0.vm07.stdout:4/106: dwrite d2/df/f13 [0,4194304] 0
2026-03-09T20:47:11.762 INFO:tasks.workunit.client.0.vm07.stdout:2/119: dread - d2/f17 zero size
2026-03-09T20:47:11.766 INFO:tasks.workunit.client.0.vm07.stdout:2/120: truncate d2/db/d1c/f22 15190 0
2026-03-09T20:47:11.773 INFO:tasks.workunit.client.0.vm07.stdout:6/124: dwrite d8/d16/d22/d24/f25 [0,4194304] 0
2026-03-09T20:47:11.784 INFO:tasks.workunit.client.0.vm07.stdout:5/132: unlink d5/df/f15 0
2026-03-09T20:47:11.787 INFO:tasks.workunit.client.0.vm07.stdout:1/92: rmdir d3/d14 39
2026-03-09T20:47:11.793 INFO:tasks.workunit.client.0.vm07.stdout:2/121: creat d2/db/d1c/f26 x:0 0 0
2026-03-09T20:47:11.819 INFO:tasks.workunit.client.0.vm07.stdout:6/125: mkdir d8/d16/d22/d24/d2b 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:9/105: mkdir d4/d8/dc/d22/d24 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:1/93: dread - d3/d14/f19 zero size
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:9/106: fdatasync d4/f10 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:9/107: fdatasync d4/f10 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:3/112: link d1/l6 d1/d5/d9/d11/d1f/l24 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:9/108: truncate d4/d8/fb 1838252 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:8/98: creat d1/dc/f1c x:0 0 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:8/99: write d1/f13 [350816,70710] 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:8/100: write d1/dc/f1c [501113,91015] 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:3/113: dread d1/d5/d9/f15 [0,4194304] 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:8/101: read d1/fb [1481623,120049] 0
2026-03-09T20:47:11.820 INFO:tasks.workunit.client.0.vm07.stdout:8/102: write d1/f11 [2996596,95464] 0
2026-03-09T20:47:11.822 INFO:tasks.workunit.client.0.vm07.stdout:1/94: mknod d3/d14/c1d 0
2026-03-09T20:47:11.822 INFO:tasks.workunit.client.0.vm07.stdout:4/107: creat d2/df/f23 x:0 0 0
2026-03-09T20:47:11.823 INFO:tasks.workunit.client.0.vm07.stdout:2/122: rename d2/db/c1a to d2/d11/c27 0
2026-03-09T20:47:11.825 INFO:tasks.workunit.client.0.vm07.stdout:9/109: unlink d4/d8/d19/c1b 0
2026-03-09T20:47:11.828 INFO:tasks.workunit.client.0.vm07.stdout:1/95: creat d3/d14/f1e x:0 0 0
2026-03-09T20:47:11.828 INFO:tasks.workunit.client.0.vm07.stdout:1/96: chown d3/l6 326967 1
2026-03-09T20:47:11.831 INFO:tasks.workunit.client.0.vm07.stdout:2/123: mkdir d2/db/d28 0
2026-03-09T20:47:11.832 INFO:tasks.workunit.client.0.vm07.stdout:2/124: write d2/db/d1c/f26 [281411,83844] 0
2026-03-09T20:47:11.837 INFO:tasks.workunit.client.0.vm07.stdout:4/108: dwrite d2/f3 [0,4194304] 0
2026-03-09T20:47:11.840 INFO:tasks.workunit.client.0.vm07.stdout:1/97: fdatasync d3/f5 0
2026-03-09T20:47:11.842 INFO:tasks.workunit.client.0.vm07.stdout:6/126: getdents d8 0
2026-03-09T20:47:11.845 INFO:tasks.workunit.client.0.vm07.stdout:6/127: dread d8/d16/f17 [0,4194304] 0
2026-03-09T20:47:11.893 INFO:tasks.workunit.client.0.vm07.stdout:3/114: creat d1/d5/f25 x:0 0 0
2026-03-09T20:47:11.893 INFO:tasks.workunit.client.0.vm07.stdout:9/110: fdatasync d4/d8/dc/d15/f18 0
2026-03-09T20:47:11.893 INFO:tasks.workunit.client.0.vm07.stdout:4/109: chown d2/fa 34555000 1
2026-03-09T20:47:11.893 INFO:tasks.workunit.client.0.vm07.stdout:8/103: creat d1/f1d x:0 0 0
2026-03-09T20:47:11.893 INFO:tasks.workunit.client.0.vm07.stdout:1/98: mknod d3/d14/c1f 0
2026-03-09T20:47:11.893 INFO:tasks.workunit.client.0.vm07.stdout:6/128: rmdir d8 39
2026-03-09T20:47:11.893 INFO:tasks.workunit.client.0.vm07.stdout:4/110: dread f0 [0,4194304] 0
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:3/115: creat d1/d5/d9/d11/f26 x:0 0 0
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:6/129: stat d8/cd 0
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:8/104: dwrite d1/dc/f1c [0,4194304] 0
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:6/130: chown d8/d16/d22 3 1
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:6/131: fdatasync d8/f20 0
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:1/99: dwrite d3/d14/f17 [0,4194304] 0
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:8/105: chown d1/dc/l12 14622 1
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:4/111: dwrite f1 [0,4194304] 0
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:9/111: link d4/d8/fb d4/d8/dc/f25 0
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:4/112: truncate d2/df/f16 8766822 0
2026-03-09T20:47:11.894 INFO:tasks.workunit.client.0.vm07.stdout:6/132: getdents d8/d16/d22/d24/d2b 0
2026-03-09T20:47:11.897 INFO:tasks.workunit.client.0.vm07.stdout:6/133: dwrite d8/f15 [0,4194304] 0
2026-03-09T20:47:11.907 INFO:tasks.workunit.client.0.vm07.stdout:9/112: mkdir d4/d8/d19/d26 0
2026-03-09T20:47:11.908 INFO:tasks.workunit.client.0.vm07.stdout:9/113: write d4/d11/f13 [474773,26454] 0
2026-03-09T20:47:11.911 INFO:tasks.workunit.client.0.vm07.stdout:6/134: creat d8/d16/d22/f2c x:0 0 0
2026-03-09T20:47:11.918 INFO:tasks.workunit.client.0.vm07.stdout:6/135: unlink d8/d16/c1a 0
2026-03-09T20:47:11.926 INFO:tasks.workunit.client.0.vm07.stdout:6/136: unlink d8/fa 0
2026-03-09T20:47:11.943 INFO:tasks.workunit.client.1.vm10.stdout:5/6: symlink d2/l3 0
2026-03-09T20:47:11.945 INFO:tasks.workunit.client.1.vm10.stdout:3/12: mknod c5 0
2026-03-09T20:47:11.946 INFO:tasks.workunit.client.1.vm10.stdout:2/12: rename l0 to l3 0
2026-03-09T20:47:11.946 INFO:tasks.workunit.client.1.vm10.stdout:0/19: mknod d2/c6 0
2026-03-09T20:47:11.947 INFO:tasks.workunit.client.1.vm10.stdout:9/5: link c0 c1 0
2026-03-09T20:47:11.948 INFO:tasks.workunit.client.1.vm10.stdout:7/8: rename f0 to f1 0
2026-03-09T20:47:11.949 INFO:tasks.workunit.client.1.vm10.stdout:5/7: symlink d2/l4 0
2026-03-09T20:47:11.950 INFO:tasks.workunit.client.1.vm10.stdout:1/10: rename d2/l3 to d2/l4 0
2026-03-09T20:47:11.950 INFO:tasks.workunit.client.1.vm10.stdout:1/11: write - no filename
2026-03-09T20:47:11.951 INFO:tasks.workunit.client.1.vm10.stdout:8/10: creat d0/f1 x:0 0 0
2026-03-09T20:47:11.955 INFO:tasks.workunit.client.1.vm10.stdout:3/13: dwrite f3 [0,4194304] 0
2026-03-09T20:47:11.957 INFO:tasks.workunit.client.1.vm10.stdout:8/11: truncate d0/f1 71461 0
2026-03-09T20:47:11.957 INFO:tasks.workunit.client.1.vm10.stdout:4/6: getdents d1 0
2026-03-09T20:47:11.960 INFO:tasks.workunit.client.1.vm10.stdout:2/13: dread f1 [0,4194304] 0
2026-03-09T20:47:11.971 INFO:tasks.workunit.client.1.vm10.stdout:7/9: symlink l2 0
2026-03-09T20:47:11.971 INFO:tasks.workunit.client.1.vm10.stdout:5/8: creat d2/f5 x:0 0 0
2026-03-09T20:47:11.971 INFO:tasks.workunit.client.1.vm10.stdout:1/12: creat d2/f5 x:0 0 0
2026-03-09T20:47:11.971 INFO:tasks.workunit.client.1.vm10.stdout:9/6: chown c0 134463771 1
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:9/7: truncate - no filename
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:6/9: getdents . 0
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:7/10: creat f3 x:0 0 0
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:2/14: symlink l4 0
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:6/10: read f0 [780947,124366] 0
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:1/13: truncate d2/f5 482541 0
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:5/9: symlink d2/l6 0
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:9/8: mkdir d2 0
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:4/7: mkdir d1/d2/d3 0
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:4/8: dread - no filename
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:4/9: read - no filename
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:8/12: write d0/f1 [621166,8391] 0
2026-03-09T20:47:11.972 INFO:tasks.workunit.client.1.vm10.stdout:4/10: chown d1/d2 1012 1
2026-03-09T20:47:11.977 INFO:tasks.workunit.client.0.vm07.stdout:6/137: dread d8/f20 [0,4194304] 0
2026-03-09T20:47:11.977 INFO:tasks.workunit.client.1.vm10.stdout:8/13: creat d0/f2 x:0 0 0
2026-03-09T20:47:11.977 INFO:tasks.workunit.client.0.vm07.stdout:6/138: chown d8/d16/d22 182 1
2026-03-09T20:47:11.977 INFO:tasks.workunit.client.1.vm10.stdout:0/20: dwrite f1 [0,4194304] 0
2026-03-09T20:47:11.978 INFO:tasks.workunit.client.0.vm07.stdout:6/139: stat d8/f15 0
2026-03-09T20:47:11.980 INFO:tasks.workunit.client.1.vm10.stdout:1/14: symlink d2/l6 0
2026-03-09T20:47:11.980 INFO:tasks.workunit.client.1.vm10.stdout:8/14: chown d0/f2 187773480 1
2026-03-09T20:47:11.981 INFO:tasks.workunit.client.0.vm07.stdout:6/140: symlink d8/d16/d22/l2d 0
2026-03-09T20:47:11.984 INFO:tasks.workunit.client.1.vm10.stdout:9/9: mkdir d2/d3 0
2026-03-09T20:47:11.984 INFO:tasks.workunit.client.1.vm10.stdout:9/10: readlink - no filename
2026-03-09T20:47:11.986 INFO:tasks.workunit.client.1.vm10.stdout:2/15: dread f1 [0,4194304] 0
2026-03-09T20:47:11.986 INFO:tasks.workunit.client.1.vm10.stdout:5/10: link d2/f5 d2/f7 0
2026-03-09T20:47:11.987 INFO:tasks.workunit.client.1.vm10.stdout:7/11: dwrite f3 [0,4194304] 0
2026-03-09T20:47:11.987 INFO:tasks.workunit.client.1.vm10.stdout:5/11: truncate d2/f5 523789 0
2026-03-09T20:47:11.990 INFO:tasks.workunit.client.1.vm10.stdout:0/21: symlink d2/l7 0
2026-03-09T20:47:11.990 INFO:tasks.workunit.client.1.vm10.stdout:7/12: chown f1 110 1
2026-03-09T20:47:11.990 INFO:tasks.workunit.client.1.vm10.stdout:6/11: dwrite f0 [0,4194304] 0
2026-03-09T20:47:11.991 INFO:tasks.workunit.client.1.vm10.stdout:8/15: mknod d0/c3 0
2026-03-09T20:47:11.991 INFO:tasks.workunit.client.1.vm10.stdout:1/15: creat d2/f7 x:0 0 0
2026-03-09T20:47:11.991 INFO:tasks.workunit.client.1.vm10.stdout:7/13: read f3 [2125567,79966] 0
2026-03-09T20:47:11.992 INFO:tasks.workunit.client.1.vm10.stdout:2/16: mkdir d5 0
2026-03-09T20:47:11.993 INFO:tasks.workunit.client.1.vm10.stdout:6/12: read f0 [575224,37095] 0
2026-03-09T20:47:11.993 INFO:tasks.workunit.client.1.vm10.stdout:9/11: chown c0 68 1
2026-03-09T20:47:11.995 INFO:tasks.workunit.client.1.vm10.stdout:8/16: mknod d0/c4 0
2026-03-09T20:47:11.996 INFO:tasks.workunit.client.1.vm10.stdout:8/17: write d0/f2 [105208,42030] 0
2026-03-09T20:47:11.996 INFO:tasks.workunit.client.1.vm10.stdout:7/14: rename l2 to l4 0
2026-03-09T20:47:11.997 INFO:tasks.workunit.client.1.vm10.stdout:1/16: rename d2/f7 to d2/f8 0
2026-03-09T20:47:11.997 INFO:tasks.workunit.client.1.vm10.stdout:1/17: rename d2 to d2/d9 22
2026-03-09T20:47:11.998 INFO:tasks.workunit.client.1.vm10.stdout:1/18: stat d2/l6 0
2026-03-09T20:47:11.998 INFO:tasks.workunit.client.1.vm10.stdout:6/13: creat f1 x:0 0 0
2026-03-09T20:47:11.999 INFO:tasks.workunit.client.1.vm10.stdout:8/18: symlink d0/l5 0
2026-03-09T20:47:12.000 INFO:tasks.workunit.client.1.vm10.stdout:5/12: dwrite d2/f5 [0,4194304] 0
2026-03-09T20:47:12.000 INFO:tasks.workunit.client.1.vm10.stdout:2/17: creat d5/f6 x:0 0 0
2026-03-09T20:47:12.000 INFO:tasks.workunit.client.1.vm10.stdout:9/12: creat d2/d3/f4 x:0 0 0
2026-03-09T20:47:12.002 INFO:tasks.workunit.client.1.vm10.stdout:1/19: mkdir d2/da 0
2026-03-09T20:47:12.004 INFO:tasks.workunit.client.1.vm10.stdout:1/20: write d2/f5 [1346978,14370] 0
2026-03-09T20:47:12.014 INFO:tasks.workunit.client.1.vm10.stdout:9/13: creat d2/d3/f5 x:0 0 0
2026-03-09T20:47:12.014 INFO:tasks.workunit.client.1.vm10.stdout:9/14: chown d2/d3 71426925 1
2026-03-09T20:47:12.014 INFO:tasks.workunit.client.1.vm10.stdout:9/15: write d2/d3/f4 [243255,46473] 0
2026-03-09T20:47:12.014 INFO:tasks.workunit.client.1.vm10.stdout:8/19: dwrite d0/f2 [0,4194304] 0
2026-03-09T20:47:12.016 INFO:tasks.workunit.client.1.vm10.stdout:5/13: creat d2/f8 x:0 0 0
2026-03-09T20:47:12.016 INFO:tasks.workunit.client.1.vm10.stdout:5/14: dread - d2/f8 zero size
2026-03-09T20:47:12.016 INFO:tasks.workunit.client.1.vm10.stdout:8/20: read d0/f2 [2638392,42560] 0
2026-03-09T20:47:12.016 INFO:tasks.workunit.client.1.vm10.stdout:2/18: link d5/f6 d5/f7 0
2026-03-09T20:47:12.016 INFO:tasks.workunit.client.1.vm10.stdout:6/14: link f0 f2 0
2026-03-09T20:47:12.020 INFO:tasks.workunit.client.1.vm10.stdout:8/21: creat d0/f6 x:0 0 0
2026-03-09T20:47:12.029 INFO:tasks.workunit.client.1.vm10.stdout:6/15: mkdir d3 0
2026-03-09T20:47:12.029 INFO:tasks.workunit.client.1.vm10.stdout:7/15: dwrite f3 [0,4194304] 0
2026-03-09T20:47:12.029 INFO:tasks.workunit.client.1.vm10.stdout:1/21: link d2/l6 d2/lb 0
2026-03-09T20:47:12.030 INFO:tasks.workunit.client.1.vm10.stdout:9/16: dwrite d2/d3/f5 [0,4194304] 0
2026-03-09T20:47:12.036 INFO:tasks.workunit.client.1.vm10.stdout:9/17: readlink - no filename
2026-03-09T20:47:12.043 INFO:tasks.workunit.client.1.vm10.stdout:8/22: mknod d0/c7 0
2026-03-09T20:47:12.062 INFO:tasks.workunit.client.1.vm10.stdout:9/18: rename d2/d3/f4 to d2/f6 0
2026-03-09T20:47:12.062 INFO:tasks.workunit.client.1.vm10.stdout:8/23: rename d0/c3 to d0/c8 0
2026-03-09T20:47:12.062 INFO:tasks.workunit.client.1.vm10.stdout:1/22: getdents d2/da 0
2026-03-09T20:47:12.062 INFO:tasks.workunit.client.1.vm10.stdout:6/16: symlink d3/l4 0
2026-03-09T20:47:12.062 INFO:tasks.workunit.client.1.vm10.stdout:9/19: creat d2/d3/f7 x:0 0 0
2026-03-09T20:47:12.062 INFO:tasks.workunit.client.1.vm10.stdout:8/24: creat d0/f9 x:0 0 0
2026-03-09T20:47:12.062 INFO:tasks.workunit.client.1.vm10.stdout:1/23: symlink d2/lc 0
2026-03-09T20:47:12.062 INFO:tasks.workunit.client.1.vm10.stdout:6/17: creat d3/f5 x:0 0 0
2026-03-09T20:47:12.062 INFO:tasks.workunit.client.1.vm10.stdout:8/25: creat d0/fa x:0 0 0
2026-03-09T20:47:12.062 INFO:tasks.workunit.client.1.vm10.stdout:1/24: mknod d2/da/cd 0
2026-03-09T20:47:12.141 INFO:tasks.workunit.client.0.vm07.stdout:1/100: sync
2026-03-09T20:47:12.141 INFO:tasks.workunit.client.0.vm07.stdout:2/125: sync
2026-03-09T20:47:12.143 INFO:tasks.workunit.client.0.vm07.stdout:2/126: symlink d2/db/d1c/l29 0
2026-03-09T20:47:12.146 INFO:tasks.workunit.client.0.vm07.stdout:1/101: creat d3/d12/f20 x:0 0 0
2026-03-09T20:47:12.148 INFO:tasks.workunit.client.0.vm07.stdout:2/127: write d2/fe [1651987,115049] 0
2026-03-09T20:47:12.152 INFO:tasks.workunit.client.0.vm07.stdout:2/128: mknod d2/db/d28/c2a 0
2026-03-09T20:47:12.153 INFO:tasks.workunit.client.0.vm07.stdout:2/129: truncate d2/ff 365383 0
2026-03-09T20:47:12.157 INFO:tasks.workunit.client.0.vm07.stdout:2/130: dread d2/f7 [0,4194304] 0
2026-03-09T20:47:12.158 INFO:tasks.workunit.client.0.vm07.stdout:2/131: truncate d2/f10 2041783 0
2026-03-09T20:47:12.159 INFO:tasks.workunit.client.0.vm07.stdout:2/132: fsync d2/f10 0
2026-03-09T20:47:12.159 INFO:tasks.workunit.client.0.vm07.stdout:2/133: chown d2/ff 1 1
2026-03-09T20:47:12.162 INFO:tasks.workunit.client.0.vm07.stdout:2/134: dread d2/d11/f1e [0,4194304] 0
2026-03-09T20:47:12.200 INFO:tasks.workunit.client.0.vm07.stdout:1/102: sync
2026-03-09T20:47:12.200 INFO:tasks.workunit.client.0.vm07.stdout:1/103: chown d3/d14/c1d 3 1
2026-03-09T20:47:12.295 INFO:tasks.workunit.client.0.vm07.stdout:7/102: truncate d3/f18 4120364 0
2026-03-09T20:47:12.295 INFO:tasks.workunit.client.0.vm07.stdout:7/103: fdatasync d3/da/f11 0
2026-03-09T20:47:12.299 INFO:tasks.workunit.client.0.vm07.stdout:4/113: getdents d2 0
2026-03-09T20:47:12.303 INFO:tasks.workunit.client.0.vm07.stdout:4/114: dwrite d2/df/f13 [0,4194304] 0
2026-03-09T20:47:12.324 INFO:tasks.workunit.client.0.vm07.stdout:0/86: dwrite d1/f11 [0,4194304] 0
2026-03-09T20:47:12.324 INFO:tasks.workunit.client.0.vm07.stdout:0/87: chown d1/d2/l15 0 1
2026-03-09T20:47:12.324 INFO:tasks.workunit.client.0.vm07.stdout:0/88: symlink d1/d2/dc/d17/l19 0
2026-03-09T20:47:12.325 INFO:tasks.workunit.client.0.vm07.stdout:0/89: creat d1/f1a x:0 0 0
2026-03-09T20:47:12.325 INFO:tasks.workunit.client.0.vm07.stdout:0/90: write d1/f11 [3275027,61871] 0
2026-03-09T20:47:12.325 INFO:tasks.workunit.client.0.vm07.stdout:0/91: creat d1/d2/f1b x:0 0 0
2026-03-09T20:47:12.325 INFO:tasks.workunit.client.0.vm07.stdout:0/92: dread d1/d2/dc/f10 [0,4194304] 0
2026-03-09T20:47:12.327 INFO:tasks.workunit.client.0.vm07.stdout:0/93: dwrite d1/d2/f14 [0,4194304] 0
2026-03-09T20:47:12.338 INFO:tasks.workunit.client.0.vm07.stdout:0/94: symlink d1/l1c 0
2026-03-09T20:47:12.376 INFO:tasks.workunit.client.0.vm07.stdout:0/95: sync
2026-03-09T20:47:12.376 INFO:tasks.workunit.client.0.vm07.stdout:0/96: chown d1/f1a 22 1
2026-03-09T20:47:12.377 INFO:tasks.workunit.client.0.vm07.stdout:0/97: write d1/d2/f14 [995709,91262] 0
2026-03-09T20:47:12.378 INFO:tasks.workunit.client.0.vm07.stdout:0/98: truncate d1/f11 5065608 0
2026-03-09T20:47:12.378 INFO:tasks.workunit.client.0.vm07.stdout:0/99: write d1/f11 [1616575,79961] 0
2026-03-09T20:47:12.383 INFO:tasks.workunit.client.0.vm07.stdout:0/100: rename d1/d2/c8 to d1/d2/dc/c1d 0
2026-03-09T20:47:12.383 INFO:tasks.workunit.client.0.vm07.stdout:0/101: chown f0 959375 1
2026-03-09T20:47:12.385 INFO:tasks.workunit.client.0.vm07.stdout:0/102: rename d1/d2/dc/d17/l19 to d1/l1e 0
2026-03-09T20:47:12.386 INFO:tasks.workunit.client.0.vm07.stdout:0/103: write d1/d2/ff [3691538,114783] 0
2026-03-09T20:47:12.394 INFO:tasks.workunit.client.0.vm07.stdout:0/104: unlink d1/l6 0
2026-03-09T20:47:12.403 INFO:tasks.workunit.client.0.vm07.stdout:0/105: mkdir d1/d1f 0
2026-03-09T20:47:12.484 INFO:tasks.workunit.client.0.vm07.stdout:3/116: rmdir d1/d5/d9/d11/d1f 39
2026-03-09T20:47:12.489 INFO:tasks.workunit.client.0.vm07.stdout:3/117: creat d1/d5/d9/d11/d1f/f27 x:0 0 0
2026-03-09T20:47:12.493 INFO:tasks.workunit.client.0.vm07.stdout:3/118: dread d1/d5/d9/fe [0,4194304] 0
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:5/133: truncate d5/d19/f20 101374 0
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:5/134: chown d5/d19/c1e 1875 1
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:3/119: mknod d1/d5/d9/d11/d1f/c28 0
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:3/120: dread d1/d5/d9/f1c [0,4194304] 0
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:3/121: chown d1/d5/d9/d11 21714387 1
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:5/135: rename d5/df/l21 to d5/df/l29 0
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:3/122: rename d1 to d1/d5/d29 22
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:5/136: dwrite d5/f25 [0,4194304] 0
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:3/123: unlink d1/d5/d9/d11/d1f/f23 0
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:9/114: write d4/d8/dc/d15/f18 [1534124,12449] 0
2026-03-09T20:47:12.517 INFO:tasks.workunit.client.0.vm07.stdout:3/124: creat d1/d5/d9/d11/f2a x:0 0 0
2026-03-09T20:47:12.519 INFO:tasks.workunit.client.0.vm07.stdout:9/115: creat d4/d16/f27 x:0 0 0
2026-03-09T20:47:12.529 INFO:tasks.workunit.client.0.vm07.stdout:9/116: write d4/d11/f1a [531227,34010] 0
2026-03-09T20:47:12.529
INFO:tasks.workunit.client.0.vm07.stdout:9/117: dread - d4/d16/f27 zero size 2026-03-09T20:47:12.529 INFO:tasks.workunit.client.0.vm07.stdout:3/125: dwrite d1/d5/f25 [0,4194304] 0 2026-03-09T20:47:12.530 INFO:tasks.workunit.client.0.vm07.stdout:3/126: chown d1/d5/d10/l1d 114224 1 2026-03-09T20:47:12.535 INFO:tasks.workunit.client.0.vm07.stdout:9/118: dwrite d4/d16/f27 [0,4194304] 0 2026-03-09T20:47:12.543 INFO:tasks.workunit.client.0.vm07.stdout:9/119: creat d4/d8/d19/f28 x:0 0 0 2026-03-09T20:47:12.544 INFO:tasks.workunit.client.0.vm07.stdout:9/120: unlink d4/d8/fb 0 2026-03-09T20:47:12.544 INFO:tasks.workunit.client.0.vm07.stdout:9/121: fdatasync d4/fa 0 2026-03-09T20:47:12.553 INFO:tasks.workunit.client.0.vm07.stdout:9/122: rename d4/d8/dc/d22 to d4/d16/d29 0 2026-03-09T20:47:12.553 INFO:tasks.workunit.client.0.vm07.stdout:9/123: readlink l3 0 2026-03-09T20:47:12.555 INFO:tasks.workunit.client.0.vm07.stdout:9/124: mkdir d4/d11/d2a 0 2026-03-09T20:47:12.558 INFO:tasks.workunit.client.0.vm07.stdout:9/125: mknod d4/d8/dc/d15/c2b 0 2026-03-09T20:47:12.569 INFO:tasks.workunit.client.0.vm07.stdout:9/126: dwrite f2 [0,4194304] 0 2026-03-09T20:47:12.579 INFO:tasks.workunit.client.0.vm07.stdout:9/127: write d4/d8/fd [111823,17089] 0 2026-03-09T20:47:12.580 INFO:tasks.workunit.client.0.vm07.stdout:9/128: chown d4/d8/d19 4004191 1 2026-03-09T20:47:12.580 INFO:tasks.workunit.client.0.vm07.stdout:9/129: creat d4/d11/f2c x:0 0 0 2026-03-09T20:47:12.580 INFO:tasks.workunit.client.0.vm07.stdout:9/130: mknod d4/d16/d29/d24/c2d 0 2026-03-09T20:47:12.580 INFO:tasks.workunit.client.0.vm07.stdout:9/131: dread - d4/d11/f2c zero size 2026-03-09T20:47:12.586 INFO:tasks.workunit.client.0.vm07.stdout:9/132: link d4/f5 d4/d16/d29/d24/f2e 0 2026-03-09T20:47:12.587 INFO:tasks.workunit.client.0.vm07.stdout:3/127: sync 2026-03-09T20:47:12.713 INFO:tasks.workunit.client.0.vm07.stdout:8/106: truncate d1/dc/fe 7405814 0 2026-03-09T20:47:12.736 INFO:tasks.workunit.client.0.vm07.stdout:8/107: 
dwrite d1/dc/fd [0,4194304] 0 2026-03-09T20:47:12.737 INFO:tasks.workunit.client.0.vm07.stdout:8/108: creat d1/dc/d16/f1e x:0 0 0 2026-03-09T20:47:12.738 INFO:tasks.workunit.client.0.vm07.stdout:8/109: truncate d1/f13 1539684 0 2026-03-09T20:47:12.757 INFO:tasks.workunit.client.1.vm10.stdout:7/16: fsync f1 0 2026-03-09T20:47:12.758 INFO:tasks.workunit.client.1.vm10.stdout:2/19: link l3 d5/l8 0 2026-03-09T20:47:12.758 INFO:tasks.workunit.client.1.vm10.stdout:7/17: chown f1 67769907 1 2026-03-09T20:47:12.759 INFO:tasks.workunit.client.1.vm10.stdout:2/20: truncate d5/f7 42438 0 2026-03-09T20:47:12.762 INFO:tasks.workunit.client.1.vm10.stdout:3/14: truncate f4 1785571 0 2026-03-09T20:47:12.764 INFO:tasks.workunit.client.0.vm07.stdout:6/141: write d8/f14 [3100531,31938] 0 2026-03-09T20:47:12.764 INFO:tasks.workunit.client.1.vm10.stdout:7/18: dread f3 [0,4194304] 0 2026-03-09T20:47:12.764 INFO:tasks.workunit.client.1.vm10.stdout:7/19: chown f3 3 1 2026-03-09T20:47:12.765 INFO:tasks.workunit.client.1.vm10.stdout:7/20: chown f1 2 1 2026-03-09T20:47:12.765 INFO:tasks.workunit.client.1.vm10.stdout:7/21: dread - f1 zero size 2026-03-09T20:47:12.766 INFO:tasks.workunit.client.1.vm10.stdout:2/21: dwrite f1 [0,4194304] 0 2026-03-09T20:47:12.767 INFO:tasks.workunit.client.0.vm07.stdout:6/142: symlink d8/d16/d22/d24/l2e 0 2026-03-09T20:47:12.770 INFO:tasks.workunit.client.1.vm10.stdout:7/22: readlink l4 0 2026-03-09T20:47:12.771 INFO:tasks.workunit.client.1.vm10.stdout:2/22: mknod d5/c9 0 2026-03-09T20:47:12.772 INFO:tasks.workunit.client.0.vm07.stdout:6/143: link d8/f12 d8/d16/d22/d24/d2b/f2f 0 2026-03-09T20:47:12.773 INFO:tasks.workunit.client.0.vm07.stdout:6/144: mknod d8/d16/d22/c30 0 2026-03-09T20:47:12.774 INFO:tasks.workunit.client.0.vm07.stdout:6/145: mkdir d8/d26/d31 0 2026-03-09T20:47:12.774 INFO:tasks.workunit.client.0.vm07.stdout:6/146: chown d8/d16 164846138 1 2026-03-09T20:47:12.776 INFO:tasks.workunit.client.0.vm07.stdout:6/147: rename f7 to d8/f32 0 
2026-03-09T20:47:12.777 INFO:tasks.workunit.client.0.vm07.stdout:6/148: fdatasync d8/f1c 0 2026-03-09T20:47:12.778 INFO:tasks.workunit.client.0.vm07.stdout:6/149: mkdir d8/d16/d22/d33 0 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.0.vm07.stdout:6/150: chown d8/d16/d22/l2d 11 1 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.0.vm07.stdout:2/135: rmdir d2/db/d1c 39 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.0.vm07.stdout:2/136: fdatasync d2/db/fd 0 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.0.vm07.stdout:1/104: write d3/f4 [1701601,42227] 0 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.0.vm07.stdout:1/105: rmdir d3/d14 39 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.0.vm07.stdout:4/115: dwrite d2/f7 [0,4194304] 0 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.1.vm10.stdout:5/15: rmdir d2 39 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.1.vm10.stdout:1/25: getdents d2 0 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.1.vm10.stdout:8/26: chown d0/c8 1757 1 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.1.vm10.stdout:8/27: chown d0/f6 56 1 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.1.vm10.stdout:8/28: dread - d0/fa zero size 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.1.vm10.stdout:8/29: truncate d0/f6 558257 0 2026-03-09T20:47:12.802 INFO:tasks.workunit.client.1.vm10.stdout:1/26: dwrite d2/f5 [0,4194304] 0 2026-03-09T20:47:12.808 INFO:tasks.workunit.client.0.vm07.stdout:1/106: getdents d3 0 2026-03-09T20:47:12.809 INFO:tasks.workunit.client.1.vm10.stdout:8/30: dwrite d0/f2 [0,4194304] 0 2026-03-09T20:47:12.823 INFO:tasks.workunit.client.1.vm10.stdout:8/31: unlink d0/f2 0 2026-03-09T20:47:12.824 INFO:tasks.workunit.client.1.vm10.stdout:1/27: link d2/f8 d2/da/fe 0 2026-03-09T20:47:12.829 INFO:tasks.workunit.client.0.vm07.stdout:1/107: link d3/l6 d3/l21 0 2026-03-09T20:47:12.829 INFO:tasks.workunit.client.1.vm10.stdout:8/32: dwrite d0/f9 [0,4194304] 0 2026-03-09T20:47:12.832 
INFO:tasks.workunit.client.1.vm10.stdout:1/28: creat d2/ff x:0 0 0 2026-03-09T20:47:12.833 INFO:tasks.workunit.client.1.vm10.stdout:8/33: symlink d0/lb 0 2026-03-09T20:47:12.834 INFO:tasks.workunit.client.0.vm07.stdout:5/137: creat d5/df/d13/f2a x:0 0 0 2026-03-09T20:47:12.836 INFO:tasks.workunit.client.1.vm10.stdout:1/29: creat d2/da/f10 x:0 0 0 2026-03-09T20:47:12.836 INFO:tasks.workunit.client.1.vm10.stdout:1/30: dread - d2/f8 zero size 2026-03-09T20:47:12.837 INFO:tasks.workunit.client.1.vm10.stdout:1/31: write d2/da/f10 [847375,93652] 0 2026-03-09T20:47:12.838 INFO:tasks.workunit.client.0.vm07.stdout:8/110: sync 2026-03-09T20:47:12.844 INFO:tasks.workunit.client.1.vm10.stdout:1/32: write d2/da/f10 [418561,97351] 0 2026-03-09T20:47:12.845 INFO:tasks.workunit.client.0.vm07.stdout:5/138: dwrite d5/df/f24 [0,4194304] 0 2026-03-09T20:47:12.845 INFO:tasks.workunit.client.0.vm07.stdout:5/139: stat d5 0 2026-03-09T20:47:12.845 INFO:tasks.workunit.client.0.vm07.stdout:5/140: write d5/df/f22 [646251,52052] 0 2026-03-09T20:47:12.853 INFO:tasks.workunit.client.1.vm10.stdout:8/34: mknod d0/cc 0 2026-03-09T20:47:12.860 INFO:tasks.workunit.client.0.vm07.stdout:8/111: rename d1/l10 to d1/dc/d14/l1f 0 2026-03-09T20:47:12.861 INFO:tasks.workunit.client.1.vm10.stdout:1/33: unlink d2/ff 0 2026-03-09T20:47:12.861 INFO:tasks.workunit.client.1.vm10.stdout:8/35: symlink d0/ld 0 2026-03-09T20:47:12.861 INFO:tasks.workunit.client.1.vm10.stdout:8/36: write d0/fa [299396,65064] 0 2026-03-09T20:47:12.871 INFO:tasks.workunit.client.1.vm10.stdout:1/34: dwrite d2/da/fe [0,4194304] 0 2026-03-09T20:47:12.887 INFO:tasks.workunit.client.1.vm10.stdout:1/35: dwrite d2/f8 [0,4194304] 0 2026-03-09T20:47:12.891 INFO:tasks.workunit.client.1.vm10.stdout:1/36: creat d2/da/f11 x:0 0 0 2026-03-09T20:47:12.891 INFO:tasks.workunit.client.1.vm10.stdout:1/37: write d2/f5 [1085793,95673] 0 2026-03-09T20:47:12.903 INFO:tasks.workunit.client.1.vm10.stdout:1/38: link d2/f5 d2/da/f12 0 2026-03-09T20:47:12.906 
INFO:tasks.workunit.client.1.vm10.stdout:1/39: dread d2/da/fe [0,4194304] 0 2026-03-09T20:47:12.909 INFO:tasks.workunit.client.1.vm10.stdout:1/40: unlink d2/da/f12 0 2026-03-09T20:47:12.981 INFO:tasks.workunit.client.0.vm07.stdout:2/137: fdatasync d2/db/d1c/f26 0 2026-03-09T20:47:12.984 INFO:tasks.workunit.client.0.vm07.stdout:2/138: dread d2/d11/f20 [0,4194304] 0 2026-03-09T20:47:13.017 INFO:tasks.workunit.client.0.vm07.stdout:2/139: rmdir d2/d11 39 2026-03-09T20:47:13.017 INFO:tasks.workunit.client.0.vm07.stdout:2/140: link d2/db/f1b d2/db/d1c/f2b 0 2026-03-09T20:47:13.017 INFO:tasks.workunit.client.0.vm07.stdout:2/141: chown d2/db/d1c/l29 506800 1 2026-03-09T20:47:13.017 INFO:tasks.workunit.client.0.vm07.stdout:2/142: write d2/fe [1827127,54895] 0 2026-03-09T20:47:13.017 INFO:tasks.workunit.client.0.vm07.stdout:2/143: truncate d2/db/d1c/f26 558334 0 2026-03-09T20:47:13.017 INFO:tasks.workunit.client.0.vm07.stdout:2/144: dread d2/db/fd [0,4194304] 0 2026-03-09T20:47:13.017 INFO:tasks.workunit.client.0.vm07.stdout:2/145: creat d2/f2c x:0 0 0 2026-03-09T20:47:13.017 INFO:tasks.workunit.client.0.vm07.stdout:2/146: stat d2/db/fd 0 2026-03-09T20:47:13.017 INFO:tasks.workunit.client.0.vm07.stdout:2/147: mknod d2/db/d1c/c2d 0 2026-03-09T20:47:13.043 INFO:tasks.workunit.client.1.vm10.stdout:1/41: dread d2/da/f10 [0,4194304] 0 2026-03-09T20:47:13.046 INFO:tasks.workunit.client.1.vm10.stdout:1/42: dread d2/da/fe [0,4194304] 0 2026-03-09T20:47:13.050 INFO:tasks.workunit.client.0.vm07.stdout:2/148: dread d2/db/f1b [0,4194304] 0 2026-03-09T20:47:13.051 INFO:tasks.workunit.client.1.vm10.stdout:1/43: symlink d2/l13 0 2026-03-09T20:47:13.052 INFO:tasks.workunit.client.1.vm10.stdout:1/44: chown d2/da/f11 0 1 2026-03-09T20:47:13.054 INFO:tasks.workunit.client.0.vm07.stdout:3/128: dwrite d1/d5/d10/f1a [0,4194304] 0 2026-03-09T20:47:13.055 INFO:tasks.workunit.client.0.vm07.stdout:3/129: stat d1/d5/d9/f1c 0 2026-03-09T20:47:13.057 INFO:tasks.workunit.client.0.vm07.stdout:9/133: 
truncate d4/d16/f27 1360443 0 2026-03-09T20:47:13.062 INFO:tasks.workunit.client.0.vm07.stdout:2/149: rename d2/db/f16 to d2/db/d1c/f2e 0 2026-03-09T20:47:13.067 INFO:tasks.workunit.client.0.vm07.stdout:3/130: stat d1/d5/d9/ca 0 2026-03-09T20:47:13.067 INFO:tasks.workunit.client.1.vm10.stdout:1/45: creat d2/f14 x:0 0 0 2026-03-09T20:47:13.067 INFO:tasks.workunit.client.1.vm10.stdout:1/46: symlink d2/l15 0 2026-03-09T20:47:13.068 INFO:tasks.workunit.client.0.vm07.stdout:2/150: write d2/db/d1c/f2e [3588940,14766] 0 2026-03-09T20:47:13.069 INFO:tasks.workunit.client.0.vm07.stdout:3/131: mknod d1/d5/d10/c2b 0 2026-03-09T20:47:13.071 INFO:tasks.workunit.client.0.vm07.stdout:3/132: dread d1/d5/d9/f1b [0,4194304] 0 2026-03-09T20:47:13.079 INFO:tasks.workunit.client.0.vm07.stdout:3/133: symlink d1/l2c 0 2026-03-09T20:47:13.083 INFO:tasks.workunit.client.0.vm07.stdout:2/151: getdents d2/d11 0 2026-03-09T20:47:13.084 INFO:tasks.workunit.client.0.vm07.stdout:2/152: mknod d2/d11/c2f 0 2026-03-09T20:47:13.084 INFO:tasks.workunit.client.0.vm07.stdout:2/153: chown d2/db/d28/c2a 743485 1 2026-03-09T20:47:13.086 INFO:tasks.workunit.client.0.vm07.stdout:2/154: dread d2/db/d1c/f22 [0,4194304] 0 2026-03-09T20:47:13.096 INFO:tasks.workunit.client.0.vm07.stdout:2/155: symlink d2/d11/l30 0 2026-03-09T20:47:13.104 INFO:tasks.workunit.client.1.vm10.stdout:3/15: fsync f3 0 2026-03-09T20:47:13.104 INFO:tasks.workunit.client.1.vm10.stdout:0/22: fsync f1 0 2026-03-09T20:47:13.104 INFO:tasks.workunit.client.0.vm07.stdout:2/156: chown d2/db/d1c/c2d 41 1 2026-03-09T20:47:13.104 INFO:tasks.workunit.client.0.vm07.stdout:2/157: mknod d2/db/c31 0 2026-03-09T20:47:13.106 INFO:tasks.workunit.client.1.vm10.stdout:0/23: dwrite d2/f5 [0,4194304] 0 2026-03-09T20:47:13.109 INFO:tasks.workunit.client.0.vm07.stdout:2/158: rename d2/fe to d2/db/d28/f32 0 2026-03-09T20:47:13.116 INFO:tasks.workunit.client.1.vm10.stdout:0/24: truncate f1 249331 0 2026-03-09T20:47:13.126 
INFO:tasks.workunit.client.1.vm10.stdout:0/25: unlink d2/c6 0 2026-03-09T20:47:13.126 INFO:tasks.workunit.client.1.vm10.stdout:0/26: readlink d2/l7 0 2026-03-09T20:47:13.126 INFO:tasks.workunit.client.1.vm10.stdout:0/27: rename d2/c4 to d2/c8 0 2026-03-09T20:47:13.126 INFO:tasks.workunit.client.0.vm07.stdout:2/159: link d2/db/d1c/f26 d2/f33 0 2026-03-09T20:47:13.126 INFO:tasks.workunit.client.0.vm07.stdout:2/160: chown d2/d11/l30 2 1 2026-03-09T20:47:13.126 INFO:tasks.workunit.client.0.vm07.stdout:2/161: creat d2/db/d28/f34 x:0 0 0 2026-03-09T20:47:13.126 INFO:tasks.workunit.client.0.vm07.stdout:2/162: mknod d2/c35 0 2026-03-09T20:47:13.137 INFO:tasks.workunit.client.1.vm10.stdout:7/23: truncate f1 726431 0 2026-03-09T20:47:13.139 INFO:tasks.workunit.client.1.vm10.stdout:2/23: getdents d5 0 2026-03-09T20:47:13.139 INFO:tasks.workunit.client.1.vm10.stdout:7/24: creat f5 x:0 0 0 2026-03-09T20:47:13.139 INFO:tasks.workunit.client.1.vm10.stdout:7/25: rmdir - no directory 2026-03-09T20:47:13.141 INFO:tasks.workunit.client.0.vm07.stdout:6/151: write d8/d16/f23 [865652,104841] 0 2026-03-09T20:47:13.142 INFO:tasks.workunit.client.1.vm10.stdout:7/26: creat f6 x:0 0 0 2026-03-09T20:47:13.143 INFO:tasks.workunit.client.0.vm07.stdout:6/152: mknod d8/d16/d22/d24/c34 0 2026-03-09T20:47:13.145 INFO:tasks.workunit.client.0.vm07.stdout:6/153: creat d8/d16/d22/f35 x:0 0 0 2026-03-09T20:47:13.148 INFO:tasks.workunit.client.0.vm07.stdout:7/104: dwrite d3/f18 [0,4194304] 0 2026-03-09T20:47:13.157 INFO:tasks.workunit.client.0.vm07.stdout:2/163: dread d2/f10 [0,4194304] 0 2026-03-09T20:47:13.157 INFO:tasks.workunit.client.1.vm10.stdout:2/24: dwrite f1 [0,4194304] 0 2026-03-09T20:47:13.157 INFO:tasks.workunit.client.0.vm07.stdout:2/164: write d2/ff [1021277,101864] 0 2026-03-09T20:47:13.159 INFO:tasks.workunit.client.0.vm07.stdout:2/165: truncate d2/db/d1c/f26 1571899 0 2026-03-09T20:47:13.160 INFO:tasks.workunit.client.0.vm07.stdout:2/166: fdatasync d2/db/d28/f34 0 
2026-03-09T20:47:13.161 INFO:tasks.workunit.client.0.vm07.stdout:7/105: symlink d3/da/db/l20 0 2026-03-09T20:47:13.162 INFO:tasks.workunit.client.1.vm10.stdout:2/25: rmdir d5 39 2026-03-09T20:47:13.163 INFO:tasks.workunit.client.0.vm07.stdout:2/167: creat d2/d11/f36 x:0 0 0 2026-03-09T20:47:13.163 INFO:tasks.workunit.client.0.vm07.stdout:2/168: chown d2/f33 92 1 2026-03-09T20:47:13.164 INFO:tasks.workunit.client.0.vm07.stdout:6/154: symlink d8/d16/d22/d24/d2b/l36 0 2026-03-09T20:47:13.165 INFO:tasks.workunit.client.0.vm07.stdout:7/106: rename d3/da/lf to d3/da/db/d14/d1f/l21 0 2026-03-09T20:47:13.165 INFO:tasks.workunit.client.0.vm07.stdout:7/107: chown d3/da/db/c15 21939 1 2026-03-09T20:47:13.167 INFO:tasks.workunit.client.1.vm10.stdout:7/27: dwrite f3 [0,4194304] 0 2026-03-09T20:47:13.168 INFO:tasks.workunit.client.0.vm07.stdout:7/108: rmdir d3/da/db/d14/d1f 39 2026-03-09T20:47:13.174 INFO:tasks.workunit.client.0.vm07.stdout:6/155: dwrite d8/f32 [0,4194304] 0 2026-03-09T20:47:13.175 INFO:tasks.workunit.client.1.vm10.stdout:2/26: dread d5/f6 [0,4194304] 0 2026-03-09T20:47:13.175 INFO:tasks.workunit.client.0.vm07.stdout:2/169: link d2/db/c1f d2/c37 0 2026-03-09T20:47:13.176 INFO:tasks.workunit.client.0.vm07.stdout:7/109: dwrite d3/da/fe [0,4194304] 0 2026-03-09T20:47:13.178 INFO:tasks.workunit.client.1.vm10.stdout:2/27: dread d5/f6 [0,4194304] 0 2026-03-09T20:47:13.183 INFO:tasks.workunit.client.1.vm10.stdout:2/28: rmdir d5 39 2026-03-09T20:47:13.191 INFO:tasks.workunit.client.0.vm07.stdout:6/156: fsync f5 0 2026-03-09T20:47:13.191 INFO:tasks.workunit.client.0.vm07.stdout:2/170: dwrite d2/f7 [0,4194304] 0 2026-03-09T20:47:13.197 INFO:tasks.workunit.client.1.vm10.stdout:4/11: sync 2026-03-09T20:47:13.197 INFO:tasks.workunit.client.1.vm10.stdout:4/12: dread - no filename 2026-03-09T20:47:13.197 INFO:tasks.workunit.client.0.vm07.stdout:6/157: creat d8/d26/d2a/f37 x:0 0 0 2026-03-09T20:47:13.198 INFO:tasks.workunit.client.0.vm07.stdout:6/158: chown d8/c11 814368836 1 
2026-03-09T20:47:13.199 INFO:tasks.workunit.client.0.vm07.stdout:6/159: write d8/d16/f18 [2156405,3662] 0 2026-03-09T20:47:13.200 INFO:tasks.workunit.client.0.vm07.stdout:7/110: link d3/da/db/d14/c1b d3/da/c22 0 2026-03-09T20:47:13.200 INFO:tasks.workunit.client.1.vm10.stdout:2/29: creat d5/fa x:0 0 0 2026-03-09T20:47:13.201 INFO:tasks.workunit.client.0.vm07.stdout:7/111: chown d3/c10 6360 1 2026-03-09T20:47:13.202 INFO:tasks.workunit.client.0.vm07.stdout:6/160: rename d8/cc to d8/c38 0 2026-03-09T20:47:13.209 INFO:tasks.workunit.client.0.vm07.stdout:7/112: dwrite d3/f18 [0,4194304] 0 2026-03-09T20:47:13.214 INFO:tasks.workunit.client.1.vm10.stdout:2/30: creat d5/fb x:0 0 0 2026-03-09T20:47:13.214 INFO:tasks.workunit.client.0.vm07.stdout:6/161: dwrite d8/f29 [0,4194304] 0 2026-03-09T20:47:13.214 INFO:tasks.workunit.client.0.vm07.stdout:7/113: dread - d3/da/db/f1e zero size 2026-03-09T20:47:13.214 INFO:tasks.workunit.client.0.vm07.stdout:0/106: fsync d1/d2/ff 0 2026-03-09T20:47:13.231 INFO:tasks.workunit.client.0.vm07.stdout:0/107: mkdir d1/d1f/d20 0 2026-03-09T20:47:13.235 INFO:tasks.workunit.client.1.vm10.stdout:2/31: link l4 d5/lc 0 2026-03-09T20:47:13.235 INFO:tasks.workunit.client.1.vm10.stdout:9/20: dread d2/f6 [0,4194304] 0 2026-03-09T20:47:13.235 INFO:tasks.workunit.client.0.vm07.stdout:0/108: dread d1/d2/dc/f10 [0,4194304] 0 2026-03-09T20:47:13.237 INFO:tasks.workunit.client.0.vm07.stdout:0/109: creat d1/d1f/d20/f21 x:0 0 0 2026-03-09T20:47:13.238 INFO:tasks.workunit.client.0.vm07.stdout:0/110: mknod d1/d1f/d20/c22 0 2026-03-09T20:47:13.240 INFO:tasks.workunit.client.0.vm07.stdout:0/111: creat d1/d2/dc/d17/f23 x:0 0 0 2026-03-09T20:47:13.241 INFO:tasks.workunit.client.0.vm07.stdout:0/112: mknod d1/c24 0 2026-03-09T20:47:13.241 INFO:tasks.workunit.client.0.vm07.stdout:0/113: write d1/f1a [926824,13348] 0 2026-03-09T20:47:13.246 INFO:tasks.workunit.client.0.vm07.stdout:0/114: dwrite d1/d2/f16 [4194304,4194304] 0 2026-03-09T20:47:13.250 
INFO:tasks.workunit.client.0.vm07.stdout:0/115: chown d1/l7 12798994 1 2026-03-09T20:47:13.255 INFO:tasks.workunit.client.0.vm07.stdout:0/116: rename d1/l7 to d1/d2/dc/l25 0 2026-03-09T20:47:13.258 INFO:tasks.workunit.client.0.vm07.stdout:0/117: rename d1/d2/dc/c1d to d1/d2/c26 0 2026-03-09T20:47:13.265 INFO:tasks.workunit.client.0.vm07.stdout:0/118: dwrite d1/d2/f14 [0,4194304] 0 2026-03-09T20:47:13.268 INFO:tasks.workunit.client.0.vm07.stdout:0/119: symlink d1/d2/l27 0 2026-03-09T20:47:13.269 INFO:tasks.workunit.client.0.vm07.stdout:0/120: rename d1/c24 to d1/d2/dc/c28 0 2026-03-09T20:47:13.273 INFO:tasks.workunit.client.0.vm07.stdout:0/121: dread d1/d2/f16 [4194304,4194304] 0 2026-03-09T20:47:13.279 INFO:tasks.workunit.client.0.vm07.stdout:0/122: stat d1/d1f/d20/f21 0 2026-03-09T20:47:13.279 INFO:tasks.workunit.client.0.vm07.stdout:0/123: dread d1/d2/f14 [0,4194304] 0 2026-03-09T20:47:13.279 INFO:tasks.workunit.client.0.vm07.stdout:0/124: read d1/d2/dc/f12 [3768369,76264] 0 2026-03-09T20:47:13.280 INFO:tasks.workunit.client.0.vm07.stdout:0/125: truncate d1/d2/dc/d17/f23 634143 0 2026-03-09T20:47:13.282 INFO:tasks.workunit.client.0.vm07.stdout:0/126: mknod d1/d2/dc/d17/c29 0 2026-03-09T20:47:13.289 INFO:tasks.workunit.client.0.vm07.stdout:0/127: creat d1/f2a x:0 0 0 2026-03-09T20:47:13.289 INFO:tasks.workunit.client.0.vm07.stdout:0/128: dwrite d1/d2/f16 [4194304,4194304] 0 2026-03-09T20:47:13.294 INFO:tasks.workunit.client.0.vm07.stdout:0/129: dwrite d1/d2/f1b [0,4194304] 0 2026-03-09T20:47:13.301 INFO:tasks.workunit.client.0.vm07.stdout:0/130: mknod d1/d2/dc/d17/c2b 0 2026-03-09T20:47:13.305 INFO:tasks.workunit.client.0.vm07.stdout:0/131: creat d1/d1f/d20/f2c x:0 0 0 2026-03-09T20:47:13.348 INFO:tasks.workunit.client.1.vm10.stdout:6/18: sync 2026-03-09T20:47:13.348 INFO:tasks.workunit.client.1.vm10.stdout:6/19: truncate d3/f5 944275 0 2026-03-09T20:47:13.349 INFO:tasks.workunit.client.1.vm10.stdout:6/20: chown d3 1632 1 2026-03-09T20:47:13.349 
INFO:tasks.workunit.client.1.vm10.stdout:6/21: write f1 [120886,34344] 0 2026-03-09T20:47:13.351 INFO:tasks.workunit.client.1.vm10.stdout:6/22: symlink d3/l6 0 2026-03-09T20:47:13.352 INFO:tasks.workunit.client.1.vm10.stdout:6/23: rename d3/f5 to d3/f7 0 2026-03-09T20:47:13.397 INFO:tasks.workunit.client.1.vm10.stdout:7/28: sync 2026-03-09T20:47:13.400 INFO:tasks.workunit.client.1.vm10.stdout:9/21: sync 2026-03-09T20:47:13.400 INFO:tasks.workunit.client.1.vm10.stdout:5/16: sync 2026-03-09T20:47:13.400 INFO:tasks.workunit.client.1.vm10.stdout:4/13: sync 2026-03-09T20:47:13.400 INFO:tasks.workunit.client.1.vm10.stdout:4/14: truncate - no filename 2026-03-09T20:47:13.402 INFO:tasks.workunit.client.1.vm10.stdout:4/15: rename d1 to d1/d2/d3/d4 22 2026-03-09T20:47:13.402 INFO:tasks.workunit.client.1.vm10.stdout:4/16: fdatasync - no filename 2026-03-09T20:47:13.402 INFO:tasks.workunit.client.1.vm10.stdout:4/17: dread - no filename 2026-03-09T20:47:13.402 INFO:tasks.workunit.client.1.vm10.stdout:4/18: truncate - no filename 2026-03-09T20:47:13.402 INFO:tasks.workunit.client.1.vm10.stdout:4/19: dread - no filename 2026-03-09T20:47:13.402 INFO:tasks.workunit.client.1.vm10.stdout:4/20: dread - no filename 2026-03-09T20:47:13.406 INFO:tasks.workunit.client.1.vm10.stdout:9/22: dread d2/d3/f5 [0,4194304] 0 2026-03-09T20:47:13.406 INFO:tasks.workunit.client.1.vm10.stdout:4/21: mknod d1/d2/d3/c5 0 2026-03-09T20:47:13.406 INFO:tasks.workunit.client.1.vm10.stdout:4/22: dread - no filename 2026-03-09T20:47:13.406 INFO:tasks.workunit.client.1.vm10.stdout:9/23: chown d2/f6 41924 1 2026-03-09T20:47:13.406 INFO:tasks.workunit.client.1.vm10.stdout:4/23: chown d1/d2 1 1 2026-03-09T20:47:13.406 INFO:tasks.workunit.client.1.vm10.stdout:4/24: fsync - no filename 2026-03-09T20:47:13.406 INFO:tasks.workunit.client.1.vm10.stdout:4/25: truncate - no filename 2026-03-09T20:47:13.409 INFO:tasks.workunit.client.1.vm10.stdout:4/26: rename l0 to d1/d2/d3/l6 0 2026-03-09T20:47:13.410 
INFO:tasks.workunit.client.1.vm10.stdout:4/27: dread - no filename 2026-03-09T20:47:13.411 INFO:tasks.workunit.client.1.vm10.stdout:7/29: dwrite f5 [0,4194304] 0 2026-03-09T20:47:13.413 INFO:tasks.workunit.client.1.vm10.stdout:5/17: dread d2/f5 [0,4194304] 0 2026-03-09T20:47:13.413 INFO:tasks.workunit.client.1.vm10.stdout:9/24: link c0 d2/c8 0 2026-03-09T20:47:13.414 INFO:tasks.workunit.client.1.vm10.stdout:7/30: stat f1 0 2026-03-09T20:47:13.414 INFO:tasks.workunit.client.1.vm10.stdout:5/18: read d2/f7 [3636213,78861] 0 2026-03-09T20:47:13.422 INFO:tasks.workunit.client.1.vm10.stdout:7/31: mknod c7 0 2026-03-09T20:47:13.422 INFO:tasks.workunit.client.1.vm10.stdout:9/25: symlink d2/d3/l9 0 2026-03-09T20:47:13.422 INFO:tasks.workunit.client.1.vm10.stdout:7/32: readlink l4 0 2026-03-09T20:47:13.423 INFO:tasks.workunit.client.1.vm10.stdout:9/26: chown d2/d3 6 1 2026-03-09T20:47:13.423 INFO:tasks.workunit.client.1.vm10.stdout:5/19: write d2/f5 [2173967,121008] 0 2026-03-09T20:47:13.423 INFO:tasks.workunit.client.1.vm10.stdout:7/33: creat f8 x:0 0 0 2026-03-09T20:47:13.423 INFO:tasks.workunit.client.1.vm10.stdout:9/27: write d2/d3/f7 [496211,37776] 0 2026-03-09T20:47:13.424 INFO:tasks.workunit.client.1.vm10.stdout:9/28: chown d2 260618 1 2026-03-09T20:47:13.424 INFO:tasks.workunit.client.1.vm10.stdout:9/29: chown d2/d3/l9 4735495 1 2026-03-09T20:47:13.432 INFO:tasks.workunit.client.1.vm10.stdout:7/34: chown l4 114011 1 2026-03-09T20:47:13.432 INFO:tasks.workunit.client.1.vm10.stdout:5/20: rmdir d2 39 2026-03-09T20:47:13.434 INFO:tasks.workunit.client.1.vm10.stdout:5/21: read d2/f7 [3540187,47428] 0 2026-03-09T20:47:13.434 INFO:tasks.workunit.client.1.vm10.stdout:7/35: mknod c9 0 2026-03-09T20:47:13.435 INFO:tasks.workunit.client.1.vm10.stdout:5/22: mknod d2/c9 0 2026-03-09T20:47:13.437 INFO:tasks.workunit.client.1.vm10.stdout:5/23: mknod d2/ca 0 2026-03-09T20:47:13.445 INFO:tasks.workunit.client.1.vm10.stdout:7/36: dwrite f6 [0,4194304] 0 2026-03-09T20:47:13.456 
INFO:tasks.workunit.client.1.vm10.stdout:5/24: dwrite d2/f5 [0,4194304] 0 2026-03-09T20:47:13.466 INFO:tasks.workunit.client.1.vm10.stdout:7/37: dread f5 [0,4194304] 0 2026-03-09T20:47:13.467 INFO:tasks.workunit.client.1.vm10.stdout:5/25: creat d2/fb x:0 0 0 2026-03-09T20:47:13.467 INFO:tasks.workunit.client.1.vm10.stdout:7/38: creat fa x:0 0 0 2026-03-09T20:47:13.468 INFO:tasks.workunit.client.1.vm10.stdout:5/26: unlink d2/l3 0 2026-03-09T20:47:13.468 INFO:tasks.workunit.client.1.vm10.stdout:5/27: read - d2/fb zero size 2026-03-09T20:47:13.468 INFO:tasks.workunit.client.1.vm10.stdout:7/39: mkdir db 0 2026-03-09T20:47:13.469 INFO:tasks.workunit.client.1.vm10.stdout:5/28: symlink d2/lc 0 2026-03-09T20:47:13.470 INFO:tasks.workunit.client.1.vm10.stdout:7/40: mknod db/cc 0 2026-03-09T20:47:13.470 INFO:tasks.workunit.client.1.vm10.stdout:5/29: truncate d2/f8 1005992 0 2026-03-09T20:47:13.473 INFO:tasks.workunit.client.1.vm10.stdout:7/41: rename f6 to db/fd 0 2026-03-09T20:47:13.473 INFO:tasks.workunit.client.1.vm10.stdout:7/42: write f3 [542116,116732] 0 2026-03-09T20:47:13.474 INFO:tasks.workunit.client.1.vm10.stdout:7/43: chown c9 643121 1 2026-03-09T20:47:13.510 INFO:tasks.workunit.client.0.vm07.stdout:8/112: getdents d1 0 2026-03-09T20:47:13.511 INFO:tasks.workunit.client.1.vm10.stdout:9/30: sync 2026-03-09T20:47:13.513 INFO:tasks.workunit.client.0.vm07.stdout:8/113: dread d1/dc/f1c [0,4194304] 0 2026-03-09T20:47:13.520 INFO:tasks.workunit.client.1.vm10.stdout:8/37: getdents d0 0 2026-03-09T20:47:13.521 INFO:tasks.workunit.client.0.vm07.stdout:8/114: creat d1/f20 x:0 0 0 2026-03-09T20:47:13.523 INFO:tasks.workunit.client.0.vm07.stdout:1/108: mknod d3/d14/c22 0 2026-03-09T20:47:13.532 INFO:tasks.workunit.client.1.vm10.stdout:1/47: getdents d2/da 0 2026-03-09T20:47:13.532 INFO:tasks.workunit.client.1.vm10.stdout:8/38: dwrite d0/fa [0,4194304] 0 2026-03-09T20:47:13.532 INFO:tasks.workunit.client.0.vm07.stdout:1/109: chown d3/d14/c1d 2816 1 2026-03-09T20:47:13.532 
INFO:tasks.workunit.client.0.vm07.stdout:8/115: stat d1/f1d 0
2026-03-09T20:47:13.532 INFO:tasks.workunit.client.0.vm07.stdout:1/110: dread - d3/fc zero size
2026-03-09T20:47:13.532 INFO:tasks.workunit.client.0.vm07.stdout:8/116: stat d1/fb 0
2026-03-09T20:47:13.535 INFO:tasks.workunit.client.1.vm10.stdout:1/48: mknod d2/da/c16 0
2026-03-09T20:47:13.538 INFO:tasks.workunit.client.0.vm07.stdout:1/111: write d3/fa [486690,26678] 0
2026-03-09T20:47:13.540 INFO:tasks.workunit.client.1.vm10.stdout:8/39: creat d0/fe x:0 0 0
2026-03-09T20:47:13.543 INFO:tasks.workunit.client.0.vm07.stdout:1/112: dread - d3/f8 zero size
2026-03-09T20:47:13.546 INFO:tasks.workunit.client.1.vm10.stdout:8/40: rename d0/c4 to d0/cf 0
2026-03-09T20:47:13.547 INFO:tasks.workunit.client.0.vm07.stdout:9/134: chown d4/d8/dc/l12 109804 1
2026-03-09T20:47:13.548 INFO:tasks.workunit.client.1.vm10.stdout:1/49: sync
2026-03-09T20:47:13.549 INFO:tasks.workunit.client.1.vm10.stdout:8/41: write d0/f9 [2370630,48745] 0
2026-03-09T20:47:13.550 INFO:tasks.workunit.client.0.vm07.stdout:9/135: creat d4/d11/d23/f2f x:0 0 0
2026-03-09T20:47:13.552 INFO:tasks.workunit.client.0.vm07.stdout:1/113: dwrite d3/f4 [0,4194304] 0
2026-03-09T20:47:13.560 INFO:tasks.workunit.client.0.vm07.stdout:9/136: creat d4/d8/dc/d15/f30 x:0 0 0
2026-03-09T20:47:13.561 INFO:tasks.workunit.client.0.vm07.stdout:9/137: write f2 [1987408,43212] 0
2026-03-09T20:47:13.561 INFO:tasks.workunit.client.0.vm07.stdout:1/114: mkdir d3/d23 0
2026-03-09T20:47:13.561 INFO:tasks.workunit.client.1.vm10.stdout:8/42: dread d0/f1 [0,4194304] 0
2026-03-09T20:47:13.562 INFO:tasks.workunit.client.1.vm10.stdout:1/50: stat d2/lb 0
2026-03-09T20:47:13.566 INFO:tasks.workunit.client.1.vm10.stdout:8/43: creat d0/f10 x:0 0 0
2026-03-09T20:47:13.571 INFO:tasks.workunit.client.0.vm07.stdout:9/138: fdatasync d4/d8/dc/f25 0
2026-03-09T20:47:13.583 INFO:tasks.workunit.client.1.vm10.stdout:8/44: sync
2026-03-09T20:47:13.585 INFO:tasks.workunit.client.0.vm07.stdout:3/134: dwrite d1/d5/d9/f15 [4194304,4194304] 0
2026-03-09T20:47:13.586 INFO:tasks.workunit.client.1.vm10.stdout:8/45: read d0/fa [1035912,40487] 0
2026-03-09T20:47:13.586 INFO:tasks.workunit.client.1.vm10.stdout:8/46: stat d0/f10 0
2026-03-09T20:47:13.587 INFO:tasks.workunit.client.1.vm10.stdout:1/51: creat d2/f17 x:0 0 0
2026-03-09T20:47:13.592 INFO:tasks.workunit.client.1.vm10.stdout:8/47: creat d0/f11 x:0 0 0
2026-03-09T20:47:13.605 INFO:tasks.workunit.client.0.vm07.stdout:1/115: rename f2 to d3/f24 0
2026-03-09T20:47:13.605 INFO:tasks.workunit.client.0.vm07.stdout:1/116: write d3/d14/f1e [958881,43153] 0
2026-03-09T20:47:13.605 INFO:tasks.workunit.client.0.vm07.stdout:3/135: symlink d1/l2d 0
2026-03-09T20:47:13.605 INFO:tasks.workunit.client.0.vm07.stdout:1/117: dread d3/f4 [0,4194304] 0
2026-03-09T20:47:13.605 INFO:tasks.workunit.client.0.vm07.stdout:3/136: dwrite d1/d5/d9/d11/f26 [0,4194304] 0
2026-03-09T20:47:13.605 INFO:tasks.workunit.client.0.vm07.stdout:3/137: chown d1 3180827 1
2026-03-09T20:47:13.605 INFO:tasks.workunit.client.0.vm07.stdout:1/118: rmdir d3/d12 39
2026-03-09T20:47:13.605 INFO:tasks.workunit.client.0.vm07.stdout:1/119: write d3/d14/f1e [246144,46174] 0
2026-03-09T20:47:13.607 INFO:tasks.workunit.client.0.vm07.stdout:3/138: rename d1/d5/f20 to d1/d5/d10/f2e 0
2026-03-09T20:47:13.607 INFO:tasks.workunit.client.0.vm07.stdout:1/120: write d3/f10 [2489262,37913] 0
2026-03-09T20:47:13.608 INFO:tasks.workunit.client.0.vm07.stdout:1/121: creat d3/d14/f25 x:0 0 0
2026-03-09T20:47:13.609 INFO:tasks.workunit.client.0.vm07.stdout:3/139: mkdir d1/d5/d9/d2f 0
2026-03-09T20:47:13.611 INFO:tasks.workunit.client.0.vm07.stdout:3/140: creat d1/d5/d10/f30 x:0 0 0
2026-03-09T20:47:13.613 INFO:tasks.workunit.client.1.vm10.stdout:1/52: sync
2026-03-09T20:47:13.615 INFO:tasks.workunit.client.0.vm07.stdout:1/122: dwrite d3/fa [4194304,4194304] 0
2026-03-09T20:47:13.622 INFO:tasks.workunit.client.0.vm07.stdout:3/141: dwrite d1/d5/d10/f22 [0,4194304] 0
2026-03-09T20:47:13.632 INFO:tasks.workunit.client.0.vm07.stdout:2/171: dread d2/ff [0,4194304] 0
2026-03-09T20:47:13.633 INFO:tasks.workunit.client.0.vm07.stdout:2/172: dread d2/db/d1c/f2b [0,4194304] 0
2026-03-09T20:47:13.633 INFO:tasks.workunit.client.0.vm07.stdout:2/173: chown d2/db/d1c/c2d 113809709 1
2026-03-09T20:47:13.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:13 vm07.local ceph-mon[49120]: pgmap v146: 65 pgs: 65 active+clean; 278 MiB data, 2.0 GiB used, 118 GiB / 120 GiB avail; 1.0 MiB/s rd, 11 MiB/s wr, 302 op/s
2026-03-09T20:47:13.639 INFO:tasks.workunit.client.0.vm07.stdout:2/174: dwrite d2/f2c [0,4194304] 0
2026-03-09T20:47:13.643 INFO:tasks.workunit.client.0.vm07.stdout:1/123: truncate d3/f8 416423 0
2026-03-09T20:47:13.655 INFO:tasks.workunit.client.1.vm10.stdout:9/31: dread d2/d3/f7 [0,4194304] 0
2026-03-09T20:47:13.657 INFO:tasks.workunit.client.0.vm07.stdout:2/175: truncate d2/db/d1c/f22 405883 0
2026-03-09T20:47:13.658 INFO:tasks.workunit.client.0.vm07.stdout:2/176: fdatasync d2/f10 0
2026-03-09T20:47:13.659 INFO:tasks.workunit.client.0.vm07.stdout:4/116: dread d2/f9 [0,4194304] 0
2026-03-09T20:47:13.663 INFO:tasks.workunit.client.0.vm07.stdout:6/162: fdatasync d8/f20 0
2026-03-09T20:47:13.667 INFO:tasks.workunit.client.0.vm07.stdout:2/177: read d2/db/d1c/f26 [515887,110685] 0
2026-03-09T20:47:13.670 INFO:tasks.workunit.client.0.vm07.stdout:2/178: write d2/f2c [284792,65815] 0
2026-03-09T20:47:13.670 INFO:tasks.workunit.client.0.vm07.stdout:2/179: readlink d2/db/lc 0
2026-03-09T20:47:13.729 INFO:tasks.workunit.client.1.vm10.stdout:0/28: fsync f1 0
2026-03-09T20:47:13.737 INFO:tasks.workunit.client.1.vm10.stdout:0/29: dwrite d2/f5 [0,4194304] 0
2026-03-09T20:47:13.745 INFO:tasks.workunit.client.1.vm10.stdout:1/53: symlink d2/da/l18 0
2026-03-09T20:47:13.750 INFO:tasks.workunit.client.1.vm10.stdout:1/54: fsync d2/da/f10 0
2026-03-09T20:47:13.751 INFO:tasks.workunit.client.1.vm10.stdout:9/32: dwrite d2/d3/f5 [0,4194304] 0
2026-03-09T20:47:13.752 INFO:tasks.workunit.client.1.vm10.stdout:1/55: creat d2/f19 x:0 0 0
2026-03-09T20:47:13.760 INFO:tasks.workunit.client.1.vm10.stdout:9/33: dwrite d2/d3/f7 [0,4194304] 0
2026-03-09T20:47:13.762 INFO:tasks.workunit.client.1.vm10.stdout:9/34: chown c0 190139 1
2026-03-09T20:47:13.767 INFO:tasks.workunit.client.1.vm10.stdout:9/35: creat d2/d3/fa x:0 0 0
2026-03-09T20:47:13.774 INFO:tasks.workunit.client.1.vm10.stdout:9/36: mknod d2/cb 0
2026-03-09T20:47:13.782 INFO:tasks.workunit.client.1.vm10.stdout:7/44: truncate f1 253320 0
2026-03-09T20:47:13.782 INFO:tasks.workunit.client.1.vm10.stdout:3/16: write f4 [192493,10648] 0
2026-03-09T20:47:13.784 INFO:tasks.workunit.client.1.vm10.stdout:9/37: creat d2/fc x:0 0 0
2026-03-09T20:47:13.785 INFO:tasks.workunit.client.1.vm10.stdout:7/45: rmdir db 39
2026-03-09T20:47:13.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:13 vm10.local ceph-mon[57011]: pgmap v146: 65 pgs: 65 active+clean; 278 MiB data, 2.0 GiB used, 118 GiB / 120 GiB avail; 1.0 MiB/s rd, 11 MiB/s wr, 302 op/s
2026-03-09T20:47:13.795 INFO:tasks.workunit.client.1.vm10.stdout:3/17: rename f3 to f6 0
2026-03-09T20:47:13.795 INFO:tasks.workunit.client.1.vm10.stdout:3/18: creat f7 x:0 0 0
2026-03-09T20:47:13.795 INFO:tasks.workunit.client.1.vm10.stdout:9/38: dread d2/f6 [0,4194304] 0
2026-03-09T20:47:13.795 INFO:tasks.workunit.client.1.vm10.stdout:9/39: unlink d2/cb 0
2026-03-09T20:47:13.798 INFO:tasks.workunit.client.1.vm10.stdout:9/40: creat d2/d3/fd x:0 0 0
2026-03-09T20:47:13.799 INFO:tasks.workunit.client.1.vm10.stdout:7/46: dwrite f5 [0,4194304] 0
2026-03-09T20:47:13.800 INFO:tasks.workunit.client.1.vm10.stdout:9/41: mkdir d2/d3/de 0
2026-03-09T20:47:13.800 INFO:tasks.workunit.client.1.vm10.stdout:7/47: chown f5 7500 1
2026-03-09T20:47:13.801 INFO:tasks.workunit.client.1.vm10.stdout:3/19: dwrite f6 [0,4194304] 0
2026-03-09T20:47:13.802 INFO:tasks.workunit.client.1.vm10.stdout:3/20: fdatasync f7 0
2026-03-09T20:47:13.810 INFO:tasks.workunit.client.1.vm10.stdout:3/21: symlink l8 0
2026-03-09T20:47:13.812 INFO:tasks.workunit.client.1.vm10.stdout:3/22: chown l8 4059768 1
2026-03-09T20:47:13.814 INFO:tasks.workunit.client.1.vm10.stdout:3/23: write f4 [1999317,34560] 0
2026-03-09T20:47:13.815 INFO:tasks.workunit.client.1.vm10.stdout:9/42: dwrite d2/fc [0,4194304] 0
2026-03-09T20:47:13.819 INFO:tasks.workunit.client.1.vm10.stdout:3/24: mknod c9 0
2026-03-09T20:47:13.824 INFO:tasks.workunit.client.1.vm10.stdout:3/25: chown f4 1 1
2026-03-09T20:47:13.824 INFO:tasks.workunit.client.1.vm10.stdout:3/26: rmdir - no directory
2026-03-09T20:47:13.825 INFO:tasks.workunit.client.1.vm10.stdout:9/43: rename d2/c8 to d2/cf 0
2026-03-09T20:47:13.835 INFO:tasks.workunit.client.0.vm07.stdout:7/114: readlink d3/da/db/d14/d1f/l21 0
2026-03-09T20:47:13.837 INFO:tasks.workunit.client.0.vm07.stdout:7/115: dread d3/f18 [0,4194304] 0
2026-03-09T20:47:13.837 INFO:tasks.workunit.client.0.vm07.stdout:7/116: write d3/da/f11 [1642886,17936] 0
2026-03-09T20:47:13.840 INFO:tasks.workunit.client.0.vm07.stdout:7/117: dread d3/da/fe [0,4194304] 0
2026-03-09T20:47:13.840 INFO:tasks.workunit.client.0.vm07.stdout:7/118: chown d3/da/db/d14/d1f 28329382 1
2026-03-09T20:47:13.844 INFO:tasks.workunit.client.0.vm07.stdout:2/180: getdents d2 0
2026-03-09T20:47:13.849 INFO:tasks.workunit.client.0.vm07.stdout:2/181: creat d2/d11/f38 x:0 0 0
2026-03-09T20:47:13.853 INFO:tasks.workunit.client.0.vm07.stdout:2/182: symlink d2/db/d28/l39 0
2026-03-09T20:47:13.854 INFO:tasks.workunit.client.0.vm07.stdout:2/183: fdatasync d2/f7 0
2026-03-09T20:47:13.855 INFO:tasks.workunit.client.0.vm07.stdout:2/184: stat d2/db/d28/c2a 0
2026-03-09T20:47:13.857 INFO:tasks.workunit.client.1.vm10.stdout:2/32: dwrite d5/f6 [0,4194304] 0
2026-03-09T20:47:13.859 INFO:tasks.workunit.client.1.vm10.stdout:2/33: chown d5/fb 901929 1
2026-03-09T20:47:13.859 INFO:tasks.workunit.client.1.vm10.stdout:2/34: fdatasync d5/fa 0
2026-03-09T20:47:13.860 INFO:tasks.workunit.client.1.vm10.stdout:2/35: chown c2 182034758 1
2026-03-09T20:47:13.872 INFO:tasks.workunit.client.1.vm10.stdout:2/36: creat d5/fd x:0 0 0
2026-03-09T20:47:13.873 INFO:tasks.workunit.client.0.vm07.stdout:0/132: chown d1/d2/dc/c28 1415 1
2026-03-09T20:47:13.881 INFO:tasks.workunit.client.1.vm10.stdout:6/24: truncate d3/f7 193568 0
2026-03-09T20:47:13.890 INFO:tasks.workunit.client.1.vm10.stdout:6/25: dwrite f1 [0,4194304] 0
2026-03-09T20:47:13.890 INFO:tasks.workunit.client.0.vm07.stdout:0/133: link d1/d2/c26 d1/d2/c2d 0
2026-03-09T20:47:13.890 INFO:tasks.workunit.client.0.vm07.stdout:0/134: write d1/d1f/d20/f21 [736655,88677] 0
2026-03-09T20:47:13.890 INFO:tasks.workunit.client.0.vm07.stdout:0/135: chown d1/d2 1874584 1
2026-03-09T20:47:13.890 INFO:tasks.workunit.client.0.vm07.stdout:0/136: dread d1/d2/f16 [0,4194304] 0
2026-03-09T20:47:13.890 INFO:tasks.workunit.client.0.vm07.stdout:0/137: read - d1/f2a zero size
2026-03-09T20:47:13.891 INFO:tasks.workunit.client.1.vm10.stdout:4/28: chown d1/d2/d3/l6 35705237 1
2026-03-09T20:47:13.893 INFO:tasks.workunit.client.1.vm10.stdout:4/29: chown d1/d2 956 1
2026-03-09T20:47:13.894 INFO:tasks.workunit.client.1.vm10.stdout:4/30: chown d1/d2 80699760 1
2026-03-09T20:47:13.894 INFO:tasks.workunit.client.1.vm10.stdout:4/31: dwrite - no filename
2026-03-09T20:47:13.894 INFO:tasks.workunit.client.1.vm10.stdout:4/32: dwrite - no filename
2026-03-09T20:47:13.901 INFO:tasks.workunit.client.0.vm07.stdout:0/138: mknod d1/d2/c2e 0
2026-03-09T20:47:13.902 INFO:tasks.workunit.client.0.vm07.stdout:0/139: fsync d1/d1f/d20/f21 0
2026-03-09T20:47:13.903 INFO:tasks.workunit.client.1.vm10.stdout:6/26: mknod d3/c8 0
2026-03-09T20:47:13.903 INFO:tasks.workunit.client.1.vm10.stdout:4/33: creat d1/d2/f7 x:0 0 0
2026-03-09T20:47:13.903 INFO:tasks.workunit.client.1.vm10.stdout:6/27: readlink d3/l6 0
2026-03-09T20:47:13.904 INFO:tasks.workunit.client.0.vm07.stdout:0/140: read d1/d1f/d20/f21 [401809,2976] 0
2026-03-09T20:47:13.905 INFO:tasks.workunit.client.1.vm10.stdout:6/28: truncate f2 5158234 0
2026-03-09T20:47:13.905 INFO:tasks.workunit.client.0.vm07.stdout:0/141: read d1/d2/f14 [90116,92036] 0
2026-03-09T20:47:13.906 INFO:tasks.workunit.client.1.vm10.stdout:4/34: mkdir d1/d8 0
2026-03-09T20:47:13.908 INFO:tasks.workunit.client.1.vm10.stdout:6/29: creat d3/f9 x:0 0 0
2026-03-09T20:47:13.913 INFO:tasks.workunit.client.0.vm07.stdout:0/142: rename d1/f2a to d1/f2f 0
2026-03-09T20:47:13.913 INFO:tasks.workunit.client.1.vm10.stdout:5/30: getdents d2 0
2026-03-09T20:47:13.913 INFO:tasks.workunit.client.1.vm10.stdout:7/48: mkdir db/de 0
2026-03-09T20:47:13.913 INFO:tasks.workunit.client.1.vm10.stdout:6/30: dread - d3/f9 zero size
2026-03-09T20:47:13.913 INFO:tasks.workunit.client.1.vm10.stdout:6/31: write f1 [237961,96787] 0
2026-03-09T20:47:13.918 INFO:tasks.workunit.client.1.vm10.stdout:4/35: dwrite d1/d2/f7 [0,4194304] 0
2026-03-09T20:47:13.919 INFO:tasks.workunit.client.1.vm10.stdout:5/31: dwrite d2/fb [0,4194304] 0
2026-03-09T20:47:13.919 INFO:tasks.workunit.client.1.vm10.stdout:4/36: chown d1/d2 603 1
2026-03-09T20:47:13.919 INFO:tasks.workunit.client.0.vm07.stdout:5/141: dread d5/d19/f20 [0,4194304] 0
2026-03-09T20:47:13.919 INFO:tasks.workunit.client.1.vm10.stdout:4/37: readlink d1/d2/d3/l6 0
2026-03-09T20:47:13.923 INFO:tasks.workunit.client.0.vm07.stdout:0/143: mkdir d1/d1f/d30 0
2026-03-09T20:47:13.927 INFO:tasks.workunit.client.0.vm07.stdout:8/117: dwrite d1/fb [0,4194304] 0
2026-03-09T20:47:13.928 INFO:tasks.workunit.client.0.vm07.stdout:0/144: dread d1/d2/f1b [0,4194304] 0
2026-03-09T20:47:13.931 INFO:tasks.workunit.client.0.vm07.stdout:0/145: read f0 [498238,110285] 0
2026-03-09T20:47:13.941 INFO:tasks.workunit.client.0.vm07.stdout:5/142: creat d5/df/f2b x:0 0 0
2026-03-09T20:47:13.952 INFO:tasks.workunit.client.0.vm07.stdout:9/139: truncate d4/f17 860920 0
2026-03-09T20:47:13.957 INFO:tasks.workunit.client.1.vm10.stdout:4/38: creat d1/f9 x:0 0 0
2026-03-09T20:47:13.957 INFO:tasks.workunit.client.1.vm10.stdout:8/48: getdents d0 0
2026-03-09T20:47:13.960 INFO:tasks.workunit.client.1.vm10.stdout:8/49: write d0/f10 [991397,11342] 0
2026-03-09T20:47:13.961 INFO:tasks.workunit.client.0.vm07.stdout:1/124: truncate d3/f11 2330749 0
2026-03-09T20:47:13.962 INFO:tasks.workunit.client.0.vm07.stdout:1/125: dread - d3/d14/f25 zero size
2026-03-09T20:47:13.971 INFO:tasks.workunit.client.1.vm10.stdout:8/50: truncate d0/fe 127116 0
2026-03-09T20:47:13.971 INFO:tasks.workunit.client.1.vm10.stdout:5/32: getdents d2 0
2026-03-09T20:47:13.971 INFO:tasks.workunit.client.1.vm10.stdout:4/39: symlink d1/d8/la 0
2026-03-09T20:47:13.971 INFO:tasks.workunit.client.0.vm07.stdout:9/140: mkdir d4/d11/d31 0
2026-03-09T20:47:13.971 INFO:tasks.workunit.client.0.vm07.stdout:9/141: stat d4/fa 0
2026-03-09T20:47:13.971 INFO:tasks.workunit.client.0.vm07.stdout:9/142: mkdir d4/d11/d23/d32 0
2026-03-09T20:47:13.971 INFO:tasks.workunit.client.0.vm07.stdout:9/143: truncate d4/d11/d23/f2f 246556 0
2026-03-09T20:47:13.971 INFO:tasks.workunit.client.0.vm07.stdout:3/142: rmdir d1 39
2026-03-09T20:47:13.975 INFO:tasks.workunit.client.0.vm07.stdout:8/118: symlink d1/dc/l21 0
2026-03-09T20:47:13.978 INFO:tasks.workunit.client.1.vm10.stdout:5/33: creat d2/fd x:0 0 0
2026-03-09T20:47:13.978 INFO:tasks.workunit.client.0.vm07.stdout:0/146: creat d1/f31 x:0 0 0
2026-03-09T20:47:13.979 INFO:tasks.workunit.client.1.vm10.stdout:4/40: rename d1/d8/la to d1/d8/lb 0
2026-03-09T20:47:13.979 INFO:tasks.workunit.client.0.vm07.stdout:0/147: write d1/f1a [1306968,115825] 0
2026-03-09T20:47:13.979 INFO:tasks.workunit.client.1.vm10.stdout:5/34: truncate f1 5135586 0
2026-03-09T20:47:13.980 INFO:tasks.workunit.client.0.vm07.stdout:4/117: truncate d2/f3 4097685 0
2026-03-09T20:47:13.980 INFO:tasks.workunit.client.0.vm07.stdout:4/118: chown d2/f5 127 1
2026-03-09T20:47:13.985 INFO:tasks.workunit.client.0.vm07.stdout:6/163: dwrite d8/d16/f17 [0,4194304] 0
2026-03-09T20:47:13.987 INFO:tasks.workunit.client.0.vm07.stdout:6/164: read d8/f29 [2660101,65117] 0
2026-03-09T20:47:13.987 INFO:tasks.workunit.client.0.vm07.stdout:6/165: readlink l1 0
2026-03-09T20:47:13.987 INFO:tasks.workunit.client.0.vm07.stdout:6/166: write d8/f20 [2145399,24446] 0
2026-03-09T20:47:13.989 INFO:tasks.workunit.client.0.vm07.stdout:6/167: dread - d8/d26/d2a/f37 zero size
2026-03-09T20:47:13.989 INFO:tasks.workunit.client.0.vm07.stdout:3/143: read - d1/d5/d9/d11/d1f/f27 zero size
2026-03-09T20:47:13.997 INFO:tasks.workunit.client.0.vm07.stdout:9/144: creat d4/d16/f33 x:0 0 0
2026-03-09T20:47:14.000 INFO:tasks.workunit.client.1.vm10.stdout:5/35: mknod d2/ce 0
2026-03-09T20:47:14.000 INFO:tasks.workunit.client.1.vm10.stdout:5/36: stat d2/fb 0
2026-03-09T20:47:14.001 INFO:tasks.workunit.client.1.vm10.stdout:5/37: chown d2/c9 61958515 1
2026-03-09T20:47:14.001 INFO:tasks.workunit.client.0.vm07.stdout:0/148: dread d1/d2/f1b [0,4194304] 0
2026-03-09T20:47:14.002 INFO:tasks.workunit.client.0.vm07.stdout:0/149: fdatasync d1/f31 0
2026-03-09T20:47:14.006 INFO:tasks.workunit.client.0.vm07.stdout:0/150: fsync d1/d2/ff 0
2026-03-09T20:47:14.006 INFO:tasks.workunit.client.0.vm07.stdout:4/119: symlink d2/df/d17/l24 0
2026-03-09T20:47:14.007 INFO:tasks.workunit.client.0.vm07.stdout:4/120: stat d2/df/d17/l24 0
2026-03-09T20:47:14.007 INFO:tasks.workunit.client.0.vm07.stdout:4/121: stat d2/f1d 0
2026-03-09T20:47:14.009 INFO:tasks.workunit.client.0.vm07.stdout:3/144: symlink d1/d5/d9/l31 0
2026-03-09T20:47:14.015 INFO:tasks.workunit.client.0.vm07.stdout:0/151: dwrite d1/f31 [0,4194304] 0
2026-03-09T20:47:14.017 INFO:tasks.workunit.client.0.vm07.stdout:0/152: fdatasync d1/d1f/d20/f2c 0
2026-03-09T20:47:14.018 INFO:tasks.workunit.client.0.vm07.stdout:4/122: dwrite d2/f7 [0,4194304] 0
2026-03-09T20:47:14.019 INFO:tasks.workunit.client.0.vm07.stdout:4/123: readlink d2/df/d17/l24 0
2026-03-09T20:47:14.020 INFO:tasks.workunit.client.0.vm07.stdout:8/119: unlink d1/l7 0
2026-03-09T20:47:14.020 INFO:tasks.workunit.client.0.vm07.stdout:4/124: chown d2/df/d17/f1b 5 1
2026-03-09T20:47:14.020 INFO:tasks.workunit.client.0.vm07.stdout:4/125: chown d2/df/f23 1583131025 1
2026-03-09T20:47:14.021 INFO:tasks.workunit.client.0.vm07.stdout:0/153: readlink d1/l1e 0
2026-03-09T20:47:14.038 INFO:tasks.workunit.client.0.vm07.stdout:3/145: mknod d1/d5/c32 0
2026-03-09T20:47:14.039 INFO:tasks.workunit.client.0.vm07.stdout:8/120: read d1/f13 [914071,94734] 0
2026-03-09T20:47:14.042 INFO:tasks.workunit.client.0.vm07.stdout:4/126: rename d2/df/f16 to d2/d1f/f25 0
2026-03-09T20:47:14.044 INFO:tasks.workunit.client.0.vm07.stdout:9/145: creat d4/d8/f34 x:0 0 0
2026-03-09T20:47:14.048 INFO:tasks.workunit.client.0.vm07.stdout:9/146: dwrite d4/d16/f33 [0,4194304] 0
2026-03-09T20:47:14.049 INFO:tasks.workunit.client.0.vm07.stdout:4/127: rename d2/df/f13 to d2/d1f/f26 0
2026-03-09T20:47:14.049 INFO:tasks.workunit.client.0.vm07.stdout:6/168: getdents d8/d16/d22/d24/d2b 0
2026-03-09T20:47:14.050 INFO:tasks.workunit.client.0.vm07.stdout:4/128: readlink d2/l14 0
2026-03-09T20:47:14.052 INFO:tasks.workunit.client.0.vm07.stdout:9/147: fdatasync d4/d11/f2c 0
2026-03-09T20:47:14.052 INFO:tasks.workunit.client.0.vm07.stdout:0/154: mknod d1/d2/c32 0
2026-03-09T20:47:14.053 INFO:tasks.workunit.client.0.vm07.stdout:9/148: chown d4/d16/f27 81864 1
2026-03-09T20:47:14.055 INFO:tasks.workunit.client.0.vm07.stdout:9/149: rename d4/d11 to d4/d11/d23/d32/d35 22
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.0.vm07.stdout:9/150: stat d4/d11/f1a 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.0.vm07.stdout:9/151: creat d4/d16/d29/d24/f36 x:0 0 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.0.vm07.stdout:9/152: readlink d4/d8/dc/l12 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.0.vm07.stdout:2/185: rmdir d2/db/d28 39
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.0.vm07.stdout:7/119: dwrite d3/da/db/f12 [0,4194304] 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.0.vm07.stdout:7/120: truncate d3/da/db/f12 4350653 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:0/30: write f1 [716916,53158] 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:0/31: chown d2/l7 3 1
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:0/32: write f1 [316798,130432] 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:0/33: write f1 [178548,119424] 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:0/34: chown d2/l7 1 1
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:1/56: rmdir d2 39
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:7/49: write f1 [76133,62765] 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:7/50: stat fa 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:1/57: stat d2 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:1/58: chown d2/f14 308 1
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:3/27: fsync f6 0
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:9/44: chown d2/cf 1 1
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:3/28: dread - f7 zero size
2026-03-09T20:47:14.086 INFO:tasks.workunit.client.1.vm10.stdout:7/51: dread f5 [0,4194304] 0
2026-03-09T20:47:14.087 INFO:tasks.workunit.client.1.vm10.stdout:2/37: rmdir d5 39
2026-03-09T20:47:14.087 INFO:tasks.workunit.client.1.vm10.stdout:2/38: read f1 [2334059,118074] 0
2026-03-09T20:47:14.087 INFO:tasks.workunit.client.1.vm10.stdout:9/45: dread d2/fc [0,4194304] 0
2026-03-09T20:47:14.091 INFO:tasks.workunit.client.1.vm10.stdout:3/29: dwrite f7 [0,4194304] 0
2026-03-09T20:47:14.094 INFO:tasks.workunit.client.1.vm10.stdout:9/46: fsync d2/d3/fd 0
2026-03-09T20:47:14.096 INFO:tasks.workunit.client.1.vm10.stdout:1/59: unlink d2/f5 0
2026-03-09T20:47:14.096 INFO:tasks.workunit.client.0.vm07.stdout:8/121: dread d1/f11 [0,4194304] 0
2026-03-09T20:47:14.096 INFO:tasks.workunit.client.1.vm10.stdout:1/60: dread - d2/f17 zero size
2026-03-09T20:47:14.097 INFO:tasks.workunit.client.1.vm10.stdout:7/52: rename db/fd to db/de/ff 0
2026-03-09T20:47:14.098 INFO:tasks.workunit.client.1.vm10.stdout:2/39: chown d5/fb 17 1
2026-03-09T20:47:14.099 INFO:tasks.workunit.client.1.vm10.stdout:2/40: write d5/fd [835835,93122] 0
2026-03-09T20:47:14.099 INFO:tasks.workunit.client.1.vm10.stdout:1/61: creat d2/f1a x:0 0 0
2026-03-09T20:47:14.100 INFO:tasks.workunit.client.1.vm10.stdout:1/62: dread - d2/da/f11 zero size
2026-03-09T20:47:14.101 INFO:tasks.workunit.client.1.vm10.stdout:1/63: readlink d2/da/l18 0
2026-03-09T20:47:14.101 INFO:tasks.workunit.client.1.vm10.stdout:7/53: dwrite f8 [0,4194304] 0
2026-03-09T20:47:14.104 INFO:tasks.workunit.client.1.vm10.stdout:1/64: dread d2/da/fe [0,4194304] 0
2026-03-09T20:47:14.108 INFO:tasks.workunit.client.0.vm07.stdout:2/186: rename d2/d11/f20 to d2/db/d1c/f3a 0
2026-03-09T20:47:14.112 INFO:tasks.workunit.client.1.vm10.stdout:9/47: rename c0 to d2/d3/c10 0
2026-03-09T20:47:14.113 INFO:tasks.workunit.client.1.vm10.stdout:1/65: rename d2 to d2/da/d1b 22
2026-03-09T20:47:14.113 INFO:tasks.workunit.client.1.vm10.stdout:9/48: dread - d2/d3/fa zero size
2026-03-09T20:47:14.113 INFO:tasks.workunit.client.1.vm10.stdout:9/49: dread - d2/d3/fa zero size
2026-03-09T20:47:14.114 INFO:tasks.workunit.client.1.vm10.stdout:3/30: truncate f6 3718412 0
2026-03-09T20:47:14.116 INFO:tasks.workunit.client.1.vm10.stdout:9/50: dread d2/d3/f5 [0,4194304] 0
2026-03-09T20:47:14.116 INFO:tasks.workunit.client.0.vm07.stdout:2/187: unlink d2/db/d1c/f2b 0
2026-03-09T20:47:14.117 INFO:tasks.workunit.client.0.vm07.stdout:2/188: readlink d2/l18 0
2026-03-09T20:47:14.117 INFO:tasks.workunit.client.0.vm07.stdout:2/189: stat d2/db/d1c/l25 0
2026-03-09T20:47:14.120 INFO:tasks.workunit.client.0.vm07.stdout:7/121: unlink d3/da/db/c1c 0
2026-03-09T20:47:14.125 INFO:tasks.workunit.client.1.vm10.stdout:7/54: creat db/de/f10 x:0 0 0
2026-03-09T20:47:14.125 INFO:tasks.workunit.client.1.vm10.stdout:7/55: chown db/de/ff 13577 1
2026-03-09T20:47:14.125 INFO:tasks.workunit.client.1.vm10.stdout:1/66: creat d2/f1c x:0 0 0
2026-03-09T20:47:14.125 INFO:tasks.workunit.client.1.vm10.stdout:1/67: stat d2/da/cd 0
2026-03-09T20:47:14.125 INFO:tasks.workunit.client.1.vm10.stdout:1/68: truncate d2/f19 275591 0
2026-03-09T20:47:14.128 INFO:tasks.workunit.client.1.vm10.stdout:1/69: write d2/da/f10 [125332,128414] 0
2026-03-09T20:47:14.130 INFO:tasks.workunit.client.0.vm07.stdout:8/122: link d1/dc/c17 d1/dc/c22 0
2026-03-09T20:47:14.131 INFO:tasks.workunit.client.1.vm10.stdout:3/31: rename c9 to ca 0
2026-03-09T20:47:14.131 INFO:tasks.workunit.client.1.vm10.stdout:9/51: fsync d2/f6 0
2026-03-09T20:47:14.131 INFO:tasks.workunit.client.1.vm10.stdout:3/32: stat f4 0
2026-03-09T20:47:14.132 INFO:tasks.workunit.client.0.vm07.stdout:8/123: fsync d1/f13 0
2026-03-09T20:47:14.133 INFO:tasks.workunit.client.0.vm07.stdout:7/122: getdents d3/da/db/d14/d1f 0
2026-03-09T20:47:14.133 INFO:tasks.workunit.client.0.vm07.stdout:7/123: fsync d3/da/db/d14/f1a 0
2026-03-09T20:47:14.135 INFO:tasks.workunit.client.0.vm07.stdout:7/124: dread d3/da/db/d14/f1a [0,4194304] 0
2026-03-09T20:47:14.136 INFO:tasks.workunit.client.0.vm07.stdout:7/125: chown d3/l4 0 1
2026-03-09T20:47:14.137 INFO:tasks.workunit.client.1.vm10.stdout:3/33: dwrite f7 [0,4194304] 0
2026-03-09T20:47:14.138 INFO:tasks.workunit.client.1.vm10.stdout:7/56: link fa db/f11 0
2026-03-09T20:47:14.138 INFO:tasks.workunit.client.1.vm10.stdout:7/57: chown fa 21035 1
2026-03-09T20:47:14.140 INFO:tasks.workunit.client.1.vm10.stdout:3/34: dread f4 [0,4194304] 0
2026-03-09T20:47:14.140 INFO:tasks.workunit.client.0.vm07.stdout:7/126: dread d3/da/db/f12 [0,4194304] 0
2026-03-09T20:47:14.152 INFO:tasks.workunit.client.1.vm10.stdout:9/52: mknod d2/d3/c11 0
2026-03-09T20:47:14.154 INFO:tasks.workunit.client.0.vm07.stdout:8/124: rename d1/dc/d14/l1f to d1/dc/d16/l23 0
2026-03-09T20:47:14.157 INFO:tasks.workunit.client.0.vm07.stdout:8/125: dwrite d1/f1d [0,4194304] 0
2026-03-09T20:47:14.167 INFO:tasks.workunit.client.1.vm10.stdout:7/58: mkdir db/de/d12 0
2026-03-09T20:47:14.168 INFO:tasks.workunit.client.1.vm10.stdout:3/35: symlink lb 0
2026-03-09T20:47:14.170 INFO:tasks.workunit.client.0.vm07.stdout:7/127: mknod d3/da/db/d14/c23 0
2026-03-09T20:47:14.170 INFO:tasks.workunit.client.1.vm10.stdout:9/53: dread d2/fc [0,4194304] 0
2026-03-09T20:47:14.174 INFO:tasks.workunit.client.1.vm10.stdout:7/59: creat db/de/d12/f13 x:0 0 0
2026-03-09T20:47:14.178 INFO:tasks.workunit.client.0.vm07.stdout:7/128: dwrite d3/da/db/f1e [0,4194304] 0
2026-03-09T20:47:14.178 INFO:tasks.workunit.client.1.vm10.stdout:7/60: write fa [824670,28453] 0
2026-03-09T20:47:14.184 INFO:tasks.workunit.client.0.vm07.stdout:7/129: creat d3/da/db/d14/f24 x:0 0 0
2026-03-09T20:47:14.189 INFO:tasks.workunit.client.0.vm07.stdout:7/130: readlink d3/da/db/l20 0
2026-03-09T20:47:14.189 INFO:tasks.workunit.client.1.vm10.stdout:9/54: dread d2/d3/f5 [0,4194304] 0
2026-03-09T20:47:14.189 INFO:tasks.workunit.client.1.vm10.stdout:7/61: dwrite db/de/f10 [0,4194304] 0
2026-03-09T20:47:14.194 INFO:tasks.workunit.client.1.vm10.stdout:9/55: mkdir d2/d12 0
2026-03-09T20:47:14.195 INFO:tasks.workunit.client.0.vm07.stdout:5/143: sync
2026-03-09T20:47:14.195 INFO:tasks.workunit.client.1.vm10.stdout:9/56: chown d2/d3/c11 732756 1
2026-03-09T20:47:14.195 INFO:tasks.workunit.client.1.vm10.stdout:2/41: sync
2026-03-09T20:47:14.195 INFO:tasks.workunit.client.0.vm07.stdout:2/190: sync
2026-03-09T20:47:14.195 INFO:tasks.workunit.client.0.vm07.stdout:2/191: readlink d2/db/d1c/l25 0
2026-03-09T20:47:14.196 INFO:tasks.workunit.client.1.vm10.stdout:5/38: sync
2026-03-09T20:47:14.196 INFO:tasks.workunit.client.1.vm10.stdout:1/70: sync
2026-03-09T20:47:14.197 INFO:tasks.workunit.client.1.vm10.stdout:2/42: write d5/fd [316247,16924] 0
2026-03-09T20:47:14.197 INFO:tasks.workunit.client.0.vm07.stdout:7/131: mknod d3/da/db/d14/c25 0
2026-03-09T20:47:14.198 INFO:tasks.workunit.client.0.vm07.stdout:2/192: read d2/f33 [470535,43127] 0
2026-03-09T20:47:14.199 INFO:tasks.workunit.client.1.vm10.stdout:2/43: write d5/f6 [2756108,67582] 0
2026-03-09T20:47:14.200 INFO:tasks.workunit.client.1.vm10.stdout:2/44: write d5/fd [776224,99661] 0
2026-03-09T20:47:14.204 INFO:tasks.workunit.client.1.vm10.stdout:9/57: mkdir d2/d3/d13 0
2026-03-09T20:47:14.211 INFO:tasks.workunit.client.1.vm10.stdout:9/58: rename d2 to d2/d3/d13/d14 22
2026-03-09T20:47:14.211 INFO:tasks.workunit.client.1.vm10.stdout:2/45: dwrite d5/f7 [0,4194304] 0
2026-03-09T20:47:14.228 INFO:tasks.workunit.client.0.vm07.stdout:5/144: write d5/df/d13/f17 [546157,96898] 0
2026-03-09T20:47:14.229 INFO:tasks.workunit.client.0.vm07.stdout:0/155: fdatasync d1/f1a 0
2026-03-09T20:47:14.229 INFO:tasks.workunit.client.0.vm07.stdout:0/156: fsync d1/d2/ff 0
2026-03-09T20:47:14.233 INFO:tasks.workunit.client.1.vm10.stdout:4/41: fsync d1/f9 0
2026-03-09T20:47:14.235 INFO:tasks.workunit.client.1.vm10.stdout:6/32: fsync f0 0
2026-03-09T20:47:14.239 INFO:tasks.workunit.client.1.vm10.stdout:8/51: write d0/f1 [1592767,36198] 0
2026-03-09T20:47:14.240 INFO:tasks.workunit.client.0.vm07.stdout:9/153: rmdir d4/d11/d23 39
2026-03-09T20:47:14.240 INFO:tasks.workunit.client.1.vm10.stdout:6/33: stat f2 0
2026-03-09T20:47:14.242 INFO:tasks.workunit.client.1.vm10.stdout:6/34: dwrite f0 [4194304,4194304] 0
2026-03-09T20:47:14.243 INFO:tasks.workunit.client.0.vm07.stdout:1/126: dwrite d3/f5 [0,4194304] 0
2026-03-09T20:47:14.243 INFO:tasks.workunit.client.1.vm10.stdout:6/35: fdatasync f1 0
2026-03-09T20:47:14.252 INFO:tasks.workunit.client.0.vm07.stdout:6/169: truncate d8/f14 610668 0
2026-03-09T20:47:14.253 INFO:tasks.workunit.client.0.vm07.stdout:6/170: write f5 [3362888,76964] 0
2026-03-09T20:47:14.254 INFO:tasks.workunit.client.0.vm07.stdout:0/157: sync
2026-03-09T20:47:14.256 INFO:tasks.workunit.client.1.vm10.stdout:0/35: fsync f1 0
2026-03-09T20:47:14.258 INFO:tasks.workunit.client.0.vm07.stdout:2/193: mknod d2/db/d28/c3b 0
2026-03-09T20:47:14.261 INFO:tasks.workunit.client.0.vm07.stdout:2/194: dread d2/f33 [0,4194304] 0
2026-03-09T20:47:14.262 INFO:tasks.workunit.client.1.vm10.stdout:7/62: link db/de/f10 db/de/f14 0
2026-03-09T20:47:14.263 INFO:tasks.workunit.client.0.vm07.stdout:5/145: chown d5/df/l29 5 1
2026-03-09T20:47:14.264 INFO:tasks.workunit.client.0.vm07.stdout:5/146: dread - d5/df/d13/f2a zero size
2026-03-09T20:47:14.265 INFO:tasks.workunit.client.0.vm07.stdout:3/146: write d1/d5/d9/fe [1985069,34777] 0
2026-03-09T20:47:14.266 INFO:tasks.workunit.client.0.vm07.stdout:9/154: mkdir d4/d16/d29/d24/d37 0
2026-03-09T20:47:14.268 INFO:tasks.workunit.client.0.vm07.stdout:7/132: creat d3/da/f26 x:0 0 0
2026-03-09T20:47:14.273 INFO:tasks.workunit.client.0.vm07.stdout:1/127: dwrite d3/fc [0,4194304] 0
2026-03-09T20:47:14.277 INFO:tasks.workunit.client.0.vm07.stdout:2/195: sync
2026-03-09T20:47:14.278 INFO:tasks.workunit.client.0.vm07.stdout:2/196: truncate d2/db/d1c/f3a 4769809 0
2026-03-09T20:47:14.278 INFO:tasks.workunit.client.0.vm07.stdout:1/128: dread d3/d14/f17 [0,4194304] 0
2026-03-09T20:47:14.279 INFO:tasks.workunit.client.0.vm07.stdout:2/197: chown d2/db/d28/c2a 507049396 1
2026-03-09T20:47:14.283 INFO:tasks.workunit.client.0.vm07.stdout:1/129: dread d3/f4 [0,4194304] 0
2026-03-09T20:47:14.285 INFO:tasks.workunit.client.0.vm07.stdout:2/198: dread d2/db/d1c/f26 [0,4194304] 0
2026-03-09T20:47:14.285 INFO:tasks.workunit.client.0.vm07.stdout:1/130: write d3/f8 [472510,30066] 0
2026-03-09T20:47:14.298 INFO:tasks.workunit.client.1.vm10.stdout:8/52: creat d0/f12 x:0 0 0
2026-03-09T20:47:14.298 INFO:tasks.workunit.client.1.vm10.stdout:8/53: stat d0/fe 0
2026-03-09T20:47:14.301 INFO:tasks.workunit.client.1.vm10.stdout:6/36: mkdir d3/da 0
2026-03-09T20:47:14.301 INFO:tasks.workunit.client.1.vm10.stdout:6/37: stat d3 0
2026-03-09T20:47:14.301 INFO:tasks.workunit.client.1.vm10.stdout:0/36: mkdir d2/d9 0
2026-03-09T20:47:14.301 INFO:tasks.workunit.client.0.vm07.stdout:5/147: creat d5/d19/f2c x:0 0 0
2026-03-09T20:47:14.301 INFO:tasks.workunit.client.1.vm10.stdout:0/37: stat d2/f5 0
2026-03-09T20:47:14.302 INFO:tasks.workunit.client.0.vm07.stdout:5/148: chown d5/d19/c27 31 1
2026-03-09T20:47:14.302 INFO:tasks.workunit.client.1.vm10.stdout:0/38: truncate f1 936744 0
2026-03-09T20:47:14.302 INFO:tasks.workunit.client.1.vm10.stdout:0/39: write f1 [1183379,52926] 0
2026-03-09T20:47:14.303 INFO:tasks.workunit.client.0.vm07.stdout:5/149: write d5/df/f24 [1303811,65308] 0
2026-03-09T20:47:14.307 INFO:tasks.workunit.client.1.vm10.stdout:7/63: creat db/de/f15 x:0 0 0
2026-03-09T20:47:14.308 INFO:tasks.workunit.client.0.vm07.stdout:3/147: creat d1/d5/d9/f33 x:0 0 0
2026-03-09T20:47:14.308 INFO:tasks.workunit.client.0.vm07.stdout:3/148: read - d1/d5/d9/f33 zero size
2026-03-09T20:47:14.309 INFO:tasks.workunit.client.0.vm07.stdout:3/149: truncate d1/d5/d10/f30 776903 0
2026-03-09T20:47:14.309 INFO:tasks.workunit.client.1.vm10.stdout:9/59: stat c1 0
2026-03-09T20:47:14.311 INFO:tasks.workunit.client.1.vm10.stdout:2/46: link d5/fd d5/fe 0
2026-03-09T20:47:14.311 INFO:tasks.workunit.client.0.vm07.stdout:7/133: creat d3/da/db/f27 x:0 0 0
2026-03-09T20:47:14.314 INFO:tasks.workunit.client.1.vm10.stdout:8/54: sync
2026-03-09T20:47:14.319 INFO:tasks.workunit.client.0.vm07.stdout:7/134: dwrite d3/da/f11 [0,4194304] 0
2026-03-09T20:47:14.321 INFO:tasks.workunit.client.1.vm10.stdout:7/64: dwrite fa [0,4194304] 0
2026-03-09T20:47:14.321 INFO:tasks.workunit.client.1.vm10.stdout:9/60: mknod d2/d3/c15 0
2026-03-09T20:47:14.322 INFO:tasks.workunit.client.1.vm10.stdout:7/65: readlink l4 0
2026-03-09T20:47:14.325 INFO:tasks.workunit.client.1.vm10.stdout:9/61: fdatasync d2/f6 0
2026-03-09T20:47:14.325 INFO:tasks.workunit.client.0.vm07.stdout:2/199: rename d2/d11/l1d to d2/db/d1c/l3c 0
2026-03-09T20:47:14.326 INFO:tasks.workunit.client.0.vm07.stdout:0/158: mkdir d1/d2/d33 0
2026-03-09T20:47:14.326 INFO:tasks.workunit.client.0.vm07.stdout:6/171: chown d8/f29 27 1
2026-03-09T20:47:14.326 INFO:tasks.workunit.client.1.vm10.stdout:2/47: dwrite d5/fb [0,4194304] 0
2026-03-09T20:47:14.326 INFO:tasks.workunit.client.1.vm10.stdout:8/55: creat d0/f13 x:0 0 0
2026-03-09T20:47:14.327 INFO:tasks.workunit.client.1.vm10.stdout:8/56: readlink d0/l5 0
2026-03-09T20:47:14.327 INFO:tasks.workunit.client.0.vm07.stdout:1/131: write d3/f9 [3741972,120350] 0
2026-03-09T20:47:14.330 INFO:tasks.workunit.client.1.vm10.stdout:8/57: chown d0/f13 1007 1
2026-03-09T20:47:14.330 INFO:tasks.workunit.client.1.vm10.stdout:9/62: write d2/f6 [610276,103707] 0
2026-03-09T20:47:14.330 INFO:tasks.workunit.client.1.vm10.stdout:7/66: truncate db/de/f15 476123 0
2026-03-09T20:47:14.331 INFO:tasks.workunit.client.1.vm10.stdout:8/58: truncate d0/fa 5068103 0
2026-03-09T20:47:14.332 INFO:tasks.workunit.client.0.vm07.stdout:1/132: chown d3/f10 143 1
2026-03-09T20:47:14.333 INFO:tasks.workunit.client.1.vm10.stdout:2/48: dread d5/fb [0,4194304] 0
2026-03-09T20:47:14.334 INFO:tasks.workunit.client.0.vm07.stdout:2/200: dread d2/f33 [0,4194304] 0
2026-03-09T20:47:14.335 INFO:tasks.workunit.client.1.vm10.stdout:8/59: write d0/fe [414674,16814] 0
2026-03-09T20:47:14.338 INFO:tasks.workunit.client.0.vm07.stdout:2/201: chown d2/db/d28/c3b 0 1
2026-03-09T20:47:14.339 INFO:tasks.workunit.client.0.vm07.stdout:2/202: chown d2/db/d28/c2a 3 1
2026-03-09T20:47:14.339 INFO:tasks.workunit.client.0.vm07.stdout:1/133: dwrite d3/f9 [0,4194304] 0
2026-03-09T20:47:14.358 INFO:tasks.workunit.client.0.vm07.stdout:5/150: symlink d5/l2d 0
2026-03-09T20:47:14.363 INFO:tasks.workunit.client.1.vm10.stdout:6/38: mknod d3/da/cb 0
2026-03-09T20:47:14.368 INFO:tasks.workunit.client.1.vm10.stdout:6/39: write d3/f9 [109359,67929] 0
2026-03-09T20:47:14.368 INFO:tasks.workunit.client.1.vm10.stdout:6/40: stat f2 0
2026-03-09T20:47:14.368 INFO:tasks.workunit.client.1.vm10.stdout:0/40: mkdir d2/d9/da 0
2026-03-09T20:47:14.368 INFO:tasks.workunit.client.1.vm10.stdout:4/42: link d1/d8/lb d1/d8/lc 0
2026-03-09T20:47:14.368 INFO:tasks.workunit.client.1.vm10.stdout:0/41: truncate f1 2257534 0
2026-03-09T20:47:14.369 INFO:tasks.workunit.client.1.vm10.stdout:9/63: write d2/fc [1963736,80137] 0
2026-03-09T20:47:14.373 INFO:tasks.workunit.client.1.vm10.stdout:7/67: rename db/de/d12/f13 to db/f16 0
2026-03-09T20:47:14.375 INFO:tasks.workunit.client.1.vm10.stdout:4/43: dwrite d1/d2/f7 [4194304,4194304] 0
2026-03-09T20:47:14.376 INFO:tasks.workunit.client.0.vm07.stdout:0/159: mknod d1/d2/dc/c34 0
2026-03-09T20:47:14.377 INFO:tasks.workunit.client.0.vm07.stdout:0/160: fdatasync d1/d1f/d20/f21 0
2026-03-09T20:47:14.377 INFO:tasks.workunit.client.1.vm10.stdout:9/64: dread d2/fc [0,4194304] 0
2026-03-09T20:47:14.381 INFO:tasks.workunit.client.1.vm10.stdout:9/65: stat d2/d3/de 0
2026-03-09T20:47:14.385 INFO:tasks.workunit.client.1.vm10.stdout:0/42: mkdir d2/db 0
2026-03-09T20:47:14.392 INFO:tasks.workunit.client.0.vm07.stdout:3/150: mkdir d1/d5/d9/d2f/d34 0
2026-03-09T20:47:14.392 INFO:tasks.workunit.client.1.vm10.stdout:0/43: readlink d2/l7 0
2026-03-09T20:47:14.392 INFO:tasks.workunit.client.1.vm10.stdout:7/68: symlink db/de/d12/l17 0
2026-03-09T20:47:14.394 INFO:tasks.workunit.client.1.vm10.stdout:8/60: link d0/f9 d0/f14 0
2026-03-09T20:47:14.395 INFO:tasks.workunit.client.0.vm07.stdout:6/172: creat d8/d26/d31/f39 x:0 0 0
2026-03-09T20:47:14.397 INFO:tasks.workunit.client.1.vm10.stdout:4/44: rmdir d1/d2 39
2026-03-09T20:47:14.397 INFO:tasks.workunit.client.1.vm10.stdout:4/45: chown d1/f9 233868 1
2026-03-09T20:47:14.397 INFO:tasks.workunit.client.0.vm07.stdout:0/161: fsync d1/d2/dc/f10 0
2026-03-09T20:47:14.401 INFO:tasks.workunit.client.1.vm10.stdout:8/61: creat d0/f15 x:0 0 0
2026-03-09T20:47:14.402 INFO:tasks.workunit.client.1.vm10.stdout:8/62: truncate d0/f12 1030495 0
2026-03-09T20:47:14.402 INFO:tasks.workunit.client.1.vm10.stdout:4/46: mknod d1/d8/cd 0
2026-03-09T20:47:14.403 INFO:tasks.workunit.client.0.vm07.stdout:7/135: creat d3/da/db/d14/f28 x:0 0 0
2026-03-09T20:47:14.407 INFO:tasks.workunit.client.0.vm07.stdout:7/136: dwrite d3/da/f11 [0,4194304] 0
2026-03-09T20:47:14.408 INFO:tasks.workunit.client.0.vm07.stdout:7/137: truncate d3/da/db/d14/f1a 1053925 0
2026-03-09T20:47:14.409 INFO:tasks.workunit.client.0.vm07.stdout:7/138: write d3/da/db/d14/f24 [537223,65181] 0
2026-03-09T20:47:14.410 INFO:tasks.workunit.client.0.vm07.stdout:0/162: chown d1/d2/c26 46945274 1
2026-03-09T20:47:14.413 INFO:tasks.workunit.client.1.vm10.stdout:7/69: sync
2026-03-09T20:47:14.413 INFO:tasks.workunit.client.0.vm07.stdout:3/151: mkdir d1/d35 0
2026-03-09T20:47:14.414 INFO:tasks.workunit.client.1.vm10.stdout:4/47: write d1/d2/f7 [4834782,76984] 0
2026-03-09T20:47:14.415 INFO:tasks.workunit.client.1.vm10.stdout:4/48: chown d1/d8/cd 1976748 1
2026-03-09T20:47:14.416 INFO:tasks.workunit.client.1.vm10.stdout:7/70: symlink db/de/d12/l18 0
2026-03-09T20:47:14.421 INFO:tasks.workunit.client.0.vm07.stdout:7/139: mknod d3/da/db/d14/c29 0
2026-03-09T20:47:14.422 INFO:tasks.workunit.client.0.vm07.stdout:0/163: getdents d1/d2/d33 0
2026-03-09T20:47:14.422 INFO:tasks.workunit.client.0.vm07.stdout:0/164: stat d1/d2/dc/c28 0
2026-03-09T20:47:14.422 INFO:tasks.workunit.client.0.vm07.stdout:0/165: fdatasync f0 0
2026-03-09T20:47:14.423 INFO:tasks.workunit.client.1.vm10.stdout:7/71: rename db/f11 to db/f19 0
2026-03-09T20:47:14.426 INFO:tasks.workunit.client.1.vm10.stdout:4/49: creat d1/fe x:0 0 0
2026-03-09T20:47:14.426 INFO:tasks.workunit.client.1.vm10.stdout:7/72: unlink f8 0
2026-03-09T20:47:14.426 INFO:tasks.workunit.client.0.vm07.stdout:0/166: mkdir d1/d2/d33/d35 0
2026-03-09T20:47:14.428 INFO:tasks.workunit.client.0.vm07.stdout:7/140: rename d3/da/fe to d3/da/db/d14/f2a 0
2026-03-09T20:47:14.428 INFO:tasks.workunit.client.0.vm07.stdout:7/141: fsync d3/da/db/f1e 0
2026-03-09T20:47:14.431 INFO:tasks.workunit.client.1.vm10.stdout:4/50: mknod d1/d2/d3/cf 0
2026-03-09T20:47:14.431 INFO:tasks.workunit.client.1.vm10.stdout:4/51: truncate d1/fe 1020722 0
2026-03-09T20:47:14.432 INFO:tasks.workunit.client.0.vm07.stdout:0/167: symlink d1/d2/l36 0
2026-03-09T20:47:14.432 INFO:tasks.workunit.client.0.vm07.stdout:0/168: chown d1/d1f 66488550 1
2026-03-09T20:47:14.438 INFO:tasks.workunit.client.0.vm07.stdout:0/169: link d1/ld d1/d2/dc/d17/l37 0
2026-03-09T20:47:14.448 INFO:tasks.workunit.client.0.vm07.stdout:0/170: link d1/d2/dc/l25 d1/d2/d33/d35/l38 0
2026-03-09T20:47:14.450 INFO:tasks.workunit.client.0.vm07.stdout:0/171: symlink d1/d2/d33/d35/l39 0
2026-03-09T20:47:14.453 INFO:tasks.workunit.client.1.vm10.stdout:2/49: truncate f1 3653735 0
2026-03-09T20:47:14.453 INFO:tasks.workunit.client.1.vm10.stdout:9/66: chown d2/d3/c10 44120 1
2026-03-09T20:47:14.457 INFO:tasks.workunit.client.1.vm10.stdout:3/36: dread f6 [0,4194304] 0
2026-03-09T20:47:14.459 INFO:tasks.workunit.client.1.vm10.stdout:1/71: rmdir d2 39
2026-03-09T20:47:14.465 INFO:tasks.workunit.client.1.vm10.stdout:3/37: dwrite f4 [0,4194304] 0
2026-03-09T20:47:14.466 INFO:tasks.workunit.client.0.vm07.stdout:8/126: truncate d1/dc/d14/f18 1871430 0
2026-03-09T20:47:14.467 INFO:tasks.workunit.client.1.vm10.stdout:1/72: dread d2/f19 [0,4194304] 0
2026-03-09T20:47:14.467 INFO:tasks.workunit.client.1.vm10.stdout:3/38: chown l0 30 1
2026-03-09T20:47:14.475 INFO:tasks.workunit.client.1.vm10.stdout:3/39: dread f6 [0,4194304] 0
2026-03-09T20:47:14.476 INFO:tasks.workunit.client.1.vm10.stdout:3/40: mkdir dc 0
2026-03-09T20:47:14.476 INFO:tasks.workunit.client.1.vm10.stdout:3/41: stat f6 0
2026-03-09T20:47:14.478 INFO:tasks.workunit.client.1.vm10.stdout:3/42: mkdir dc/dd 0
2026-03-09T20:47:14.482 INFO:tasks.workunit.client.1.vm10.stdout:3/43: chown l0 19 1
2026-03-09T20:47:14.482 INFO:tasks.workunit.client.1.vm10.stdout:1/73: dwrite d2/f17 [0,4194304] 0
2026-03-09T20:47:14.485
INFO:tasks.workunit.client.1.vm10.stdout:1/74: write d2/f19 [394474,24017] 0 2026-03-09T20:47:14.499 INFO:tasks.workunit.client.1.vm10.stdout:9/67: fsync d2/fc 0 2026-03-09T20:47:14.500 INFO:tasks.workunit.client.1.vm10.stdout:9/68: dread - d2/d3/fd zero size 2026-03-09T20:47:14.502 INFO:tasks.workunit.client.1.vm10.stdout:9/69: write d2/d3/f7 [2749300,108291] 0 2026-03-09T20:47:14.513 INFO:tasks.workunit.client.1.vm10.stdout:9/70: dwrite d2/d3/fa [0,4194304] 0 2026-03-09T20:47:14.514 INFO:tasks.workunit.client.1.vm10.stdout:9/71: stat d2/d3/de 0 2026-03-09T20:47:14.526 INFO:tasks.workunit.client.1.vm10.stdout:9/72: creat d2/f16 x:0 0 0 2026-03-09T20:47:14.543 INFO:tasks.workunit.client.1.vm10.stdout:9/73: dwrite d2/d3/fd [0,4194304] 0 2026-03-09T20:47:14.555 INFO:tasks.workunit.client.1.vm10.stdout:9/74: rmdir d2/d3/d13 0 2026-03-09T20:47:14.563 INFO:tasks.workunit.client.1.vm10.stdout:9/75: rename d2/d3/c10 to d2/c17 0 2026-03-09T20:47:14.631 INFO:tasks.workunit.client.0.vm07.stdout:4/129: write f0 [3816824,8730] 0 2026-03-09T20:47:14.632 INFO:tasks.workunit.client.0.vm07.stdout:4/130: write d2/f19 [412739,83851] 0 2026-03-09T20:47:14.638 INFO:tasks.workunit.client.1.vm10.stdout:5/39: truncate d2/f8 355272 0 2026-03-09T20:47:14.644 INFO:tasks.workunit.client.1.vm10.stdout:5/40: unlink d2/ce 0 2026-03-09T20:47:14.645 INFO:tasks.workunit.client.1.vm10.stdout:2/50: truncate d5/fe 349913 0 2026-03-09T20:47:14.647 INFO:tasks.workunit.client.1.vm10.stdout:5/41: symlink d2/lf 0 2026-03-09T20:47:14.649 INFO:tasks.workunit.client.1.vm10.stdout:2/51: mknod d5/cf 0 2026-03-09T20:47:14.652 INFO:tasks.workunit.client.0.vm07.stdout:4/131: rename d2/df/c20 to d2/df/c27 0 2026-03-09T20:47:14.653 INFO:tasks.workunit.client.1.vm10.stdout:2/52: dread d5/f6 [0,4194304] 0 2026-03-09T20:47:14.656 INFO:tasks.workunit.client.0.vm07.stdout:4/132: dwrite d2/df/f23 [0,4194304] 0 2026-03-09T20:47:14.661 INFO:tasks.workunit.client.1.vm10.stdout:1/75: fsync d2/f17 0 2026-03-09T20:47:14.661 
INFO:tasks.workunit.client.1.vm10.stdout:5/42: dread f1 [4194304,4194304] 0 2026-03-09T20:47:14.664 INFO:tasks.workunit.client.1.vm10.stdout:1/76: write d2/da/f10 [1431208,117541] 0 2026-03-09T20:47:14.664 INFO:tasks.workunit.client.1.vm10.stdout:1/77: chown d2/f1a 7701718 1 2026-03-09T20:47:14.665 INFO:tasks.workunit.client.0.vm07.stdout:9/155: truncate d4/f10 488963 0 2026-03-09T20:47:14.670 INFO:tasks.workunit.client.1.vm10.stdout:9/76: write d2/fc [3220574,101322] 0 2026-03-09T20:47:14.671 INFO:tasks.workunit.client.0.vm07.stdout:4/133: creat d2/f28 x:0 0 0 2026-03-09T20:47:14.671 INFO:tasks.workunit.client.0.vm07.stdout:4/134: chown d2 120691 1 2026-03-09T20:47:14.672 INFO:tasks.workunit.client.1.vm10.stdout:5/43: dread d2/f5 [0,4194304] 0 2026-03-09T20:47:14.673 INFO:tasks.workunit.client.0.vm07.stdout:8/127: link d1/lf d1/dc/d14/l24 0 2026-03-09T20:47:14.675 INFO:tasks.workunit.client.1.vm10.stdout:9/77: dwrite d2/fc [4194304,4194304] 0 2026-03-09T20:47:14.676 INFO:tasks.workunit.client.1.vm10.stdout:1/78: mknod d2/da/c1d 0 2026-03-09T20:47:14.677 INFO:tasks.workunit.client.1.vm10.stdout:1/79: dread - d2/da/f11 zero size 2026-03-09T20:47:14.678 INFO:tasks.workunit.client.0.vm07.stdout:2/203: dwrite d2/d11/f1e [0,4194304] 0 2026-03-09T20:47:14.678 INFO:tasks.workunit.client.1.vm10.stdout:1/80: chown d2/da/c16 4 1 2026-03-09T20:47:14.692 INFO:tasks.workunit.client.1.vm10.stdout:9/78: dwrite d2/f6 [0,4194304] 0 2026-03-09T20:47:14.693 INFO:tasks.workunit.client.0.vm07.stdout:8/128: dwrite d1/f1d [0,4194304] 0 2026-03-09T20:47:14.698 INFO:tasks.workunit.client.0.vm07.stdout:9/156: symlink d4/d8/d19/d26/l38 0 2026-03-09T20:47:14.706 INFO:tasks.workunit.client.1.vm10.stdout:5/44: symlink d2/l10 0 2026-03-09T20:47:14.711 INFO:tasks.workunit.client.0.vm07.stdout:2/204: symlink d2/l3d 0 2026-03-09T20:47:14.712 INFO:tasks.workunit.client.0.vm07.stdout:1/134: truncate d3/fa 7006580 0 2026-03-09T20:47:14.716 INFO:tasks.workunit.client.0.vm07.stdout:5/151: dwrite 
d5/df/d13/f1f [0,4194304] 0 2026-03-09T20:47:14.723 INFO:tasks.workunit.client.0.vm07.stdout:4/135: symlink d2/d1f/l29 0 2026-03-09T20:47:14.733 INFO:tasks.workunit.client.1.vm10.stdout:8/63: rmdir d0 39 2026-03-09T20:47:14.733 INFO:tasks.workunit.client.1.vm10.stdout:1/81: creat d2/da/f1e x:0 0 0 2026-03-09T20:47:14.735 INFO:tasks.workunit.client.1.vm10.stdout:0/44: truncate f1 1587256 0 2026-03-09T20:47:14.735 INFO:tasks.workunit.client.1.vm10.stdout:0/45: write d2/f5 [2237093,8464] 0 2026-03-09T20:47:14.737 INFO:tasks.workunit.client.1.vm10.stdout:0/46: chown d2/d9 1530 1 2026-03-09T20:47:14.739 INFO:tasks.workunit.client.0.vm07.stdout:6/173: truncate d8/d16/f17 1622990 0 2026-03-09T20:47:14.741 INFO:tasks.workunit.client.1.vm10.stdout:9/79: stat d2/cf 0 2026-03-09T20:47:14.741 INFO:tasks.workunit.client.1.vm10.stdout:9/80: stat d2/f6 0 2026-03-09T20:47:14.743 INFO:tasks.workunit.client.0.vm07.stdout:6/174: dwrite d8/d16/d22/f35 [0,4194304] 0 2026-03-09T20:47:14.753 INFO:tasks.workunit.client.0.vm07.stdout:6/175: chown d8/d16/d22/d24/c34 212898 1 2026-03-09T20:47:14.754 INFO:tasks.workunit.client.1.vm10.stdout:7/73: write fa [4714456,106982] 0 2026-03-09T20:47:14.760 INFO:tasks.workunit.client.0.vm07.stdout:3/152: truncate d1/d5/d9/f15 5299107 0 2026-03-09T20:47:14.764 INFO:tasks.workunit.client.1.vm10.stdout:5/45: link d2/fb d2/f11 0 2026-03-09T20:47:14.764 INFO:tasks.workunit.client.0.vm07.stdout:9/157: creat d4/d11/d2a/f39 x:0 0 0 2026-03-09T20:47:14.765 INFO:tasks.workunit.client.0.vm07.stdout:1/135: symlink d3/d12/l26 0 2026-03-09T20:47:14.765 INFO:tasks.workunit.client.0.vm07.stdout:8/129: sync 2026-03-09T20:47:14.767 INFO:tasks.workunit.client.1.vm10.stdout:8/64: mknod d0/c16 0 2026-03-09T20:47:14.770 INFO:tasks.workunit.client.0.vm07.stdout:5/152: rename d5/df/l1d to d5/l2e 0 2026-03-09T20:47:14.771 INFO:tasks.workunit.client.1.vm10.stdout:7/74: dwrite db/de/f15 [0,4194304] 0 2026-03-09T20:47:14.774 INFO:tasks.workunit.client.0.vm07.stdout:7/142: write 
d3/f18 [2624879,106017] 0 2026-03-09T20:47:14.774 INFO:tasks.workunit.client.0.vm07.stdout:7/143: stat l1 0 2026-03-09T20:47:14.775 INFO:tasks.workunit.client.0.vm07.stdout:5/153: dwrite d5/df/d13/f1f [0,4194304] 0 2026-03-09T20:47:14.779 INFO:tasks.workunit.client.0.vm07.stdout:5/154: write d5/f23 [10700,22406] 0 2026-03-09T20:47:14.780 INFO:tasks.workunit.client.0.vm07.stdout:5/155: chown d5/df/f2b 21372 1 2026-03-09T20:47:14.782 INFO:tasks.workunit.client.1.vm10.stdout:5/46: dread d2/f11 [0,4194304] 0 2026-03-09T20:47:14.790 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:14 vm10.local ceph-mon[57011]: pgmap v147: 65 pgs: 65 active+clean; 401 MiB data, 2.5 GiB used, 118 GiB / 120 GiB avail; 2.0 MiB/s rd, 27 MiB/s wr, 299 op/s 2026-03-09T20:47:14.793 INFO:tasks.workunit.client.0.vm07.stdout:0/172: write d1/d2/f1b [3860924,76664] 0 2026-03-09T20:47:14.794 INFO:tasks.workunit.client.1.vm10.stdout:4/52: truncate d1/fe 977950 0 2026-03-09T20:47:14.794 INFO:tasks.workunit.client.1.vm10.stdout:8/65: dwrite d0/f13 [0,4194304] 0 2026-03-09T20:47:14.794 INFO:tasks.workunit.client.1.vm10.stdout:4/53: dread - d1/f9 zero size 2026-03-09T20:47:14.797 INFO:tasks.workunit.client.1.vm10.stdout:1/82: sync 2026-03-09T20:47:14.799 INFO:tasks.workunit.client.1.vm10.stdout:1/83: chown d2/f1c 57 1 2026-03-09T20:47:14.802 INFO:tasks.workunit.client.0.vm07.stdout:4/136: creat d2/df/d17/f2a x:0 0 0 2026-03-09T20:47:14.816 INFO:tasks.workunit.client.1.vm10.stdout:3/44: truncate f6 1630223 0 2026-03-09T20:47:14.816 INFO:tasks.workunit.client.1.vm10.stdout:9/81: mknod d2/d3/de/c18 0 2026-03-09T20:47:14.816 INFO:tasks.workunit.client.0.vm07.stdout:9/158: symlink d4/d16/l3a 0 2026-03-09T20:47:14.816 INFO:tasks.workunit.client.0.vm07.stdout:1/136: chown d3/ce 94346181 1 2026-03-09T20:47:14.816 INFO:tasks.workunit.client.0.vm07.stdout:1/137: fsync d3/f5 0 2026-03-09T20:47:14.816 INFO:tasks.workunit.client.0.vm07.stdout:1/138: write d3/f8 [386838,16510] 0 2026-03-09T20:47:14.816 
INFO:tasks.workunit.client.0.vm07.stdout:1/139: dwrite d3/f5 [0,4194304] 0 2026-03-09T20:47:14.820 INFO:tasks.workunit.client.1.vm10.stdout:3/45: dwrite f4 [0,4194304] 0 2026-03-09T20:47:14.825 INFO:tasks.workunit.client.0.vm07.stdout:7/144: rmdir d3/da/db 39 2026-03-09T20:47:14.834 INFO:tasks.workunit.client.0.vm07.stdout:7/145: dread d3/f18 [0,4194304] 0 2026-03-09T20:47:14.834 INFO:tasks.workunit.client.1.vm10.stdout:5/47: mknod d2/c12 0 2026-03-09T20:47:14.834 INFO:tasks.workunit.client.1.vm10.stdout:3/46: chown l0 154184 1 2026-03-09T20:47:14.834 INFO:tasks.workunit.client.1.vm10.stdout:7/75: dwrite db/de/f14 [0,4194304] 0 2026-03-09T20:47:14.836 INFO:tasks.workunit.client.1.vm10.stdout:7/76: dwrite f3 [0,4194304] 0 2026-03-09T20:47:14.839 INFO:tasks.workunit.client.1.vm10.stdout:7/77: dread f5 [0,4194304] 0 2026-03-09T20:47:14.844 INFO:tasks.workunit.client.1.vm10.stdout:7/78: fsync fa 0 2026-03-09T20:47:14.848 INFO:tasks.workunit.client.1.vm10.stdout:1/84: rmdir d2 39 2026-03-09T20:47:14.853 INFO:tasks.workunit.client.1.vm10.stdout:6/41: dread d3/f7 [0,4194304] 0 2026-03-09T20:47:14.857 INFO:tasks.workunit.client.0.vm07.stdout:6/176: chown d8/d16/f17 238427 1 2026-03-09T20:47:14.857 INFO:tasks.workunit.client.0.vm07.stdout:6/177: chown d8/le 862 1 2026-03-09T20:47:14.857 INFO:tasks.workunit.client.0.vm07.stdout:6/178: fsync d8/f32 0 2026-03-09T20:47:14.858 INFO:tasks.workunit.client.0.vm07.stdout:0/173: symlink d1/d2/d33/d35/l3a 0 2026-03-09T20:47:14.858 INFO:tasks.workunit.client.0.vm07.stdout:0/174: stat d1/d1f/d20/f21 0 2026-03-09T20:47:14.863 INFO:tasks.workunit.client.1.vm10.stdout:9/82: creat d2/d3/de/f19 x:0 0 0 2026-03-09T20:47:14.863 INFO:tasks.workunit.client.0.vm07.stdout:0/175: chown d1/d2/dc/d17/c29 1314 1 2026-03-09T20:47:14.863 INFO:tasks.workunit.client.0.vm07.stdout:0/176: dwrite f0 [0,4194304] 0 2026-03-09T20:47:14.872 INFO:tasks.workunit.client.0.vm07.stdout:9/159: creat d4/d11/d2a/f3b x:0 0 0 2026-03-09T20:47:14.881 
INFO:tasks.workunit.client.1.vm10.stdout:1/85: dread d2/f19 [0,4194304] 0 2026-03-09T20:47:14.882 INFO:tasks.workunit.client.1.vm10.stdout:7/79: creat db/de/f1a x:0 0 0 2026-03-09T20:47:14.882 INFO:tasks.workunit.client.1.vm10.stdout:7/80: dread - db/f16 zero size 2026-03-09T20:47:14.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:14 vm07.local ceph-mon[49120]: pgmap v147: 65 pgs: 65 active+clean; 401 MiB data, 2.5 GiB used, 118 GiB / 120 GiB avail; 2.0 MiB/s rd, 27 MiB/s wr, 299 op/s 2026-03-09T20:47:14.884 INFO:tasks.workunit.client.1.vm10.stdout:6/42: rmdir d3/da 39 2026-03-09T20:47:14.884 INFO:tasks.workunit.client.1.vm10.stdout:6/43: chown f1 10 1 2026-03-09T20:47:14.885 INFO:tasks.workunit.client.1.vm10.stdout:6/44: write f2 [1449354,89484] 0 2026-03-09T20:47:14.892 INFO:tasks.workunit.client.0.vm07.stdout:0/177: fdatasync d1/f2f 0 2026-03-09T20:47:14.892 INFO:tasks.workunit.client.0.vm07.stdout:0/178: chown d1/d2/f14 1 1 2026-03-09T20:47:14.893 INFO:tasks.workunit.client.1.vm10.stdout:8/66: link d0/f14 d0/f17 0 2026-03-09T20:47:14.895 INFO:tasks.workunit.client.1.vm10.stdout:3/47: creat dc/dd/fe x:0 0 0 2026-03-09T20:47:14.906 INFO:tasks.workunit.client.0.vm07.stdout:2/205: rename d2/db/fd to d2/f3e 0 2026-03-09T20:47:14.906 INFO:tasks.workunit.client.0.vm07.stdout:7/146: mkdir d3/da/db/d14/d1f/d2b 0 2026-03-09T20:47:14.907 INFO:tasks.workunit.client.1.vm10.stdout:4/54: link d1/d2/d3/c5 d1/d8/c10 0 2026-03-09T20:47:14.907 INFO:tasks.workunit.client.1.vm10.stdout:8/67: dwrite d0/f12 [0,4194304] 0 2026-03-09T20:47:14.907 INFO:tasks.workunit.client.1.vm10.stdout:1/86: creat d2/f1f x:0 0 0 2026-03-09T20:47:14.907 INFO:tasks.workunit.client.1.vm10.stdout:8/68: chown d0/fe 190 1 2026-03-09T20:47:14.910 INFO:tasks.workunit.client.0.vm07.stdout:0/179: chown d1/d2/dc/l25 372862 1 2026-03-09T20:47:14.912 INFO:tasks.workunit.client.0.vm07.stdout:3/153: getdents d1/d5/d9 0 2026-03-09T20:47:14.915 INFO:tasks.workunit.client.1.vm10.stdout:6/45: fsync d3/f7 0 
2026-03-09T20:47:14.915 INFO:tasks.workunit.client.1.vm10.stdout:8/69: dread d0/fa [0,4194304] 0 2026-03-09T20:47:14.915 INFO:tasks.workunit.client.1.vm10.stdout:6/46: chown d3/l4 1 1 2026-03-09T20:47:14.918 INFO:tasks.workunit.client.0.vm07.stdout:9/160: symlink d4/d8/l3c 0 2026-03-09T20:47:14.919 INFO:tasks.workunit.client.1.vm10.stdout:6/47: dread f0 [0,4194304] 0 2026-03-09T20:47:14.925 INFO:tasks.workunit.client.0.vm07.stdout:2/206: dread d2/f2c [0,4194304] 0 2026-03-09T20:47:14.926 INFO:tasks.workunit.client.0.vm07.stdout:7/147: write d3/f18 [3334312,15652] 0 2026-03-09T20:47:14.926 INFO:tasks.workunit.client.0.vm07.stdout:2/207: chown d2/db/d1c/f2e 0 1 2026-03-09T20:47:14.928 INFO:tasks.workunit.client.1.vm10.stdout:4/55: creat d1/d8/f11 x:0 0 0 2026-03-09T20:47:14.931 INFO:tasks.workunit.client.0.vm07.stdout:3/154: dread d1/f19 [0,4194304] 0 2026-03-09T20:47:14.933 INFO:tasks.workunit.client.0.vm07.stdout:2/208: rmdir d2 39 2026-03-09T20:47:14.936 INFO:tasks.workunit.client.0.vm07.stdout:8/130: rename d1/dc/d14/f19 to d1/f25 0 2026-03-09T20:47:14.940 INFO:tasks.workunit.client.1.vm10.stdout:3/48: link f7 dc/ff 0 2026-03-09T20:47:14.940 INFO:tasks.workunit.client.1.vm10.stdout:3/49: read dc/ff [133786,78720] 0 2026-03-09T20:47:14.941 INFO:tasks.workunit.client.1.vm10.stdout:4/56: creat d1/d2/f12 x:0 0 0 2026-03-09T20:47:14.941 INFO:tasks.workunit.client.0.vm07.stdout:7/148: creat d3/da/db/d14/d1f/d2b/f2c x:0 0 0 2026-03-09T20:47:14.942 INFO:tasks.workunit.client.0.vm07.stdout:7/149: fsync d3/da/db/d14/f28 0 2026-03-09T20:47:14.943 INFO:tasks.workunit.client.0.vm07.stdout:0/180: creat d1/f3b x:0 0 0 2026-03-09T20:47:14.944 INFO:tasks.workunit.client.0.vm07.stdout:9/161: link d4/d11/d23/f2f d4/d8/d19/d26/f3d 0 2026-03-09T20:47:14.944 INFO:tasks.workunit.client.0.vm07.stdout:9/162: write d4/f14 [11370,13121] 0 2026-03-09T20:47:14.947 INFO:tasks.workunit.client.1.vm10.stdout:4/57: dwrite d1/d2/f7 [4194304,4194304] 0 2026-03-09T20:47:14.951 
INFO:tasks.workunit.client.0.vm07.stdout:8/131: dwrite d1/f25 [0,4194304] 0 2026-03-09T20:47:14.955 INFO:tasks.workunit.client.0.vm07.stdout:6/179: rename d8/d26/d31 to d8/d16/d22/d3a 0 2026-03-09T20:47:14.957 INFO:tasks.workunit.client.1.vm10.stdout:3/50: creat dc/f10 x:0 0 0 2026-03-09T20:47:14.962 INFO:tasks.workunit.client.1.vm10.stdout:8/70: dread d0/f17 [0,4194304] 0 2026-03-09T20:47:14.963 INFO:tasks.workunit.client.0.vm07.stdout:7/150: mknod d3/da/db/c2d 0 2026-03-09T20:47:14.963 INFO:tasks.workunit.client.1.vm10.stdout:6/48: creat d3/fc x:0 0 0 2026-03-09T20:47:14.964 INFO:tasks.workunit.client.1.vm10.stdout:6/49: dread d3/f7 [0,4194304] 0 2026-03-09T20:47:14.964 INFO:tasks.workunit.client.1.vm10.stdout:4/58: mknod d1/d8/c13 0 2026-03-09T20:47:14.966 INFO:tasks.workunit.client.1.vm10.stdout:3/51: creat dc/f11 x:0 0 0 2026-03-09T20:47:14.971 INFO:tasks.workunit.client.1.vm10.stdout:8/71: creat d0/f18 x:0 0 0 2026-03-09T20:47:14.988 INFO:tasks.workunit.client.1.vm10.stdout:8/72: dwrite d0/fa [0,4194304] 0 2026-03-09T20:47:14.988 INFO:tasks.workunit.client.1.vm10.stdout:6/50: creat d3/da/fd x:0 0 0 2026-03-09T20:47:14.988 INFO:tasks.workunit.client.1.vm10.stdout:6/51: write d3/f9 [590989,8128] 0 2026-03-09T20:47:14.988 INFO:tasks.workunit.client.0.vm07.stdout:6/180: creat d8/f3b x:0 0 0 2026-03-09T20:47:14.988 INFO:tasks.workunit.client.0.vm07.stdout:6/181: read d8/d16/f1f [324444,90793] 0 2026-03-09T20:47:14.989 INFO:tasks.workunit.client.0.vm07.stdout:0/181: link d1/f2f d1/d2/dc/d17/f3c 0 2026-03-09T20:47:14.989 INFO:tasks.workunit.client.0.vm07.stdout:9/163: mknod d4/d11/c3e 0 2026-03-09T20:47:14.989 INFO:tasks.workunit.client.0.vm07.stdout:3/155: getdents d1 0 2026-03-09T20:47:14.989 INFO:tasks.workunit.client.0.vm07.stdout:3/156: dread - d1/d5/d9/d11/d1f/f27 zero size 2026-03-09T20:47:14.989 INFO:tasks.workunit.client.0.vm07.stdout:9/164: mknod d4/d11/d23/c3f 0 2026-03-09T20:47:14.989 INFO:tasks.workunit.client.0.vm07.stdout:3/157: unlink d1/d5/c32 0 
2026-03-09T20:47:14.989 INFO:tasks.workunit.client.1.vm10.stdout:8/73: creat d0/f19 x:0 0 0 2026-03-09T20:47:14.990 INFO:tasks.workunit.client.0.vm07.stdout:9/165: rmdir d4/d8/d19/d26 39 2026-03-09T20:47:14.991 INFO:tasks.workunit.client.1.vm10.stdout:8/74: write d0/f18 [75564,101224] 0 2026-03-09T20:47:14.995 INFO:tasks.workunit.client.1.vm10.stdout:8/75: chown d0/cc 12746041 1 2026-03-09T20:47:14.998 INFO:tasks.workunit.client.1.vm10.stdout:6/52: creat d3/fe x:0 0 0 2026-03-09T20:47:15.000 INFO:tasks.workunit.client.0.vm07.stdout:6/182: rmdir d8/d26/d28 0 2026-03-09T20:47:15.000 INFO:tasks.workunit.client.0.vm07.stdout:6/183: stat d8/d16/d22/f2c 0 2026-03-09T20:47:15.000 INFO:tasks.workunit.client.1.vm10.stdout:6/53: read - d3/fe zero size 2026-03-09T20:47:15.000 INFO:tasks.workunit.client.1.vm10.stdout:6/54: read - d3/fc zero size 2026-03-09T20:47:15.002 INFO:tasks.workunit.client.1.vm10.stdout:8/76: symlink d0/l1a 0 2026-03-09T20:47:15.002 INFO:tasks.workunit.client.0.vm07.stdout:0/182: creat d1/f3d x:0 0 0 2026-03-09T20:47:15.003 INFO:tasks.workunit.client.1.vm10.stdout:8/77: write d0/f11 [213510,3770] 0 2026-03-09T20:47:15.007 INFO:tasks.workunit.client.0.vm07.stdout:0/183: dwrite d1/d2/dc/d17/f23 [0,4194304] 0 2026-03-09T20:47:15.009 INFO:tasks.workunit.client.0.vm07.stdout:1/140: dread d3/fa [0,4194304] 0 2026-03-09T20:47:15.009 INFO:tasks.workunit.client.0.vm07.stdout:1/141: read d3/fc [1104196,31633] 0 2026-03-09T20:47:15.011 INFO:tasks.workunit.client.1.vm10.stdout:8/78: dwrite d0/fa [0,4194304] 0 2026-03-09T20:47:15.016 INFO:tasks.workunit.client.0.vm07.stdout:9/166: dread d4/d8/dc/f21 [0,4194304] 0 2026-03-09T20:47:15.021 INFO:tasks.workunit.client.0.vm07.stdout:6/184: dwrite d8/d16/f23 [0,4194304] 0 2026-03-09T20:47:15.025 INFO:tasks.workunit.client.0.vm07.stdout:9/167: dwrite d4/f14 [0,4194304] 0 2026-03-09T20:47:15.029 INFO:tasks.workunit.client.0.vm07.stdout:6/185: dwrite d8/f20 [0,4194304] 0 2026-03-09T20:47:15.033 
INFO:tasks.workunit.client.0.vm07.stdout:6/186: dread - d8/d26/d2a/f37 zero size 2026-03-09T20:47:15.033 INFO:tasks.workunit.client.1.vm10.stdout:8/79: rename d0/cf to d0/c1b 0 2026-03-09T20:47:15.037 INFO:tasks.workunit.client.0.vm07.stdout:3/158: sync 2026-03-09T20:47:15.037 INFO:tasks.workunit.client.0.vm07.stdout:1/142: sync 2026-03-09T20:47:15.041 INFO:tasks.workunit.client.0.vm07.stdout:9/168: dwrite d4/d8/d19/f28 [0,4194304] 0 2026-03-09T20:47:15.046 INFO:tasks.workunit.client.0.vm07.stdout:6/187: truncate d8/f15 4236428 0 2026-03-09T20:47:15.046 INFO:tasks.workunit.client.0.vm07.stdout:6/188: stat d8/c11 0 2026-03-09T20:47:15.046 INFO:tasks.workunit.client.0.vm07.stdout:6/189: chown d8/d16/d22/d24 1 1 2026-03-09T20:47:15.046 INFO:tasks.workunit.client.0.vm07.stdout:6/190: chown d8/d16/d22 425 1 2026-03-09T20:47:15.046 INFO:tasks.workunit.client.0.vm07.stdout:3/159: dread d1/d5/d10/f30 [0,4194304] 0 2026-03-09T20:47:15.046 INFO:tasks.workunit.client.1.vm10.stdout:3/52: fsync f4 0 2026-03-09T20:47:15.049 INFO:tasks.workunit.client.1.vm10.stdout:8/80: symlink d0/l1c 0 2026-03-09T20:47:15.051 INFO:tasks.workunit.client.0.vm07.stdout:5/156: rmdir d5/df 39 2026-03-09T20:47:15.051 INFO:tasks.workunit.client.0.vm07.stdout:5/157: fsync d5/f25 0 2026-03-09T20:47:15.051 INFO:tasks.workunit.client.1.vm10.stdout:3/53: write dc/ff [471767,81263] 0 2026-03-09T20:47:15.054 INFO:tasks.workunit.client.0.vm07.stdout:6/191: sync 2026-03-09T20:47:15.058 INFO:tasks.workunit.client.1.vm10.stdout:8/81: dread d0/f12 [0,4194304] 0 2026-03-09T20:47:15.067 INFO:tasks.workunit.client.1.vm10.stdout:3/54: dwrite f4 [0,4194304] 0 2026-03-09T20:47:15.071 INFO:tasks.workunit.client.1.vm10.stdout:4/59: dread d1/fe [0,4194304] 0 2026-03-09T20:47:15.072 INFO:tasks.workunit.client.1.vm10.stdout:4/60: fdatasync d1/d8/f11 0 2026-03-09T20:47:15.074 INFO:tasks.workunit.client.0.vm07.stdout:4/137: rmdir d2 39 2026-03-09T20:47:15.074 INFO:tasks.workunit.client.1.vm10.stdout:3/55: symlink dc/dd/l12 0 
2026-03-09T20:47:15.076 INFO:tasks.workunit.client.1.vm10.stdout:8/82: dwrite d0/f12 [0,4194304] 0 2026-03-09T20:47:15.079 INFO:tasks.workunit.client.0.vm07.stdout:1/143: mknod d3/d12/c27 0 2026-03-09T20:47:15.080 INFO:tasks.workunit.client.1.vm10.stdout:3/56: dread f4 [0,4194304] 0 2026-03-09T20:47:15.083 INFO:tasks.workunit.client.1.vm10.stdout:4/61: symlink d1/d8/l14 0 2026-03-09T20:47:15.084 INFO:tasks.workunit.client.0.vm07.stdout:3/160: creat d1/f36 x:0 0 0 2026-03-09T20:47:15.086 INFO:tasks.workunit.client.1.vm10.stdout:2/53: dwrite f1 [0,4194304] 0 2026-03-09T20:47:15.087 INFO:tasks.workunit.client.1.vm10.stdout:8/83: mknod d0/c1d 0 2026-03-09T20:47:15.096 INFO:tasks.workunit.client.0.vm07.stdout:0/184: mknod d1/d2/c3e 0 2026-03-09T20:47:15.097 INFO:tasks.workunit.client.0.vm07.stdout:5/158: mknod d5/d19/c2f 0 2026-03-09T20:47:15.101 INFO:tasks.workunit.client.0.vm07.stdout:6/192: rmdir d8/d26 39 2026-03-09T20:47:15.102 INFO:tasks.workunit.client.1.vm10.stdout:8/84: mknod d0/c1e 0 2026-03-09T20:47:15.104 INFO:tasks.workunit.client.1.vm10.stdout:2/54: symlink d5/l10 0 2026-03-09T20:47:15.107 INFO:tasks.workunit.client.0.vm07.stdout:1/144: chown d3/l6 0 1 2026-03-09T20:47:15.119 INFO:tasks.workunit.client.0.vm07.stdout:0/185: creat d1/d2/dc/d17/f3f x:0 0 0 2026-03-09T20:47:15.119 INFO:tasks.workunit.client.0.vm07.stdout:0/186: readlink d1/l1e 0 2026-03-09T20:47:15.124 INFO:tasks.workunit.client.0.vm07.stdout:9/169: rename d4/d8/c1f to d4/d8/dc/c40 0 2026-03-09T20:47:15.125 INFO:tasks.workunit.client.0.vm07.stdout:0/187: creat d1/d2/dc/f40 x:0 0 0 2026-03-09T20:47:15.125 INFO:tasks.workunit.client.0.vm07.stdout:0/188: readlink d1/d2/l36 0 2026-03-09T20:47:15.129 INFO:tasks.workunit.client.0.vm07.stdout:4/138: creat d2/f2b x:0 0 0 2026-03-09T20:47:15.129 INFO:tasks.workunit.client.0.vm07.stdout:9/170: fsync d4/d8/dc/f25 0 2026-03-09T20:47:15.131 INFO:tasks.workunit.client.0.vm07.stdout:0/189: symlink d1/d1f/d30/l41 0 2026-03-09T20:47:15.131 
INFO:tasks.workunit.client.0.vm07.stdout:4/139: stat d2/l22 0 2026-03-09T20:47:15.133 INFO:tasks.workunit.client.0.vm07.stdout:9/171: rename d4/f14 to d4/d16/f41 0 2026-03-09T20:47:15.134 INFO:tasks.workunit.client.0.vm07.stdout:0/190: mkdir d1/d2/dc/d17/d42 0 2026-03-09T20:47:15.135 INFO:tasks.workunit.client.0.vm07.stdout:4/140: write d2/fa [596210,13214] 0 2026-03-09T20:47:15.142 INFO:tasks.workunit.client.0.vm07.stdout:4/141: readlink d2/l14 0 2026-03-09T20:47:15.145 INFO:tasks.workunit.client.0.vm07.stdout:9/172: dread d4/d11/f13 [0,4194304] 0 2026-03-09T20:47:15.146 INFO:tasks.workunit.client.0.vm07.stdout:4/142: rename f0 to d2/d1f/f2c 0 2026-03-09T20:47:15.152 INFO:tasks.workunit.client.0.vm07.stdout:0/191: sync 2026-03-09T20:47:15.154 INFO:tasks.workunit.client.0.vm07.stdout:0/192: creat d1/d1f/d20/f43 x:0 0 0 2026-03-09T20:47:15.157 INFO:tasks.workunit.client.0.vm07.stdout:0/193: dwrite d1/f1a [0,4194304] 0 2026-03-09T20:47:15.161 INFO:tasks.workunit.client.0.vm07.stdout:0/194: rename d1/d2/d33/d35/l38 to d1/d1f/d20/l44 0 2026-03-09T20:47:15.162 INFO:tasks.workunit.client.0.vm07.stdout:0/195: unlink d1/d2/dc/d17/f3f 0 2026-03-09T20:47:15.165 INFO:tasks.workunit.client.0.vm07.stdout:0/196: rmdir d1/d2/dc/d17/d42 0 2026-03-09T20:47:15.166 INFO:tasks.workunit.client.0.vm07.stdout:0/197: creat d1/d2/d33/d35/f45 x:0 0 0 2026-03-09T20:47:15.167 INFO:tasks.workunit.client.0.vm07.stdout:0/198: write d1/d2/f14 [2856654,37439] 0 2026-03-09T20:47:15.168 INFO:tasks.workunit.client.0.vm07.stdout:0/199: creat d1/d2/d33/d35/f46 x:0 0 0 2026-03-09T20:47:15.169 INFO:tasks.workunit.client.0.vm07.stdout:0/200: write f0 [3514094,107086] 0 2026-03-09T20:47:15.172 INFO:tasks.workunit.client.0.vm07.stdout:0/201: rename d1/d2/f16 to d1/d2/f47 0 2026-03-09T20:47:15.175 INFO:tasks.workunit.client.0.vm07.stdout:0/202: link d1/d2/dc/f10 d1/f48 0 2026-03-09T20:47:15.216 INFO:tasks.workunit.client.1.vm10.stdout:2/55: dread d5/fd [0,4194304] 0 2026-03-09T20:47:15.223 
INFO:tasks.workunit.client.1.vm10.stdout:2/56: dwrite d5/fa [0,4194304] 0 2026-03-09T20:47:15.225 INFO:tasks.workunit.client.1.vm10.stdout:2/57: stat f1 0 2026-03-09T20:47:15.235 INFO:tasks.workunit.client.1.vm10.stdout:2/58: fdatasync d5/f7 0 2026-03-09T20:47:15.237 INFO:tasks.workunit.client.1.vm10.stdout:2/59: mknod d5/c11 0 2026-03-09T20:47:15.241 INFO:tasks.workunit.client.1.vm10.stdout:5/48: write d2/f8 [1330087,37451] 0 2026-03-09T20:47:15.247 INFO:tasks.workunit.client.1.vm10.stdout:1/87: getdents d2 0 2026-03-09T20:47:15.248 INFO:tasks.workunit.client.1.vm10.stdout:9/83: truncate d2/d3/f7 600225 0 2026-03-09T20:47:15.248 INFO:tasks.workunit.client.0.vm07.stdout:6/193: read d8/f14 [142755,65795] 0 2026-03-09T20:47:15.248 INFO:tasks.workunit.client.0.vm07.stdout:6/194: creat d8/d16/d22/d24/d2b/f3c x:0 0 0 2026-03-09T20:47:15.248 INFO:tasks.workunit.client.0.vm07.stdout:6/195: truncate d8/d16/d22/d3a/f39 998872 0 2026-03-09T20:47:15.248 INFO:tasks.workunit.client.0.vm07.stdout:6/196: fdatasync d8/d16/f18 0 2026-03-09T20:47:15.248 INFO:tasks.workunit.client.0.vm07.stdout:7/151: dread d3/da/db/d14/f24 [0,4194304] 0 2026-03-09T20:47:15.250 INFO:tasks.workunit.client.0.vm07.stdout:9/173: write d4/f10 [67838,72077] 0 2026-03-09T20:47:15.251 INFO:tasks.workunit.client.0.vm07.stdout:6/197: unlink d8/d16/d22/d24/c34 0 2026-03-09T20:47:15.254 INFO:tasks.workunit.client.1.vm10.stdout:2/60: dread d5/f6 [0,4194304] 0 2026-03-09T20:47:15.255 INFO:tasks.workunit.client.1.vm10.stdout:5/49: dread d2/f5 [0,4194304] 0 2026-03-09T20:47:15.256 INFO:tasks.workunit.client.1.vm10.stdout:7/81: truncate db/de/f14 505637 0 2026-03-09T20:47:15.257 INFO:tasks.workunit.client.0.vm07.stdout:7/152: mknod d3/c2e 0 2026-03-09T20:47:15.266 INFO:tasks.workunit.client.1.vm10.stdout:7/82: truncate db/f16 608561 0 2026-03-09T20:47:15.266 INFO:tasks.workunit.client.1.vm10.stdout:9/84: dwrite d2/d3/de/f19 [0,4194304] 0 2026-03-09T20:47:15.267 INFO:tasks.workunit.client.1.vm10.stdout:1/88: dwrite 
d2/f8 [0,4194304] 0 2026-03-09T20:47:15.270 INFO:tasks.workunit.client.1.vm10.stdout:1/89: chown d2/f1f 2314 1 2026-03-09T20:47:15.274 INFO:tasks.workunit.client.1.vm10.stdout:5/50: dread d2/fb [0,4194304] 0 2026-03-09T20:47:15.277 INFO:tasks.workunit.client.1.vm10.stdout:2/61: rename d5/cf to d5/c12 0 2026-03-09T20:47:15.277 INFO:tasks.workunit.client.1.vm10.stdout:5/51: chown d2/lf 11144603 1 2026-03-09T20:47:15.279 INFO:tasks.workunit.client.0.vm07.stdout:2/209: write d2/ff [39711,121378] 0 2026-03-09T20:47:15.281 INFO:tasks.workunit.client.1.vm10.stdout:9/85: creat d2/f1a x:0 0 0 2026-03-09T20:47:15.283 INFO:tasks.workunit.client.0.vm07.stdout:2/210: dread - d2/d11/f36 zero size 2026-03-09T20:47:15.284 INFO:tasks.workunit.client.0.vm07.stdout:2/211: chown d2/db/d28/c2a 7848 1 2026-03-09T20:47:15.284 INFO:tasks.workunit.client.1.vm10.stdout:1/90: dwrite d2/da/fe [4194304,4194304] 0 2026-03-09T20:47:15.285 INFO:tasks.workunit.client.0.vm07.stdout:2/212: symlink d2/d11/l3f 0 2026-03-09T20:47:15.285 INFO:tasks.workunit.client.0.vm07.stdout:2/213: dread - d2/d11/f36 zero size 2026-03-09T20:47:15.291 INFO:tasks.workunit.client.1.vm10.stdout:7/83: dread db/f16 [0,4194304] 0 2026-03-09T20:47:15.306 INFO:tasks.workunit.client.1.vm10.stdout:5/52: symlink d2/l13 0 2026-03-09T20:47:15.306 INFO:tasks.workunit.client.1.vm10.stdout:7/84: symlink db/l1b 0 2026-03-09T20:47:15.306 INFO:tasks.workunit.client.1.vm10.stdout:1/91: creat d2/da/f20 x:0 0 0 2026-03-09T20:47:15.307 INFO:tasks.workunit.client.0.vm07.stdout:2/214: dread d2/db/d1c/f2e [0,4194304] 0 2026-03-09T20:47:15.309 INFO:tasks.workunit.client.1.vm10.stdout:1/92: creat d2/f21 x:0 0 0 2026-03-09T20:47:15.310 INFO:tasks.workunit.client.0.vm07.stdout:2/215: dread d2/f3e [0,4194304] 0 2026-03-09T20:47:15.310 INFO:tasks.workunit.client.0.vm07.stdout:2/216: write d2/ff [165341,83176] 0 2026-03-09T20:47:15.311 INFO:tasks.workunit.client.0.vm07.stdout:2/217: readlink d2/db/d28/l39 0 2026-03-09T20:47:15.311 
INFO:tasks.workunit.client.1.vm10.stdout:5/53: symlink d2/l14 0 2026-03-09T20:47:15.311 INFO:tasks.workunit.client.1.vm10.stdout:1/93: write d2/f14 [164382,67161] 0 2026-03-09T20:47:15.312 INFO:tasks.workunit.client.1.vm10.stdout:7/85: creat db/de/d12/f1c x:0 0 0 2026-03-09T20:47:15.313 INFO:tasks.workunit.client.0.vm07.stdout:2/218: chown d2/db/d1c/f22 2 1 2026-03-09T20:47:15.315 INFO:tasks.workunit.client.1.vm10.stdout:5/54: creat d2/f15 x:0 0 0 2026-03-09T20:47:15.324 INFO:tasks.workunit.client.1.vm10.stdout:1/94: creat d2/da/f22 x:0 0 0 2026-03-09T20:47:15.325 INFO:tasks.workunit.client.0.vm07.stdout:0/203: fdatasync d1/f1a 0 2026-03-09T20:47:15.325 INFO:tasks.workunit.client.0.vm07.stdout:2/219: dread d2/db/d1c/f3a [0,4194304] 0 2026-03-09T20:47:15.325 INFO:tasks.workunit.client.0.vm07.stdout:0/204: dwrite d1/d2/d33/d35/f45 [0,4194304] 0 2026-03-09T20:47:15.325 INFO:tasks.workunit.client.0.vm07.stdout:0/205: dread - d1/d2/dc/d17/f3c zero size 2026-03-09T20:47:15.329 INFO:tasks.workunit.client.0.vm07.stdout:0/206: dread d1/d2/f14 [0,4194304] 0 2026-03-09T20:47:15.331 INFO:tasks.workunit.client.0.vm07.stdout:4/143: dread d2/d1f/f25 [0,4194304] 0 2026-03-09T20:47:15.332 INFO:tasks.workunit.client.0.vm07.stdout:4/144: write d2/df/d17/f1b [520277,101714] 0 2026-03-09T20:47:15.332 INFO:tasks.workunit.client.1.vm10.stdout:0/47: dwrite f1 [0,4194304] 0 2026-03-09T20:47:15.333 INFO:tasks.workunit.client.1.vm10.stdout:1/95: unlink d2/da/c1d 0 2026-03-09T20:47:15.334 INFO:tasks.workunit.client.1.vm10.stdout:0/48: fsync d2/f5 0 2026-03-09T20:47:15.340 INFO:tasks.workunit.client.0.vm07.stdout:2/220: sync 2026-03-09T20:47:15.340 INFO:tasks.workunit.client.1.vm10.stdout:7/86: dwrite db/f16 [0,4194304] 0 2026-03-09T20:47:15.355 INFO:tasks.workunit.client.1.vm10.stdout:6/55: truncate f0 6494534 0 2026-03-09T20:47:15.364 INFO:tasks.workunit.client.1.vm10.stdout:1/96: mknod d2/c23 0 2026-03-09T20:47:15.364 INFO:tasks.workunit.client.1.vm10.stdout:3/57: rmdir dc/dd 39 
2026-03-09T20:47:15.366 INFO:tasks.workunit.client.1.vm10.stdout:4/62: truncate d1/fe 782274 0 2026-03-09T20:47:15.366 INFO:tasks.workunit.client.0.vm07.stdout:0/207: symlink d1/d2/d33/l49 0 2026-03-09T20:47:15.367 INFO:tasks.workunit.client.0.vm07.stdout:4/145: stat d2/cb 0 2026-03-09T20:47:15.368 INFO:tasks.workunit.client.1.vm10.stdout:1/97: truncate d2/da/f10 1972764 0 2026-03-09T20:47:15.372 INFO:tasks.workunit.client.0.vm07.stdout:5/159: rmdir d5 39 2026-03-09T20:47:15.373 INFO:tasks.workunit.client.0.vm07.stdout:2/221: write d2/f3e [643695,119069] 0 2026-03-09T20:47:15.378 INFO:tasks.workunit.client.1.vm10.stdout:8/85: write d0/f9 [5167519,92191] 0 2026-03-09T20:47:15.380 INFO:tasks.workunit.client.0.vm07.stdout:6/198: dwrite d8/d16/f17 [0,4194304] 0 2026-03-09T20:47:15.382 INFO:tasks.workunit.client.0.vm07.stdout:6/199: chown d8/d16/d22/d24/f25 268262017 1 2026-03-09T20:47:15.384 INFO:tasks.workunit.client.0.vm07.stdout:1/145: truncate d3/f9 2760803 0 2026-03-09T20:47:15.384 INFO:tasks.workunit.client.0.vm07.stdout:1/146: fsync d3/f8 0 2026-03-09T20:47:15.385 INFO:tasks.workunit.client.0.vm07.stdout:1/147: write d3/d12/f13 [3947017,2400] 0 2026-03-09T20:47:15.386 INFO:tasks.workunit.client.1.vm10.stdout:0/49: creat d2/d9/da/fc x:0 0 0 2026-03-09T20:47:15.387 INFO:tasks.workunit.client.0.vm07.stdout:1/148: chown d3/d12/f20 24 1 2026-03-09T20:47:15.387 INFO:tasks.workunit.client.0.vm07.stdout:1/149: write d3/f10 [1771309,50909] 0 2026-03-09T20:47:15.390 INFO:tasks.workunit.client.0.vm07.stdout:1/150: dread d3/f11 [0,4194304] 0 2026-03-09T20:47:15.390 INFO:tasks.workunit.client.0.vm07.stdout:1/151: write d3/f5 [3028428,55833] 0 2026-03-09T20:47:15.393 INFO:tasks.workunit.client.0.vm07.stdout:4/146: chown d2/f1d 3274902 1 2026-03-09T20:47:15.397 INFO:tasks.workunit.client.0.vm07.stdout:9/174: getdents d4 0 2026-03-09T20:47:15.401 INFO:tasks.workunit.client.1.vm10.stdout:1/98: symlink d2/l24 0 2026-03-09T20:47:15.404 
INFO:tasks.workunit.client.0.vm07.stdout:4/147: dread f1 [0,4194304] 0 2026-03-09T20:47:15.407 INFO:tasks.workunit.client.0.vm07.stdout:3/161: fsync d1/d5/d9/fe 0 2026-03-09T20:47:15.408 INFO:tasks.workunit.client.1.vm10.stdout:3/58: dwrite dc/dd/fe [0,4194304] 0 2026-03-09T20:47:15.414 INFO:tasks.workunit.client.1.vm10.stdout:4/63: symlink d1/l15 0 2026-03-09T20:47:15.415 INFO:tasks.workunit.client.1.vm10.stdout:8/86: mknod d0/c1f 0 2026-03-09T20:47:15.415 INFO:tasks.workunit.client.1.vm10.stdout:1/99: mkdir d2/da/d25 0 2026-03-09T20:47:15.415 INFO:tasks.workunit.client.1.vm10.stdout:8/87: readlink d0/l5 0 2026-03-09T20:47:15.416 INFO:tasks.workunit.client.1.vm10.stdout:8/88: chown d0/l1c 0 1 2026-03-09T20:47:15.418 INFO:tasks.workunit.client.1.vm10.stdout:4/64: dread d1/d2/f7 [0,4194304] 0 2026-03-09T20:47:15.425 INFO:tasks.workunit.client.0.vm07.stdout:5/160: read d5/f23 [2103954,105961] 0 2026-03-09T20:47:15.425 INFO:tasks.workunit.client.1.vm10.stdout:4/65: read - d1/f9 zero size 2026-03-09T20:47:15.426 INFO:tasks.workunit.client.1.vm10.stdout:3/59: write f4 [2567143,60417] 0 2026-03-09T20:47:15.426 INFO:tasks.workunit.client.1.vm10.stdout:0/50: sync 2026-03-09T20:47:15.427 INFO:tasks.workunit.client.0.vm07.stdout:1/152: dwrite d3/f11 [0,4194304] 0 2026-03-09T20:47:15.427 INFO:tasks.workunit.client.1.vm10.stdout:0/51: dread - d2/d9/da/fc zero size 2026-03-09T20:47:15.427 INFO:tasks.workunit.client.1.vm10.stdout:3/60: write dc/ff [3507594,7534] 0 2026-03-09T20:47:15.430 INFO:tasks.workunit.client.1.vm10.stdout:0/52: write d2/f5 [3559699,62470] 0 2026-03-09T20:47:15.433 INFO:tasks.workunit.client.1.vm10.stdout:0/53: stat d2/d9/da 0 2026-03-09T20:47:15.434 INFO:tasks.workunit.client.1.vm10.stdout:8/89: dwrite d0/f11 [0,4194304] 0 2026-03-09T20:47:15.440 INFO:tasks.workunit.client.1.vm10.stdout:8/90: write d0/fa [351597,27643] 0 2026-03-09T20:47:15.440 INFO:tasks.workunit.client.1.vm10.stdout:0/54: dread d2/f5 [0,4194304] 0 2026-03-09T20:47:15.448 
INFO:tasks.workunit.client.1.vm10.stdout:3/61: unlink dc/dd/fe 0 2026-03-09T20:47:15.448 INFO:tasks.workunit.client.1.vm10.stdout:3/62: write f7 [484757,55261] 0 2026-03-09T20:47:15.449 INFO:tasks.workunit.client.1.vm10.stdout:3/63: read dc/ff [1702047,71478] 0 2026-03-09T20:47:15.449 INFO:tasks.workunit.client.1.vm10.stdout:3/64: readlink l0 0 2026-03-09T20:47:15.449 INFO:tasks.workunit.client.1.vm10.stdout:3/65: dread - dc/f11 zero size 2026-03-09T20:47:15.450 INFO:tasks.workunit.client.1.vm10.stdout:3/66: dread - dc/f10 zero size 2026-03-09T20:47:15.452 INFO:tasks.workunit.client.0.vm07.stdout:6/200: creat d8/d26/f3d x:0 0 0 2026-03-09T20:47:15.455 INFO:tasks.workunit.client.0.vm07.stdout:6/201: dwrite d8/f3b [0,4194304] 0 2026-03-09T20:47:15.456 INFO:tasks.workunit.client.0.vm07.stdout:6/202: write d8/d26/d2a/f37 [282703,54350] 0 2026-03-09T20:47:15.459 INFO:tasks.workunit.client.1.vm10.stdout:8/91: symlink d0/l20 0 2026-03-09T20:47:15.460 INFO:tasks.workunit.client.1.vm10.stdout:8/92: truncate d0/f9 5686628 0 2026-03-09T20:47:15.463 INFO:tasks.workunit.client.1.vm10.stdout:0/55: creat d2/d9/da/fd x:0 0 0 2026-03-09T20:47:15.465 INFO:tasks.workunit.client.0.vm07.stdout:8/132: truncate d1/dc/d14/f18 1261115 0 2026-03-09T20:47:15.469 INFO:tasks.workunit.client.0.vm07.stdout:9/175: unlink d4/d8/d19/d26/l38 0 2026-03-09T20:47:15.471 INFO:tasks.workunit.client.0.vm07.stdout:4/148: fdatasync d2/f9 0 2026-03-09T20:47:15.471 INFO:tasks.workunit.client.1.vm10.stdout:8/93: dwrite d0/f1 [0,4194304] 0 2026-03-09T20:47:15.481 INFO:tasks.workunit.client.0.vm07.stdout:0/208: getdents d1/d2/d33/d35 0 2026-03-09T20:47:15.484 INFO:tasks.workunit.client.0.vm07.stdout:8/133: mkdir d1/dc/d16/d26 0 2026-03-09T20:47:15.486 INFO:tasks.workunit.client.0.vm07.stdout:0/209: dwrite d1/d2/ff [0,4194304] 0 2026-03-09T20:47:15.488 INFO:tasks.workunit.client.1.vm10.stdout:8/94: sync 2026-03-09T20:47:15.491 INFO:tasks.workunit.client.0.vm07.stdout:8/134: dwrite d1/f25 [0,4194304] 0 
2026-03-09T20:47:15.509 INFO:tasks.workunit.client.0.vm07.stdout:1/153: getdents d3/d23 0 2026-03-09T20:47:15.510 INFO:tasks.workunit.client.0.vm07.stdout:4/149: write d2/d1f/f2c [4444170,53051] 0 2026-03-09T20:47:15.510 INFO:tasks.workunit.client.0.vm07.stdout:1/154: write d3/d12/f13 [486879,113155] 0 2026-03-09T20:47:15.514 INFO:tasks.workunit.client.1.vm10.stdout:8/95: creat d0/f21 x:0 0 0 2026-03-09T20:47:15.517 INFO:tasks.workunit.client.0.vm07.stdout:2/222: link d2/f17 d2/f40 0 2026-03-09T20:47:15.517 INFO:tasks.workunit.client.0.vm07.stdout:5/161: mkdir d5/df/d13/d30 0 2026-03-09T20:47:15.518 INFO:tasks.workunit.client.1.vm10.stdout:8/96: dread d0/f9 [0,4194304] 0 2026-03-09T20:47:15.523 INFO:tasks.workunit.client.1.vm10.stdout:8/97: mkdir d0/d22 0 2026-03-09T20:47:15.523 INFO:tasks.workunit.client.0.vm07.stdout:1/155: write d3/f24 [2934501,72781] 0 2026-03-09T20:47:15.523 INFO:tasks.workunit.client.0.vm07.stdout:4/150: rmdir d2/df/d17 39 2026-03-09T20:47:15.527 INFO:tasks.workunit.client.1.vm10.stdout:8/98: rename d0/l1c to d0/d22/l23 0 2026-03-09T20:47:15.528 INFO:tasks.workunit.client.0.vm07.stdout:0/210: getdents d1/d2/d33/d35 0 2026-03-09T20:47:15.530 INFO:tasks.workunit.client.0.vm07.stdout:3/162: link d1/l8 d1/d35/l37 0 2026-03-09T20:47:15.530 INFO:tasks.workunit.client.0.vm07.stdout:3/163: chown d1/d5/d9/f33 774467457 1 2026-03-09T20:47:15.531 INFO:tasks.workunit.client.0.vm07.stdout:6/203: getdents d8/d26/d2a 0 2026-03-09T20:47:15.531 INFO:tasks.workunit.client.0.vm07.stdout:6/204: truncate d8/f20 4612070 0 2026-03-09T20:47:15.532 INFO:tasks.workunit.client.1.vm10.stdout:8/99: dwrite d0/f13 [0,4194304] 0 2026-03-09T20:47:15.532 INFO:tasks.workunit.client.0.vm07.stdout:6/205: chown d8/d16/d22/d3a/f39 314617041 1 2026-03-09T20:47:15.532 INFO:tasks.workunit.client.0.vm07.stdout:6/206: chown d8/d16/f17 113157128 1 2026-03-09T20:47:15.533 INFO:tasks.workunit.client.1.vm10.stdout:8/100: chown d0 86 1 2026-03-09T20:47:15.533 
INFO:tasks.workunit.client.0.vm07.stdout:2/223: fdatasync d2/f4 0 2026-03-09T20:47:15.534 INFO:tasks.workunit.client.1.vm10.stdout:8/101: chown d0/f6 0 1 2026-03-09T20:47:15.535 INFO:tasks.workunit.client.0.vm07.stdout:0/211: fsync d1/f2f 0 2026-03-09T20:47:15.538 INFO:tasks.workunit.client.0.vm07.stdout:6/207: dwrite d8/f15 [4194304,4194304] 0 2026-03-09T20:47:15.554 INFO:tasks.workunit.client.0.vm07.stdout:0/212: mknod d1/d1f/d20/c4a 0 2026-03-09T20:47:15.554 INFO:tasks.workunit.client.0.vm07.stdout:6/208: unlink d8/c13 0 2026-03-09T20:47:15.554 INFO:tasks.workunit.client.0.vm07.stdout:0/213: mkdir d1/d2/d4b 0 2026-03-09T20:47:15.554 INFO:tasks.workunit.client.0.vm07.stdout:0/214: fdatasync d1/f3b 0 2026-03-09T20:47:15.554 INFO:tasks.workunit.client.0.vm07.stdout:2/224: link d2/f10 d2/db/f41 0 2026-03-09T20:47:15.554 INFO:tasks.workunit.client.0.vm07.stdout:0/215: symlink d1/d2/dc/d17/l4c 0 2026-03-09T20:47:15.554 INFO:tasks.workunit.client.0.vm07.stdout:0/216: dread - d1/f2f zero size 2026-03-09T20:47:15.554 INFO:tasks.workunit.client.0.vm07.stdout:2/225: creat d2/db/d1c/f42 x:0 0 0 2026-03-09T20:47:15.555 INFO:tasks.workunit.client.0.vm07.stdout:2/226: dread d2/ff [0,4194304] 0 2026-03-09T20:47:15.557 INFO:tasks.workunit.client.0.vm07.stdout:0/217: dwrite d1/d2/d33/d35/f45 [0,4194304] 0 2026-03-09T20:47:15.560 INFO:tasks.workunit.client.0.vm07.stdout:2/227: symlink d2/db/l43 0 2026-03-09T20:47:15.601 INFO:tasks.workunit.client.1.vm10.stdout:2/62: write d5/fa [4742231,20877] 0 2026-03-09T20:47:15.601 INFO:tasks.workunit.client.1.vm10.stdout:9/86: rmdir d2/d3 39 2026-03-09T20:47:15.603 INFO:tasks.workunit.client.1.vm10.stdout:2/63: fdatasync d5/fb 0 2026-03-09T20:47:15.605 INFO:tasks.workunit.client.0.vm07.stdout:6/209: rmdir d8/d16 39 2026-03-09T20:47:15.611 INFO:tasks.workunit.client.1.vm10.stdout:9/87: readlink d2/d3/l9 0 2026-03-09T20:47:15.611 INFO:tasks.workunit.client.0.vm07.stdout:6/210: symlink d8/d16/d22/d33/l3e 0 2026-03-09T20:47:15.613 
INFO:tasks.workunit.client.0.vm07.stdout:7/153: truncate d3/da/db/d14/f2a 4055035 0 2026-03-09T20:47:15.614 INFO:tasks.workunit.client.0.vm07.stdout:7/154: read d3/da/db/d14/f24 [20689,2467] 0 2026-03-09T20:47:15.615 INFO:tasks.workunit.client.1.vm10.stdout:2/64: dwrite d5/fb [0,4194304] 0 2026-03-09T20:47:15.615 INFO:tasks.workunit.client.1.vm10.stdout:9/88: dread d2/d3/f5 [0,4194304] 0 2026-03-09T20:47:15.616 INFO:tasks.workunit.client.0.vm07.stdout:7/155: creat d3/da/db/d14/d1f/d2b/f2f x:0 0 0 2026-03-09T20:47:15.627 INFO:tasks.workunit.client.1.vm10.stdout:2/65: chown d5/lc 973 1 2026-03-09T20:47:15.627 INFO:tasks.workunit.client.1.vm10.stdout:9/89: mknod d2/d3/de/c1b 0 2026-03-09T20:47:15.629 INFO:tasks.workunit.client.1.vm10.stdout:9/90: fdatasync d2/d3/de/f19 0 2026-03-09T20:47:15.632 INFO:tasks.workunit.client.1.vm10.stdout:9/91: creat d2/d3/f1c x:0 0 0 2026-03-09T20:47:15.632 INFO:tasks.workunit.client.1.vm10.stdout:2/66: dwrite d5/fb [0,4194304] 0 2026-03-09T20:47:15.635 INFO:tasks.workunit.client.1.vm10.stdout:9/92: creat d2/d3/f1d x:0 0 0 2026-03-09T20:47:15.640 INFO:tasks.workunit.client.1.vm10.stdout:9/93: write d2/f16 [511962,45910] 0 2026-03-09T20:47:15.640 INFO:tasks.workunit.client.1.vm10.stdout:2/67: dwrite f1 [0,4194304] 0 2026-03-09T20:47:15.640 INFO:tasks.workunit.client.1.vm10.stdout:2/68: chown l4 223 1 2026-03-09T20:47:15.640 INFO:tasks.workunit.client.1.vm10.stdout:2/69: fsync d5/f6 0 2026-03-09T20:47:15.643 INFO:tasks.workunit.client.1.vm10.stdout:2/70: symlink d5/l13 0 2026-03-09T20:47:15.644 INFO:tasks.workunit.client.1.vm10.stdout:2/71: read d5/f7 [559432,5020] 0 2026-03-09T20:47:15.645 INFO:tasks.workunit.client.1.vm10.stdout:2/72: truncate d5/fd 1150192 0 2026-03-09T20:47:15.646 INFO:tasks.workunit.client.1.vm10.stdout:9/94: creat d2/d12/f1e x:0 0 0 2026-03-09T20:47:15.652 INFO:tasks.workunit.client.1.vm10.stdout:9/95: dread d2/d3/f5 [0,4194304] 0 2026-03-09T20:47:15.652 INFO:tasks.workunit.client.1.vm10.stdout:9/96: chown 
d2/d3/de/f19 147507 1 2026-03-09T20:47:15.655 INFO:tasks.workunit.client.1.vm10.stdout:9/97: write d2/d3/fa [3456240,105498] 0 2026-03-09T20:47:15.657 INFO:tasks.workunit.client.1.vm10.stdout:2/73: symlink d5/l14 0 2026-03-09T20:47:15.657 INFO:tasks.workunit.client.1.vm10.stdout:9/98: mknod d2/d3/c1f 0 2026-03-09T20:47:15.665 INFO:tasks.workunit.client.0.vm07.stdout:7/156: sync 2026-03-09T20:47:15.666 INFO:tasks.workunit.client.1.vm10.stdout:2/74: chown d5/c12 7077 1 2026-03-09T20:47:15.669 INFO:tasks.workunit.client.0.vm07.stdout:7/157: rename d3/da/db/d14/f28 to d3/da/db/d14/f30 0 2026-03-09T20:47:15.669 INFO:tasks.workunit.client.1.vm10.stdout:2/75: dread d5/f6 [0,4194304] 0 2026-03-09T20:47:15.671 INFO:tasks.workunit.client.1.vm10.stdout:2/76: chown d5/c9 0 1 2026-03-09T20:47:15.671 INFO:tasks.workunit.client.0.vm07.stdout:7/158: mknod d3/da/db/d14/d1f/d2b/c31 0 2026-03-09T20:47:15.672 INFO:tasks.workunit.client.0.vm07.stdout:7/159: write d3/da/db/d14/f1a [1505703,31645] 0 2026-03-09T20:47:15.674 INFO:tasks.workunit.client.0.vm07.stdout:7/160: mkdir d3/da/db/d32 0 2026-03-09T20:47:15.675 INFO:tasks.workunit.client.1.vm10.stdout:9/99: dwrite d2/d3/f1c [0,4194304] 0 2026-03-09T20:47:15.679 INFO:tasks.workunit.client.0.vm07.stdout:7/161: dread d3/da/f11 [0,4194304] 0 2026-03-09T20:47:15.682 INFO:tasks.workunit.client.0.vm07.stdout:7/162: unlink d3/da/db/c2d 0 2026-03-09T20:47:15.682 INFO:tasks.workunit.client.0.vm07.stdout:7/163: readlink l1 0 2026-03-09T20:47:15.683 INFO:tasks.workunit.client.0.vm07.stdout:7/164: chown d3/da/db/d14/c29 2 1 2026-03-09T20:47:15.687 INFO:tasks.workunit.client.1.vm10.stdout:9/100: creat d2/d12/f20 x:0 0 0 2026-03-09T20:47:15.693 INFO:tasks.workunit.client.1.vm10.stdout:2/77: sync 2026-03-09T20:47:15.693 INFO:tasks.workunit.client.1.vm10.stdout:2/78: chown d5 9 1 2026-03-09T20:47:15.696 INFO:tasks.workunit.client.1.vm10.stdout:9/101: dwrite d2/d3/fd [4194304,4194304] 0 2026-03-09T20:47:15.717 
INFO:tasks.workunit.client.0.vm07.stdout:2/228: rmdir d2/d11 39 2026-03-09T20:47:15.717 INFO:tasks.workunit.client.1.vm10.stdout:7/87: fsync db/f16 0 2026-03-09T20:47:15.718 INFO:tasks.workunit.client.0.vm07.stdout:2/229: read d2/db/d1c/f3a [3173905,35499] 0 2026-03-09T20:47:15.719 INFO:tasks.workunit.client.0.vm07.stdout:2/230: write d2/db/d28/f32 [2826521,55244] 0 2026-03-09T20:47:15.725 INFO:tasks.workunit.client.0.vm07.stdout:2/231: creat d2/d11/f44 x:0 0 0 2026-03-09T20:47:15.729 INFO:tasks.workunit.client.1.vm10.stdout:7/88: getdents db 0 2026-03-09T20:47:15.731 INFO:tasks.workunit.client.1.vm10.stdout:5/55: write d2/f5 [4356614,62362] 0 2026-03-09T20:47:15.733 INFO:tasks.workunit.client.1.vm10.stdout:6/56: read f0 [6106058,72850] 0 2026-03-09T20:47:15.737 INFO:tasks.workunit.client.1.vm10.stdout:7/89: truncate db/f19 1108120 0 2026-03-09T20:47:15.739 INFO:tasks.workunit.client.1.vm10.stdout:7/90: dread db/f16 [0,4194304] 0 2026-03-09T20:47:15.741 INFO:tasks.workunit.client.1.vm10.stdout:5/56: creat d2/f16 x:0 0 0 2026-03-09T20:47:15.741 INFO:tasks.workunit.client.1.vm10.stdout:4/66: dread d1/fe [0,4194304] 0 2026-03-09T20:47:15.742 INFO:tasks.workunit.client.1.vm10.stdout:7/91: mkdir db/de/d1d 0 2026-03-09T20:47:15.747 INFO:tasks.workunit.client.1.vm10.stdout:5/57: mkdir d2/d17 0 2026-03-09T20:47:15.747 INFO:tasks.workunit.client.1.vm10.stdout:7/92: fdatasync fa 0 2026-03-09T20:47:15.750 INFO:tasks.workunit.client.1.vm10.stdout:7/93: rmdir db/de/d1d 0 2026-03-09T20:47:15.754 INFO:tasks.workunit.client.1.vm10.stdout:5/58: dread d2/f11 [0,4194304] 0 2026-03-09T20:47:15.756 INFO:tasks.workunit.client.1.vm10.stdout:7/94: dwrite db/de/f1a [0,4194304] 0 2026-03-09T20:47:15.759 INFO:tasks.workunit.client.1.vm10.stdout:5/59: rmdir d2/d17 0 2026-03-09T20:47:15.759 INFO:tasks.workunit.client.1.vm10.stdout:5/60: chown d2/lf 15 1 2026-03-09T20:47:15.767 INFO:tasks.workunit.client.1.vm10.stdout:7/95: dwrite db/de/ff [0,4194304] 0 2026-03-09T20:47:15.772 
INFO:tasks.workunit.client.1.vm10.stdout:7/96: dwrite f3 [0,4194304] 0 2026-03-09T20:47:15.779 INFO:tasks.workunit.client.1.vm10.stdout:7/97: dread db/de/f14 [0,4194304] 0 2026-03-09T20:47:15.781 INFO:tasks.workunit.client.1.vm10.stdout:7/98: creat db/de/f1e x:0 0 0 2026-03-09T20:47:15.783 INFO:tasks.workunit.client.1.vm10.stdout:7/99: mkdir db/d1f 0 2026-03-09T20:47:15.834 INFO:tasks.workunit.client.1.vm10.stdout:1/100: rmdir d2 39 2026-03-09T20:47:15.835 INFO:tasks.workunit.client.1.vm10.stdout:1/101: fsync d2/da/f22 0 2026-03-09T20:47:15.836 INFO:tasks.workunit.client.1.vm10.stdout:4/67: truncate d1/d2/f7 3599440 0 2026-03-09T20:47:15.836 INFO:tasks.workunit.client.1.vm10.stdout:1/102: read - d2/da/f22 zero size 2026-03-09T20:47:15.836 INFO:tasks.workunit.client.1.vm10.stdout:3/67: rmdir dc/dd 39 2026-03-09T20:47:15.837 INFO:tasks.workunit.client.1.vm10.stdout:4/68: creat d1/d8/f16 x:0 0 0 2026-03-09T20:47:15.839 INFO:tasks.workunit.client.1.vm10.stdout:3/68: creat dc/dd/f13 x:0 0 0 2026-03-09T20:47:15.841 INFO:tasks.workunit.client.1.vm10.stdout:1/103: getdents d2 0 2026-03-09T20:47:15.844 INFO:tasks.workunit.client.1.vm10.stdout:3/69: mkdir dc/d14 0 2026-03-09T20:47:15.845 INFO:tasks.workunit.client.0.vm07.stdout:4/151: dread d2/f6 [0,4194304] 0 2026-03-09T20:47:15.847 INFO:tasks.workunit.client.1.vm10.stdout:1/104: creat d2/da/f26 x:0 0 0 2026-03-09T20:47:15.847 INFO:tasks.workunit.client.1.vm10.stdout:1/105: fsync d2/da/f10 0 2026-03-09T20:47:15.847 INFO:tasks.workunit.client.1.vm10.stdout:3/70: creat dc/dd/f15 x:0 0 0 2026-03-09T20:47:15.848 INFO:tasks.workunit.client.0.vm07.stdout:4/152: fdatasync d2/df/d17/f2a 0 2026-03-09T20:47:15.848 INFO:tasks.workunit.client.0.vm07.stdout:4/153: readlink d2/l22 0 2026-03-09T20:47:15.849 INFO:tasks.workunit.client.1.vm10.stdout:4/69: sync 2026-03-09T20:47:15.850 INFO:tasks.workunit.client.1.vm10.stdout:3/71: symlink dc/dd/l16 0 2026-03-09T20:47:15.850 INFO:tasks.workunit.client.0.vm07.stdout:4/154: mkdir d2/d1f/d2d 0 
2026-03-09T20:47:15.850 INFO:tasks.workunit.client.0.vm07.stdout:4/155: chown d2/df 1 1 2026-03-09T20:47:15.851 INFO:tasks.workunit.client.1.vm10.stdout:1/106: creat d2/da/d25/f27 x:0 0 0 2026-03-09T20:47:15.855 INFO:tasks.workunit.client.0.vm07.stdout:4/156: dwrite d2/df/f23 [4194304,4194304] 0 2026-03-09T20:47:15.856 INFO:tasks.workunit.client.1.vm10.stdout:1/107: dwrite d2/da/f22 [0,4194304] 0 2026-03-09T20:47:15.869 INFO:tasks.workunit.client.1.vm10.stdout:1/108: dwrite d2/da/f26 [0,4194304] 0 2026-03-09T20:47:15.869 INFO:tasks.workunit.client.0.vm07.stdout:4/157: getdents d2/d1f/d2d 0 2026-03-09T20:47:15.869 INFO:tasks.workunit.client.0.vm07.stdout:4/158: readlink d2/l14 0 2026-03-09T20:47:15.870 INFO:tasks.workunit.client.0.vm07.stdout:4/159: write d2/f28 [584930,126610] 0 2026-03-09T20:47:15.870 INFO:tasks.workunit.client.0.vm07.stdout:4/160: read f1 [1496454,51942] 0 2026-03-09T20:47:15.873 INFO:tasks.workunit.client.1.vm10.stdout:1/109: fsync d2/da/f20 0 2026-03-09T20:47:15.925 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:15 vm10.local ceph-mon[57011]: Upgrade: Updating mgr.vm10.byqahe 2026-03-09T20:47:15.925 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:15 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:15.925 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:15 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm10.byqahe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:47:15.925 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:15 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:47:15.925 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:15 vm10.local ceph-mon[57011]: from='mgr.14225 
192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:15.925 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:15 vm10.local ceph-mon[57011]: Deploying daemon mgr.vm10.byqahe on vm10 2026-03-09T20:47:15.927 INFO:tasks.workunit.client.0.vm07.stdout:5/162: truncate d5/f23 3773161 0 2026-03-09T20:47:15.929 INFO:tasks.workunit.client.0.vm07.stdout:5/163: rmdir d5/df/d13 39 2026-03-09T20:47:15.932 INFO:tasks.workunit.client.0.vm07.stdout:5/164: unlink d5/d19/c2f 0 2026-03-09T20:47:15.935 INFO:tasks.workunit.client.1.vm10.stdout:0/56: truncate f1 448475 0 2026-03-09T20:47:15.936 INFO:tasks.workunit.client.0.vm07.stdout:5/165: mknod d5/c31 0 2026-03-09T20:47:15.937 INFO:tasks.workunit.client.0.vm07.stdout:9/176: write d4/d16/d29/d24/f2e [823089,78724] 0 2026-03-09T20:47:15.938 INFO:tasks.workunit.client.0.vm07.stdout:9/177: write d4/d11/d2a/f39 [742281,32264] 0 2026-03-09T20:47:15.939 INFO:tasks.workunit.client.0.vm07.stdout:9/178: stat d4/d8/dc/f25 0 2026-03-09T20:47:15.941 INFO:tasks.workunit.client.1.vm10.stdout:0/57: chown d2/c8 477792 1 2026-03-09T20:47:15.944 INFO:tasks.workunit.client.0.vm07.stdout:9/179: fdatasync d4/d8/dc/ff 0 2026-03-09T20:47:15.951 INFO:tasks.workunit.client.0.vm07.stdout:9/180: unlink d4/d16/d29/d24/f2e 0 2026-03-09T20:47:15.955 INFO:tasks.workunit.client.1.vm10.stdout:8/102: rmdir d0 39 2026-03-09T20:47:15.956 INFO:tasks.workunit.client.0.vm07.stdout:1/156: write d3/fc [172520,127208] 0 2026-03-09T20:47:15.956 INFO:tasks.workunit.client.0.vm07.stdout:0/218: getdents d1/d1f/d20 0 2026-03-09T20:47:15.957 INFO:tasks.workunit.client.0.vm07.stdout:0/219: write d1/d2/d33/d35/f46 [177557,43910] 0 2026-03-09T20:47:15.958 INFO:tasks.workunit.client.0.vm07.stdout:3/164: dwrite d1/d5/d9/f1b [0,4194304] 0 2026-03-09T20:47:15.965 INFO:tasks.workunit.client.0.vm07.stdout:6/211: getdents d8 0 2026-03-09T20:47:15.965 INFO:tasks.workunit.client.0.vm07.stdout:9/181: creat 
d4/d8/d19/f42 x:0 0 0 2026-03-09T20:47:15.965 INFO:tasks.workunit.client.0.vm07.stdout:9/182: write d4/f10 [422748,5012] 0 2026-03-09T20:47:15.965 INFO:tasks.workunit.client.0.vm07.stdout:9/183: write d4/d8/d19/f42 [643641,79080] 0 2026-03-09T20:47:15.966 INFO:tasks.workunit.client.0.vm07.stdout:6/212: rmdir d8/d16/d22/d24/d2b 39 2026-03-09T20:47:15.968 INFO:tasks.workunit.client.1.vm10.stdout:8/103: rename d0/ld to d0/l24 0 2026-03-09T20:47:15.969 INFO:tasks.workunit.client.0.vm07.stdout:3/165: mknod d1/c38 0 2026-03-09T20:47:15.971 INFO:tasks.workunit.client.0.vm07.stdout:1/157: creat d3/f28 x:0 0 0 2026-03-09T20:47:15.971 INFO:tasks.workunit.client.0.vm07.stdout:1/158: chown d3/d14/f25 870851949 1 2026-03-09T20:47:15.973 INFO:tasks.workunit.client.1.vm10.stdout:8/104: mkdir d0/d22/d25 0 2026-03-09T20:47:15.975 INFO:tasks.workunit.client.0.vm07.stdout:1/159: rmdir d3/d12 39 2026-03-09T20:47:15.975 INFO:tasks.workunit.client.1.vm10.stdout:9/102: dread d2/d3/f7 [0,4194304] 0 2026-03-09T20:47:15.976 INFO:tasks.workunit.client.0.vm07.stdout:3/166: symlink d1/d5/d9/d2f/l39 0 2026-03-09T20:47:15.978 INFO:tasks.workunit.client.1.vm10.stdout:8/105: symlink d0/l26 0 2026-03-09T20:47:15.980 INFO:tasks.workunit.client.1.vm10.stdout:9/103: symlink d2/d12/l21 0 2026-03-09T20:47:15.980 INFO:tasks.workunit.client.0.vm07.stdout:3/167: readlink d1/l2c 0 2026-03-09T20:47:15.982 INFO:tasks.workunit.client.1.vm10.stdout:9/104: symlink d2/d12/l22 0 2026-03-09T20:47:15.982 INFO:tasks.workunit.client.0.vm07.stdout:3/168: symlink d1/d5/l3a 0 2026-03-09T20:47:15.984 INFO:tasks.workunit.client.1.vm10.stdout:8/106: dread d0/f17 [0,4194304] 0 2026-03-09T20:47:15.985 INFO:tasks.workunit.client.1.vm10.stdout:9/105: unlink d2/d3/de/c1b 0 2026-03-09T20:47:15.985 INFO:tasks.workunit.client.0.vm07.stdout:3/169: rename d1/d5/d10/c2b to d1/d5/d9/d2f/c3b 0 2026-03-09T20:47:15.985 INFO:tasks.workunit.client.1.vm10.stdout:8/107: write d0/fe [1279839,10435] 0 2026-03-09T20:47:15.987 
INFO:tasks.workunit.client.0.vm07.stdout:3/170: creat d1/d5/d9/f3c x:0 0 0 2026-03-09T20:47:15.988 INFO:tasks.workunit.client.0.vm07.stdout:3/171: mkdir d1/d5/d9/d2f/d3d 0 2026-03-09T20:47:15.991 INFO:tasks.workunit.client.1.vm10.stdout:8/108: dwrite d0/fa [4194304,4194304] 0 2026-03-09T20:47:15.992 INFO:tasks.workunit.client.1.vm10.stdout:9/106: sync 2026-03-09T20:47:15.994 INFO:tasks.workunit.client.1.vm10.stdout:9/107: symlink d2/d12/l23 0 2026-03-09T20:47:15.994 INFO:tasks.workunit.client.1.vm10.stdout:8/109: getdents d0/d22/d25 0 2026-03-09T20:47:15.996 INFO:tasks.workunit.client.1.vm10.stdout:9/108: rename d2/d3/f1d to d2/d3/de/f24 0 2026-03-09T20:47:15.997 INFO:tasks.workunit.client.1.vm10.stdout:9/109: write d2/d12/f20 [373800,104430] 0 2026-03-09T20:47:16.005 INFO:tasks.workunit.client.1.vm10.stdout:9/110: link c1 d2/c25 0 2026-03-09T20:47:16.005 INFO:tasks.workunit.client.1.vm10.stdout:9/111: chown d2/d3/c15 671857 1 2026-03-09T20:47:16.005 INFO:tasks.workunit.client.1.vm10.stdout:9/112: fdatasync d2/d3/fa 0 2026-03-09T20:47:16.006 INFO:tasks.workunit.client.1.vm10.stdout:9/113: creat d2/d12/f26 x:0 0 0 2026-03-09T20:47:16.007 INFO:tasks.workunit.client.1.vm10.stdout:9/114: truncate d2/d3/de/f24 204064 0 2026-03-09T20:47:16.009 INFO:tasks.workunit.client.1.vm10.stdout:9/115: unlink d2/d3/de/f19 0 2026-03-09T20:47:16.010 INFO:tasks.workunit.client.1.vm10.stdout:9/116: symlink d2/l27 0 2026-03-09T20:47:16.021 INFO:tasks.workunit.client.1.vm10.stdout:9/117: mkdir d2/d28 0 2026-03-09T20:47:16.022 INFO:tasks.workunit.client.1.vm10.stdout:9/118: read - d2/f1a zero size 2026-03-09T20:47:16.022 INFO:tasks.workunit.client.1.vm10.stdout:9/119: read d2/f6 [2415696,36617] 0 2026-03-09T20:47:16.022 INFO:tasks.workunit.client.0.vm07.stdout:5/166: dread d5/df/d13/f17 [0,4194304] 0 2026-03-09T20:47:16.022 INFO:tasks.workunit.client.0.vm07.stdout:5/167: stat d5/d19 0 2026-03-09T20:47:16.025 INFO:tasks.workunit.client.0.vm07.stdout:3/172: sync 2026-03-09T20:47:16.028 
INFO:tasks.workunit.client.0.vm07.stdout:3/173: chown d1/d5/d9/d11/d1f/l24 657 1 2026-03-09T20:47:16.028 INFO:tasks.workunit.client.0.vm07.stdout:3/174: chown d1/l2c 1349013964 1 2026-03-09T20:47:16.029 INFO:tasks.workunit.client.0.vm07.stdout:3/175: chown d1/d5/d9/d2f/l39 6 1 2026-03-09T20:47:16.033 INFO:tasks.workunit.client.0.vm07.stdout:7/165: dread d3/da/db/d14/f2a [0,4194304] 0 2026-03-09T20:47:16.039 INFO:tasks.workunit.client.0.vm07.stdout:7/166: unlink d3/da/db/d14/c16 0 2026-03-09T20:47:16.040 INFO:tasks.workunit.client.0.vm07.stdout:7/167: truncate d3/da/db/f1e 4790945 0 2026-03-09T20:47:16.043 INFO:tasks.workunit.client.0.vm07.stdout:3/176: symlink d1/l3e 0 2026-03-09T20:47:16.043 INFO:tasks.workunit.client.0.vm07.stdout:3/177: dread - d1/d5/d9/f3c zero size 2026-03-09T20:47:16.046 INFO:tasks.workunit.client.1.vm10.stdout:2/79: truncate d5/fd 64864 0 2026-03-09T20:47:16.050 INFO:tasks.workunit.client.1.vm10.stdout:2/80: creat d5/f15 x:0 0 0 2026-03-09T20:47:16.050 INFO:tasks.workunit.client.0.vm07.stdout:2/232: dwrite d2/db/d1c/f22 [0,4194304] 0 2026-03-09T20:47:16.050 INFO:tasks.workunit.client.0.vm07.stdout:3/178: creat d1/d5/d9/d2f/d34/f3f x:0 0 0 2026-03-09T20:47:16.052 INFO:tasks.workunit.client.0.vm07.stdout:2/233: read - d2/d11/f36 zero size 2026-03-09T20:47:16.053 INFO:tasks.workunit.client.0.vm07.stdout:3/179: creat d1/d5/d9/d2f/d34/f40 x:0 0 0 2026-03-09T20:47:16.055 INFO:tasks.workunit.client.0.vm07.stdout:2/234: creat d2/db/d1c/f45 x:0 0 0 2026-03-09T20:47:16.056 INFO:tasks.workunit.client.0.vm07.stdout:2/235: write d2/d11/f44 [120967,103276] 0 2026-03-09T20:47:16.057 INFO:tasks.workunit.client.0.vm07.stdout:2/236: readlink d2/db/lc 0 2026-03-09T20:47:16.059 INFO:tasks.workunit.client.0.vm07.stdout:3/180: mkdir d1/d5/d9/d41 0 2026-03-09T20:47:16.060 INFO:tasks.workunit.client.0.vm07.stdout:3/181: dread - d1/d5/d9/d11/d1f/f27 zero size 2026-03-09T20:47:16.062 INFO:tasks.workunit.client.0.vm07.stdout:3/182: read d1/d5/d10/f30 [705757,25726] 0 
2026-03-09T20:47:16.063 INFO:tasks.workunit.client.1.vm10.stdout:2/81: sync 2026-03-09T20:47:16.067 INFO:tasks.workunit.client.1.vm10.stdout:2/82: creat d5/f16 x:0 0 0 2026-03-09T20:47:16.067 INFO:tasks.workunit.client.1.vm10.stdout:2/83: readlink l4 0 2026-03-09T20:47:16.067 INFO:tasks.workunit.client.0.vm07.stdout:3/183: mknod d1/d5/d9/d2f/d3d/c42 0 2026-03-09T20:47:16.069 INFO:tasks.workunit.client.0.vm07.stdout:3/184: mkdir d1/d5/d10/d43 0 2026-03-09T20:47:16.069 INFO:tasks.workunit.client.0.vm07.stdout:3/185: readlink d1/l2c 0 2026-03-09T20:47:16.070 INFO:tasks.workunit.client.0.vm07.stdout:3/186: symlink d1/d5/d9/d2f/d34/l44 0 2026-03-09T20:47:16.076 INFO:tasks.workunit.client.0.vm07.stdout:3/187: mkdir d1/d5/d9/d11/d45 0 2026-03-09T20:47:16.087 INFO:tasks.workunit.client.0.vm07.stdout:2/237: dread d2/db/d28/f32 [0,4194304] 0 2026-03-09T20:47:16.089 INFO:tasks.workunit.client.1.vm10.stdout:6/57: truncate f0 1251986 0 2026-03-09T20:47:16.091 INFO:tasks.workunit.client.1.vm10.stdout:6/58: creat d3/ff x:0 0 0 2026-03-09T20:47:16.100 INFO:tasks.workunit.client.0.vm07.stdout:2/238: sync 2026-03-09T20:47:16.100 INFO:tasks.workunit.client.1.vm10.stdout:6/59: creat d3/da/f10 x:0 0 0 2026-03-09T20:47:16.100 INFO:tasks.workunit.client.1.vm10.stdout:7/100: truncate f1 243825 0 2026-03-09T20:47:16.102 INFO:tasks.workunit.client.1.vm10.stdout:6/60: write d3/da/f10 [449272,32319] 0 2026-03-09T20:47:16.103 INFO:tasks.workunit.client.1.vm10.stdout:4/70: rmdir d1/d8 39 2026-03-09T20:47:16.103 INFO:tasks.workunit.client.1.vm10.stdout:1/110: getdents d2/da 0 2026-03-09T20:47:16.105 INFO:tasks.workunit.client.1.vm10.stdout:4/71: write d1/d2/f12 [909189,20917] 0 2026-03-09T20:47:16.105 INFO:tasks.workunit.client.1.vm10.stdout:6/61: readlink d3/l6 0 2026-03-09T20:47:16.105 INFO:tasks.workunit.client.1.vm10.stdout:3/72: truncate dc/ff 133782 0 2026-03-09T20:47:16.107 INFO:tasks.workunit.client.1.vm10.stdout:6/62: read - d3/fe zero size 2026-03-09T20:47:16.108 
INFO:tasks.workunit.client.1.vm10.stdout:3/73: chown f4 96134522 1 2026-03-09T20:47:16.108 INFO:tasks.workunit.client.0.vm07.stdout:8/135: dwrite d1/dc/d14/f18 [0,4194304] 0 2026-03-09T20:47:16.108 INFO:tasks.workunit.client.1.vm10.stdout:5/61: dwrite f1 [0,4194304] 0 2026-03-09T20:47:16.111 INFO:tasks.workunit.client.0.vm07.stdout:4/161: truncate d2/d1f/f25 1060415 0 2026-03-09T20:47:16.112 INFO:tasks.workunit.client.0.vm07.stdout:4/162: write d2/fa [12362,69711] 0 2026-03-09T20:47:16.113 INFO:tasks.workunit.client.1.vm10.stdout:5/62: write d2/f15 [219487,89459] 0 2026-03-09T20:47:16.113 INFO:tasks.workunit.client.1.vm10.stdout:5/63: stat d2/lf 0 2026-03-09T20:47:16.114 INFO:tasks.workunit.client.0.vm07.stdout:2/239: dwrite d2/d11/f38 [0,4194304] 0 2026-03-09T20:47:16.117 INFO:tasks.workunit.client.1.vm10.stdout:5/64: dread f1 [0,4194304] 0 2026-03-09T20:47:16.123 INFO:tasks.workunit.client.0.vm07.stdout:0/220: write d1/d2/dc/d17/f3c [211928,25091] 0 2026-03-09T20:47:16.123 INFO:tasks.workunit.client.0.vm07.stdout:6/213: write d8/d16/d22/d24/d2b/f2f [4940261,122738] 0 2026-03-09T20:47:16.128 INFO:tasks.workunit.client.0.vm07.stdout:9/184: dwrite d4/d8/dc/f25 [0,4194304] 0 2026-03-09T20:47:16.133 INFO:tasks.workunit.client.1.vm10.stdout:0/58: dread f1 [0,4194304] 0 2026-03-09T20:47:16.133 INFO:tasks.workunit.client.1.vm10.stdout:8/110: getdents d0 0 2026-03-09T20:47:16.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:15 vm07.local ceph-mon[49120]: Upgrade: Updating mgr.vm10.byqahe 2026-03-09T20:47:16.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:15 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:16.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:15 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm10.byqahe", "caps": ["mon", "profile mgr", "osd", "allow *", 
"mds", "allow *"]}]: dispatch 2026-03-09T20:47:16.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:15 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:47:16.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:15 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:16.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:15 vm07.local ceph-mon[49120]: Deploying daemon mgr.vm10.byqahe on vm10 2026-03-09T20:47:16.146 INFO:tasks.workunit.client.1.vm10.stdout:1/111: creat d2/da/d25/f28 x:0 0 0 2026-03-09T20:47:16.146 INFO:tasks.workunit.client.1.vm10.stdout:1/112: dread - d2/f21 zero size 2026-03-09T20:47:16.147 INFO:tasks.workunit.client.1.vm10.stdout:1/113: write d2/f14 [1275880,14056] 0 2026-03-09T20:47:16.148 INFO:tasks.workunit.client.0.vm07.stdout:8/136: readlink d1/lf 0 2026-03-09T20:47:16.155 INFO:tasks.workunit.client.1.vm10.stdout:6/63: mkdir d3/da/d11 0 2026-03-09T20:47:16.160 INFO:tasks.workunit.client.0.vm07.stdout:2/240: stat d2/c8 0 2026-03-09T20:47:16.168 INFO:tasks.workunit.client.0.vm07.stdout:0/221: creat d1/d1f/d20/f4d x:0 0 0 2026-03-09T20:47:16.168 INFO:tasks.workunit.client.0.vm07.stdout:6/214: rmdir d8/d16/d22/d24/d2b 39 2026-03-09T20:47:16.168 INFO:tasks.workunit.client.1.vm10.stdout:5/65: rename l0 to d2/l18 0 2026-03-09T20:47:16.168 INFO:tasks.workunit.client.1.vm10.stdout:1/114: rename d2/da to d2/da/d29 22 2026-03-09T20:47:16.168 INFO:tasks.workunit.client.1.vm10.stdout:1/115: write d2/da/d25/f27 [405872,106580] 0 2026-03-09T20:47:16.169 INFO:tasks.workunit.client.1.vm10.stdout:0/59: mkdir d2/d9/da/de 0 2026-03-09T20:47:16.169 INFO:tasks.workunit.client.1.vm10.stdout:8/111: creat d0/d22/f27 x:0 0 0 2026-03-09T20:47:16.170 INFO:tasks.workunit.client.0.vm07.stdout:9/185: symlink d4/d16/l43 0 
2026-03-09T20:47:16.172 INFO:tasks.workunit.client.1.vm10.stdout:9/120: truncate d2/d12/f20 393368 0 2026-03-09T20:47:16.174 INFO:tasks.workunit.client.0.vm07.stdout:5/168: truncate d5/df/f24 432224 0 2026-03-09T20:47:16.182 INFO:tasks.workunit.client.1.vm10.stdout:3/74: mknod dc/d14/c17 0 2026-03-09T20:47:16.182 INFO:tasks.workunit.client.1.vm10.stdout:3/75: chown dc/d14 0 1 2026-03-09T20:47:16.185 INFO:tasks.workunit.client.0.vm07.stdout:7/168: dwrite d3/da/db/f12 [0,4194304] 0 2026-03-09T20:47:16.189 INFO:tasks.workunit.client.1.vm10.stdout:5/66: mknod d2/c19 0 2026-03-09T20:47:16.191 INFO:tasks.workunit.client.0.vm07.stdout:8/137: unlink d1/dc/c17 0 2026-03-09T20:47:16.191 INFO:tasks.workunit.client.0.vm07.stdout:8/138: chown d1/dc/d14/f18 26336674 1 2026-03-09T20:47:16.196 INFO:tasks.workunit.client.1.vm10.stdout:1/116: creat d2/f2a x:0 0 0 2026-03-09T20:47:16.196 INFO:tasks.workunit.client.1.vm10.stdout:1/117: truncate d2/f21 1043115 0 2026-03-09T20:47:16.197 INFO:tasks.workunit.client.1.vm10.stdout:1/118: read - d2/da/f11 zero size 2026-03-09T20:47:16.197 INFO:tasks.workunit.client.1.vm10.stdout:1/119: chown d2/f21 3 1 2026-03-09T20:47:16.198 INFO:tasks.workunit.client.0.vm07.stdout:3/188: dwrite d1/d5/d9/f15 [0,4194304] 0 2026-03-09T20:47:16.208 INFO:tasks.workunit.client.1.vm10.stdout:8/112: rmdir d0 39 2026-03-09T20:47:16.208 INFO:tasks.workunit.client.0.vm07.stdout:3/189: dread - d1/d5/d9/f33 zero size 2026-03-09T20:47:16.208 INFO:tasks.workunit.client.0.vm07.stdout:4/163: link d2/f9 d2/df/f2e 0 2026-03-09T20:47:16.211 INFO:tasks.workunit.client.1.vm10.stdout:7/101: link db/de/d12/l17 db/de/l20 0 2026-03-09T20:47:16.212 INFO:tasks.workunit.client.0.vm07.stdout:9/186: getdents d4/d11/d23/d32 0 2026-03-09T20:47:16.213 INFO:tasks.workunit.client.0.vm07.stdout:9/187: fdatasync d4/d11/f1a 0 2026-03-09T20:47:16.213 INFO:tasks.workunit.client.0.vm07.stdout:7/169: creat d3/da/db/d14/d1f/d2b/f33 x:0 0 0 2026-03-09T20:47:16.214 
INFO:tasks.workunit.client.1.vm10.stdout:3/76: creat dc/dd/f18 x:0 0 0 2026-03-09T20:47:16.215 INFO:tasks.workunit.client.0.vm07.stdout:8/139: dread - d1/f20 zero size 2026-03-09T20:47:16.216 INFO:tasks.workunit.client.0.vm07.stdout:3/190: mkdir d1/d5/d9/d2f/d34/d46 0 2026-03-09T20:47:16.216 INFO:tasks.workunit.client.1.vm10.stdout:5/67: symlink d2/l1a 0 2026-03-09T20:47:16.216 INFO:tasks.workunit.client.0.vm07.stdout:3/191: chown d1/d5/d9/fe 315 1 2026-03-09T20:47:16.216 INFO:tasks.workunit.client.1.vm10.stdout:9/121: dwrite d2/d3/f7 [0,4194304] 0 2026-03-09T20:47:16.217 INFO:tasks.workunit.client.0.vm07.stdout:4/164: truncate d2/f6 4485236 0 2026-03-09T20:47:16.218 INFO:tasks.workunit.client.1.vm10.stdout:0/60: mknod d2/d9/da/de/cf 0 2026-03-09T20:47:16.218 INFO:tasks.workunit.client.0.vm07.stdout:4/165: chown d2/df/f2e 277307345 1 2026-03-09T20:47:16.218 INFO:tasks.workunit.client.1.vm10.stdout:9/122: fdatasync d2/d3/fa 0 2026-03-09T20:47:16.219 INFO:tasks.workunit.client.0.vm07.stdout:6/215: link d8/d16/f1f d8/d16/d22/d24/f3f 0 2026-03-09T20:47:16.220 INFO:tasks.workunit.client.0.vm07.stdout:6/216: chown d8/d16/f23 633212132 1 2026-03-09T20:47:16.226 INFO:tasks.workunit.client.1.vm10.stdout:7/102: mkdir db/d21 0 2026-03-09T20:47:16.228 INFO:tasks.workunit.client.0.vm07.stdout:9/188: write d4/d8/dc/d15/f30 [749022,52940] 0 2026-03-09T20:47:16.228 INFO:tasks.workunit.client.0.vm07.stdout:8/140: fdatasync d1/f1d 0 2026-03-09T20:47:16.228 INFO:tasks.workunit.client.0.vm07.stdout:3/192: rmdir d1/d5/d9/d2f 39 2026-03-09T20:47:16.233 INFO:tasks.workunit.client.1.vm10.stdout:5/68: mkdir d2/d1b 0 2026-03-09T20:47:16.240 INFO:tasks.workunit.client.0.vm07.stdout:3/193: dwrite d1/d5/d9/d11/f26 [0,4194304] 0 2026-03-09T20:47:16.256 INFO:tasks.workunit.client.0.vm07.stdout:6/217: mkdir d8/d26/d2a/d40 0 2026-03-09T20:47:16.256 INFO:tasks.workunit.client.0.vm07.stdout:6/218: fdatasync d8/d16/d22/f2c 0 2026-03-09T20:47:16.260 INFO:tasks.workunit.client.1.vm10.stdout:0/61: 
dwrite d2/f5 [0,4194304] 0 2026-03-09T20:47:16.261 INFO:tasks.workunit.client.0.vm07.stdout:2/241: write d2/db/f41 [170515,78853] 0 2026-03-09T20:47:16.261 INFO:tasks.workunit.client.1.vm10.stdout:0/62: chown d2/f5 0 1 2026-03-09T20:47:16.262 INFO:tasks.workunit.client.0.vm07.stdout:2/242: dread - d2/db/d1c/f45 zero size 2026-03-09T20:47:16.263 INFO:tasks.workunit.client.1.vm10.stdout:9/123: write d2/d12/f20 [165674,83409] 0 2026-03-09T20:47:16.268 INFO:tasks.workunit.client.0.vm07.stdout:0/222: dwrite d1/d2/dc/f12 [0,4194304] 0 2026-03-09T20:47:16.268 INFO:tasks.workunit.client.1.vm10.stdout:0/63: truncate d2/d9/da/fc 447312 0 2026-03-09T20:47:16.269 INFO:tasks.workunit.client.1.vm10.stdout:6/64: write d3/f7 [10452,31284] 0 2026-03-09T20:47:16.269 INFO:tasks.workunit.client.1.vm10.stdout:6/65: truncate d3/da/fd 654218 0 2026-03-09T20:47:16.271 INFO:tasks.workunit.client.0.vm07.stdout:0/223: dwrite f0 [0,4194304] 0 2026-03-09T20:47:16.281 INFO:tasks.workunit.client.0.vm07.stdout:9/189: rmdir d4/d11/d23 39 2026-03-09T20:47:16.292 INFO:tasks.workunit.client.1.vm10.stdout:7/103: creat db/de/f22 x:0 0 0 2026-03-09T20:47:16.293 INFO:tasks.workunit.client.1.vm10.stdout:5/69: rename d2/l1a to d2/l1c 0 2026-03-09T20:47:16.295 INFO:tasks.workunit.client.0.vm07.stdout:6/219: fdatasync d8/f14 0 2026-03-09T20:47:16.296 INFO:tasks.workunit.client.1.vm10.stdout:8/113: creat d0/d22/d25/f28 x:0 0 0 2026-03-09T20:47:16.296 INFO:tasks.workunit.client.0.vm07.stdout:2/243: rmdir d2 39 2026-03-09T20:47:16.297 INFO:tasks.workunit.client.1.vm10.stdout:8/114: truncate d0/d22/f27 298786 0 2026-03-09T20:47:16.301 INFO:tasks.workunit.client.1.vm10.stdout:1/120: getdents d2/da/d25 0 2026-03-09T20:47:16.311 INFO:tasks.workunit.client.0.vm07.stdout:6/220: creat d8/d26/d2a/f41 x:0 0 0 2026-03-09T20:47:16.311 INFO:tasks.workunit.client.1.vm10.stdout:2/84: dwrite d5/fd [0,4194304] 0 2026-03-09T20:47:16.311 INFO:tasks.workunit.client.1.vm10.stdout:9/124: creat d2/d28/f29 x:0 0 0 
2026-03-09T20:47:16.311 INFO:tasks.workunit.client.1.vm10.stdout:2/85: write d5/fa [4513111,71198] 0 2026-03-09T20:47:16.311 INFO:tasks.workunit.client.1.vm10.stdout:2/86: dread f1 [0,4194304] 0 2026-03-09T20:47:16.311 INFO:tasks.workunit.client.1.vm10.stdout:2/87: chown d5/fa 16925116 1 2026-03-09T20:47:16.312 INFO:tasks.workunit.client.0.vm07.stdout:0/224: sync 2026-03-09T20:47:16.313 INFO:tasks.workunit.client.1.vm10.stdout:0/64: mknod d2/db/c10 0 2026-03-09T20:47:16.313 INFO:tasks.workunit.client.0.vm07.stdout:0/225: chown d1/d2/f1b 823 1 2026-03-09T20:47:16.314 INFO:tasks.workunit.client.1.vm10.stdout:8/115: creat d0/d22/f29 x:0 0 0 2026-03-09T20:47:16.314 INFO:tasks.workunit.client.1.vm10.stdout:6/66: mkdir d3/d12 0 2026-03-09T20:47:16.315 INFO:tasks.workunit.client.0.vm07.stdout:9/190: mkdir d4/d16/d29/d24/d37/d44 0 2026-03-09T20:47:16.315 INFO:tasks.workunit.client.0.vm07.stdout:9/191: stat d4/l20 0 2026-03-09T20:47:16.316 INFO:tasks.workunit.client.0.vm07.stdout:9/192: fsync d4/d8/dc/ff 0 2026-03-09T20:47:16.317 INFO:tasks.workunit.client.0.vm07.stdout:4/166: getdents d2/d1f 0 2026-03-09T20:47:16.317 INFO:tasks.workunit.client.0.vm07.stdout:4/167: chown d2/d1f/f26 6 1 2026-03-09T20:47:16.318 INFO:tasks.workunit.client.1.vm10.stdout:3/77: getdents dc 0 2026-03-09T20:47:16.322 INFO:tasks.workunit.client.1.vm10.stdout:5/70: getdents d2/d1b 0 2026-03-09T20:47:16.322 INFO:tasks.workunit.client.0.vm07.stdout:2/244: fdatasync d2/f17 0 2026-03-09T20:47:16.323 INFO:tasks.workunit.client.1.vm10.stdout:5/71: stat f1 0 2026-03-09T20:47:16.324 INFO:tasks.workunit.client.1.vm10.stdout:0/65: dread d2/f5 [0,4194304] 0 2026-03-09T20:47:16.325 INFO:tasks.workunit.client.1.vm10.stdout:1/121: symlink d2/da/l2b 0 2026-03-09T20:47:16.325 INFO:tasks.workunit.client.1.vm10.stdout:9/125: creat d2/d12/f2a x:0 0 0 2026-03-09T20:47:16.327 INFO:tasks.workunit.client.1.vm10.stdout:8/116: dwrite d0/f12 [0,4194304] 0 2026-03-09T20:47:16.328 INFO:tasks.workunit.client.0.vm07.stdout:2/245: 
dwrite d2/db/d1c/f45 [0,4194304] 0 2026-03-09T20:47:16.330 INFO:tasks.workunit.client.0.vm07.stdout:2/246: truncate d2/db/f41 2614878 0 2026-03-09T20:47:16.332 INFO:tasks.workunit.client.1.vm10.stdout:2/88: sync 2026-03-09T20:47:16.332 INFO:tasks.workunit.client.1.vm10.stdout:0/66: sync 2026-03-09T20:47:16.336 INFO:tasks.workunit.client.1.vm10.stdout:3/78: dwrite dc/dd/f15 [0,4194304] 0 2026-03-09T20:47:16.336 INFO:tasks.workunit.client.1.vm10.stdout:8/117: write d0/d22/f29 [511362,66897] 0 2026-03-09T20:47:16.336 INFO:tasks.workunit.client.0.vm07.stdout:2/247: dwrite d2/db/f41 [0,4194304] 0 2026-03-09T20:47:16.342 INFO:tasks.workunit.client.1.vm10.stdout:5/72: dread f1 [0,4194304] 0 2026-03-09T20:47:16.343 INFO:tasks.workunit.client.0.vm07.stdout:4/168: write d2/f7 [589633,129233] 0 2026-03-09T20:47:16.344 INFO:tasks.workunit.client.1.vm10.stdout:1/122: dwrite d2/da/f10 [0,4194304] 0 2026-03-09T20:47:16.347 INFO:tasks.workunit.client.0.vm07.stdout:6/221: symlink d8/d16/l42 0 2026-03-09T20:47:16.363 INFO:tasks.workunit.client.0.vm07.stdout:1/160: write d3/f9 [2817327,73917] 0 2026-03-09T20:47:16.364 INFO:tasks.workunit.client.0.vm07.stdout:0/226: fsync d1/d2/dc/f12 0 2026-03-09T20:47:16.368 INFO:tasks.workunit.client.1.vm10.stdout:2/89: rename d5/l13 to d5/l17 0 2026-03-09T20:47:16.371 INFO:tasks.workunit.client.1.vm10.stdout:2/90: dread d5/fe [0,4194304] 0 2026-03-09T20:47:16.381 INFO:tasks.workunit.client.0.vm07.stdout:0/227: creat d1/d2/d33/f4e x:0 0 0 2026-03-09T20:47:16.382 INFO:tasks.workunit.client.1.vm10.stdout:8/118: symlink d0/l2a 0 2026-03-09T20:47:16.382 INFO:tasks.workunit.client.1.vm10.stdout:0/67: mkdir d2/d9/da/d11 0 2026-03-09T20:47:16.382 INFO:tasks.workunit.client.1.vm10.stdout:0/68: chown d2/f5 32034619 1 2026-03-09T20:47:16.382 INFO:tasks.workunit.client.1.vm10.stdout:8/119: truncate d0/d22/f29 1430483 0 2026-03-09T20:47:16.382 INFO:tasks.workunit.client.1.vm10.stdout:8/120: chown d0/lb 8192 1 2026-03-09T20:47:16.382 
INFO:tasks.workunit.client.1.vm10.stdout:8/121: write d0/f15 [1013095,91711] 0 2026-03-09T20:47:16.382 INFO:tasks.workunit.client.1.vm10.stdout:3/79: creat dc/dd/f19 x:0 0 0 2026-03-09T20:47:16.382 INFO:tasks.workunit.client.1.vm10.stdout:3/80: readlink l0 0 2026-03-09T20:47:16.382 INFO:tasks.workunit.client.1.vm10.stdout:2/91: dwrite d5/fb [0,4194304] 0 2026-03-09T20:47:16.382 INFO:tasks.workunit.client.1.vm10.stdout:2/92: stat c2 0 2026-03-09T20:47:16.383 INFO:tasks.workunit.client.1.vm10.stdout:5/73: symlink d2/l1d 0 2026-03-09T20:47:16.385 INFO:tasks.workunit.client.0.vm07.stdout:7/170: dwrite d3/da/db/d14/f2a [0,4194304] 0 2026-03-09T20:47:16.386 INFO:tasks.workunit.client.1.vm10.stdout:4/72: dread d1/d2/f7 [0,4194304] 0 2026-03-09T20:47:16.390 INFO:tasks.workunit.client.1.vm10.stdout:1/123: mknod d2/c2c 0 2026-03-09T20:47:16.392 INFO:tasks.workunit.client.0.vm07.stdout:2/248: mkdir d2/d46 0 2026-03-09T20:47:16.396 INFO:tasks.workunit.client.1.vm10.stdout:1/124: dwrite d2/da/fe [0,4194304] 0 2026-03-09T20:47:16.399 INFO:tasks.workunit.client.0.vm07.stdout:4/169: rename d2/f6 to d2/d1f/d2d/f2f 0 2026-03-09T20:47:16.399 INFO:tasks.workunit.client.0.vm07.stdout:1/161: symlink d3/d23/l29 0 2026-03-09T20:47:16.401 INFO:tasks.workunit.client.1.vm10.stdout:8/122: creat d0/d22/d25/f2b x:0 0 0 2026-03-09T20:47:16.402 INFO:tasks.workunit.client.0.vm07.stdout:2/249: write d2/db/d1c/f2e [1291863,9753] 0 2026-03-09T20:47:16.403 INFO:tasks.workunit.client.1.vm10.stdout:4/73: sync 2026-03-09T20:47:16.414 INFO:tasks.workunit.client.0.vm07.stdout:7/171: rename d3/da/db/d14/c25 to d3/da/db/d14/d1f/d2b/c34 0 2026-03-09T20:47:16.414 INFO:tasks.workunit.client.0.vm07.stdout:7/172: stat d3/da/db/d14/c29 0 2026-03-09T20:47:16.419 INFO:tasks.workunit.client.0.vm07.stdout:7/173: dwrite d3/da/db/f1e [0,4194304] 0 2026-03-09T20:47:16.423 INFO:tasks.workunit.client.0.vm07.stdout:1/162: symlink d3/d14/l2a 0 2026-03-09T20:47:16.431 INFO:tasks.workunit.client.1.vm10.stdout:9/126: creat 
d2/f2b x:0 0 0 2026-03-09T20:47:16.450 INFO:tasks.workunit.client.1.vm10.stdout:1/125: mknod d2/da/c2d 0 2026-03-09T20:47:16.450 INFO:tasks.workunit.client.1.vm10.stdout:1/126: write d2/f14 [1340416,76246] 0 2026-03-09T20:47:16.451 INFO:tasks.workunit.client.1.vm10.stdout:8/123: mkdir d0/d22/d2c 0 2026-03-09T20:47:16.452 INFO:tasks.workunit.client.1.vm10.stdout:8/124: write d0/fe [410771,36298] 0 2026-03-09T20:47:16.452 INFO:tasks.workunit.client.1.vm10.stdout:8/125: chown d0/f21 617046 1 2026-03-09T20:47:16.453 INFO:tasks.workunit.client.1.vm10.stdout:4/74: mknod d1/d2/d3/c17 0 2026-03-09T20:47:16.459 INFO:tasks.workunit.client.1.vm10.stdout:2/93: truncate d5/f7 1561276 0 2026-03-09T20:47:16.460 INFO:tasks.workunit.client.1.vm10.stdout:2/94: write d5/f16 [516055,89177] 0 2026-03-09T20:47:16.460 INFO:tasks.workunit.client.0.vm07.stdout:7/174: mknod d3/da/db/d14/c35 0 2026-03-09T20:47:16.464 INFO:tasks.workunit.client.1.vm10.stdout:5/74: creat d2/d1b/f1e x:0 0 0 2026-03-09T20:47:16.477 INFO:tasks.workunit.client.0.vm07.stdout:7/175: symlink d3/da/db/d14/d1f/d2b/l36 0 2026-03-09T20:47:16.477 INFO:tasks.workunit.client.0.vm07.stdout:7/176: read - d3/da/f26 zero size 2026-03-09T20:47:16.477 INFO:tasks.workunit.client.0.vm07.stdout:7/177: read d3/da/db/f12 [1218240,121430] 0 2026-03-09T20:47:16.477 INFO:tasks.workunit.client.0.vm07.stdout:7/178: creat d3/da/db/d14/d1f/f37 x:0 0 0 2026-03-09T20:47:16.477 INFO:tasks.workunit.client.1.vm10.stdout:9/127: creat d2/d3/de/f2c x:0 0 0 2026-03-09T20:47:16.478 INFO:tasks.workunit.client.1.vm10.stdout:1/127: creat d2/da/d25/f2e x:0 0 0 2026-03-09T20:47:16.478 INFO:tasks.workunit.client.1.vm10.stdout:1/128: dread d2/da/f26 [0,4194304] 0 2026-03-09T20:47:16.478 INFO:tasks.workunit.client.1.vm10.stdout:1/129: dread - d2/f1c zero size 2026-03-09T20:47:16.478 INFO:tasks.workunit.client.1.vm10.stdout:8/126: creat d0/d22/d25/f2d x:0 0 0 2026-03-09T20:47:16.478 INFO:tasks.workunit.client.1.vm10.stdout:1/130: readlink d2/lb 0 
2026-03-09T20:47:16.478 INFO:tasks.workunit.client.1.vm10.stdout:4/75: sync 2026-03-09T20:47:16.479 INFO:tasks.workunit.client.1.vm10.stdout:1/131: truncate d2/f1a 751592 0 2026-03-09T20:47:16.481 INFO:tasks.workunit.client.1.vm10.stdout:1/132: dread - d2/f2a zero size 2026-03-09T20:47:16.481 INFO:tasks.workunit.client.1.vm10.stdout:1/133: stat d2/l15 0 2026-03-09T20:47:16.482 INFO:tasks.workunit.client.1.vm10.stdout:1/134: readlink d2/l13 0 2026-03-09T20:47:16.482 INFO:tasks.workunit.client.1.vm10.stdout:1/135: stat d2/da/d25/f2e 0 2026-03-09T20:47:16.483 INFO:tasks.workunit.client.1.vm10.stdout:2/95: mkdir d5/d18 0 2026-03-09T20:47:16.484 INFO:tasks.workunit.client.1.vm10.stdout:4/76: dread d1/d2/f12 [0,4194304] 0 2026-03-09T20:47:16.485 INFO:tasks.workunit.client.1.vm10.stdout:2/96: truncate d5/f15 997162 0 2026-03-09T20:47:16.486 INFO:tasks.workunit.client.1.vm10.stdout:5/75: mknod d2/d1b/c1f 0 2026-03-09T20:47:16.490 INFO:tasks.workunit.client.1.vm10.stdout:9/128: truncate d2/d3/f5 1840910 0 2026-03-09T20:47:16.493 INFO:tasks.workunit.client.1.vm10.stdout:8/127: mkdir d0/d22/d25/d2e 0 2026-03-09T20:47:16.497 INFO:tasks.workunit.client.1.vm10.stdout:1/136: symlink d2/da/d25/l2f 0 2026-03-09T20:47:16.501 INFO:tasks.workunit.client.1.vm10.stdout:5/76: symlink d2/l20 0 2026-03-09T20:47:16.504 INFO:tasks.workunit.client.1.vm10.stdout:8/128: unlink d0/f18 0 2026-03-09T20:47:16.504 INFO:tasks.workunit.client.1.vm10.stdout:8/129: read d0/f13 [2212788,50812] 0 2026-03-09T20:47:16.507 INFO:tasks.workunit.client.1.vm10.stdout:2/97: mknod d5/d18/c19 0 2026-03-09T20:47:16.509 INFO:tasks.workunit.client.1.vm10.stdout:5/77: creat d2/f21 x:0 0 0 2026-03-09T20:47:16.510 INFO:tasks.workunit.client.1.vm10.stdout:9/129: symlink d2/l2d 0 2026-03-09T20:47:16.515 INFO:tasks.workunit.client.1.vm10.stdout:8/130: unlink d0/f15 0 2026-03-09T20:47:16.516 INFO:tasks.workunit.client.1.vm10.stdout:9/130: dwrite d2/d3/f1c [0,4194304] 0 2026-03-09T20:47:16.517 
INFO:tasks.workunit.client.1.vm10.stdout:8/131: readlink d0/l24 0 2026-03-09T20:47:16.522 INFO:tasks.workunit.client.1.vm10.stdout:1/137: link d2/l15 d2/da/d25/l30 0 2026-03-09T20:47:16.522 INFO:tasks.workunit.client.1.vm10.stdout:1/138: chown d2/l6 44 1 2026-03-09T20:47:16.536 INFO:tasks.workunit.client.1.vm10.stdout:5/78: link d2/l1d d2/d1b/l22 0 2026-03-09T20:47:16.538 INFO:tasks.workunit.client.1.vm10.stdout:5/79: creat d2/f23 x:0 0 0 2026-03-09T20:47:16.540 INFO:tasks.workunit.client.1.vm10.stdout:5/80: link d2/l13 d2/l24 0 2026-03-09T20:47:16.549 INFO:tasks.workunit.client.1.vm10.stdout:7/104: rename db/de to db/d21/d23 0 2026-03-09T20:47:16.553 INFO:tasks.workunit.client.0.vm07.stdout:8/141: truncate d1/dc/fd 3932813 0 2026-03-09T20:47:16.553 INFO:tasks.workunit.client.1.vm10.stdout:7/105: chown f3 221 1 2026-03-09T20:47:16.553 INFO:tasks.workunit.client.1.vm10.stdout:7/106: fdatasync db/d21/d23/f1a 0 2026-03-09T20:47:16.553 INFO:tasks.workunit.client.1.vm10.stdout:0/69: rename d2/d9/da/fc to d2/d9/f12 0 2026-03-09T20:47:16.555 INFO:tasks.workunit.client.0.vm07.stdout:8/142: dwrite d1/f25 [0,4194304] 0 2026-03-09T20:47:16.559 INFO:tasks.workunit.client.1.vm10.stdout:5/81: sync 2026-03-09T20:47:16.560 INFO:tasks.workunit.client.1.vm10.stdout:5/82: dread - d2/f23 zero size 2026-03-09T20:47:16.560 INFO:tasks.workunit.client.0.vm07.stdout:3/194: dwrite d1/f19 [0,4194304] 0 2026-03-09T20:47:16.560 INFO:tasks.workunit.client.1.vm10.stdout:5/83: fdatasync d2/f5 0 2026-03-09T20:47:16.563 INFO:tasks.workunit.client.1.vm10.stdout:7/107: symlink db/l24 0 2026-03-09T20:47:16.566 INFO:tasks.workunit.client.1.vm10.stdout:0/70: write d2/d9/f12 [172907,90230] 0 2026-03-09T20:47:16.569 INFO:tasks.workunit.client.1.vm10.stdout:7/108: sync 2026-03-09T20:47:16.571 INFO:tasks.workunit.client.0.vm07.stdout:3/195: truncate d1/d5/d9/f1c 37974 0 2026-03-09T20:47:16.571 INFO:tasks.workunit.client.1.vm10.stdout:5/84: link d2/c9 d2/c25 0 2026-03-09T20:47:16.572 
INFO:tasks.workunit.client.1.vm10.stdout:7/109: dread f3 [0,4194304] 0 2026-03-09T20:47:16.572 INFO:tasks.workunit.client.0.vm07.stdout:3/196: write d1/d5/d9/d2f/d34/f40 [894019,8079] 0 2026-03-09T20:47:16.573 INFO:tasks.workunit.client.1.vm10.stdout:5/85: symlink d2/l26 0 2026-03-09T20:47:16.573 INFO:tasks.workunit.client.0.vm07.stdout:3/197: write d1/d5/d9/d2f/d34/f3f [423319,6908] 0 2026-03-09T20:47:16.575 INFO:tasks.workunit.client.1.vm10.stdout:7/110: mknod db/d21/c25 0 2026-03-09T20:47:16.580 INFO:tasks.workunit.client.1.vm10.stdout:5/86: mkdir d2/d27 0 2026-03-09T20:47:16.580 INFO:tasks.workunit.client.1.vm10.stdout:7/111: mkdir db/d21/d26 0 2026-03-09T20:47:16.581 INFO:tasks.workunit.client.1.vm10.stdout:5/87: fdatasync d2/f11 0 2026-03-09T20:47:16.581 INFO:tasks.workunit.client.1.vm10.stdout:5/88: link d2/f7 d2/d1b/f28 0 2026-03-09T20:47:16.662 INFO:tasks.workunit.client.1.vm10.stdout:6/67: rmdir d3 39 2026-03-09T20:47:16.668 INFO:tasks.workunit.client.1.vm10.stdout:6/68: mknod d3/da/d11/c13 0 2026-03-09T20:47:16.669 INFO:tasks.workunit.client.0.vm07.stdout:8/143: creat d1/dc/d16/d26/f27 x:0 0 0 2026-03-09T20:47:16.670 INFO:tasks.workunit.client.1.vm10.stdout:6/69: unlink d3/l6 0 2026-03-09T20:47:16.671 INFO:tasks.workunit.client.1.vm10.stdout:6/70: symlink d3/da/l14 0 2026-03-09T20:47:16.673 INFO:tasks.workunit.client.0.vm07.stdout:2/250: truncate d2/f17 427371 0 2026-03-09T20:47:16.674 INFO:tasks.workunit.client.1.vm10.stdout:6/71: link d3/da/fd d3/da/f15 0 2026-03-09T20:47:16.674 INFO:tasks.workunit.client.0.vm07.stdout:2/251: mkdir d2/db/d1c/d47 0 2026-03-09T20:47:16.675 INFO:tasks.workunit.client.0.vm07.stdout:2/252: write d2/f3e [1792639,88953] 0 2026-03-09T20:47:16.679 INFO:tasks.workunit.client.0.vm07.stdout:2/253: getdents d2/d11 0 2026-03-09T20:47:16.680 INFO:tasks.workunit.client.0.vm07.stdout:9/193: write d4/f5 [3954001,97410] 0 2026-03-09T20:47:16.686 INFO:tasks.workunit.client.1.vm10.stdout:3/81: getdents dc/dd 0 2026-03-09T20:47:16.687 
INFO:tasks.workunit.client.1.vm10.stdout:6/72: sync 2026-03-09T20:47:16.687 INFO:tasks.workunit.client.1.vm10.stdout:3/82: creat dc/d14/f1a x:0 0 0 2026-03-09T20:47:16.691 INFO:tasks.workunit.client.1.vm10.stdout:6/73: creat d3/d12/f16 x:0 0 0 2026-03-09T20:47:16.694 INFO:tasks.workunit.client.1.vm10.stdout:6/74: write d3/da/fd [457904,105453] 0 2026-03-09T20:47:16.694 INFO:tasks.workunit.client.0.vm07.stdout:4/170: write d2/f9 [1238415,92315] 0 2026-03-09T20:47:16.698 INFO:tasks.workunit.client.1.vm10.stdout:6/75: dwrite d3/f7 [0,4194304] 0 2026-03-09T20:47:16.704 INFO:tasks.workunit.client.1.vm10.stdout:6/76: dwrite d3/da/fd [0,4194304] 0 2026-03-09T20:47:16.709 INFO:tasks.workunit.client.0.vm07.stdout:0/228: dwrite d1/d2/dc/f10 [0,4194304] 0 2026-03-09T20:47:16.728 INFO:tasks.workunit.client.1.vm10.stdout:6/77: creat d3/da/d11/f17 x:0 0 0 2026-03-09T20:47:16.731 INFO:tasks.workunit.client.1.vm10.stdout:2/98: rename d5/f6 to d5/d18/f1a 0 2026-03-09T20:47:16.732 INFO:tasks.workunit.client.1.vm10.stdout:6/78: truncate f2 991757 0 2026-03-09T20:47:16.733 INFO:tasks.workunit.client.1.vm10.stdout:1/139: rename d2/lc to d2/da/d25/l31 0 2026-03-09T20:47:16.733 INFO:tasks.workunit.client.1.vm10.stdout:2/99: mkdir d5/d18/d1b 0 2026-03-09T20:47:16.734 INFO:tasks.workunit.client.1.vm10.stdout:1/140: fsync d2/da/fe 0 2026-03-09T20:47:16.735 INFO:tasks.workunit.client.1.vm10.stdout:6/79: unlink d3/da/cb 0 2026-03-09T20:47:16.735 INFO:tasks.workunit.client.1.vm10.stdout:1/141: write d2/f14 [1828265,120747] 0 2026-03-09T20:47:16.735 INFO:tasks.workunit.client.1.vm10.stdout:7/112: rename db/d21/d23/f15 to db/d21/f27 0 2026-03-09T20:47:16.738 INFO:tasks.workunit.client.1.vm10.stdout:2/100: mknod d5/d18/d1b/c1c 0 2026-03-09T20:47:16.740 INFO:tasks.workunit.client.1.vm10.stdout:1/142: creat d2/da/f32 x:0 0 0 2026-03-09T20:47:16.740 INFO:tasks.workunit.client.1.vm10.stdout:2/101: chown d5/fa 103061 1 2026-03-09T20:47:16.741 INFO:tasks.workunit.client.1.vm10.stdout:5/89: rename d2/lc 
to d2/d1b/l29 0 2026-03-09T20:47:16.741 INFO:tasks.workunit.client.1.vm10.stdout:2/102: chown d5 11681 1 2026-03-09T20:47:16.741 INFO:tasks.workunit.client.1.vm10.stdout:7/113: dwrite db/f19 [0,4194304] 0 2026-03-09T20:47:16.743 INFO:tasks.workunit.client.1.vm10.stdout:9/131: dread d2/d3/f5 [0,4194304] 0 2026-03-09T20:47:16.743 INFO:tasks.workunit.client.1.vm10.stdout:4/77: write d1/fe [39137,97172] 0 2026-03-09T20:47:16.748 INFO:tasks.workunit.client.1.vm10.stdout:1/143: symlink d2/da/l33 0 2026-03-09T20:47:16.748 INFO:tasks.workunit.client.1.vm10.stdout:2/103: stat d5/f7 0 2026-03-09T20:47:16.748 INFO:tasks.workunit.client.1.vm10.stdout:7/114: mkdir db/d28 0 2026-03-09T20:47:16.750 INFO:tasks.workunit.client.1.vm10.stdout:2/104: fdatasync d5/fb 0 2026-03-09T20:47:16.750 INFO:tasks.workunit.client.1.vm10.stdout:9/132: chown d2/c17 23462 1 2026-03-09T20:47:16.750 INFO:tasks.workunit.client.1.vm10.stdout:2/105: readlink l4 0 2026-03-09T20:47:16.750 INFO:tasks.workunit.client.1.vm10.stdout:1/144: dread d2/f19 [0,4194304] 0 2026-03-09T20:47:16.751 INFO:tasks.workunit.client.1.vm10.stdout:5/90: creat d2/d27/f2a x:0 0 0 2026-03-09T20:47:16.754 INFO:tasks.workunit.client.1.vm10.stdout:2/106: chown d5/f7 23269 1 2026-03-09T20:47:16.757 INFO:tasks.workunit.client.1.vm10.stdout:1/145: unlink d2/f1f 0 2026-03-09T20:47:16.758 INFO:tasks.workunit.client.1.vm10.stdout:8/132: dwrite d0/f17 [4194304,4194304] 0 2026-03-09T20:47:16.766 INFO:tasks.workunit.client.1.vm10.stdout:7/115: dwrite db/d21/f27 [0,4194304] 0 2026-03-09T20:47:16.766 INFO:tasks.workunit.client.1.vm10.stdout:9/133: dwrite d2/d3/fa [4194304,4194304] 0 2026-03-09T20:47:16.767 INFO:tasks.workunit.client.1.vm10.stdout:9/134: chown d2/d3/f7 811 1 2026-03-09T20:47:16.768 INFO:tasks.workunit.client.0.vm07.stdout:0/229: sync 2026-03-09T20:47:16.769 INFO:tasks.workunit.client.1.vm10.stdout:2/107: sync 2026-03-09T20:47:16.770 INFO:tasks.workunit.client.1.vm10.stdout:8/133: mkdir d0/d22/d2f 0 2026-03-09T20:47:16.777 
INFO:tasks.workunit.client.1.vm10.stdout:7/116: creat db/d21/d23/f29 x:0 0 0 2026-03-09T20:47:16.779 INFO:tasks.workunit.client.1.vm10.stdout:9/135: rename d2/f16 to d2/d3/f2e 0 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:2/108: creat d5/f1d x:0 0 0 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:9/136: chown d2/d3/f2e 6 1 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:9/137: chown d2/d3/de/c18 1388492 1 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:7/117: creat db/d1f/f2a x:0 0 0 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:2/109: rmdir d5/d18/d1b 39 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:7/118: write fa [455133,123361] 0 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:8/134: link d0/lb d0/d22/d2f/l30 0 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:9/138: creat d2/d3/f2f x:0 0 0 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:8/135: chown d0/f11 0 1 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:7/119: truncate db/d1f/f2a 751010 0 2026-03-09T20:47:16.785 INFO:tasks.workunit.client.1.vm10.stdout:7/120: dread - db/d21/d23/d12/f1c zero size 2026-03-09T20:47:16.795 INFO:tasks.workunit.client.1.vm10.stdout:9/139: creat d2/f30 x:0 0 0 2026-03-09T20:47:16.796 INFO:tasks.workunit.client.1.vm10.stdout:8/136: creat d0/d22/d2f/f31 x:0 0 0 2026-03-09T20:47:16.799 INFO:tasks.workunit.client.1.vm10.stdout:9/140: creat d2/d12/f31 x:0 0 0 2026-03-09T20:47:16.799 INFO:tasks.workunit.client.0.vm07.stdout:0/230: dread d1/d1f/d20/f21 [0,4194304] 0 2026-03-09T20:47:16.801 INFO:tasks.workunit.client.0.vm07.stdout:0/231: dread d1/d1f/d20/f21 [0,4194304] 0 2026-03-09T20:47:16.801 INFO:tasks.workunit.client.0.vm07.stdout:0/232: chown d1/d1f/d20/c4a 0 1 2026-03-09T20:47:16.802 INFO:tasks.workunit.client.0.vm07.stdout:0/233: write d1/d2/dc/f10 [71673,63061] 0 2026-03-09T20:47:16.805 
INFO:tasks.workunit.client.1.vm10.stdout:7/121: dwrite f3 [0,4194304] 0 2026-03-09T20:47:16.809 INFO:tasks.workunit.client.0.vm07.stdout:0/234: dwrite d1/f2f [0,4194304] 0 2026-03-09T20:47:16.810 INFO:tasks.workunit.client.1.vm10.stdout:8/137: dwrite d0/d22/d2f/f31 [0,4194304] 0 2026-03-09T20:47:16.812 INFO:tasks.workunit.client.0.vm07.stdout:0/235: dread d1/f1a [0,4194304] 0 2026-03-09T20:47:16.825 INFO:tasks.workunit.client.0.vm07.stdout:5/169: write d5/f23 [1024280,29575] 0 2026-03-09T20:47:16.830 INFO:tasks.workunit.client.0.vm07.stdout:0/236: symlink d1/d1f/d30/l4f 0 2026-03-09T20:47:16.832 INFO:tasks.workunit.client.1.vm10.stdout:7/122: mkdir db/d28/d2b 0 2026-03-09T20:47:16.832 INFO:tasks.workunit.client.0.vm07.stdout:0/237: creat d1/d1f/d30/f50 x:0 0 0 2026-03-09T20:47:16.833 INFO:tasks.workunit.client.0.vm07.stdout:5/170: symlink d5/df/d13/l32 0 2026-03-09T20:47:16.833 INFO:tasks.workunit.client.0.vm07.stdout:5/171: chown d5 43048918 1 2026-03-09T20:47:16.836 INFO:tasks.workunit.client.1.vm10.stdout:9/141: getdents d2/d3/de 0 2026-03-09T20:47:16.839 INFO:tasks.workunit.client.1.vm10.stdout:9/142: write d2/fc [4838818,61095] 0 2026-03-09T20:47:16.839 INFO:tasks.workunit.client.1.vm10.stdout:7/123: dwrite fa [0,4194304] 0 2026-03-09T20:47:16.840 INFO:tasks.workunit.client.1.vm10.stdout:7/124: readlink db/l1b 0 2026-03-09T20:47:16.849 INFO:tasks.workunit.client.1.vm10.stdout:9/143: dwrite d2/fc [4194304,4194304] 0 2026-03-09T20:47:16.856 INFO:tasks.workunit.client.1.vm10.stdout:9/144: rename d2/d3/de/f2c to d2/d28/f32 0 2026-03-09T20:47:16.859 INFO:tasks.workunit.client.1.vm10.stdout:9/145: chown d2/f6 441676545 1 2026-03-09T20:47:16.860 INFO:tasks.workunit.client.1.vm10.stdout:7/125: dwrite f5 [0,4194304] 0 2026-03-09T20:47:16.862 INFO:tasks.workunit.client.1.vm10.stdout:0/71: truncate d2/f5 4044984 0 2026-03-09T20:47:16.865 INFO:tasks.workunit.client.1.vm10.stdout:0/72: dread - d2/d9/da/fd zero size 2026-03-09T20:47:16.872 
INFO:tasks.workunit.client.1.vm10.stdout:0/73: dwrite d2/d9/da/fd [0,4194304] 0 2026-03-09T20:47:16.880 INFO:tasks.workunit.client.1.vm10.stdout:9/146: mkdir d2/d33 0 2026-03-09T20:47:16.880 INFO:tasks.workunit.client.1.vm10.stdout:7/126: creat db/d1f/f2c x:0 0 0 2026-03-09T20:47:16.881 INFO:tasks.workunit.client.1.vm10.stdout:9/147: stat d2/d3/f5 0 2026-03-09T20:47:16.884 INFO:tasks.workunit.client.1.vm10.stdout:0/74: creat d2/db/f13 x:0 0 0 2026-03-09T20:47:16.885 INFO:tasks.workunit.client.1.vm10.stdout:9/148: rmdir d2/d3/de 39 2026-03-09T20:47:16.885 INFO:tasks.workunit.client.1.vm10.stdout:7/127: dread fa [0,4194304] 0 2026-03-09T20:47:16.900 INFO:tasks.workunit.client.1.vm10.stdout:0/75: creat d2/db/f14 x:0 0 0 2026-03-09T20:47:16.900 INFO:tasks.workunit.client.1.vm10.stdout:0/76: chown d2/db/c10 49892 1 2026-03-09T20:47:16.904 INFO:tasks.workunit.client.1.vm10.stdout:0/77: creat d2/d9/da/d11/f15 x:0 0 0 2026-03-09T20:47:16.910 INFO:tasks.workunit.client.1.vm10.stdout:9/149: creat d2/d3/de/f34 x:0 0 0 2026-03-09T20:47:16.910 INFO:tasks.workunit.client.1.vm10.stdout:0/78: dread f1 [0,4194304] 0 2026-03-09T20:47:16.910 INFO:tasks.workunit.client.1.vm10.stdout:7/128: link db/d21/d23/d12/l18 db/l2d 0 2026-03-09T20:47:17.007 INFO:tasks.workunit.client.0.vm07.stdout:1/163: creat d3/f2b x:0 0 0 2026-03-09T20:47:17.007 INFO:tasks.workunit.client.0.vm07.stdout:1/164: write d3/f4 [676495,111527] 0 2026-03-09T20:47:17.012 INFO:tasks.workunit.client.0.vm07.stdout:1/165: creat d3/d23/f2c x:0 0 0 2026-03-09T20:47:17.012 INFO:tasks.workunit.client.0.vm07.stdout:1/166: chown d3/ce 1007402 1 2026-03-09T20:47:17.014 INFO:tasks.workunit.client.0.vm07.stdout:1/167: rename d3/d14/f1e to d3/d14/f2d 0 2026-03-09T20:47:17.016 INFO:tasks.workunit.client.0.vm07.stdout:1/168: mknod d3/d12/c2e 0 2026-03-09T20:47:17.017 INFO:tasks.workunit.client.0.vm07.stdout:1/169: write d3/f9 [250555,38610] 0 2026-03-09T20:47:17.027 INFO:tasks.workunit.client.0.vm07.stdout:1/170: symlink d3/d23/l2f 0 
2026-03-09T20:47:17.028 INFO:tasks.workunit.client.0.vm07.stdout:1/171: write d3/f24 [3009861,122610] 0
2026-03-09T20:47:17.028 INFO:tasks.workunit.client.0.vm07.stdout:1/172: fsync d3/f10 0
2026-03-09T20:47:17.031 INFO:tasks.workunit.client.0.vm07.stdout:7/179: creat d3/da/f38 x:0 0 0
2026-03-09T20:47:17.031 INFO:tasks.workunit.client.0.vm07.stdout:7/180: stat d3/da/c1d 0
2026-03-09T20:47:17.033 INFO:tasks.workunit.client.0.vm07.stdout:1/173: write d3/d14/f2d [1209142,109248] 0
2026-03-09T20:47:17.053 INFO:tasks.workunit.client.0.vm07.stdout:6/222: dread f5 [0,4194304] 0
2026-03-09T20:47:17.055 INFO:tasks.workunit.client.1.vm10.stdout:6/80: rmdir d3 39
2026-03-09T20:47:17.055 INFO:tasks.workunit.client.1.vm10.stdout:7/129: fsync db/d21/f27 0
2026-03-09T20:47:17.059 INFO:tasks.workunit.client.0.vm07.stdout:6/223: rmdir d8/d16/d22/d33 39
2026-03-09T20:47:17.064 INFO:tasks.workunit.client.0.vm07.stdout:6/224: dwrite d8/d26/d2a/f37 [0,4194304] 0
2026-03-09T20:47:17.064 INFO:tasks.workunit.client.1.vm10.stdout:7/130: write db/d21/f27 [2973671,123365] 0
2026-03-09T20:47:17.067 INFO:tasks.workunit.client.1.vm10.stdout:7/131: write db/d21/d23/f29 [9787,124966] 0
2026-03-09T20:47:17.067 INFO:tasks.workunit.client.1.vm10.stdout:7/132: write db/d21/d23/f1e [581620,61464] 0
2026-03-09T20:47:17.068 INFO:tasks.workunit.client.0.vm07.stdout:6/225: dwrite d8/f32 [0,4194304] 0
2026-03-09T20:47:17.070 INFO:tasks.workunit.client.1.vm10.stdout:4/78: truncate d1/fe 659128 0
2026-03-09T20:47:17.079 INFO:tasks.workunit.client.0.vm07.stdout:6/226: dwrite d8/d16/f18 [0,4194304] 0
2026-03-09T20:47:17.082 INFO:tasks.workunit.client.1.vm10.stdout:6/81: creat d3/d12/f18 x:0 0 0
2026-03-09T20:47:17.082 INFO:tasks.workunit.client.1.vm10.stdout:1/146: rmdir d2 39
2026-03-09T20:47:17.083 INFO:tasks.workunit.client.1.vm10.stdout:5/91: truncate f1 3128976 0
2026-03-09T20:47:17.083 INFO:tasks.workunit.client.1.vm10.stdout:7/133: write db/d21/d23/f14 [1140446,78983] 0
2026-03-09T20:47:17.083 INFO:tasks.workunit.client.1.vm10.stdout:3/83: write dc/ff [887067,25602] 0
2026-03-09T20:47:17.084 INFO:tasks.workunit.client.1.vm10.stdout:8/138: getdents d0/d22 0
2026-03-09T20:47:17.085 INFO:tasks.workunit.client.1.vm10.stdout:4/79: truncate d1/d2/f12 897530 0
2026-03-09T20:47:17.087 INFO:tasks.workunit.client.1.vm10.stdout:2/110: truncate d5/fe 1300667 0
2026-03-09T20:47:17.088 INFO:tasks.workunit.client.0.vm07.stdout:6/227: creat d8/d16/d22/d24/f43 x:0 0 0
2026-03-09T20:47:17.089 INFO:tasks.workunit.client.1.vm10.stdout:8/139: dwrite d0/f21 [0,4194304] 0
2026-03-09T20:47:17.090 INFO:tasks.workunit.client.0.vm07.stdout:6/228: dread d8/f29 [0,4194304] 0
2026-03-09T20:47:17.100 INFO:tasks.workunit.client.0.vm07.stdout:3/198: rename d1/d5/d9/ca to d1/d5/d9/d2f/c47 0
2026-03-09T20:47:17.103 INFO:tasks.workunit.client.0.vm07.stdout:3/199: chown d1/c38 223 1
2026-03-09T20:47:17.103 INFO:tasks.workunit.client.1.vm10.stdout:1/147: chown d2/f14 11544 1
2026-03-09T20:47:17.103 INFO:tasks.workunit.client.1.vm10.stdout:3/84: creat dc/dd/f1b x:0 0 0
2026-03-09T20:47:17.103 INFO:tasks.workunit.client.1.vm10.stdout:3/85: fdatasync dc/f11 0
2026-03-09T20:47:17.103 INFO:tasks.workunit.client.1.vm10.stdout:3/86: chown dc/d14/f1a 21 1
2026-03-09T20:47:17.104 INFO:tasks.workunit.client.1.vm10.stdout:8/140: dread d0/fa [0,4194304] 0
2026-03-09T20:47:17.105 INFO:tasks.workunit.client.1.vm10.stdout:5/92: mknod d2/c2b 0
2026-03-09T20:47:17.107 INFO:tasks.workunit.client.1.vm10.stdout:9/150: rmdir d2/d12 39
2026-03-09T20:47:17.113 INFO:tasks.workunit.client.1.vm10.stdout:4/80: fsync d1/d8/f16 0
2026-03-09T20:47:17.130 INFO:tasks.workunit.client.0.vm07.stdout:6/229: creat d8/d16/d22/d24/f44 x:0 0 0
2026-03-09T20:47:17.142 INFO:tasks.workunit.client.0.vm07.stdout:3/200: link d1/c12 d1/d5/d10/d43/c48 0
2026-03-09T20:47:17.142 INFO:tasks.workunit.client.0.vm07.stdout:6/230: mknod d8/d16/d22/d24/d2b/c45 0
2026-03-09T20:47:17.142 INFO:tasks.workunit.client.0.vm07.stdout:6/231: dwrite f5 [0,4194304] 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:4/81: write d1/f9 [906666,2340] 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:3/87: rmdir dc/d14 39
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:3/88: chown l0 41127 1
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:9/151: dwrite d2/d3/de/f34 [0,4194304] 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:4/82: dread d1/d2/f7 [0,4194304] 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:4/83: creat d1/d2/d3/f18 x:0 0 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:8/141: link d0/f1 d0/d22/d2c/f32 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:8/142: write d0/d22/d25/f28 [513984,910] 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:9/152: fsync d2/d3/f2f 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:8/143: write d0/f11 [93214,43724] 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:8/144: write d0/d22/f29 [405424,1779] 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:2/111: link l3 d5/l1e 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:2/112: dread - d5/f1d zero size
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:9/153: dread d2/d12/f20 [0,4194304] 0
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:2/113: chown d5/d18/c19 1708674 1
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:9/154: mkdir d2/d3/de/d35 0
2026-03-09T20:47:17.143 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:16 vm07.local ceph-mon[49120]: pgmap v148: 65 pgs: 65 active+clean; 506 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 2.6 MiB/s rd, 42 MiB/s wr, 279 op/s
2026-03-09T20:47:17.143 INFO:tasks.workunit.client.1.vm10.stdout:2/114: dread d5/f7 [0,4194304] 0
2026-03-09T20:47:17.145 INFO:tasks.workunit.client.1.vm10.stdout:9/155: write d2/d3/f2f [668044,8937] 0
2026-03-09T20:47:17.148 INFO:tasks.workunit.client.1.vm10.stdout:2/115: truncate d5/f7 2552724 0
2026-03-09T20:47:17.151 INFO:tasks.workunit.client.1.vm10.stdout:9/156: mkdir d2/d3/de/d35/d36 0
2026-03-09T20:47:17.160 INFO:tasks.workunit.client.1.vm10.stdout:9/157: write d2/d3/f2e [442754,18875] 0
2026-03-09T20:47:17.160 INFO:tasks.workunit.client.1.vm10.stdout:2/116: dwrite d5/f15 [0,4194304] 0
2026-03-09T20:47:17.161 INFO:tasks.workunit.client.1.vm10.stdout:9/158: write d2/d3/f2f [557839,19584] 0
2026-03-09T20:47:17.166 INFO:tasks.workunit.client.1.vm10.stdout:3/89: sync
2026-03-09T20:47:17.166 INFO:tasks.workunit.client.1.vm10.stdout:8/145: sync
2026-03-09T20:47:17.169 INFO:tasks.workunit.client.1.vm10.stdout:2/117: chown d5/f7 3162 1
2026-03-09T20:47:17.172 INFO:tasks.workunit.client.1.vm10.stdout:8/146: chown d0/lb 250130825 1
2026-03-09T20:47:17.178 INFO:tasks.workunit.client.1.vm10.stdout:2/118: truncate d5/fb 2663396 0
2026-03-09T20:47:17.182 INFO:tasks.workunit.client.1.vm10.stdout:8/147: dread d0/d22/d2c/f32 [0,4194304] 0
2026-03-09T20:47:17.193 INFO:tasks.workunit.client.1.vm10.stdout:3/90: dwrite dc/f10 [0,4194304] 0
2026-03-09T20:47:17.193 INFO:tasks.workunit.client.1.vm10.stdout:8/148: write d0/f1 [2894580,33431] 0
2026-03-09T20:47:17.195 INFO:tasks.workunit.client.1.vm10.stdout:8/149: creat d0/d22/d25/d2e/f33 x:0 0 0
2026-03-09T20:47:17.195 INFO:tasks.workunit.client.1.vm10.stdout:3/91: symlink dc/l1c 0
2026-03-09T20:47:17.196 INFO:tasks.workunit.client.1.vm10.stdout:8/150: creat d0/d22/d25/f34 x:0 0 0
2026-03-09T20:47:17.196 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:16 vm10.local ceph-mon[57011]: pgmap v148: 65 pgs: 65 active+clean; 506 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 2.6 MiB/s rd, 42 MiB/s wr, 279 op/s
2026-03-09T20:47:17.204 INFO:tasks.workunit.client.1.vm10.stdout:8/151: dwrite d0/d22/d25/f2b [0,4194304] 0
2026-03-09T20:47:17.204 INFO:tasks.workunit.client.1.vm10.stdout:3/92: unlink f4 0
2026-03-09T20:47:17.216 INFO:tasks.workunit.client.1.vm10.stdout:8/152: dwrite d0/f1 [0,4194304] 0
2026-03-09T20:47:17.228 INFO:tasks.workunit.client.1.vm10.stdout:4/84: dread d1/f9 [0,4194304] 0
2026-03-09T20:47:17.228 INFO:tasks.workunit.client.1.vm10.stdout:4/85: truncate d1/d8/f16 67452 0
2026-03-09T20:47:17.232 INFO:tasks.workunit.client.1.vm10.stdout:4/86: dwrite d1/f9 [0,4194304] 0
2026-03-09T20:47:17.246 INFO:tasks.workunit.client.1.vm10.stdout:4/87: sync
2026-03-09T20:47:17.275 INFO:tasks.workunit.client.1.vm10.stdout:4/88: sync
2026-03-09T20:47:17.275 INFO:tasks.workunit.client.0.vm07.stdout:3/201: rmdir d1 39
2026-03-09T20:47:17.275 INFO:tasks.workunit.client.0.vm07.stdout:8/144: rmdir d1/dc 39
2026-03-09T20:47:17.277 INFO:tasks.workunit.client.1.vm10.stdout:0/79: fsync d2/db/f13 0
2026-03-09T20:47:17.279 INFO:tasks.workunit.client.0.vm07.stdout:5/172: mkdir d5/d33 0
2026-03-09T20:47:17.283 INFO:tasks.workunit.client.1.vm10.stdout:0/80: unlink d2/l7 0
2026-03-09T20:47:17.284 INFO:tasks.workunit.client.1.vm10.stdout:0/81: write d2/db/f13 [284377,70599] 0
2026-03-09T20:47:17.285 INFO:tasks.workunit.client.1.vm10.stdout:4/89: rename d1/d8/lc to d1/d2/d3/l19 0
2026-03-09T20:47:17.286 INFO:tasks.workunit.client.1.vm10.stdout:0/82: mknod d2/d9/da/d11/c16 0
2026-03-09T20:47:17.287 INFO:tasks.workunit.client.1.vm10.stdout:4/90: creat d1/d2/f1a x:0 0 0
2026-03-09T20:47:17.287 INFO:tasks.workunit.client.1.vm10.stdout:0/83: write d2/d9/f12 [849014,124607] 0
2026-03-09T20:47:17.292 INFO:tasks.workunit.client.0.vm07.stdout:8/145: rename d1/dc/l21 to d1/dc/l28 0
2026-03-09T20:47:17.298 INFO:tasks.workunit.client.0.vm07.stdout:8/146: write d1/dc/d14/f18 [4344336,33855] 0
2026-03-09T20:47:17.298 INFO:tasks.workunit.client.1.vm10.stdout:4/91: mkdir d1/d8/d1b 0
2026-03-09T20:47:17.298 INFO:tasks.workunit.client.1.vm10.stdout:0/84: rename d2/c8 to d2/d9/da/d11/c17 0
2026-03-09T20:47:17.298 INFO:tasks.workunit.client.1.vm10.stdout:0/85: symlink d2/db/l18 0
2026-03-09T20:47:17.298 INFO:tasks.workunit.client.0.vm07.stdout:8/147: creat d1/dc/f29 x:0 0 0
2026-03-09T20:47:17.299 INFO:tasks.workunit.client.0.vm07.stdout:8/148: stat d1/l9 0
2026-03-09T20:47:17.331 INFO:tasks.workunit.client.0.vm07.stdout:3/202: sync
2026-03-09T20:47:17.333 INFO:tasks.workunit.client.0.vm07.stdout:3/203: fsync d1/d5/d9/f3c 0
2026-03-09T20:47:17.338 INFO:tasks.workunit.client.0.vm07.stdout:3/204: dwrite d1/d5/d10/f2e [0,4194304] 0
2026-03-09T20:47:17.383 INFO:tasks.workunit.client.0.vm07.stdout:5/173: creat d5/df/f34 x:0 0 0
2026-03-09T20:47:17.383 INFO:tasks.workunit.client.0.vm07.stdout:2/254: creat d2/db/f48 x:0 0 0
2026-03-09T20:47:17.384 INFO:tasks.workunit.client.0.vm07.stdout:9/194: rename d4/l20 to d4/d8/l45 0
2026-03-09T20:47:17.384 INFO:tasks.workunit.client.0.vm07.stdout:0/238: mknod d1/d2/c51 0
2026-03-09T20:47:17.385 INFO:tasks.workunit.client.0.vm07.stdout:5/174: write d5/df/f22 [834226,56000] 0
2026-03-09T20:47:17.385 INFO:tasks.workunit.client.0.vm07.stdout:9/195: dread - d4/d8/f34 zero size
2026-03-09T20:47:17.386 INFO:tasks.workunit.client.0.vm07.stdout:5/175: stat d5/c31 0
2026-03-09T20:47:17.387 INFO:tasks.workunit.client.0.vm07.stdout:4/171: write d2/d1f/d2d/f2f [3118353,89349] 0
2026-03-09T20:47:17.391 INFO:tasks.workunit.client.0.vm07.stdout:2/255: chown d2/f7 792358 1
2026-03-09T20:47:17.392 INFO:tasks.workunit.client.0.vm07.stdout:2/256: readlink d2/db/l19 0
2026-03-09T20:47:17.395 INFO:tasks.workunit.client.0.vm07.stdout:6/232: rename d8/d16/d22/f35 to d8/f46 0
2026-03-09T20:47:17.396 INFO:tasks.workunit.client.1.vm10.stdout:8/153: read d0/d22/f27 [202590,116019] 0
2026-03-09T20:47:17.400 INFO:tasks.workunit.client.1.vm10.stdout:8/154: sync
2026-03-09T20:47:17.402 INFO:tasks.workunit.client.0.vm07.stdout:7/181: dwrite d3/da/db/f12 [0,4194304] 0
2026-03-09T20:47:17.404 INFO:tasks.workunit.client.0.vm07.stdout:7/182: chown d3/da/db/d14/d1f/d2b/f33 4 1
2026-03-09T20:47:17.415 INFO:tasks.workunit.client.0.vm07.stdout:4/172: rmdir d2/d1f 39
2026-03-09T20:47:17.416 INFO:tasks.workunit.client.0.vm07.stdout:4/173: write d2/f7 [2226390,94751] 0
2026-03-09T20:47:17.416 INFO:tasks.workunit.client.1.vm10.stdout:8/155: rename d0/f12 to d0/d22/f35 0
2026-03-09T20:47:17.420 INFO:tasks.workunit.client.1.vm10.stdout:8/156: rename d0/d22/d25/f28 to d0/d22/d2c/f36 0
2026-03-09T20:47:17.421 INFO:tasks.workunit.client.1.vm10.stdout:8/157: truncate d0/d22/d25/d2e/f33 503865 0
2026-03-09T20:47:17.423 INFO:tasks.workunit.client.0.vm07.stdout:7/183: mknod d3/da/db/d14/d1f/c39 0
2026-03-09T20:47:17.425 INFO:tasks.workunit.client.1.vm10.stdout:1/148: dwrite d2/f19 [0,4194304] 0
2026-03-09T20:47:17.426 INFO:tasks.workunit.client.0.vm07.stdout:0/239: symlink d1/d2/d4b/l52 0
2026-03-09T20:47:17.427 INFO:tasks.workunit.client.0.vm07.stdout:5/176: mknod d5/df/d13/c35 0
2026-03-09T20:47:17.427 INFO:tasks.workunit.client.1.vm10.stdout:1/149: chown d2/da/f11 2 1
2026-03-09T20:47:17.428 INFO:tasks.workunit.client.1.vm10.stdout:1/150: write d2/da/fe [4283260,47426] 0
2026-03-09T20:47:17.428 INFO:tasks.workunit.client.1.vm10.stdout:6/82: truncate d3/da/fd 453233 0
2026-03-09T20:47:17.428 INFO:tasks.workunit.client.1.vm10.stdout:8/158: rename d0/f9 to d0/d22/d25/f37 0
2026-03-09T20:47:17.429 INFO:tasks.workunit.client.0.vm07.stdout:4/174: unlink d2/df/c15 0
2026-03-09T20:47:17.430 INFO:tasks.workunit.client.0.vm07.stdout:4/175: write d2/f28 [664669,130180] 0
2026-03-09T20:47:17.435 INFO:tasks.workunit.client.0.vm07.stdout:0/240: dwrite d1/d2/d33/f4e [0,4194304] 0
2026-03-09T20:47:17.438 INFO:tasks.workunit.client.1.vm10.stdout:8/159: dread d0/d22/d2f/f31 [0,4194304] 0
2026-03-09T20:47:17.438 INFO:tasks.workunit.client.1.vm10.stdout:8/160: truncate d0/d22/d2f/f31 4426724 0
2026-03-09T20:47:17.445 INFO:tasks.workunit.client.1.vm10.stdout:8/161: dread d0/f6 [0,4194304] 0
2026-03-09T20:47:17.445 INFO:tasks.workunit.client.0.vm07.stdout:8/149: rename d1/dc/f1c to d1/dc/d16/d26/f2a 0
2026-03-09T20:47:17.445 INFO:tasks.workunit.client.0.vm07.stdout:4/176: dread d2/f9 [0,4194304] 0
2026-03-09T20:47:17.447 INFO:tasks.workunit.client.0.vm07.stdout:8/150: truncate d1/dc/f29 353249 0
2026-03-09T20:47:17.451 INFO:tasks.workunit.client.1.vm10.stdout:5/93: dwrite d2/f11 [0,4194304] 0
2026-03-09T20:47:17.451 INFO:tasks.workunit.client.0.vm07.stdout:6/233: fsync d8/d16/f23 0
2026-03-09T20:47:17.453 INFO:tasks.workunit.client.1.vm10.stdout:1/151: creat d2/da/f34 x:0 0 0
2026-03-09T20:47:17.455 INFO:tasks.workunit.client.0.vm07.stdout:5/177: unlink d5/f23 0
2026-03-09T20:47:17.456 INFO:tasks.workunit.client.0.vm07.stdout:5/178: chown d5/df/f22 76972 1
2026-03-09T20:47:17.457 INFO:tasks.workunit.client.1.vm10.stdout:8/162: mkdir d0/d22/d2f/d38 0
2026-03-09T20:47:17.460 INFO:tasks.workunit.client.0.vm07.stdout:0/241: mkdir d1/d1f/d53 0
2026-03-09T20:47:17.461 INFO:tasks.workunit.client.0.vm07.stdout:0/242: dread - d1/d2/dc/f40 zero size
2026-03-09T20:47:17.461 INFO:tasks.workunit.client.0.vm07.stdout:0/243: fsync d1/d1f/d20/f4d 0
2026-03-09T20:47:17.462 INFO:tasks.workunit.client.0.vm07.stdout:0/244: fsync d1/d1f/d20/f2c 0
2026-03-09T20:47:17.463 INFO:tasks.workunit.client.0.vm07.stdout:4/177: write d2/f3 [550271,86132] 0
2026-03-09T20:47:17.469 INFO:tasks.workunit.client.1.vm10.stdout:8/163: write d0/d22/f35 [275948,24484] 0
2026-03-09T20:47:17.469 INFO:tasks.workunit.client.0.vm07.stdout:4/178: chown d2/fa 181 1
2026-03-09T20:47:17.469 INFO:tasks.workunit.client.0.vm07.stdout:9/196: rename d4/d11/c3e to d4/d16/d29/d24/c46 0
2026-03-09T20:47:17.473 INFO:tasks.workunit.client.1.vm10.stdout:7/134: truncate db/d21/d23/f1e 429400 0
2026-03-09T20:47:17.475 INFO:tasks.workunit.client.0.vm07.stdout:8/151: creat d1/dc/d16/d26/f2b x:0 0 0
2026-03-09T20:47:17.476 INFO:tasks.workunit.client.1.vm10.stdout:6/83: dread f1 [0,4194304] 0
2026-03-09T20:47:17.477 INFO:tasks.workunit.client.1.vm10.stdout:1/152: link d2/da/d25/f2e d2/da/f35 0
2026-03-09T20:47:17.477 INFO:tasks.workunit.client.1.vm10.stdout:6/84: chown d3/f9 45374 1
2026-03-09T20:47:17.478 INFO:tasks.workunit.client.1.vm10.stdout:6/85: read d3/da/f10 [103731,16480] 0
2026-03-09T20:47:17.483 INFO:tasks.workunit.client.1.vm10.stdout:7/135: mkdir db/d1f/d2e 0
2026-03-09T20:47:17.487 INFO:tasks.workunit.client.1.vm10.stdout:8/164: mknod d0/d22/d2f/d38/c39 0
2026-03-09T20:47:17.487 INFO:tasks.workunit.client.1.vm10.stdout:6/86: symlink d3/da/l19 0
2026-03-09T20:47:17.488 INFO:tasks.workunit.client.1.vm10.stdout:8/165: chown d0/d22/d2f/d38/c39 11 1
2026-03-09T20:47:17.488 INFO:tasks.workunit.client.0.vm07.stdout:5/179: rename d5/df/f22 to d5/df/d13/d30/f36 0
2026-03-09T20:47:17.489 INFO:tasks.workunit.client.1.vm10.stdout:8/166: truncate d0/f10 1253170 0
2026-03-09T20:47:17.489 INFO:tasks.workunit.client.0.vm07.stdout:0/245: rename d1 to d1/d2/dc/d54 22
2026-03-09T20:47:17.490 INFO:tasks.workunit.client.0.vm07.stdout:3/205: read d1/d5/d9/d2f/d34/f40 [323844,126562] 0
2026-03-09T20:47:17.490 INFO:tasks.workunit.client.1.vm10.stdout:5/94: getdents d2/d1b 0
2026-03-09T20:47:17.490 INFO:tasks.workunit.client.0.vm07.stdout:9/197: rename d4/d11 to d4/d11/d23/d32/d47 22
2026-03-09T20:47:17.491 INFO:tasks.workunit.client.0.vm07.stdout:9/198: chown d4/f5 1113 1
2026-03-09T20:47:17.491 INFO:tasks.workunit.client.0.vm07.stdout:3/206: stat d1/d5/d9/d11/d1f 0
2026-03-09T20:47:17.491 INFO:tasks.workunit.client.0.vm07.stdout:9/199: write d4/d8/f34 [643600,12737] 0
2026-03-09T20:47:17.492 INFO:tasks.workunit.client.0.vm07.stdout:3/207: fsync d1/d5/d9/d11/f2a 0
2026-03-09T20:47:17.494 INFO:tasks.workunit.client.1.vm10.stdout:6/87: dwrite d3/fc [0,4194304] 0
2026-03-09T20:47:17.495 INFO:tasks.workunit.client.0.vm07.stdout:6/234: truncate d8/f29 740475 0
2026-03-09T20:47:17.496 INFO:tasks.workunit.client.1.vm10.stdout:6/88: chown d3/f7 3952600 1
2026-03-09T20:47:17.496 INFO:tasks.workunit.client.1.vm10.stdout:6/89: read - d3/d12/f16 zero size
2026-03-09T20:47:17.497 INFO:tasks.workunit.client.1.vm10.stdout:8/167: dread d0/d22/d25/f2b [0,4194304] 0
2026-03-09T20:47:17.497 INFO:tasks.workunit.client.0.vm07.stdout:7/184: rename d3/da/db/d14/f30 to d3/da/db/d14/f3a 0
2026-03-09T20:47:17.498 INFO:tasks.workunit.client.0.vm07.stdout:7/185: read d3/da/db/d14/f24 [54577,112228] 0
2026-03-09T20:47:17.499 INFO:tasks.workunit.client.0.vm07.stdout:4/179: symlink d2/l30 0
2026-03-09T20:47:17.501 INFO:tasks.workunit.client.1.vm10.stdout:3/93: fsync dc/f10 0
2026-03-09T20:47:17.503 INFO:tasks.workunit.client.1.vm10.stdout:3/94: chown dc/dd/l16 109 1
2026-03-09T20:47:17.508 INFO:tasks.workunit.client.1.vm10.stdout:2/119: dread d5/d18/f1a [0,4194304] 0
2026-03-09T20:47:17.514 INFO:tasks.workunit.client.1.vm10.stdout:8/168: sync
2026-03-09T20:47:17.517 INFO:tasks.workunit.client.0.vm07.stdout:8/152: symlink d1/dc/l2c 0
2026-03-09T20:47:17.521 INFO:tasks.workunit.client.0.vm07.stdout:3/208: dwrite d1/d5/d9/d2f/d34/f40 [0,4194304] 0
2026-03-09T20:47:17.526 INFO:tasks.workunit.client.0.vm07.stdout:0/246: mknod d1/d2/c55 0
2026-03-09T20:47:17.526 INFO:tasks.workunit.client.0.vm07.stdout:0/247: readlink d1/d2/d33/l49 0
2026-03-09T20:47:17.527 INFO:tasks.workunit.client.0.vm07.stdout:0/248: readlink d1/d2/dc/d17/l4c 0
2026-03-09T20:47:17.527 INFO:tasks.workunit.client.1.vm10.stdout:4/92: write d1/d2/f7 [2424734,4176] 0
2026-03-09T20:47:17.532 INFO:tasks.workunit.client.1.vm10.stdout:8/169: write d0/d22/f27 [1001307,3428] 0
2026-03-09T20:47:17.533 INFO:tasks.workunit.client.1.vm10.stdout:7/136: rmdir db/d1f/d2e 0
2026-03-09T20:47:17.534 INFO:tasks.workunit.client.0.vm07.stdout:6/235: mknod d8/d16/d22/d24/c47 0
2026-03-09T20:47:17.535 INFO:tasks.workunit.client.1.vm10.stdout:8/170: dread d0/d22/d25/d2e/f33 [0,4194304] 0
2026-03-09T20:47:17.535 INFO:tasks.workunit.client.1.vm10.stdout:8/171: stat d0/f6 0
2026-03-09T20:47:17.536 INFO:tasks.workunit.client.1.vm10.stdout:7/137: sync
2026-03-09T20:47:17.536 INFO:tasks.workunit.client.1.vm10.stdout:5/95: link d2/f11 d2/f2c 0
2026-03-09T20:47:17.536 INFO:tasks.workunit.client.1.vm10.stdout:7/138: readlink l4 0
2026-03-09T20:47:17.536 INFO:tasks.workunit.client.1.vm10.stdout:8/172: chown d0/d22/d2f/d38 0 1
2026-03-09T20:47:17.537 INFO:tasks.workunit.client.1.vm10.stdout:5/96: dread - d2/d27/f2a zero size
2026-03-09T20:47:17.539 INFO:tasks.workunit.client.1.vm10.stdout:1/153: getdents d2 0
2026-03-09T20:47:17.539 INFO:tasks.workunit.client.1.vm10.stdout:3/95: rename dc/d14/c17 to dc/dd/c1d 0
2026-03-09T20:47:17.539 INFO:tasks.workunit.client.0.vm07.stdout:0/249: rmdir d1/d1f/d30 39
2026-03-09T20:47:17.540 INFO:tasks.workunit.client.0.vm07.stdout:9/200: link d4/d8/dc/d15/c2b d4/d11/d23/d32/c48 0
2026-03-09T20:47:17.540 INFO:tasks.workunit.client.0.vm07.stdout:0/250: dread - d1/d2/dc/f40 zero size
2026-03-09T20:47:17.540 INFO:tasks.workunit.client.1.vm10.stdout:5/97: write d2/d1b/f1e [1021116,107392] 0
2026-03-09T20:47:17.541 INFO:tasks.workunit.client.0.vm07.stdout:9/201: chown d4/d11/d2a/f39 111 1
2026-03-09T20:47:17.541 INFO:tasks.workunit.client.0.vm07.stdout:9/202: fsync d4/d8/d19/f28 0
2026-03-09T20:47:17.541 INFO:tasks.workunit.client.1.vm10.stdout:5/98: write d2/f16 [197464,65099] 0
2026-03-09T20:47:17.544 INFO:tasks.workunit.client.1.vm10.stdout:4/93: mkdir d1/d8/d1c 0
2026-03-09T20:47:17.546 INFO:tasks.workunit.client.0.vm07.stdout:7/186: creat d3/da/f3b x:0 0 0
2026-03-09T20:47:17.551 INFO:tasks.workunit.client.1.vm10.stdout:8/173: symlink d0/d22/d2c/l3a 0
2026-03-09T20:47:17.551 INFO:tasks.workunit.client.1.vm10.stdout:0/86: dwrite f1 [0,4194304] 0
2026-03-09T20:47:17.551 INFO:tasks.workunit.client.0.vm07.stdout:3/209: mknod d1/d5/d9/d2f/d34/d46/c49 0
2026-03-09T20:47:17.553 INFO:tasks.workunit.client.0.vm07.stdout:3/210: write d1/d5/d9/f15 [1236604,52536] 0
2026-03-09T20:47:17.558 INFO:tasks.workunit.client.0.vm07.stdout:9/203: mknod d4/c49 0
2026-03-09T20:47:17.561 INFO:tasks.workunit.client.0.vm07.stdout:7/187: write d3/da/db/d14/f3a [1864,127179] 0
2026-03-09T20:47:17.562 INFO:tasks.workunit.client.0.vm07.stdout:7/188: stat d3/da/c1d 0
2026-03-09T20:47:17.562 INFO:tasks.workunit.client.0.vm07.stdout:7/189: chown d3/da/f3b 117693847 1
2026-03-09T20:47:17.562 INFO:tasks.workunit.client.0.vm07.stdout:7/190: chown d3/da/db/d14/d1f 252336 1
2026-03-09T20:47:17.564 INFO:tasks.workunit.client.1.vm10.stdout:6/90: fdatasync d3/fc 0
2026-03-09T20:47:17.565 INFO:tasks.workunit.client.1.vm10.stdout:6/91: stat d3/da/l19 0
2026-03-09T20:47:17.566 INFO:tasks.workunit.client.1.vm10.stdout:2/120: creat d5/d18/f1f x:0 0 0
2026-03-09T20:47:17.571 INFO:tasks.workunit.client.1.vm10.stdout:1/154: mknod d2/da/d25/c36 0
2026-03-09T20:47:17.571 INFO:tasks.workunit.client.0.vm07.stdout:1/174: dread d3/d14/f2d [0,4194304] 0
2026-03-09T20:47:17.571 INFO:tasks.workunit.client.0.vm07.stdout:8/153: read d1/dc/f29 [29591,119004] 0
2026-03-09T20:47:17.572 INFO:tasks.workunit.client.1.vm10.stdout:5/99: creat d2/d27/f2d x:0 0 0
2026-03-09T20:47:17.572 INFO:tasks.workunit.client.1.vm10.stdout:1/155: dread d2/da/fe [4194304,4194304] 0
2026-03-09T20:47:17.585 INFO:tasks.workunit.client.0.vm07.stdout:2/257: write d2/f40 [181704,76733] 0
2026-03-09T20:47:17.589 INFO:tasks.workunit.client.1.vm10.stdout:9/159: dread d2/d3/f2f [0,4194304] 0
2026-03-09T20:47:17.590 INFO:tasks.workunit.client.0.vm07.stdout:9/204: creat d4/d16/d29/f4a x:0 0 0
2026-03-09T20:47:17.593 INFO:tasks.workunit.client.0.vm07.stdout:7/191: symlink d3/da/db/l3c 0
2026-03-09T20:47:17.595 INFO:tasks.workunit.client.1.vm10.stdout:7/139: creat db/d21/d26/f2f x:0 0 0
2026-03-09T20:47:17.596 INFO:tasks.workunit.client.0.vm07.stdout:8/154: creat d1/dc/d16/d26/f2d x:0 0 0
2026-03-09T20:47:17.599 INFO:tasks.workunit.client.1.vm10.stdout:8/174: write d0/d22/d2c/f36 [9912,16139] 0
2026-03-09T20:47:17.608 INFO:tasks.workunit.client.1.vm10.stdout:0/87: symlink d2/l19 0
2026-03-09T20:47:17.608 INFO:tasks.workunit.client.1.vm10.stdout:8/175: dwrite d0/f21 [0,4194304] 0
2026-03-09T20:47:17.608 INFO:tasks.workunit.client.1.vm10.stdout:0/88: read d2/d9/da/fd [2096812,96517] 0
2026-03-09T20:47:17.614 INFO:tasks.workunit.client.0.vm07.stdout:4/180: dread d2/f19 [0,4194304] 0
2026-03-09T20:47:17.619 INFO:tasks.workunit.client.0.vm07.stdout:4/181: write d2/f3 [2657202,66410] 0
2026-03-09T20:47:17.619 INFO:tasks.workunit.client.0.vm07.stdout:4/182: dread f1 [0,4194304] 0
2026-03-09T20:47:17.622 INFO:tasks.workunit.client.0.vm07.stdout:1/175: sync
2026-03-09T20:47:17.624 INFO:tasks.workunit.client.1.vm10.stdout:4/94: creat d1/d8/d1c/f1d x:0 0 0
2026-03-09T20:47:17.636 INFO:tasks.workunit.client.1.vm10.stdout:5/100: unlink d2/l14 0
2026-03-09T20:47:17.637 INFO:tasks.workunit.client.0.vm07.stdout:3/211: rmdir d1/d5/d9/d11/d45 0
2026-03-09T20:47:17.638 INFO:tasks.workunit.client.0.vm07.stdout:3/212: chown d1/d5/d9/d2f/d34/d46/c49 7272245 1
2026-03-09T20:47:17.639 INFO:tasks.workunit.client.1.vm10.stdout:1/156: symlink d2/da/d25/l37 0
2026-03-09T20:47:17.640 INFO:tasks.workunit.client.1.vm10.stdout:1/157: truncate d2/da/d25/f28 634640 0
2026-03-09T20:47:17.640 INFO:tasks.workunit.client.1.vm10.stdout:9/160: rename d2/d3/de/d35/d36 to d2/d33/d37 0
2026-03-09T20:47:17.643 INFO:tasks.workunit.client.1.vm10.stdout:7/140: mkdir db/d28/d30 0
2026-03-09T20:47:17.643 INFO:tasks.workunit.client.0.vm07.stdout:2/258: mkdir d2/db/d49 0
2026-03-09T20:47:17.644 INFO:tasks.workunit.client.0.vm07.stdout:2/259: read - d2/db/f48 zero size
2026-03-09T20:47:17.647 INFO:tasks.workunit.client.0.vm07.stdout:6/236: dread d8/f29 [0,4194304] 0
2026-03-09T20:47:17.648 INFO:tasks.workunit.client.0.vm07.stdout:1/176: creat d3/d14/f30 x:0 0 0
2026-03-09T20:47:17.652 INFO:tasks.workunit.client.0.vm07.stdout:1/177: dwrite d3/f11 [0,4194304] 0
2026-03-09T20:47:17.662 INFO:tasks.workunit.client.0.vm07.stdout:1/178: write d3/d14/f30 [462522,100703] 0
2026-03-09T20:47:17.663 INFO:tasks.workunit.client.1.vm10.stdout:7/141: sync
2026-03-09T20:47:17.664 INFO:tasks.workunit.client.1.vm10.stdout:7/142: fsync db/d1f/f2c 0
2026-03-09T20:47:17.667 INFO:tasks.workunit.client.0.vm07.stdout:8/155: mknod d1/c2e 0
2026-03-09T20:47:17.668 INFO:tasks.workunit.client.1.vm10.stdout:6/92: mknod d3/c1a 0
2026-03-09T20:47:17.668 INFO:tasks.workunit.client.0.vm07.stdout:8/156: write d1/dc/d16/f1e [720224,82582] 0
2026-03-09T20:47:17.669 INFO:tasks.workunit.client.0.vm07.stdout:2/260: sync
2026-03-09T20:47:17.669 INFO:tasks.workunit.client.1.vm10.stdout:0/89: mkdir d2/d9/da/de/d1a 0
2026-03-09T20:47:17.673 INFO:tasks.workunit.client.0.vm07.stdout:3/213: creat d1/d5/d9/d11/d1f/f4a x:0 0 0
2026-03-09T20:47:17.674 INFO:tasks.workunit.client.0.vm07.stdout:3/214: chown d1/d5/f25 613856 1
2026-03-09T20:47:17.674 INFO:tasks.workunit.client.0.vm07.stdout:5/180: write d5/d19/f20 [1008003,68110] 0
2026-03-09T20:47:17.674 INFO:tasks.workunit.client.0.vm07.stdout:3/215: write d1/d5/d9/fe [473766,127406] 0
2026-03-09T20:47:17.675 INFO:tasks.workunit.client.1.vm10.stdout:7/143: sync
2026-03-09T20:47:17.679 INFO:tasks.workunit.client.1.vm10.stdout:7/144: dread db/d21/d23/f1e [0,4194304] 0
2026-03-09T20:47:17.684 INFO:tasks.workunit.client.0.vm07.stdout:6/237: fsync d8/d16/f23 0
2026-03-09T20:47:17.687 INFO:tasks.workunit.client.1.vm10.stdout:1/158: unlink d2/da/l33 0
2026-03-09T20:47:17.688 INFO:tasks.workunit.client.1.vm10.stdout:8/176: rename d0/f1 to d0/d22/d25/f3b 0
2026-03-09T20:47:17.689 INFO:tasks.workunit.client.0.vm07.stdout:1/179: symlink d3/d12/l31 0
2026-03-09T20:47:17.689 INFO:tasks.workunit.client.0.vm07.stdout:1/180: dread - d3/d14/f25 zero size
2026-03-09T20:47:17.690 INFO:tasks.workunit.client.0.vm07.stdout:1/181: chown d3/d23/l29 671985262 1
2026-03-09T20:47:17.691 INFO:tasks.workunit.client.1.vm10.stdout:9/161: creat d2/d3/de/d35/f38 x:0 0 0
2026-03-09T20:47:17.692 INFO:tasks.workunit.client.1.vm10.stdout:9/162: chown d2/d12/f1e 0 1
2026-03-09T20:47:17.693 INFO:tasks.workunit.client.1.vm10.stdout:8/177: sync
2026-03-09T20:47:17.693 INFO:tasks.workunit.client.0.vm07.stdout:8/157: mkdir d1/dc/d14/d2f 0
2026-03-09T20:47:17.694 INFO:tasks.workunit.client.1.vm10.stdout:3/96: getdents dc 0
2026-03-09T20:47:17.697 INFO:tasks.workunit.client.1.vm10.stdout:6/93: creat d3/da/f1b x:0 0 0
2026-03-09T20:47:17.698 INFO:tasks.workunit.client.0.vm07.stdout:2/261: dwrite d2/db/d1c/f3a [4194304,4194304] 0
2026-03-09T20:47:17.699 INFO:tasks.workunit.client.0.vm07.stdout:2/262: readlink d2/l3d 0
2026-03-09T20:47:17.699 INFO:tasks.workunit.client.0.vm07.stdout:2/263: chown d2/d11 177752 1
2026-03-09T20:47:17.709 INFO:tasks.workunit.client.1.vm10.stdout:0/90: symlink d2/l1b 0
2026-03-09T20:47:17.719 INFO:tasks.workunit.client.0.vm07.stdout:0/251: write d1/d2/f47 [1871984,100530] 0
2026-03-09T20:47:17.723 INFO:tasks.workunit.client.0.vm07.stdout:1/182: creat d3/d12/f32 x:0 0 0
2026-03-09T20:47:17.723 INFO:tasks.workunit.client.1.vm10.stdout:5/101: truncate d2/f7 978582 0
2026-03-09T20:47:17.723 INFO:tasks.workunit.client.1.vm10.stdout:1/159: symlink d2/l38 0
2026-03-09T20:47:17.724 INFO:tasks.workunit.client.0.vm07.stdout:9/205: link d4/d16/d29/d24/c46 d4/d16/c4b 0
2026-03-09T20:47:17.725 INFO:tasks.workunit.client.0.vm07.stdout:9/206: truncate d4/d8/d19/f42 1614348 0
2026-03-09T20:47:17.727 INFO:tasks.workunit.client.1.vm10.stdout:9/163: write d2/d12/f20 [863416,99015] 0
2026-03-09T20:47:17.730 INFO:tasks.workunit.client.1.vm10.stdout:9/164: write d2/f30 [288238,23506] 0
2026-03-09T20:47:17.731 INFO:tasks.workunit.client.1.vm10.stdout:9/165: write d2/d3/f2e [1108338,652] 0
2026-03-09T20:47:17.734 INFO:tasks.workunit.client.0.vm07.stdout:2/264: mkdir d2/db/d1c/d4a 0
2026-03-09T20:47:17.738 INFO:tasks.workunit.client.1.vm10.stdout:3/97: mknod dc/dd/c1e 0
2026-03-09T20:47:17.739 INFO:tasks.workunit.client.1.vm10.stdout:3/98: chown l8 2 1
2026-03-09T20:47:17.739 INFO:tasks.workunit.client.0.vm07.stdout:5/181: symlink d5/d33/l37 0
2026-03-09T20:47:17.740 INFO:tasks.workunit.client.1.vm10.stdout:6/94: mknod d3/d12/c1c 0
2026-03-09T20:47:17.741 INFO:tasks.workunit.client.1.vm10.stdout:0/91: chown d2/f5 45791 1
2026-03-09T20:47:17.742 INFO:tasks.workunit.client.1.vm10.stdout:0/92: write f1 [2849396,8064] 0
2026-03-09T20:47:17.743 INFO:tasks.workunit.client.1.vm10.stdout:0/93: write d2/db/f13 [604098,691] 0
2026-03-09T20:47:17.744 INFO:tasks.workunit.client.1.vm10.stdout:0/94: chown d2/d9/da/de/cf 1887568 1
2026-03-09T20:47:17.744 INFO:tasks.workunit.client.1.vm10.stdout:0/95: chown d2/db/f14 258908 1
2026-03-09T20:47:17.750 INFO:tasks.workunit.client.1.vm10.stdout:4/95: link d1/d2/d3/f18 d1/f1e 0
2026-03-09T20:47:17.753 INFO:tasks.workunit.client.0.vm07.stdout:0/252: creat d1/d2/dc/f56 x:0 0 0
2026-03-09T20:47:17.758 INFO:tasks.workunit.client.0.vm07.stdout:7/192: truncate d3/da/f11 1480975 0
2026-03-09T20:47:17.759 INFO:tasks.workunit.client.0.vm07.stdout:0/253: dread d1/d2/dc/d17/f3c [0,4194304] 0
2026-03-09T20:47:17.765 INFO:tasks.workunit.client.0.vm07.stdout:0/254: dread d1/d2/dc/f12 [0,4194304] 0
2026-03-09T20:47:17.770 INFO:tasks.workunit.client.0.vm07.stdout:4/183: truncate d2/f7 350314 0
2026-03-09T20:47:17.770 INFO:tasks.workunit.client.0.vm07.stdout:8/158: rename d1/f11 to d1/dc/d14/f30 0
2026-03-09T20:47:17.772 INFO:tasks.workunit.client.0.vm07.stdout:4/184: dread - d2/df/d17/f2a zero size
2026-03-09T20:47:17.773 INFO:tasks.workunit.client.0.vm07.stdout:4/185: stat d2/df/d17/f1b 0
2026-03-09T20:47:17.773 INFO:tasks.workunit.client.0.vm07.stdout:2/265: mknod d2/db/d1c/c4b 0
2026-03-09T20:47:17.780 INFO:tasks.workunit.client.0.vm07.stdout:6/238: truncate d8/d16/f23 1260865 0
2026-03-09T20:47:17.784 INFO:tasks.workunit.client.1.vm10.stdout:2/121: truncate d5/fd 1229369 0
2026-03-09T20:47:17.786 INFO:tasks.workunit.client.0.vm07.stdout:3/216: rmdir d1/d5/d9/d41 0
2026-03-09T20:47:17.786 INFO:tasks.workunit.client.0.vm07.stdout:1/183: rmdir d3 39
2026-03-09T20:47:17.798 INFO:tasks.workunit.client.0.vm07.stdout:9/207: mknod d4/d11/d31/c4c 0
2026-03-09T20:47:17.802 INFO:tasks.workunit.client.1.vm10.stdout:6/95: rmdir d3/da 39
2026-03-09T20:47:17.802 INFO:tasks.workunit.client.0.vm07.stdout:3/217: sync
2026-03-09T20:47:17.812 INFO:tasks.workunit.client.1.vm10.stdout:7/145: dwrite f1 [0,4194304] 0
2026-03-09T20:47:17.821 INFO:tasks.workunit.client.1.vm10.stdout:7/146: dwrite db/d21/d23/d12/f1c [0,4194304] 0
2026-03-09T20:47:17.835 INFO:tasks.workunit.client.0.vm07.stdout:1/184: truncate d3/f28 467358 0
2026-03-09T20:47:17.842 INFO:tasks.workunit.client.1.vm10.stdout:8/178: link d0/c1e d0/d22/d2c/c3c 0
2026-03-09T20:47:17.850 INFO:tasks.workunit.client.0.vm07.stdout:7/193: creat d3/da/db/d32/f3d x:0 0 0
2026-03-09T20:47:17.853 INFO:tasks.workunit.client.1.vm10.stdout:2/122: rename d5/c9 to d5/c20 0
2026-03-09T20:47:17.867 INFO:tasks.workunit.client.1.vm10.stdout:6/96: read f2 [184949,117781] 0
2026-03-09T20:47:17.871 INFO:tasks.workunit.client.0.vm07.stdout:2/266: rename d2/db/d1c/l3c to d2/db/d28/l4c 0
2026-03-09T20:47:17.871 INFO:tasks.workunit.client.0.vm07.stdout:4/186: dread d2/d1f/d2d/f2f [0,4194304] 0
2026-03-09T20:47:17.872 INFO:tasks.workunit.client.0.vm07.stdout:2/267: write d2/db/d28/f34 [1047104,109481] 0
2026-03-09T20:47:17.872 INFO:tasks.workunit.client.0.vm07.stdout:2/268: rename d2 to d2/db/d1c/d4a/d4d 22
2026-03-09T20:47:17.873 INFO:tasks.workunit.client.0.vm07.stdout:2/269: chown d2/db/d1c/f42 21152 1
2026-03-09T20:47:17.882 INFO:tasks.workunit.client.0.vm07.stdout:5/182: creat d5/df/d13/f38 x:0 0 0
2026-03-09T20:47:17.882 INFO:tasks.workunit.client.0.vm07.stdout:1/185: creat d3/d14/f33 x:0 0 0
2026-03-09T20:47:17.883 INFO:tasks.workunit.client.0.vm07.stdout:7/194: mkdir d3/da/db/d32/d3e 0
2026-03-09T20:47:17.883 INFO:tasks.workunit.client.1.vm10.stdout:7/147: creat db/d28/f31 x:0 0 0
2026-03-09T20:47:17.883 INFO:tasks.workunit.client.1.vm10.stdout:0/96: write d2/d9/da/fd [314411,89458] 0
2026-03-09T20:47:17.883 INFO:tasks.workunit.client.0.vm07.stdout:1/186: read - d3/d12/f32 zero size
2026-03-09T20:47:17.884 INFO:tasks.workunit.client.1.vm10.stdout:4/96: write d1/f1e [90973,128895] 0
2026-03-09T20:47:17.885 INFO:tasks.workunit.client.0.vm07.stdout:1/187: write d3/d12/f32 [529335,13542] 0
2026-03-09T20:47:17.886 INFO:tasks.workunit.client.1.vm10.stdout:7/148: write db/d21/d23/f29 [1143108,38592] 0
2026-03-09T20:47:17.888 INFO:tasks.workunit.client.0.vm07.stdout:8/159: dwrite d1/dc/f29 [0,4194304] 0
2026-03-09T20:47:17.892 INFO:tasks.workunit.client.1.vm10.stdout:0/97: dread d2/f5 [0,4194304] 0
2026-03-09T20:47:17.892 INFO:tasks.workunit.client.0.vm07.stdout:7/195: dread d3/da/db/f1e [0,4194304] 0
2026-03-09T20:47:17.893 INFO:tasks.workunit.client.1.vm10.stdout:5/102: dwrite d2/f5 [0,4194304] 0
2026-03-09T20:47:17.895 INFO:tasks.workunit.client.0.vm07.stdout:5/183: dread d5/d19/f20 [0,4194304] 0
2026-03-09T20:47:17.898 INFO:tasks.workunit.client.1.vm10.stdout:0/98: dread f1 [0,4194304] 0
2026-03-09T20:47:17.901 INFO:tasks.workunit.client.0.vm07.stdout:9/208: dwrite d4/d11/d23/f2f [0,4194304] 0
2026-03-09T20:47:17.904 INFO:tasks.workunit.client.1.vm10.stdout:0/99: dread d2/db/f13 [0,4194304] 0
2026-03-09T20:47:17.906 INFO:tasks.workunit.client.1.vm10.stdout:0/100: fdatasync d2/d9/f12 0
2026-03-09T20:47:17.907 INFO:tasks.workunit.client.1.vm10.stdout:0/101: write f1 [2670922,51441] 0
2026-03-09T20:47:17.907 INFO:tasks.workunit.client.1.vm10.stdout:0/102: stat d2/db/f13 0
2026-03-09T20:47:17.911 INFO:tasks.workunit.client.1.vm10.stdout:3/99: truncate f7 534181 0
2026-03-09T20:47:17.912 INFO:tasks.workunit.client.0.vm07.stdout:0/255: truncate d1/d2/dc/d17/f3c 4096610 0
2026-03-09T20:47:17.913 INFO:tasks.workunit.client.0.vm07.stdout:0/256: write d1/d2/dc/f12 [1102592,6909] 0
2026-03-09T20:47:17.914 INFO:tasks.workunit.client.0.vm07.stdout:0/257: write d1/f48 [1743298,1707] 0
2026-03-09T20:47:17.925 INFO:tasks.workunit.client.1.vm10.stdout:8/179: mkdir d0/d22/d2f/d3d 0
2026-03-09T20:47:17.933 INFO:tasks.workunit.client.1.vm10.stdout:9/166: rename c1 to d2/d3/c39 0
2026-03-09T20:47:17.934 INFO:tasks.workunit.client.0.vm07.stdout:2/270: symlink d2/db/d1c/l4e 0
2026-03-09T20:47:17.935 INFO:tasks.workunit.client.0.vm07.stdout:2/271: dread d2/ff [0,4194304] 0
2026-03-09T20:47:17.944 INFO:tasks.workunit.client.1.vm10.stdout:1/160: link d2/da/d25/l30 d2/da/l39 0
2026-03-09T20:47:17.944 INFO:tasks.workunit.client.0.vm07.stdout:6/239: write d8/d16/d22/d24/f3f [1727411,7364] 0
2026-03-09T20:47:17.944 INFO:tasks.workunit.client.1.vm10.stdout:4/97: creat d1/d8/d1c/f1f x:0 0 0
2026-03-09T20:47:17.946 INFO:tasks.workunit.client.1.vm10.stdout:5/103: symlink d2/d27/l2e 0
2026-03-09T20:47:17.947 INFO:tasks.workunit.client.0.vm07.stdout:8/160: mkdir d1/dc/d16/d31 0
2026-03-09T20:47:17.949 INFO:tasks.workunit.client.1.vm10.stdout:5/104: dread - d2/d27/f2a zero size
2026-03-09T20:47:17.950 INFO:tasks.workunit.client.1.vm10.stdout:1/161: sync
2026-03-09T20:47:17.952 INFO:tasks.workunit.client.0.vm07.stdout:9/209: chown d4/d8/dc/d15/c2b 267850685 1
2026-03-09T20:47:17.952 INFO:tasks.workunit.client.1.vm10.stdout:5/105: chown d2/d1b/f28 465148025 1
2026-03-09T20:47:17.954 INFO:tasks.workunit.client.1.vm10.stdout:3/100: creat dc/f1f x:0 0 0
2026-03-09T20:47:17.957 INFO:tasks.workunit.client.1.vm10.stdout:8/180: symlink d0/d22/d25/l3e 0
2026-03-09T20:47:17.957 INFO:tasks.workunit.client.1.vm10.stdout:9/167: mknod d2/d28/c3a 0
2026-03-09T20:47:17.958 INFO:tasks.workunit.client.1.vm10.stdout:1/162: sync
2026-03-09T20:47:17.960 INFO:tasks.workunit.client.1.vm10.stdout:1/163: fdatasync d2/f1a 0
2026-03-09T20:47:17.962 INFO:tasks.workunit.client.1.vm10.stdout:1/164: chown d2/f19 61 1
2026-03-09T20:47:17.962 INFO:tasks.workunit.client.1.vm10.stdout:6/97: creat d3/da/d11/f1d x:0 0 0
2026-03-09T20:47:17.963 INFO:tasks.workunit.client.1.vm10.stdout:4/98: mknod d1/d2/d3/c20 0
2026-03-09T20:47:17.964 INFO:tasks.workunit.client.0.vm07.stdout:6/240: write d8/d26/f3d [52258,19969] 0
2026-03-09T20:47:17.974 INFO:tasks.workunit.client.0.vm07.stdout:8/161: chown d1/dc/l28 97 1
2026-03-09T20:47:17.975 INFO:tasks.workunit.client.0.vm07.stdout:8/162: rename d1/dc/d16/d26 to d1/dc/d16/d26/d32 22
2026-03-09T20:47:17.984 INFO:tasks.workunit.client.1.vm10.stdout:7/149: rmdir db/d28 39
2026-03-09T20:47:17.984 INFO:tasks.workunit.client.0.vm07.stdout:3/218: truncate d1/d5/d9/d2f/d34/f40 3117509 0
2026-03-09T20:47:17.985 INFO:tasks.workunit.client.0.vm07.stdout:3/219: chown d1/d5/f25 49 1
2026-03-09T20:47:18.001 INFO:tasks.workunit.client.0.vm07.stdout:2/272: dwrite d2/db/d1c/f26 [0,4194304] 0
2026-03-09T20:47:18.003 INFO:tasks.workunit.client.0.vm07.stdout:4/187: dwrite d2/f5 [0,4194304] 0
2026-03-09T20:47:18.022 INFO:tasks.workunit.client.1.vm10.stdout:5/106: rename d2/d1b/f1e to d2/d1b/f2f 0
2026-03-09T20:47:18.023 INFO:tasks.workunit.client.0.vm07.stdout:5/184: truncate d5/df/f24 1401144 0
2026-03-09T20:47:18.023 INFO:tasks.workunit.client.0.vm07.stdout:5/185: chown d5/d19 63056544 1
2026-03-09T20:47:18.024 INFO:tasks.workunit.client.1.vm10.stdout:9/168: creat d2/d33/f3b x:0 0 0
2026-03-09T20:47:18.026 INFO:tasks.workunit.client.1.vm10.stdout:1/165: rmdir d2 39
2026-03-09T20:47:18.031 INFO:tasks.workunit.client.0.vm07.stdout:9/210: chown d4/d16/c4b 1022107 1
2026-03-09T20:47:18.031 INFO:tasks.workunit.client.0.vm07.stdout:1/188: creat d3/f34 x:0 0 0
2026-03-09T20:47:18.031 INFO:tasks.workunit.client.1.vm10.stdout:4/99: rename d1/d2/d3/l6 to d1/d8/d1c/l21 0
2026-03-09T20:47:18.031 INFO:tasks.workunit.client.1.vm10.stdout:0/103: link d2/db/l18 d2/db/l1c 0
2026-03-09T20:47:18.031 INFO:tasks.workunit.client.1.vm10.stdout:4/100: chown d1/d8 11051 1
2026-03-09T20:47:18.031 INFO:tasks.workunit.client.1.vm10.stdout:7/150: dread - db/d28/f31 zero size
2026-03-09T20:47:18.031 INFO:tasks.workunit.client.0.vm07.stdout:7/196: creat d3/f3f x:0 0 0
2026-03-09T20:47:18.036 INFO:tasks.workunit.client.1.vm10.stdout:2/123: link d5/c11 d5/d18/d1b/c21 0
2026-03-09T20:47:18.038 INFO:tasks.workunit.client.1.vm10.stdout:2/124: chown d5/f16 5762 1
2026-03-09T20:47:18.039 INFO:tasks.workunit.client.0.vm07.stdout:3/220: rename d1/d5/d10/f2e to d1/d5/d9/d2f/d34/f4b 0
2026-03-09T20:47:18.043 INFO:tasks.workunit.client.1.vm10.stdout:9/169: dwrite d2/d12/f2a [0,4194304] 0
2026-03-09T20:47:18.044 INFO:tasks.workunit.client.1.vm10.stdout:9/170: stat d2/d12/l22 0
2026-03-09T20:47:18.044 INFO:tasks.workunit.client.0.vm07.stdout:4/188: creat d2/d1f/d2d/f31 x:0 0 0
2026-03-09T20:47:18.044 INFO:tasks.workunit.client.0.vm07.stdout:5/186: mkdir d5/d33/d39 0
2026-03-09T20:47:18.050 INFO:tasks.workunit.client.1.vm10.stdout:3/101: dwrite dc/f10 [0,4194304] 0
2026-03-09T20:47:18.054 INFO:tasks.workunit.client.0.vm07.stdout:6/241: symlink d8/d16/d22/d33/l48 0
2026-03-09T20:47:18.056 INFO:tasks.workunit.client.0.vm07.stdout:6/242: chown d8/d16/d22/d24/d2b/l36 34234356 1
2026-03-09T20:47:18.063 INFO:tasks.workunit.client.0.vm07.stdout:1/189: dwrite d3/fc [0,4194304] 0
2026-03-09T20:47:18.066 INFO:tasks.workunit.client.0.vm07.stdout:4/189: dwrite d2/d1f/f2c [0,4194304] 0
2026-03-09T20:47:18.071 INFO:tasks.workunit.client.0.vm07.stdout:1/190: dread - d3/d23/f2c zero size
2026-03-09T20:47:18.081 INFO:tasks.workunit.client.1.vm10.stdout:0/104: symlink d2/d9/da/de/l1d 0
2026-03-09T20:47:18.081 INFO:tasks.workunit.client.0.vm07.stdout:3/221: read d1/d5/d9/fe [597582,69138] 0
2026-03-09T20:47:18.081 INFO:tasks.workunit.client.0.vm07.stdout:1/191: write d3/d14/f33 [924952,55277] 0
2026-03-09T20:47:18.081 INFO:tasks.workunit.client.0.vm07.stdout:3/222: write d1/d5/d9/f1b [2538596,111471] 0
2026-03-09T20:47:18.090 INFO:tasks.workunit.client.1.vm10.stdout:7/151: symlink db/d21/d23/l32 0
2026-03-09T20:47:18.095 INFO:tasks.workunit.client.0.vm07.stdout:9/211: symlink d4/d16/d29/l4d 0
2026-03-09T20:47:18.095 INFO:tasks.workunit.client.0.vm07.stdout:9/212: write d4/d16/f33 [3025489,54354] 0
2026-03-09T20:47:18.095
INFO:tasks.workunit.client.0.vm07.stdout:2/273: symlink d2/db/l4f 0 2026-03-09T20:47:18.095 INFO:tasks.workunit.client.0.vm07.stdout:2/274: fsync d2/d11/f1e 0 2026-03-09T20:47:18.096 INFO:tasks.workunit.client.1.vm10.stdout:2/125: truncate d5/f7 1179873 0 2026-03-09T20:47:18.098 INFO:tasks.workunit.client.1.vm10.stdout:9/171: creat d2/d33/f3c x:0 0 0 2026-03-09T20:47:18.099 INFO:tasks.workunit.client.1.vm10.stdout:0/105: mknod d2/d9/da/d11/c1e 0 2026-03-09T20:47:18.100 INFO:tasks.workunit.client.1.vm10.stdout:4/101: mknod d1/d8/d1b/c22 0 2026-03-09T20:47:18.104 INFO:tasks.workunit.client.0.vm07.stdout:0/258: getdents d1 0 2026-03-09T20:47:18.104 INFO:tasks.workunit.client.1.vm10.stdout:2/126: mkdir d5/d18/d1b/d22 0 2026-03-09T20:47:18.105 INFO:tasks.workunit.client.1.vm10.stdout:6/98: dwrite d3/da/f10 [0,4194304] 0 2026-03-09T20:47:18.105 INFO:tasks.workunit.client.0.vm07.stdout:8/163: creat d1/f33 x:0 0 0 2026-03-09T20:47:18.106 INFO:tasks.workunit.client.1.vm10.stdout:9/172: creat d2/d33/f3d x:0 0 0 2026-03-09T20:47:18.107 INFO:tasks.workunit.client.0.vm07.stdout:8/164: truncate d1/dc/d16/d26/f2d 806004 0 2026-03-09T20:47:18.107 INFO:tasks.workunit.client.1.vm10.stdout:3/102: mkdir dc/d14/d20 0 2026-03-09T20:47:18.107 INFO:tasks.workunit.client.1.vm10.stdout:4/102: creat d1/d8/d1c/f23 x:0 0 0 2026-03-09T20:47:18.108 INFO:tasks.workunit.client.1.vm10.stdout:9/173: read - d2/f1a zero size 2026-03-09T20:47:18.114 INFO:tasks.workunit.client.0.vm07.stdout:6/243: mknod d8/d16/d22/d3a/c49 0 2026-03-09T20:47:18.114 INFO:tasks.workunit.client.0.vm07.stdout:4/190: rmdir d2/df/d17 39 2026-03-09T20:47:18.117 INFO:tasks.workunit.client.1.vm10.stdout:6/99: dread f2 [0,4194304] 0 2026-03-09T20:47:18.119 INFO:tasks.workunit.client.1.vm10.stdout:7/152: dwrite db/f19 [0,4194304] 0 2026-03-09T20:47:18.129 INFO:tasks.workunit.client.0.vm07.stdout:1/192: rename d3/d12 to d3/d14/d35 0 2026-03-09T20:47:18.129 INFO:tasks.workunit.client.1.vm10.stdout:6/100: truncate d3/fe 276833 0 
2026-03-09T20:47:18.129 INFO:tasks.workunit.client.1.vm10.stdout:3/103: write dc/f11 [959377,119116] 0 2026-03-09T20:47:18.130 INFO:tasks.workunit.client.0.vm07.stdout:7/197: creat d3/da/db/d32/d3e/f40 x:0 0 0 2026-03-09T20:47:18.131 INFO:tasks.workunit.client.0.vm07.stdout:3/223: sync 2026-03-09T20:47:18.131 INFO:tasks.workunit.client.0.vm07.stdout:2/275: sync 2026-03-09T20:47:18.134 INFO:tasks.workunit.client.1.vm10.stdout:4/103: creat d1/d8/d1b/f24 x:0 0 0 2026-03-09T20:47:18.137 INFO:tasks.workunit.client.0.vm07.stdout:6/244: dread d8/d16/d22/d24/d2b/f2f [0,4194304] 0 2026-03-09T20:47:18.141 INFO:tasks.workunit.client.1.vm10.stdout:1/166: getdents d2/da 0 2026-03-09T20:47:18.142 INFO:tasks.workunit.client.1.vm10.stdout:1/167: truncate d2/da/f11 611539 0 2026-03-09T20:47:18.146 INFO:tasks.workunit.client.1.vm10.stdout:7/153: symlink db/d28/l33 0 2026-03-09T20:47:18.146 INFO:tasks.workunit.client.1.vm10.stdout:6/101: creat d3/d12/f1e x:0 0 0 2026-03-09T20:47:18.148 INFO:tasks.workunit.client.1.vm10.stdout:1/168: symlink d2/l3a 0 2026-03-09T20:47:18.156 INFO:tasks.workunit.client.0.vm07.stdout:4/191: unlink d2/l14 0 2026-03-09T20:47:18.156 INFO:tasks.workunit.client.0.vm07.stdout:1/193: symlink d3/d14/d35/l36 0 2026-03-09T20:47:18.157 INFO:tasks.workunit.client.0.vm07.stdout:7/198: symlink d3/da/db/d32/l41 0 2026-03-09T20:47:18.161 INFO:tasks.workunit.client.0.vm07.stdout:4/192: dread d2/f3 [0,4194304] 0 2026-03-09T20:47:18.163 INFO:tasks.workunit.client.0.vm07.stdout:2/276: chown d2/f33 1396720 1 2026-03-09T20:47:18.168 INFO:tasks.workunit.client.0.vm07.stdout:3/224: rename d1/c38 to d1/d5/d9/d2f/d34/c4c 0 2026-03-09T20:47:18.169 INFO:tasks.workunit.client.1.vm10.stdout:1/169: symlink d2/da/d25/l3b 0 2026-03-09T20:47:18.171 INFO:tasks.workunit.client.0.vm07.stdout:9/213: mkdir d4/d8/dc/d4e 0 2026-03-09T20:47:18.172 INFO:tasks.workunit.client.0.vm07.stdout:5/187: read d5/df/f24 [1077709,98893] 0 2026-03-09T20:47:18.172 
INFO:tasks.workunit.client.0.vm07.stdout:8/165: creat d1/dc/d14/d2f/f34 x:0 0 0 2026-03-09T20:47:18.175 INFO:tasks.workunit.client.1.vm10.stdout:9/174: rename d2/cf to d2/c3e 0 2026-03-09T20:47:18.176 INFO:tasks.workunit.client.1.vm10.stdout:9/175: chown d2/d12/f2a 15897 1 2026-03-09T20:47:18.178 INFO:tasks.workunit.client.1.vm10.stdout:7/154: link db/f16 db/d21/d23/f34 0 2026-03-09T20:47:18.179 INFO:tasks.workunit.client.1.vm10.stdout:6/102: rename d3/f9 to d3/f1f 0 2026-03-09T20:47:18.180 INFO:tasks.workunit.client.0.vm07.stdout:7/199: symlink d3/da/db/d14/d1f/d2b/l42 0 2026-03-09T20:47:18.180 INFO:tasks.workunit.client.0.vm07.stdout:1/194: write d3/d14/d35/f20 [815837,72223] 0 2026-03-09T20:47:18.180 INFO:tasks.workunit.client.1.vm10.stdout:4/104: getdents d1 0 2026-03-09T20:47:18.180 INFO:tasks.workunit.client.1.vm10.stdout:7/155: fsync db/d21/d23/f1a 0 2026-03-09T20:47:18.180 INFO:tasks.workunit.client.1.vm10.stdout:4/105: fdatasync d1/d2/f7 0 2026-03-09T20:47:18.184 INFO:tasks.workunit.client.1.vm10.stdout:9/176: dwrite d2/f2b [0,4194304] 0 2026-03-09T20:47:18.185 INFO:tasks.workunit.client.0.vm07.stdout:2/277: rename d2/d11/f1e to d2/d11/f50 0 2026-03-09T20:47:18.197 INFO:tasks.workunit.client.1.vm10.stdout:9/177: rename d2/d12/f1e to d2/d33/f3f 0 2026-03-09T20:47:18.198 INFO:tasks.workunit.client.1.vm10.stdout:7/156: link db/f16 db/d21/d23/d12/f35 0 2026-03-09T20:47:18.200 INFO:tasks.workunit.client.0.vm07.stdout:1/195: creat d3/d23/f37 x:0 0 0 2026-03-09T20:47:18.200 INFO:tasks.workunit.client.0.vm07.stdout:1/196: stat d3/f8 0 2026-03-09T20:47:18.202 INFO:tasks.workunit.client.1.vm10.stdout:1/170: sync 2026-03-09T20:47:18.202 INFO:tasks.workunit.client.1.vm10.stdout:4/106: sync 2026-03-09T20:47:18.202 INFO:tasks.workunit.client.1.vm10.stdout:1/171: chown d2/da/f34 213 1 2026-03-09T20:47:18.203 INFO:tasks.workunit.client.0.vm07.stdout:2/278: dwrite d2/f3e [0,4194304] 0 2026-03-09T20:47:18.208 INFO:tasks.workunit.client.1.vm10.stdout:4/107: creat d1/d8/f25 
x:0 0 0 2026-03-09T20:47:18.209 INFO:tasks.workunit.client.0.vm07.stdout:6/245: rename d8/f20 to d8/d16/d22/d24/d2b/f4a 0 2026-03-09T20:47:18.211 INFO:tasks.workunit.client.1.vm10.stdout:1/172: creat d2/f3c x:0 0 0 2026-03-09T20:47:18.211 INFO:tasks.workunit.client.1.vm10.stdout:4/108: chown d1/d8/cd 124269 1 2026-03-09T20:47:18.211 INFO:tasks.workunit.client.1.vm10.stdout:4/109: write d1/d2/f1a [673443,10729] 0 2026-03-09T20:47:18.211 INFO:tasks.workunit.client.1.vm10.stdout:4/110: write d1/d2/f7 [3140316,82451] 0 2026-03-09T20:47:18.211 INFO:tasks.workunit.client.1.vm10.stdout:9/178: dwrite d2/d33/f3b [0,4194304] 0 2026-03-09T20:47:18.212 INFO:tasks.workunit.client.1.vm10.stdout:9/179: dread - d2/d28/f32 zero size 2026-03-09T20:47:18.219 INFO:tasks.workunit.client.1.vm10.stdout:4/111: write d1/f1e [204370,73844] 0 2026-03-09T20:47:18.225 INFO:tasks.workunit.client.0.vm07.stdout:5/188: symlink d5/df/d13/l3a 0 2026-03-09T20:47:18.225 INFO:tasks.workunit.client.0.vm07.stdout:7/200: mkdir d3/da/db/d14/d43 0 2026-03-09T20:47:18.226 INFO:tasks.workunit.client.0.vm07.stdout:2/279: creat d2/d11/f51 x:0 0 0 2026-03-09T20:47:18.240 INFO:tasks.workunit.client.0.vm07.stdout:7/201: dwrite d3/da/db/d14/f1a [0,4194304] 0 2026-03-09T20:47:18.248 INFO:tasks.workunit.client.1.vm10.stdout:9/180: mknod d2/d33/d37/c40 0 2026-03-09T20:47:18.248 INFO:tasks.workunit.client.1.vm10.stdout:4/112: creat d1/f26 x:0 0 0 2026-03-09T20:47:18.249 INFO:tasks.workunit.client.0.vm07.stdout:2/280: dwrite d2/f40 [0,4194304] 0 2026-03-09T20:47:18.249 INFO:tasks.workunit.client.0.vm07.stdout:9/214: creat d4/d11/f4f x:0 0 0 2026-03-09T20:47:18.249 INFO:tasks.workunit.client.0.vm07.stdout:3/225: creat d1/d5/d9/d11/f4d x:0 0 0 2026-03-09T20:47:18.252 INFO:tasks.workunit.client.0.vm07.stdout:5/189: mkdir d5/d33/d3b 0 2026-03-09T20:47:18.272 INFO:tasks.workunit.client.0.vm07.stdout:3/226: dwrite d1/f36 [0,4194304] 0 2026-03-09T20:47:18.273 INFO:tasks.workunit.client.1.vm10.stdout:1/173: link d2/da/d25/f27 
d2/da/f3d 0 2026-03-09T20:47:18.274 INFO:tasks.workunit.client.1.vm10.stdout:8/181: truncate d0/d22/f27 744549 0 2026-03-09T20:47:18.274 INFO:tasks.workunit.client.0.vm07.stdout:3/227: write d1/d5/d9/f1b [1706110,47261] 0 2026-03-09T20:47:18.274 INFO:tasks.workunit.client.0.vm07.stdout:0/259: chown d1/d2/dc/d17/f3c 9932 1 2026-03-09T20:47:18.277 INFO:tasks.workunit.client.1.vm10.stdout:5/107: write d2/fb [4112457,126621] 0 2026-03-09T20:47:18.278 INFO:tasks.workunit.client.1.vm10.stdout:1/174: read - d2/f2a zero size 2026-03-09T20:47:18.278 INFO:tasks.workunit.client.1.vm10.stdout:5/108: chown d2/d27/f2d 51128842 1 2026-03-09T20:47:18.314 INFO:tasks.workunit.client.0.vm07.stdout:3/228: dread d1/d5/d10/f22 [0,4194304] 0 2026-03-09T20:47:18.316 INFO:tasks.workunit.client.1.vm10.stdout:0/106: rmdir d2/d9/da 39 2026-03-09T20:47:18.331 INFO:tasks.workunit.client.0.vm07.stdout:7/202: mknod d3/da/db/d14/d1f/d2b/c44 0 2026-03-09T20:47:18.333 INFO:tasks.workunit.client.0.vm07.stdout:1/197: rmdir d3/d14 39 2026-03-09T20:47:18.333 INFO:tasks.workunit.client.1.vm10.stdout:8/182: creat d0/d22/d2c/f3f x:0 0 0 2026-03-09T20:47:18.335 INFO:tasks.workunit.client.0.vm07.stdout:6/246: mkdir d8/d16/d4b 0 2026-03-09T20:47:18.336 INFO:tasks.workunit.client.0.vm07.stdout:6/247: readlink d8/d16/d22/d33/l48 0 2026-03-09T20:47:18.336 INFO:tasks.workunit.client.0.vm07.stdout:6/248: stat d8/d16/d22/d33/l48 0 2026-03-09T20:47:18.337 INFO:tasks.workunit.client.1.vm10.stdout:1/175: mkdir d2/da/d25/d3e 0 2026-03-09T20:47:18.337 INFO:tasks.workunit.client.1.vm10.stdout:8/183: readlink d0/l24 0 2026-03-09T20:47:18.337 INFO:tasks.workunit.client.0.vm07.stdout:6/249: write d8/d16/f23 [2269448,73143] 0 2026-03-09T20:47:18.338 INFO:tasks.workunit.client.1.vm10.stdout:2/127: dwrite d5/fe [0,4194304] 0 2026-03-09T20:47:18.338 INFO:tasks.workunit.client.1.vm10.stdout:1/176: write d2/da/f22 [2818264,66930] 0 2026-03-09T20:47:18.339 INFO:tasks.workunit.client.1.vm10.stdout:8/184: write d0/d22/d25/f2d 
[88427,51251] 0 2026-03-09T20:47:18.340 INFO:tasks.workunit.client.1.vm10.stdout:1/177: truncate d2/da/f3d 1333328 0 2026-03-09T20:47:18.343 INFO:tasks.workunit.client.0.vm07.stdout:5/190: dwrite d5/df/d13/f17 [0,4194304] 0 2026-03-09T20:47:18.344 INFO:tasks.workunit.client.1.vm10.stdout:1/178: dread - d2/da/d25/f2e zero size 2026-03-09T20:47:18.345 INFO:tasks.workunit.client.0.vm07.stdout:0/260: creat d1/f57 x:0 0 0 2026-03-09T20:47:18.346 INFO:tasks.workunit.client.1.vm10.stdout:2/128: creat d5/d18/d1b/f23 x:0 0 0 2026-03-09T20:47:18.346 INFO:tasks.workunit.client.0.vm07.stdout:1/198: chown d3/d14/c1f 402 1 2026-03-09T20:47:18.347 INFO:tasks.workunit.client.1.vm10.stdout:2/129: fsync d5/f1d 0 2026-03-09T20:47:18.347 INFO:tasks.workunit.client.0.vm07.stdout:9/215: mknod d4/d8/dc/c50 0 2026-03-09T20:47:18.347 INFO:tasks.workunit.client.1.vm10.stdout:1/179: read - d2/da/f34 zero size 2026-03-09T20:47:18.348 INFO:tasks.workunit.client.1.vm10.stdout:2/130: write d5/d18/d1b/f23 [296524,108051] 0 2026-03-09T20:47:18.350 INFO:tasks.workunit.client.1.vm10.stdout:5/109: link d2/fb d2/d1b/f30 0 2026-03-09T20:47:18.351 INFO:tasks.workunit.client.0.vm07.stdout:6/250: rename d8/d16/d22/d24/d2b/l36 to d8/d16/d22/d24/d2b/l4c 0 2026-03-09T20:47:18.352 INFO:tasks.workunit.client.0.vm07.stdout:7/203: sync 2026-03-09T20:47:18.356 INFO:tasks.workunit.client.1.vm10.stdout:1/180: mknod d2/da/c3f 0 2026-03-09T20:47:18.356 INFO:tasks.workunit.client.1.vm10.stdout:9/181: creat d2/f41 x:0 0 0 2026-03-09T20:47:18.360 INFO:tasks.workunit.client.1.vm10.stdout:1/181: fsync d2/f21 0 2026-03-09T20:47:18.361 INFO:tasks.workunit.client.1.vm10.stdout:9/182: truncate d2/f41 342789 0 2026-03-09T20:47:18.361 INFO:tasks.workunit.client.1.vm10.stdout:1/182: write d2/da/f20 [383936,121371] 0 2026-03-09T20:47:18.363 INFO:tasks.workunit.client.0.vm07.stdout:3/229: rename d1/c18 to d1/d5/d10/d43/c4e 0 2026-03-09T20:47:18.363 INFO:tasks.workunit.client.0.vm07.stdout:3/230: stat d1/d5/d9/f1b 0 
2026-03-09T20:47:18.364 INFO:tasks.workunit.client.0.vm07.stdout:3/231: write d1/d5/d9/d11/f26 [4655275,128326] 0 2026-03-09T20:47:18.365 INFO:tasks.workunit.client.0.vm07.stdout:3/232: chown d1/f36 8672 1 2026-03-09T20:47:18.365 INFO:tasks.workunit.client.1.vm10.stdout:9/183: dread - d2/d33/f3d zero size 2026-03-09T20:47:18.366 INFO:tasks.workunit.client.1.vm10.stdout:9/184: fsync d2/d28/f29 0 2026-03-09T20:47:18.366 INFO:tasks.workunit.client.1.vm10.stdout:8/185: dread d0/d22/f29 [0,4194304] 0 2026-03-09T20:47:18.369 INFO:tasks.workunit.client.0.vm07.stdout:1/199: creat d3/d14/d35/f38 x:0 0 0 2026-03-09T20:47:18.370 INFO:tasks.workunit.client.0.vm07.stdout:1/200: chown d3/d14/d35/l26 763 1 2026-03-09T20:47:18.370 INFO:tasks.workunit.client.0.vm07.stdout:1/201: chown d3/f8 5211746 1 2026-03-09T20:47:18.374 INFO:tasks.workunit.client.0.vm07.stdout:9/216: creat d4/d16/d29/d24/d37/f51 x:0 0 0 2026-03-09T20:47:18.377 INFO:tasks.workunit.client.0.vm07.stdout:2/281: rmdir d2/db/d1c/d47 0 2026-03-09T20:47:18.380 INFO:tasks.workunit.client.0.vm07.stdout:5/191: mknod d5/d33/d39/c3c 0 2026-03-09T20:47:18.381 INFO:tasks.workunit.client.1.vm10.stdout:9/185: dwrite d2/f41 [0,4194304] 0 2026-03-09T20:47:18.383 INFO:tasks.workunit.client.1.vm10.stdout:1/183: rmdir d2/da/d25 39 2026-03-09T20:47:18.384 INFO:tasks.workunit.client.0.vm07.stdout:5/192: dwrite d5/df/d13/f38 [0,4194304] 0 2026-03-09T20:47:18.388 INFO:tasks.workunit.client.1.vm10.stdout:1/184: read d2/da/f22 [2115575,120331] 0 2026-03-09T20:47:18.389 INFO:tasks.workunit.client.1.vm10.stdout:8/186: mkdir d0/d22/d25/d40 0 2026-03-09T20:47:18.389 INFO:tasks.workunit.client.1.vm10.stdout:5/110: dwrite d2/d1b/f30 [4194304,4194304] 0 2026-03-09T20:47:18.391 INFO:tasks.workunit.client.1.vm10.stdout:1/185: truncate d2/da/f11 1085113 0 2026-03-09T20:47:18.398 INFO:tasks.workunit.client.0.vm07.stdout:9/217: creat d4/d11/d23/f52 x:0 0 0 2026-03-09T20:47:18.401 INFO:tasks.workunit.client.0.vm07.stdout:2/282: stat d2/l18 0 
2026-03-09T20:47:18.404 INFO:tasks.workunit.client.1.vm10.stdout:8/187: mkdir d0/d22/d25/d2e/d41 0 2026-03-09T20:47:18.407 INFO:tasks.workunit.client.1.vm10.stdout:1/186: dread d2/da/fe [4194304,4194304] 0 2026-03-09T20:47:18.409 INFO:tasks.workunit.client.1.vm10.stdout:1/187: read d2/f8 [4305596,49120] 0 2026-03-09T20:47:18.409 INFO:tasks.workunit.client.0.vm07.stdout:2/283: creat d2/d11/f52 x:0 0 0 2026-03-09T20:47:18.416 INFO:tasks.workunit.client.0.vm07.stdout:7/204: link d3/f3f d3/da/f45 0 2026-03-09T20:47:18.417 INFO:tasks.workunit.client.0.vm07.stdout:7/205: write d3/da/db/d14/d1f/d2b/f2c [151659,102431] 0 2026-03-09T20:47:18.420 INFO:tasks.workunit.client.0.vm07.stdout:9/218: creat d4/d8/dc/d4e/f53 x:0 0 0 2026-03-09T20:47:18.422 INFO:tasks.workunit.client.1.vm10.stdout:8/188: dwrite d0/f14 [8388608,4194304] 0 2026-03-09T20:47:18.422 INFO:tasks.workunit.client.1.vm10.stdout:1/188: creat d2/da/d25/f40 x:0 0 0 2026-03-09T20:47:18.422 INFO:tasks.workunit.client.0.vm07.stdout:5/193: creat d5/df/d13/f3d x:0 0 0 2026-03-09T20:47:18.425 INFO:tasks.workunit.client.0.vm07.stdout:9/219: mkdir d4/d8/dc/d4e/d54 0 2026-03-09T20:47:18.427 INFO:tasks.workunit.client.0.vm07.stdout:2/284: symlink d2/d46/l53 0 2026-03-09T20:47:18.434 INFO:tasks.workunit.client.1.vm10.stdout:8/189: chown d0/f14 94524 1 2026-03-09T20:47:18.434 INFO:tasks.workunit.client.0.vm07.stdout:2/285: truncate d2/f17 4579958 0 2026-03-09T20:47:18.434 INFO:tasks.workunit.client.0.vm07.stdout:2/286: dread d2/db/d28/f32 [0,4194304] 0 2026-03-09T20:47:18.434 INFO:tasks.workunit.client.0.vm07.stdout:2/287: rename d2 to d2/d11/d54 22 2026-03-09T20:47:18.435 INFO:tasks.workunit.client.1.vm10.stdout:1/189: read d2/f21 [626580,70439] 0 2026-03-09T20:47:18.435 INFO:tasks.workunit.client.0.vm07.stdout:5/194: mkdir d5/df/d13/d3e 0 2026-03-09T20:47:18.435 INFO:tasks.workunit.client.0.vm07.stdout:5/195: chown d5/d19 2681092 1 2026-03-09T20:47:18.437 INFO:tasks.workunit.client.0.vm07.stdout:5/196: read d5/df/d13/f1f 
[2093500,42442] 0 2026-03-09T20:47:18.438 INFO:tasks.workunit.client.0.vm07.stdout:9/220: mknod d4/d8/dc/c55 0 2026-03-09T20:47:18.444 INFO:tasks.workunit.client.0.vm07.stdout:9/221: dwrite d4/d11/d2a/f39 [0,4194304] 0 2026-03-09T20:47:18.448 INFO:tasks.workunit.client.0.vm07.stdout:5/197: mknod d5/df/d13/c3f 0 2026-03-09T20:47:18.451 INFO:tasks.workunit.client.0.vm07.stdout:5/198: creat d5/df/d13/d30/f40 x:0 0 0 2026-03-09T20:47:18.452 INFO:tasks.workunit.client.0.vm07.stdout:9/222: symlink d4/d8/l56 0 2026-03-09T20:47:18.453 INFO:tasks.workunit.client.0.vm07.stdout:9/223: write d4/d8/dc/d15/f30 [678466,21383] 0 2026-03-09T20:47:18.453 INFO:tasks.workunit.client.0.vm07.stdout:9/224: chown d4/d11/d23/d32 84761 1 2026-03-09T20:47:18.457 INFO:tasks.workunit.client.0.vm07.stdout:5/199: creat d5/df/d13/f41 x:0 0 0 2026-03-09T20:47:18.458 INFO:tasks.workunit.client.0.vm07.stdout:5/200: write d5/df/d13/f3d [847860,50759] 0 2026-03-09T20:47:18.458 INFO:tasks.workunit.client.0.vm07.stdout:5/201: stat d5/df/l29 0 2026-03-09T20:47:18.461 INFO:tasks.workunit.client.1.vm10.stdout:5/111: dread d2/f16 [0,4194304] 0 2026-03-09T20:47:18.464 INFO:tasks.workunit.client.1.vm10.stdout:5/112: read - d2/f23 zero size 2026-03-09T20:47:18.476 INFO:tasks.workunit.client.1.vm10.stdout:8/190: dread d0/f10 [0,4194304] 0 2026-03-09T20:47:18.567 INFO:tasks.workunit.client.1.vm10.stdout:7/157: rename db/d21/d23/d12 to db/d28/d2b/d36 0 2026-03-09T20:47:18.567 INFO:tasks.workunit.client.0.vm07.stdout:4/193: dwrite d2/f9 [0,4194304] 0 2026-03-09T20:47:18.569 INFO:tasks.workunit.client.1.vm10.stdout:6/103: dwrite f2 [0,4194304] 0 2026-03-09T20:47:18.569 INFO:tasks.workunit.client.0.vm07.stdout:4/194: write d2/f2b [893620,111604] 0 2026-03-09T20:47:18.571 INFO:tasks.workunit.client.0.vm07.stdout:3/233: getdents d1/d5/d9/d11 0 2026-03-09T20:47:18.575 INFO:tasks.workunit.client.0.vm07.stdout:3/234: fdatasync d1/d5/d9/d11/f2a 0 2026-03-09T20:47:18.575 INFO:tasks.workunit.client.0.vm07.stdout:8/166: 
dwrite d1/dc/fd [0,4194304] 0 2026-03-09T20:47:18.595 INFO:tasks.workunit.client.1.vm10.stdout:0/107: rename d2/db/f14 to d2/d9/da/d11/f1f 0 2026-03-09T20:47:18.596 INFO:tasks.workunit.client.1.vm10.stdout:0/108: chown d2/d9/da/de/cf 11253750 1 2026-03-09T20:47:18.599 INFO:tasks.workunit.client.1.vm10.stdout:6/104: dread d3/f1f [0,4194304] 0 2026-03-09T20:47:18.599 INFO:tasks.workunit.client.1.vm10.stdout:6/105: fdatasync d3/f7 0 2026-03-09T20:47:18.600 INFO:tasks.workunit.client.1.vm10.stdout:6/106: write d3/fe [483895,16768] 0 2026-03-09T20:47:18.602 INFO:tasks.workunit.client.0.vm07.stdout:8/167: dread d1/dc/d14/f30 [0,4194304] 0 2026-03-09T20:47:18.605 INFO:tasks.workunit.client.1.vm10.stdout:6/107: dread d3/fc [0,4194304] 0 2026-03-09T20:47:18.605 INFO:tasks.workunit.client.1.vm10.stdout:7/158: readlink db/d28/d2b/d36/l17 0 2026-03-09T20:47:18.606 INFO:tasks.workunit.client.1.vm10.stdout:7/159: dread - db/d28/f31 zero size 2026-03-09T20:47:18.608 INFO:tasks.workunit.client.0.vm07.stdout:8/168: mknod d1/dc/d14/c35 0 2026-03-09T20:47:18.621 INFO:tasks.workunit.client.1.vm10.stdout:5/113: rename d2/d1b/f30 to d2/f31 0 2026-03-09T20:47:18.622 INFO:tasks.workunit.client.1.vm10.stdout:5/114: fsync d2/f23 0 2026-03-09T20:47:18.622 INFO:tasks.workunit.client.1.vm10.stdout:3/104: write f7 [510766,40683] 0 2026-03-09T20:47:18.622 INFO:tasks.workunit.client.1.vm10.stdout:4/113: stat d1/d8/cd 0 2026-03-09T20:47:18.622 INFO:tasks.workunit.client.0.vm07.stdout:4/195: symlink d2/df/l32 0 2026-03-09T20:47:18.622 INFO:tasks.workunit.client.0.vm07.stdout:4/196: write d2/f28 [903843,125719] 0 2026-03-09T20:47:18.622 INFO:tasks.workunit.client.0.vm07.stdout:3/235: mknod d1/d5/c4f 0 2026-03-09T20:47:18.622 INFO:tasks.workunit.client.0.vm07.stdout:6/251: getdents d8/d16 0 2026-03-09T20:47:18.626 INFO:tasks.workunit.client.0.vm07.stdout:3/236: rename d1/d5/l3a to d1/d5/d9/d11/l50 0 2026-03-09T20:47:18.628 INFO:tasks.workunit.client.0.vm07.stdout:5/202: fdatasync d5/df/d13/f17 0 
2026-03-09T20:47:18.631 INFO:tasks.workunit.client.0.vm07.stdout:3/237: dwrite d1/d5/d9/f1b [0,4194304] 0 2026-03-09T20:47:18.646 INFO:tasks.workunit.client.0.vm07.stdout:1/202: dread d3/d14/d35/f32 [0,4194304] 0 2026-03-09T20:47:18.648 INFO:tasks.workunit.client.1.vm10.stdout:8/191: fdatasync d0/d22/d25/f37 0 2026-03-09T20:47:18.649 INFO:tasks.workunit.client.1.vm10.stdout:8/192: chown d0/f19 594300549 1 2026-03-09T20:47:18.663 INFO:tasks.workunit.client.1.vm10.stdout:5/115: symlink d2/d1b/l32 0 2026-03-09T20:47:18.665 INFO:tasks.workunit.client.1.vm10.stdout:5/116: dread d2/f11 [4194304,4194304] 0 2026-03-09T20:47:18.665 INFO:tasks.workunit.client.1.vm10.stdout:5/117: rename d2 to d2/d1b/d33 22 2026-03-09T20:47:18.668 INFO:tasks.workunit.client.0.vm07.stdout:4/197: truncate d2/f7 35927 0 2026-03-09T20:47:18.672 INFO:tasks.workunit.client.1.vm10.stdout:4/114: symlink d1/d8/d1c/l27 0 2026-03-09T20:47:18.673 INFO:tasks.workunit.client.0.vm07.stdout:0/261: write d1/d2/d33/d35/f46 [421598,55373] 0 2026-03-09T20:47:18.673 INFO:tasks.workunit.client.1.vm10.stdout:9/186: rmdir d2 39 2026-03-09T20:47:18.675 INFO:tasks.workunit.client.1.vm10.stdout:8/193: dread d0/f17 [0,4194304] 0 2026-03-09T20:47:18.677 INFO:tasks.workunit.client.0.vm07.stdout:3/238: mkdir d1/d5/d9/d2f/d3d/d51 0 2026-03-09T20:47:18.678 INFO:tasks.workunit.client.0.vm07.stdout:9/225: getdents d4/d16/d29/d24/d37 0 2026-03-09T20:47:18.678 INFO:tasks.workunit.client.0.vm07.stdout:9/226: chown d4/d8/dc/c55 114234 1 2026-03-09T20:47:18.679 INFO:tasks.workunit.client.0.vm07.stdout:9/227: read d4/d11/f13 [3618591,109639] 0 2026-03-09T20:47:18.682 INFO:tasks.workunit.client.1.vm10.stdout:7/160: dread db/d21/d23/f1a [0,4194304] 0 2026-03-09T20:47:18.682 INFO:tasks.workunit.client.1.vm10.stdout:7/161: stat l4 0 2026-03-09T20:47:18.682 INFO:tasks.workunit.client.1.vm10.stdout:2/131: write d5/fb [1226836,64616] 0 2026-03-09T20:47:18.683 INFO:tasks.workunit.client.1.vm10.stdout:5/118: creat d2/d27/f34 x:0 0 0 
2026-03-09T20:47:18.684 INFO:tasks.workunit.client.1.vm10.stdout:2/132: write d5/fa [4862511,27302] 0 2026-03-09T20:47:18.685 INFO:tasks.workunit.client.0.vm07.stdout:6/252: unlink d8/f32 0 2026-03-09T20:47:18.685 INFO:tasks.workunit.client.1.vm10.stdout:7/162: fsync db/d21/d23/f14 0 2026-03-09T20:47:18.689 INFO:tasks.workunit.client.0.vm07.stdout:0/262: read - d1/f3b zero size 2026-03-09T20:47:18.690 INFO:tasks.workunit.client.1.vm10.stdout:3/105: mkdir dc/d14/d20/d21 0 2026-03-09T20:47:18.690 INFO:tasks.workunit.client.1.vm10.stdout:9/187: chown d2/f2b 1045682399 1 2026-03-09T20:47:18.690 INFO:tasks.workunit.client.1.vm10.stdout:8/194: mknod d0/d22/d2f/d38/c42 0 2026-03-09T20:47:18.691 INFO:tasks.workunit.client.0.vm07.stdout:3/239: rmdir d1/d5/d9/d2f 39 2026-03-09T20:47:18.691 INFO:tasks.workunit.client.1.vm10.stdout:7/163: dread db/d1f/f2a [0,4194304] 0 2026-03-09T20:47:18.694 INFO:tasks.workunit.client.0.vm07.stdout:3/240: dread d1/f19 [0,4194304] 0 2026-03-09T20:47:18.694 INFO:tasks.workunit.client.1.vm10.stdout:5/119: dwrite d2/d27/f2a [0,4194304] 0 2026-03-09T20:47:18.696 INFO:tasks.workunit.client.1.vm10.stdout:7/164: chown db/d21/d23/ff 523317 1 2026-03-09T20:47:18.699 INFO:tasks.workunit.client.1.vm10.stdout:2/133: dwrite d5/fe [0,4194304] 0 2026-03-09T20:47:18.700 INFO:tasks.workunit.client.0.vm07.stdout:7/206: rmdir d3 39 2026-03-09T20:47:18.701 INFO:tasks.workunit.client.1.vm10.stdout:7/165: chown db/d21/d23/f22 42577 1 2026-03-09T20:47:18.701 INFO:tasks.workunit.client.0.vm07.stdout:8/169: getdents d1/dc/d14/d2f 0 2026-03-09T20:47:18.707 INFO:tasks.workunit.client.1.vm10.stdout:7/166: dread db/d1f/f2a [0,4194304] 0 2026-03-09T20:47:18.708 INFO:tasks.workunit.client.1.vm10.stdout:5/120: dread d2/d1b/f28 [0,4194304] 0 2026-03-09T20:47:18.710 INFO:tasks.workunit.client.1.vm10.stdout:2/134: dwrite d5/fd [0,4194304] 0 2026-03-09T20:47:18.710 INFO:tasks.workunit.client.1.vm10.stdout:2/135: chown d5/f1d 46966 1 2026-03-09T20:47:18.711 
INFO:tasks.workunit.client.0.vm07.stdout:2/288: dwrite d2/db/f1b [0,4194304] 0 2026-03-09T20:47:18.711 INFO:tasks.workunit.client.1.vm10.stdout:2/136: write d5/fb [3590054,34277] 0 2026-03-09T20:47:18.715 INFO:tasks.workunit.client.1.vm10.stdout:3/106: dread - dc/dd/f13 zero size 2026-03-09T20:47:18.719 INFO:tasks.workunit.client.1.vm10.stdout:6/108: getdents d3/da/d11 0 2026-03-09T20:47:18.721 INFO:tasks.workunit.client.1.vm10.stdout:1/190: dwrite d2/da/f35 [0,4194304] 0 2026-03-09T20:47:18.728 INFO:tasks.workunit.client.1.vm10.stdout:9/188: creat d2/d3/de/f42 x:0 0 0 2026-03-09T20:47:18.728 INFO:tasks.workunit.client.1.vm10.stdout:8/195: creat d0/d22/d2f/d38/f43 x:0 0 0 2026-03-09T20:47:18.728 INFO:tasks.workunit.client.1.vm10.stdout:8/196: chown d0/d22/d2f/d38/c42 9379 1 2026-03-09T20:47:18.729 INFO:tasks.workunit.client.0.vm07.stdout:8/170: creat d1/dc/d16/d26/f36 x:0 0 0 2026-03-09T20:47:18.733 INFO:tasks.workunit.client.1.vm10.stdout:2/137: dread d5/f16 [0,4194304] 0 2026-03-09T20:47:18.737 INFO:tasks.workunit.client.1.vm10.stdout:5/121: creat d2/f35 x:0 0 0 2026-03-09T20:47:18.742 INFO:tasks.workunit.client.0.vm07.stdout:4/198: link d2/fa d2/f33 0 2026-03-09T20:47:18.743 INFO:tasks.workunit.client.0.vm07.stdout:0/263: symlink d1/l58 0 2026-03-09T20:47:18.746 INFO:tasks.workunit.client.1.vm10.stdout:7/167: creat db/d1f/f37 x:0 0 0 2026-03-09T20:47:18.746 INFO:tasks.workunit.client.1.vm10.stdout:3/107: mkdir dc/d14/d22 0 2026-03-09T20:47:18.748 INFO:tasks.workunit.client.1.vm10.stdout:5/122: dread - d2/f23 zero size 2026-03-09T20:47:18.748 INFO:tasks.workunit.client.1.vm10.stdout:0/109: write d2/f5 [329662,48487] 0 2026-03-09T20:47:18.748 INFO:tasks.workunit.client.0.vm07.stdout:7/207: rename d3/da/db/d14/d1f/d2b/f2f to d3/da/db/d14/d1f/f46 0 2026-03-09T20:47:18.748 INFO:tasks.workunit.client.1.vm10.stdout:5/123: stat d2/c19 0 2026-03-09T20:47:18.749 INFO:tasks.workunit.client.1.vm10.stdout:0/110: dread - d2/d9/da/d11/f15 zero size 2026-03-09T20:47:18.751 
INFO:tasks.workunit.client.1.vm10.stdout:8/197: symlink d0/d22/d2f/d38/l44 0 2026-03-09T20:47:18.753 INFO:tasks.workunit.client.0.vm07.stdout:4/199: readlink d2/df/d17/l24 0 2026-03-09T20:47:18.756 INFO:tasks.workunit.client.0.vm07.stdout:9/228: getdents d4/d16 0 2026-03-09T20:47:18.756 INFO:tasks.workunit.client.1.vm10.stdout:3/108: dwrite dc/dd/f1b [0,4194304] 0 2026-03-09T20:47:18.757 INFO:tasks.workunit.client.0.vm07.stdout:9/229: stat d4/d16/d29/d24/d37/f51 0 2026-03-09T20:47:18.758 INFO:tasks.workunit.client.1.vm10.stdout:4/115: getdents d1/d8/d1b 0 2026-03-09T20:47:18.759 INFO:tasks.workunit.client.1.vm10.stdout:4/116: dread - d1/f26 zero size 2026-03-09T20:47:18.759 INFO:tasks.workunit.client.0.vm07.stdout:8/171: link d1/dc/d14/d2f/f34 d1/dc/d16/d26/f37 0 2026-03-09T20:47:18.759 INFO:tasks.workunit.client.0.vm07.stdout:8/172: readlink d1/dc/l28 0 2026-03-09T20:47:18.760 INFO:tasks.workunit.client.1.vm10.stdout:4/117: chown d1 224574274 1 2026-03-09T20:47:18.763 INFO:tasks.workunit.client.0.vm07.stdout:8/173: dread d1/dc/fd [0,4194304] 0 2026-03-09T20:47:18.764 INFO:tasks.workunit.client.0.vm07.stdout:8/174: truncate d1/f25 4981812 0 2026-03-09T20:47:18.764 INFO:tasks.workunit.client.0.vm07.stdout:8/175: readlink d1/dc/l2c 0 2026-03-09T20:47:18.766 INFO:tasks.workunit.client.1.vm10.stdout:1/191: creat d2/da/d25/d3e/f41 x:0 0 0 2026-03-09T20:47:18.769 INFO:tasks.workunit.client.1.vm10.stdout:6/109: fsync d3/da/fd 0 2026-03-09T20:47:18.771 INFO:tasks.workunit.client.1.vm10.stdout:6/110: chown d3/c8 233748 1 2026-03-09T20:47:18.774 INFO:tasks.workunit.client.0.vm07.stdout:4/200: fsync d2/f1d 0 2026-03-09T20:47:18.774 INFO:tasks.workunit.client.1.vm10.stdout:6/111: dread - d3/ff zero size 2026-03-09T20:47:18.780 INFO:tasks.workunit.client.1.vm10.stdout:5/124: rename d2/l20 to d2/d27/l36 0 2026-03-09T20:47:18.780 INFO:tasks.workunit.client.0.vm07.stdout:0/264: creat d1/d2/d33/d35/f59 x:0 0 0 2026-03-09T20:47:18.781 INFO:tasks.workunit.client.0.vm07.stdout:9/230: 
creat d4/d8/dc/d15/f57 x:0 0 0 2026-03-09T20:47:18.786 INFO:tasks.workunit.client.0.vm07.stdout:2/289: getdents d2/d11 0 2026-03-09T20:47:18.788 INFO:tasks.workunit.client.0.vm07.stdout:0/265: unlink d1/d2/dc/d17/c29 0 2026-03-09T20:47:18.790 INFO:tasks.workunit.client.1.vm10.stdout:3/109: creat dc/dd/f23 x:0 0 0 2026-03-09T20:47:18.791 INFO:tasks.workunit.client.1.vm10.stdout:1/192: dwrite d2/da/d25/f2e [0,4194304] 0 2026-03-09T20:47:18.804 INFO:tasks.workunit.client.0.vm07.stdout:7/208: rename d3/da/db/d14/f24 to d3/da/f47 0 2026-03-09T20:47:18.804 INFO:tasks.workunit.client.0.vm07.stdout:7/209: read - d3/da/f3b zero size 2026-03-09T20:47:18.804 INFO:tasks.workunit.client.0.vm07.stdout:2/290: write d2/d11/f50 [1443743,69668] 0 2026-03-09T20:47:18.808 INFO:tasks.workunit.client.1.vm10.stdout:6/112: symlink d3/d12/l20 0 2026-03-09T20:47:18.810 INFO:tasks.workunit.client.1.vm10.stdout:3/110: dread dc/f10 [0,4194304] 0 2026-03-09T20:47:18.811 INFO:tasks.workunit.client.1.vm10.stdout:5/125: write d2/f2c [4395401,76434] 0 2026-03-09T20:47:18.812 INFO:tasks.workunit.client.1.vm10.stdout:3/111: dread - dc/dd/f13 zero size 2026-03-09T20:47:18.816 INFO:tasks.workunit.client.1.vm10.stdout:2/138: creat d5/d18/f24 x:0 0 0 2026-03-09T20:47:18.818 INFO:tasks.workunit.client.0.vm07.stdout:2/291: write d2/db/d28/f32 [1137024,36456] 0 2026-03-09T20:47:18.822 INFO:tasks.workunit.client.1.vm10.stdout:8/198: symlink d0/d22/d25/d2e/d41/l45 0 2026-03-09T20:47:18.823 INFO:tasks.workunit.client.0.vm07.stdout:7/210: dread d3/f18 [0,4194304] 0 2026-03-09T20:47:18.824 INFO:tasks.workunit.client.1.vm10.stdout:6/113: dwrite d3/f7 [4194304,4194304] 0 2026-03-09T20:47:18.827 INFO:tasks.workunit.client.1.vm10.stdout:1/193: mkdir d2/da/d25/d3e/d42 0 2026-03-09T20:47:18.832 INFO:tasks.workunit.client.1.vm10.stdout:4/118: dwrite d1/f9 [0,4194304] 0 2026-03-09T20:47:18.838 INFO:tasks.workunit.client.0.vm07.stdout:8/176: rename d1/dc/d16/f1e to d1/dc/f38 0 2026-03-09T20:47:18.838 
INFO:tasks.workunit.client.0.vm07.stdout:4/201: rename d2/d1f to d2/d1f/d34 22 2026-03-09T20:47:18.841 INFO:tasks.workunit.client.1.vm10.stdout:0/111: creat d2/d9/f20 x:0 0 0 2026-03-09T20:47:18.845 INFO:tasks.workunit.client.1.vm10.stdout:2/139: readlink l3 0 2026-03-09T20:47:18.845 INFO:tasks.workunit.client.1.vm10.stdout:5/126: write d2/f16 [475226,69288] 0 2026-03-09T20:47:18.850 INFO:tasks.workunit.client.1.vm10.stdout:1/194: mknod d2/da/d25/c43 0 2026-03-09T20:47:18.851 INFO:tasks.workunit.client.1.vm10.stdout:4/119: fdatasync d1/d2/f1a 0 2026-03-09T20:47:18.852 INFO:tasks.workunit.client.1.vm10.stdout:8/199: dwrite d0/d22/d25/f34 [0,4194304] 0 2026-03-09T20:47:18.860 INFO:tasks.workunit.client.1.vm10.stdout:0/112: dread d2/d9/f12 [0,4194304] 0 2026-03-09T20:47:18.866 INFO:tasks.workunit.client.0.vm07.stdout:8/177: mknod d1/dc/d16/d31/c39 0 2026-03-09T20:47:18.866 INFO:tasks.workunit.client.0.vm07.stdout:8/178: dread d1/dc/f29 [0,4194304] 0 2026-03-09T20:47:18.866 INFO:tasks.workunit.client.1.vm10.stdout:5/127: mkdir d2/d27/d37 0 2026-03-09T20:47:18.866 INFO:tasks.workunit.client.1.vm10.stdout:5/128: truncate d2/f23 958241 0 2026-03-09T20:47:18.869 INFO:tasks.workunit.client.1.vm10.stdout:0/113: write d2/d9/f20 [995983,83701] 0 2026-03-09T20:47:18.869 INFO:tasks.workunit.client.1.vm10.stdout:2/140: mknod d5/c25 0 2026-03-09T20:47:18.869 INFO:tasks.workunit.client.1.vm10.stdout:3/112: dwrite dc/ff [0,4194304] 0 2026-03-09T20:47:18.870 INFO:tasks.workunit.client.1.vm10.stdout:2/141: fsync d5/d18/f1f 0 2026-03-09T20:47:18.871 INFO:tasks.workunit.client.1.vm10.stdout:2/142: stat d5/d18/c19 0 2026-03-09T20:47:18.872 INFO:tasks.workunit.client.1.vm10.stdout:1/195: creat d2/da/d25/d3e/f44 x:0 0 0 2026-03-09T20:47:18.873 INFO:tasks.workunit.client.0.vm07.stdout:5/203: truncate d5/df/d13/d30/f36 451869 0 2026-03-09T20:47:18.874 INFO:tasks.workunit.client.1.vm10.stdout:4/120: dwrite d1/d8/d1c/f1f [0,4194304] 0 2026-03-09T20:47:18.876 
INFO:tasks.workunit.client.1.vm10.stdout:4/121: write d1/f26 [224804,97498] 0 2026-03-09T20:47:18.877 INFO:tasks.workunit.client.1.vm10.stdout:3/113: write dc/ff [2995463,114698] 0 2026-03-09T20:47:18.878 INFO:tasks.workunit.client.1.vm10.stdout:4/122: readlink d1/d8/d1c/l27 0 2026-03-09T20:47:18.881 INFO:tasks.workunit.client.1.vm10.stdout:4/123: chown d1/d2 2 1 2026-03-09T20:47:18.893 INFO:tasks.workunit.client.0.vm07.stdout:1/203: truncate d3/fc 3007469 0 2026-03-09T20:47:18.893 INFO:tasks.workunit.client.0.vm07.stdout:1/204: chown d3/f2b 1 1 2026-03-09T20:47:18.896 INFO:tasks.workunit.client.0.vm07.stdout:6/253: truncate d8/f46 198962 0 2026-03-09T20:47:18.896 INFO:tasks.workunit.client.0.vm07.stdout:5/204: dread d5/f25 [0,4194304] 0 2026-03-09T20:47:18.900 INFO:tasks.workunit.client.0.vm07.stdout:6/254: dwrite d8/d16/d22/f2c [0,4194304] 0 2026-03-09T20:47:18.906 INFO:tasks.workunit.client.0.vm07.stdout:3/241: truncate d1/d5/d10/f1a 1227464 0 2026-03-09T20:47:18.915 INFO:tasks.workunit.client.0.vm07.stdout:8/179: rename d1/dc/d14/l24 to d1/dc/d16/l3a 0 2026-03-09T20:47:18.916 INFO:tasks.workunit.client.1.vm10.stdout:6/114: rename d3/ff to d3/f21 0 2026-03-09T20:47:18.926 INFO:tasks.workunit.client.0.vm07.stdout:1/205: creat d3/d23/f39 x:0 0 0 2026-03-09T20:47:18.926 INFO:tasks.workunit.client.0.vm07.stdout:1/206: readlink d3/l21 0 2026-03-09T20:47:18.926 INFO:tasks.workunit.client.0.vm07.stdout:1/207: fsync d3/d23/f2c 0 2026-03-09T20:47:18.945 INFO:tasks.workunit.client.1.vm10.stdout:0/114: rename f1 to d2/d9/da/de/d1a/f21 0 2026-03-09T20:47:18.946 INFO:tasks.workunit.client.1.vm10.stdout:2/143: creat d5/d18/d1b/f26 x:0 0 0 2026-03-09T20:47:18.946 INFO:tasks.workunit.client.1.vm10.stdout:2/144: dread - d5/f1d zero size 2026-03-09T20:47:18.950 INFO:tasks.workunit.client.0.vm07.stdout:1/208: unlink d3/f11 0 2026-03-09T20:47:18.951 INFO:tasks.workunit.client.0.vm07.stdout:1/209: dread - d3/d23/f37 zero size 2026-03-09T20:47:18.953 
INFO:tasks.workunit.client.0.vm07.stdout:5/205: mknod d5/c42 0 2026-03-09T20:47:18.954 INFO:tasks.workunit.client.0.vm07.stdout:6/255: getdents d8/d16/d4b 0 2026-03-09T20:47:18.956 INFO:tasks.workunit.client.1.vm10.stdout:3/114: symlink dc/d14/l24 0 2026-03-09T20:47:18.956 INFO:tasks.workunit.client.0.vm07.stdout:5/206: dread d5/df/d13/f38 [0,4194304] 0 2026-03-09T20:47:18.959 INFO:tasks.workunit.client.0.vm07.stdout:8/180: mkdir d1/d3b 0 2026-03-09T20:47:18.964 INFO:tasks.workunit.client.0.vm07.stdout:6/256: creat d8/d26/f4d x:0 0 0 2026-03-09T20:47:18.965 INFO:tasks.workunit.client.1.vm10.stdout:5/129: creat d2/d27/d37/f38 x:0 0 0 2026-03-09T20:47:18.966 INFO:tasks.workunit.client.1.vm10.stdout:5/130: chown d2/l10 229233 1 2026-03-09T20:47:18.967 INFO:tasks.workunit.client.0.vm07.stdout:6/257: dread d8/d16/f18 [0,4194304] 0 2026-03-09T20:47:18.972 INFO:tasks.workunit.client.1.vm10.stdout:3/115: write dc/f10 [1214283,64358] 0 2026-03-09T20:47:18.977 INFO:tasks.workunit.client.0.vm07.stdout:5/207: creat d5/d19/f43 x:0 0 0 2026-03-09T20:47:18.977 INFO:tasks.workunit.client.0.vm07.stdout:8/181: mknod d1/dc/d14/c3c 0 2026-03-09T20:47:18.978 INFO:tasks.workunit.client.0.vm07.stdout:8/182: dread - d1/dc/d14/d2f/f34 zero size 2026-03-09T20:47:18.978 INFO:tasks.workunit.client.0.vm07.stdout:8/183: chown d1/dc/f38 13928 1 2026-03-09T20:47:18.978 INFO:tasks.workunit.client.1.vm10.stdout:6/115: mknod d3/c22 0 2026-03-09T20:47:18.979 INFO:tasks.workunit.client.0.vm07.stdout:1/210: mkdir d3/d3a 0 2026-03-09T20:47:18.985 INFO:tasks.workunit.client.1.vm10.stdout:2/145: mkdir d5/d18/d27 0 2026-03-09T20:47:18.988 INFO:tasks.workunit.client.1.vm10.stdout:5/131: mkdir d2/d39 0 2026-03-09T20:47:18.990 INFO:tasks.workunit.client.0.vm07.stdout:3/242: link d1/d5/d9/d2f/d34/d46/c49 d1/d5/d9/d2f/d3d/d51/c52 0 2026-03-09T20:47:18.994 INFO:tasks.workunit.client.0.vm07.stdout:5/208: symlink d5/d33/d39/l44 0 2026-03-09T20:47:19.000 INFO:tasks.workunit.client.1.vm10.stdout:3/116: dwrite 
dc/dd/f13 [0,4194304] 0 2026-03-09T20:47:19.004 INFO:tasks.workunit.client.1.vm10.stdout:5/132: mkdir d2/d27/d3a 0 2026-03-09T20:47:19.007 INFO:tasks.workunit.client.0.vm07.stdout:3/243: mknod d1/d5/d9/d2f/d3d/c53 0 2026-03-09T20:47:19.008 INFO:tasks.workunit.client.1.vm10.stdout:2/146: getdents d5/d18/d1b/d22 0 2026-03-09T20:47:19.008 INFO:tasks.workunit.client.0.vm07.stdout:5/209: rename d5/d19/c1e to d5/d19/c45 0 2026-03-09T20:47:19.011 INFO:tasks.workunit.client.0.vm07.stdout:8/184: mknod d1/c3d 0 2026-03-09T20:47:19.012 INFO:tasks.workunit.client.1.vm10.stdout:2/147: chown d5/d18/d1b 2998 1 2026-03-09T20:47:19.012 INFO:tasks.workunit.client.0.vm07.stdout:3/244: chown d1/d5/d9/f1c 0 1 2026-03-09T20:47:19.014 INFO:tasks.workunit.client.0.vm07.stdout:5/210: creat d5/df/d13/d30/f46 x:0 0 0 2026-03-09T20:47:19.019 INFO:tasks.workunit.client.0.vm07.stdout:5/211: dwrite d5/df/d13/f17 [0,4194304] 0 2026-03-09T20:47:19.019 INFO:tasks.workunit.client.0.vm07.stdout:5/212: stat d5/d19/f20 0 2026-03-09T20:47:19.020 INFO:tasks.workunit.client.0.vm07.stdout:5/213: fsync d5/df/d13/f2a 0 2026-03-09T20:47:19.023 INFO:tasks.workunit.client.0.vm07.stdout:8/185: creat d1/d3b/f3e x:0 0 0 2026-03-09T20:47:19.028 INFO:tasks.workunit.client.0.vm07.stdout:5/214: mkdir d5/df/d13/d3e/d47 0 2026-03-09T20:47:19.028 INFO:tasks.workunit.client.0.vm07.stdout:8/186: dwrite d1/dc/d16/d26/f36 [0,4194304] 0 2026-03-09T20:47:19.030 INFO:tasks.workunit.client.0.vm07.stdout:5/215: mknod d5/d33/c48 0 2026-03-09T20:47:19.031 INFO:tasks.workunit.client.0.vm07.stdout:5/216: mknod d5/df/d13/d3e/c49 0 2026-03-09T20:47:19.031 INFO:tasks.workunit.client.1.vm10.stdout:5/133: link d2/c12 d2/d27/d37/c3b 0 2026-03-09T20:47:19.031 INFO:tasks.workunit.client.1.vm10.stdout:2/148: mkdir d5/d18/d27/d28 0 2026-03-09T20:47:19.032 INFO:tasks.workunit.client.0.vm07.stdout:8/187: chown d1/dc/d16/d26/f36 62519391 1 2026-03-09T20:47:19.033 INFO:tasks.workunit.client.1.vm10.stdout:5/134: chown d2/l13 21000 1 
2026-03-09T20:47:19.034 INFO:tasks.workunit.client.0.vm07.stdout:5/217: creat d5/df/f4a x:0 0 0 2026-03-09T20:47:19.035 INFO:tasks.workunit.client.0.vm07.stdout:5/218: fsync d5/df/d13/f41 0 2026-03-09T20:47:19.036 INFO:tasks.workunit.client.0.vm07.stdout:8/188: write d1/dc/d14/f18 [2712062,117709] 0 2026-03-09T20:47:19.036 INFO:tasks.workunit.client.1.vm10.stdout:2/149: creat d5/d18/d27/f29 x:0 0 0 2026-03-09T20:47:19.036 INFO:tasks.workunit.client.1.vm10.stdout:5/135: creat d2/f3c x:0 0 0 2026-03-09T20:47:19.036 INFO:tasks.workunit.client.0.vm07.stdout:5/219: mkdir d5/d19/d4b 0 2026-03-09T20:47:19.038 INFO:tasks.workunit.client.0.vm07.stdout:5/220: dread - d5/df/d13/f41 zero size 2026-03-09T20:47:19.045 INFO:tasks.workunit.client.0.vm07.stdout:5/221: read d5/df/f24 [1187961,126435] 0 2026-03-09T20:47:19.047 INFO:tasks.workunit.client.0.vm07.stdout:5/222: dread d5/f25 [0,4194304] 0 2026-03-09T20:47:19.048 INFO:tasks.workunit.client.0.vm07.stdout:5/223: truncate d5/df/f34 150807 0 2026-03-09T20:47:19.048 INFO:tasks.workunit.client.0.vm07.stdout:5/224: chown d5/df 1438591 1 2026-03-09T20:47:19.049 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:18 vm10.local ceph-mon[57011]: pgmap v149: 65 pgs: 65 active+clean; 683 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 4.7 MiB/s rd, 72 MiB/s wr, 257 op/s 2026-03-09T20:47:19.049 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:18 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:19.049 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:18 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:19.049 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:18 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:19.049 INFO:tasks.workunit.client.1.vm10.stdout:5/136: 
mkdir d2/d3d 0 2026-03-09T20:47:19.058 INFO:tasks.workunit.client.0.vm07.stdout:8/189: rename d1/c2e to d1/c3f 0 2026-03-09T20:47:19.059 INFO:tasks.workunit.client.0.vm07.stdout:8/190: dread - d1/dc/d16/d26/f2b zero size 2026-03-09T20:47:19.059 INFO:tasks.workunit.client.0.vm07.stdout:8/191: read - d1/dc/d16/d26/f27 zero size 2026-03-09T20:47:19.061 INFO:tasks.workunit.client.1.vm10.stdout:5/137: creat d2/f3e x:0 0 0 2026-03-09T20:47:19.061 INFO:tasks.workunit.client.1.vm10.stdout:2/150: getdents d5/d18/d27/d28 0 2026-03-09T20:47:19.061 INFO:tasks.workunit.client.1.vm10.stdout:2/151: chown d5/f1d 2311766 1 2026-03-09T20:47:19.062 INFO:tasks.workunit.client.1.vm10.stdout:2/152: chown l4 10327 1 2026-03-09T20:47:19.062 INFO:tasks.workunit.client.0.vm07.stdout:5/225: truncate d5/df/f24 311934 0 2026-03-09T20:47:19.062 INFO:tasks.workunit.client.0.vm07.stdout:5/226: chown d5/df/d13/l32 3 1 2026-03-09T20:47:19.062 INFO:tasks.workunit.client.1.vm10.stdout:2/153: write d5/d18/f24 [229826,108632] 0 2026-03-09T20:47:19.064 INFO:tasks.workunit.client.0.vm07.stdout:8/192: dwrite d1/dc/d14/f18 [4194304,4194304] 0 2026-03-09T20:47:19.075 INFO:tasks.workunit.client.0.vm07.stdout:5/227: dwrite d5/df/f2b [0,4194304] 0 2026-03-09T20:47:19.075 INFO:tasks.workunit.client.0.vm07.stdout:8/193: link d1/l4 d1/dc/d14/l40 0 2026-03-09T20:47:19.079 INFO:tasks.workunit.client.1.vm10.stdout:5/138: mknod d2/d27/c3f 0 2026-03-09T20:47:19.081 INFO:tasks.workunit.client.0.vm07.stdout:5/228: mknod d5/d33/d3b/c4c 0 2026-03-09T20:47:19.092 INFO:tasks.workunit.client.0.vm07.stdout:5/229: unlink d5/df/d13/d30/f40 0 2026-03-09T20:47:19.094 INFO:tasks.workunit.client.0.vm07.stdout:8/194: dread d1/f1d [0,4194304] 0 2026-03-09T20:47:19.095 INFO:tasks.workunit.client.0.vm07.stdout:5/230: truncate d5/df/d13/f1f 5049803 0 2026-03-09T20:47:19.102 INFO:tasks.workunit.client.0.vm07.stdout:5/231: creat d5/d19/f4d x:0 0 0 2026-03-09T20:47:19.104 INFO:tasks.workunit.client.1.vm10.stdout:2/154: dwrite d5/f15 
[0,4194304] 0 2026-03-09T20:47:19.109 INFO:tasks.workunit.client.0.vm07.stdout:5/232: dwrite d5/df/d13/f38 [0,4194304] 0 2026-03-09T20:47:19.113 INFO:tasks.workunit.client.0.vm07.stdout:5/233: truncate d5/df/d13/d30/f46 488159 0 2026-03-09T20:47:19.123 INFO:tasks.workunit.client.1.vm10.stdout:5/139: dwrite d2/f7 [0,4194304] 0 2026-03-09T20:47:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:18 vm07.local ceph-mon[49120]: pgmap v149: 65 pgs: 65 active+clean; 683 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 4.7 MiB/s rd, 72 MiB/s wr, 257 op/s 2026-03-09T20:47:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:18 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:18 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:18 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:19.145 INFO:tasks.workunit.client.0.vm07.stdout:5/234: link d5/c42 d5/d33/c4e 0 2026-03-09T20:47:19.149 INFO:tasks.workunit.client.1.vm10.stdout:2/155: fsync d5/d18/f1a 0 2026-03-09T20:47:19.150 INFO:tasks.workunit.client.0.vm07.stdout:5/235: mkdir d5/df/d13/d4f 0 2026-03-09T20:47:19.151 INFO:tasks.workunit.client.0.vm07.stdout:5/236: fdatasync d5/f25 0 2026-03-09T20:47:19.151 INFO:tasks.workunit.client.0.vm07.stdout:5/237: fsync d5/df/d13/f2a 0 2026-03-09T20:47:19.156 INFO:tasks.workunit.client.0.vm07.stdout:5/238: rename d5/d19/d4b to d5/d50 0 2026-03-09T20:47:19.157 INFO:tasks.workunit.client.0.vm07.stdout:5/239: dread - d5/d19/f2c zero size 2026-03-09T20:47:19.159 INFO:tasks.workunit.client.1.vm10.stdout:5/140: truncate f1 3929134 0 2026-03-09T20:47:19.163 
INFO:tasks.workunit.client.0.vm07.stdout:5/240: rename d5/df/d13/d30/f46 to d5/f51 0 2026-03-09T20:47:19.163 INFO:tasks.workunit.client.1.vm10.stdout:2/156: creat d5/d18/d27/f2a x:0 0 0 2026-03-09T20:47:19.170 INFO:tasks.workunit.client.0.vm07.stdout:5/241: link d5/f51 d5/d50/f52 0 2026-03-09T20:47:19.170 INFO:tasks.workunit.client.1.vm10.stdout:5/141: dread d2/d1b/f2f [0,4194304] 0 2026-03-09T20:47:19.171 INFO:tasks.workunit.client.1.vm10.stdout:5/142: chown d2/f8 49098792 1 2026-03-09T20:47:19.171 INFO:tasks.workunit.client.1.vm10.stdout:5/143: read d2/f2c [763070,127992] 0 2026-03-09T20:47:19.172 INFO:tasks.workunit.client.0.vm07.stdout:5/242: mknod d5/df/d13/d4f/c53 0 2026-03-09T20:47:19.182 INFO:tasks.workunit.client.1.vm10.stdout:5/144: unlink d2/f31 0 2026-03-09T20:47:19.184 INFO:tasks.workunit.client.1.vm10.stdout:5/145: creat d2/f40 x:0 0 0 2026-03-09T20:47:19.185 INFO:tasks.workunit.client.1.vm10.stdout:5/146: write d2/d27/d37/f38 [875025,82077] 0 2026-03-09T20:47:19.195 INFO:tasks.workunit.client.1.vm10.stdout:5/147: link d2/f5 d2/d1b/f41 0 2026-03-09T20:47:19.244 INFO:tasks.workunit.client.0.vm07.stdout:4/202: dwrite d2/f1d [0,4194304] 0 2026-03-09T20:47:19.245 INFO:tasks.workunit.client.0.vm07.stdout:4/203: read d2/f3 [2353101,78597] 0 2026-03-09T20:47:19.245 INFO:tasks.workunit.client.1.vm10.stdout:9/189: write d2/d3/f5 [1488003,59768] 0 2026-03-09T20:47:19.248 INFO:tasks.workunit.client.0.vm07.stdout:4/204: dread d2/d1f/d2d/f2f [0,4194304] 0 2026-03-09T20:47:19.251 INFO:tasks.workunit.client.0.vm07.stdout:0/266: dwrite d1/d2/f14 [0,4194304] 0 2026-03-09T20:47:19.258 INFO:tasks.workunit.client.0.vm07.stdout:9/231: truncate d4/f5 2981857 0 2026-03-09T20:47:19.258 INFO:tasks.workunit.client.0.vm07.stdout:9/232: readlink d4/d11/l1e 0 2026-03-09T20:47:19.259 INFO:tasks.workunit.client.1.vm10.stdout:7/168: truncate f5 830066 0 2026-03-09T20:47:19.260 INFO:tasks.workunit.client.1.vm10.stdout:6/116: rmdir d3/d12 39 2026-03-09T20:47:19.267 
INFO:tasks.workunit.client.1.vm10.stdout:9/190: mkdir d2/d28/d43 0 2026-03-09T20:47:19.269 INFO:tasks.workunit.client.1.vm10.stdout:6/117: mknod d3/da/c23 0 2026-03-09T20:47:19.270 INFO:tasks.workunit.client.0.vm07.stdout:4/205: symlink d2/df/l35 0 2026-03-09T20:47:19.271 INFO:tasks.workunit.client.0.vm07.stdout:0/267: mknod d1/d1f/d20/c5a 0 2026-03-09T20:47:19.274 INFO:tasks.workunit.client.0.vm07.stdout:0/268: dread d1/d2/dc/f12 [0,4194304] 0 2026-03-09T20:47:19.275 INFO:tasks.workunit.client.0.vm07.stdout:0/269: dread d1/d1f/d20/f21 [0,4194304] 0 2026-03-09T20:47:19.275 INFO:tasks.workunit.client.0.vm07.stdout:0/270: stat d1/d2/d33/d35/l3a 0 2026-03-09T20:47:19.275 INFO:tasks.workunit.client.0.vm07.stdout:0/271: stat d1/d1f 0 2026-03-09T20:47:19.279 INFO:tasks.workunit.client.1.vm10.stdout:9/191: dwrite d2/d3/f2f [0,4194304] 0 2026-03-09T20:47:19.280 INFO:tasks.workunit.client.0.vm07.stdout:7/211: dwrite d3/da/f11 [0,4194304] 0 2026-03-09T20:47:19.283 INFO:tasks.workunit.client.0.vm07.stdout:7/212: dread d3/da/db/d14/f1a [0,4194304] 0 2026-03-09T20:47:19.283 INFO:tasks.workunit.client.1.vm10.stdout:9/192: dread - d2/f1a zero size 2026-03-09T20:47:19.286 INFO:tasks.workunit.client.1.vm10.stdout:1/196: rmdir d2 39 2026-03-09T20:47:19.289 INFO:tasks.workunit.client.0.vm07.stdout:4/206: mknod d2/df/d17/c36 0 2026-03-09T20:47:19.299 INFO:tasks.workunit.client.0.vm07.stdout:1/211: rmdir d3/d23 39 2026-03-09T20:47:19.299 INFO:tasks.workunit.client.1.vm10.stdout:1/197: truncate d2/da/d25/f28 1350422 0 2026-03-09T20:47:19.301 INFO:tasks.workunit.client.1.vm10.stdout:6/118: mkdir d3/d12/d24 0 2026-03-09T20:47:19.302 INFO:tasks.workunit.client.0.vm07.stdout:9/233: link d4/c49 d4/d16/c58 0 2026-03-09T20:47:19.302 INFO:tasks.workunit.client.1.vm10.stdout:8/200: dwrite d0/f13 [0,4194304] 0 2026-03-09T20:47:19.302 INFO:tasks.workunit.client.0.vm07.stdout:9/234: write d4/d8/dc/d4e/f53 [567885,82033] 0 2026-03-09T20:47:19.303 INFO:tasks.workunit.client.0.vm07.stdout:9/235: chown 
d4/d8/dc/f21 1269385 1 2026-03-09T20:47:19.304 INFO:tasks.workunit.client.0.vm07.stdout:5/243: read d5/df/d13/d30/f36 [184244,12892] 0 2026-03-09T20:47:19.317 INFO:tasks.workunit.client.1.vm10.stdout:3/117: dwrite f7 [4194304,4194304] 0 2026-03-09T20:47:19.319 INFO:tasks.workunit.client.1.vm10.stdout:6/119: creat d3/d12/f25 x:0 0 0 2026-03-09T20:47:19.320 INFO:tasks.workunit.client.1.vm10.stdout:7/169: truncate db/d1f/f2a 70379 0 2026-03-09T20:47:19.320 INFO:tasks.workunit.client.1.vm10.stdout:1/198: mknod d2/da/d25/c45 0 2026-03-09T20:47:19.321 INFO:tasks.workunit.client.0.vm07.stdout:7/213: symlink d3/l48 0 2026-03-09T20:47:19.327 INFO:tasks.workunit.client.1.vm10.stdout:8/201: mknod d0/c46 0 2026-03-09T20:47:19.327 INFO:tasks.workunit.client.1.vm10.stdout:6/120: dwrite d3/d12/f18 [0,4194304] 0 2026-03-09T20:47:19.328 INFO:tasks.workunit.client.0.vm07.stdout:1/212: symlink d3/l3b 0 2026-03-09T20:47:19.328 INFO:tasks.workunit.client.0.vm07.stdout:1/213: chown d3/d14/d35/l36 43 1 2026-03-09T20:47:19.329 INFO:tasks.workunit.client.0.vm07.stdout:1/214: chown d3/d14/f33 339 1 2026-03-09T20:47:19.330 INFO:tasks.workunit.client.0.vm07.stdout:7/214: creat d3/da/db/d14/d1f/d2b/f49 x:0 0 0 2026-03-09T20:47:19.331 INFO:tasks.workunit.client.0.vm07.stdout:7/215: chown d3/da/db/d14/d1f/d2b/c31 13538484 1 2026-03-09T20:47:19.332 INFO:tasks.workunit.client.1.vm10.stdout:8/202: chown d0/l5 29286 1 2026-03-09T20:47:19.332 INFO:tasks.workunit.client.0.vm07.stdout:9/236: mkdir d4/d8/d59 0 2026-03-09T20:47:19.333 INFO:tasks.workunit.client.0.vm07.stdout:9/237: chown d4/d8/d19/d26/f3d 3828 1 2026-03-09T20:47:19.334 INFO:tasks.workunit.client.0.vm07.stdout:4/207: link d2/fa d2/df/d17/f37 0 2026-03-09T20:47:19.335 INFO:tasks.workunit.client.0.vm07.stdout:5/244: rename d5/df/d13/l1c to d5/d50/l54 0 2026-03-09T20:47:19.339 INFO:tasks.workunit.client.1.vm10.stdout:7/170: mknod db/d21/d26/c38 0 2026-03-09T20:47:19.340 INFO:tasks.workunit.client.1.vm10.stdout:9/193: getdents d2/d3/de/d35 0 
2026-03-09T20:47:19.340 INFO:tasks.workunit.client.1.vm10.stdout:9/194: fsync d2/fc 0 2026-03-09T20:47:19.340 INFO:tasks.workunit.client.1.vm10.stdout:1/199: mkdir d2/da/d25/d46 0 2026-03-09T20:47:19.341 INFO:tasks.workunit.client.1.vm10.stdout:1/200: readlink d2/lb 0 2026-03-09T20:47:19.345 INFO:tasks.workunit.client.1.vm10.stdout:1/201: write d2/da/f20 [29598,102404] 0 2026-03-09T20:47:19.348 INFO:tasks.workunit.client.1.vm10.stdout:7/171: dwrite db/d21/d23/f22 [0,4194304] 0 2026-03-09T20:47:19.348 INFO:tasks.workunit.client.1.vm10.stdout:6/121: mkdir d3/da/d11/d26 0 2026-03-09T20:47:19.350 INFO:tasks.workunit.client.1.vm10.stdout:3/118: mkdir dc/d25 0 2026-03-09T20:47:19.351 INFO:tasks.workunit.client.0.vm07.stdout:7/216: mknod d3/c4a 0 2026-03-09T20:47:19.351 INFO:tasks.workunit.client.0.vm07.stdout:7/217: chown d3/c10 147891217 1 2026-03-09T20:47:19.352 INFO:tasks.workunit.client.0.vm07.stdout:4/208: mknod d2/df/c38 0 2026-03-09T20:47:19.353 INFO:tasks.workunit.client.1.vm10.stdout:0/115: dwrite d2/db/f13 [0,4194304] 0 2026-03-09T20:47:19.353 INFO:tasks.workunit.client.0.vm07.stdout:4/209: write d2/df/d17/f2a [545976,44051] 0 2026-03-09T20:47:19.354 INFO:tasks.workunit.client.0.vm07.stdout:1/215: rmdir d3/d3a 0 2026-03-09T20:47:19.355 INFO:tasks.workunit.client.0.vm07.stdout:7/218: rename d3/da/db/d32/l41 to d3/da/db/l4b 0 2026-03-09T20:47:19.359 INFO:tasks.workunit.client.1.vm10.stdout:7/172: fsync db/d21/d23/ff 0 2026-03-09T20:47:19.359 INFO:tasks.workunit.client.1.vm10.stdout:9/195: mkdir d2/d3/de/d35/d44 0 2026-03-09T20:47:19.359 INFO:tasks.workunit.client.1.vm10.stdout:9/196: chown d2/d3 58 1 2026-03-09T20:47:19.359 INFO:tasks.workunit.client.0.vm07.stdout:9/238: link d4/d11/d31/c4c d4/d8/dc/c5a 0 2026-03-09T20:47:19.361 INFO:tasks.workunit.client.0.vm07.stdout:1/216: creat d3/d23/f3c x:0 0 0 2026-03-09T20:47:19.370 INFO:tasks.workunit.client.1.vm10.stdout:0/116: rmdir d2 39 2026-03-09T20:47:19.373 INFO:tasks.workunit.client.0.vm07.stdout:7/219: symlink 
d3/da/l4c 0 2026-03-09T20:47:19.373 INFO:tasks.workunit.client.1.vm10.stdout:8/203: dwrite d0/f13 [0,4194304] 0 2026-03-09T20:47:19.377 INFO:tasks.workunit.client.0.vm07.stdout:1/217: dread d3/fa [0,4194304] 0 2026-03-09T20:47:19.380 INFO:tasks.workunit.client.1.vm10.stdout:6/122: creat d3/d12/d24/f27 x:0 0 0 2026-03-09T20:47:19.381 INFO:tasks.workunit.client.0.vm07.stdout:9/239: dread d4/d8/dc/f25 [0,4194304] 0 2026-03-09T20:47:19.382 INFO:tasks.workunit.client.0.vm07.stdout:9/240: truncate d4/d11/d2a/f3b 127863 0 2026-03-09T20:47:19.382 INFO:tasks.workunit.client.0.vm07.stdout:9/241: readlink d4/d16/l3a 0 2026-03-09T20:47:19.386 INFO:tasks.workunit.client.1.vm10.stdout:3/119: mkdir dc/d14/d26 0 2026-03-09T20:47:19.387 INFO:tasks.workunit.client.1.vm10.stdout:1/202: symlink d2/da/d25/d3e/d42/l47 0 2026-03-09T20:47:19.387 INFO:tasks.workunit.client.1.vm10.stdout:9/197: dwrite d2/d3/de/d35/f38 [0,4194304] 0 2026-03-09T20:47:19.388 INFO:tasks.workunit.client.0.vm07.stdout:7/220: symlink d3/da/db/d14/d43/l4d 0 2026-03-09T20:47:19.389 INFO:tasks.workunit.client.0.vm07.stdout:7/221: dread - d3/da/db/d32/f3d zero size 2026-03-09T20:47:19.389 INFO:tasks.workunit.client.0.vm07.stdout:9/242: dwrite d4/d8/fd [0,4194304] 0 2026-03-09T20:47:19.390 INFO:tasks.workunit.client.1.vm10.stdout:1/203: truncate d2/da/f3d 1898507 0 2026-03-09T20:47:19.392 INFO:tasks.workunit.client.1.vm10.stdout:8/204: chown d0/d22/f27 19297295 1 2026-03-09T20:47:19.398 INFO:tasks.workunit.client.1.vm10.stdout:6/123: dwrite d3/d12/f25 [0,4194304] 0 2026-03-09T20:47:19.399 INFO:tasks.workunit.client.1.vm10.stdout:7/173: creat db/f39 x:0 0 0 2026-03-09T20:47:19.401 INFO:tasks.workunit.client.0.vm07.stdout:9/243: creat d4/d11/d31/f5b x:0 0 0 2026-03-09T20:47:19.405 INFO:tasks.workunit.client.1.vm10.stdout:6/124: stat d3/d12 0 2026-03-09T20:47:19.408 INFO:tasks.workunit.client.1.vm10.stdout:4/124: truncate d1/d8/d1c/f1f 336120 0 2026-03-09T20:47:19.410 INFO:tasks.workunit.client.1.vm10.stdout:8/205: dread 
d0/f14 [4194304,4194304] 0 2026-03-09T20:47:19.411 INFO:tasks.workunit.client.1.vm10.stdout:3/120: dread dc/f11 [0,4194304] 0 2026-03-09T20:47:19.411 INFO:tasks.workunit.client.1.vm10.stdout:8/206: read - d0/d22/d2f/d38/f43 zero size 2026-03-09T20:47:19.412 INFO:tasks.workunit.client.0.vm07.stdout:6/258: write d8/d26/d2a/f37 [4697547,28470] 0 2026-03-09T20:47:19.412 INFO:tasks.workunit.client.1.vm10.stdout:8/207: stat d0/d22/d25/f2d 0 2026-03-09T20:47:19.412 INFO:tasks.workunit.client.1.vm10.stdout:3/121: fsync dc/d14/f1a 0 2026-03-09T20:47:19.413 INFO:tasks.workunit.client.0.vm07.stdout:7/222: mknod d3/da/db/d14/c4e 0 2026-03-09T20:47:19.417 INFO:tasks.workunit.client.0.vm07.stdout:6/259: symlink d8/d26/d2a/l4e 0 2026-03-09T20:47:19.419 INFO:tasks.workunit.client.1.vm10.stdout:0/117: mknod d2/d9/da/de/d1a/c22 0 2026-03-09T20:47:19.424 INFO:tasks.workunit.client.1.vm10.stdout:9/198: symlink d2/d3/de/l45 0 2026-03-09T20:47:19.425 INFO:tasks.workunit.client.1.vm10.stdout:6/125: creat d3/d12/f28 x:0 0 0 2026-03-09T20:47:19.429 INFO:tasks.workunit.client.0.vm07.stdout:9/244: getdents d4/d8/dc/d4e/d54 0 2026-03-09T20:47:19.430 INFO:tasks.workunit.client.1.vm10.stdout:8/208: mkdir d0/d22/d25/d2e/d41/d47 0 2026-03-09T20:47:19.432 INFO:tasks.workunit.client.1.vm10.stdout:3/122: rename dc/d25 to dc/d14/d27 0 2026-03-09T20:47:19.433 INFO:tasks.workunit.client.0.vm07.stdout:6/260: write d8/d16/d22/d24/d2b/f4a [2538833,43647] 0 2026-03-09T20:47:19.433 INFO:tasks.workunit.client.0.vm07.stdout:6/261: chown d8/d16/d4b 329462 1 2026-03-09T20:47:19.434 INFO:tasks.workunit.client.1.vm10.stdout:0/118: mkdir d2/d9/da/de/d23 0 2026-03-09T20:47:19.434 INFO:tasks.workunit.client.0.vm07.stdout:9/245: symlink d4/d8/dc/d15/l5c 0 2026-03-09T20:47:19.436 INFO:tasks.workunit.client.1.vm10.stdout:6/126: mknod d3/d12/c29 0 2026-03-09T20:47:19.436 INFO:tasks.workunit.client.0.vm07.stdout:7/223: creat d3/f4f x:0 0 0 2026-03-09T20:47:19.440 INFO:tasks.workunit.client.1.vm10.stdout:8/209: unlink 
d0/c46 0 2026-03-09T20:47:19.442 INFO:tasks.workunit.client.0.vm07.stdout:9/246: creat d4/d11/d2a/f5d x:0 0 0 2026-03-09T20:47:19.443 INFO:tasks.workunit.client.1.vm10.stdout:3/123: write dc/f11 [2097338,12986] 0 2026-03-09T20:47:19.447 INFO:tasks.workunit.client.0.vm07.stdout:7/224: mkdir d3/da/db/d14/d1f/d50 0 2026-03-09T20:47:19.449 INFO:tasks.workunit.client.1.vm10.stdout:0/119: symlink d2/d9/da/d11/l24 0 2026-03-09T20:47:19.453 INFO:tasks.workunit.client.1.vm10.stdout:9/199: creat d2/f46 x:0 0 0 2026-03-09T20:47:19.456 INFO:tasks.workunit.client.0.vm07.stdout:9/247: mknod d4/d16/d29/d24/c5e 0 2026-03-09T20:47:19.457 INFO:tasks.workunit.client.1.vm10.stdout:9/200: mkdir d2/d28/d47 0 2026-03-09T20:47:19.457 INFO:tasks.workunit.client.1.vm10.stdout:6/127: creat d3/da/d11/d26/f2a x:0 0 0 2026-03-09T20:47:19.457 INFO:tasks.workunit.client.0.vm07.stdout:9/248: write d4/d8/d19/f28 [3811494,106281] 0 2026-03-09T20:47:19.460 INFO:tasks.workunit.client.1.vm10.stdout:9/201: readlink d2/l2d 0 2026-03-09T20:47:19.460 INFO:tasks.workunit.client.1.vm10.stdout:9/202: chown d2/d33/f3b 48206 1 2026-03-09T20:47:19.461 INFO:tasks.workunit.client.1.vm10.stdout:9/203: write d2/d3/de/d35/f38 [1564837,115607] 0 2026-03-09T20:47:19.464 INFO:tasks.workunit.client.0.vm07.stdout:9/249: mkdir d4/d8/d19/d5f 0 2026-03-09T20:47:19.467 INFO:tasks.workunit.client.0.vm07.stdout:9/250: creat d4/d11/d2a/f60 x:0 0 0 2026-03-09T20:47:19.468 INFO:tasks.workunit.client.1.vm10.stdout:6/128: unlink d3/da/f10 0 2026-03-09T20:47:19.468 INFO:tasks.workunit.client.1.vm10.stdout:0/120: rmdir d2/d9/da/de/d23 0 2026-03-09T20:47:19.468 INFO:tasks.workunit.client.0.vm07.stdout:9/251: chown d4/d11/d2a 10178 1 2026-03-09T20:47:19.470 INFO:tasks.workunit.client.1.vm10.stdout:9/204: read d2/d3/de/f24 [166379,23415] 0 2026-03-09T20:47:19.475 INFO:tasks.workunit.client.1.vm10.stdout:6/129: creat d3/d12/f2b x:0 0 0 2026-03-09T20:47:19.475 INFO:tasks.workunit.client.0.vm07.stdout:9/252: symlink d4/d16/d29/l61 0 
2026-03-09T20:47:19.475 INFO:tasks.workunit.client.0.vm07.stdout:9/253: read d4/d8/dc/f21 [2492443,66376] 0 2026-03-09T20:47:19.475 INFO:tasks.workunit.client.0.vm07.stdout:9/254: mkdir d4/d16/d29/d24/d37/d44/d62 0 2026-03-09T20:47:19.475 INFO:tasks.workunit.client.1.vm10.stdout:0/121: mkdir d2/d9/da/de/d1a/d25 0 2026-03-09T20:47:19.476 INFO:tasks.workunit.client.0.vm07.stdout:9/255: mkdir d4/d8/dc/d63 0 2026-03-09T20:47:19.479 INFO:tasks.workunit.client.0.vm07.stdout:9/256: creat d4/d16/d29/f64 x:0 0 0 2026-03-09T20:47:19.480 INFO:tasks.workunit.client.0.vm07.stdout:9/257: creat d4/d11/d2a/f65 x:0 0 0 2026-03-09T20:47:19.482 INFO:tasks.workunit.client.1.vm10.stdout:0/122: mkdir d2/d9/da/de/d1a/d25/d26 0 2026-03-09T20:47:19.484 INFO:tasks.workunit.client.1.vm10.stdout:0/123: mknod d2/d9/da/de/d1a/d25/d26/c27 0 2026-03-09T20:47:19.485 INFO:tasks.workunit.client.1.vm10.stdout:0/124: fdatasync d2/f5 0 2026-03-09T20:47:19.488 INFO:tasks.workunit.client.0.vm07.stdout:6/262: dread d8/d16/f1f [0,4194304] 0 2026-03-09T20:47:19.495 INFO:tasks.workunit.client.1.vm10.stdout:0/125: truncate d2/d9/f12 812838 0 2026-03-09T20:47:19.496 INFO:tasks.workunit.client.1.vm10.stdout:0/126: creat d2/d9/da/de/d1a/d25/d26/f28 x:0 0 0 2026-03-09T20:47:19.496 INFO:tasks.workunit.client.1.vm10.stdout:0/127: write d2/db/f13 [2352425,126977] 0 2026-03-09T20:47:19.496 INFO:tasks.workunit.client.0.vm07.stdout:6/263: mkdir d8/d16/d22/d3a/d4f 0 2026-03-09T20:47:19.496 INFO:tasks.workunit.client.0.vm07.stdout:6/264: mkdir d8/d50 0 2026-03-09T20:47:19.496 INFO:tasks.workunit.client.0.vm07.stdout:6/265: creat d8/d26/f51 x:0 0 0 2026-03-09T20:47:19.496 INFO:tasks.workunit.client.0.vm07.stdout:6/266: write d8/d16/d22/d24/d2b/f3c [534972,27015] 0 2026-03-09T20:47:19.500 INFO:tasks.workunit.client.1.vm10.stdout:0/128: mknod d2/d9/da/de/c29 0 2026-03-09T20:47:19.506 INFO:tasks.workunit.client.1.vm10.stdout:0/129: dread d2/db/f13 [0,4194304] 0 2026-03-09T20:47:19.537 
INFO:tasks.workunit.client.1.vm10.stdout:0/130: mkdir d2/d9/d2a 0
2026-03-09T20:47:19.537 INFO:tasks.workunit.client.1.vm10.stdout:0/131: link d2/d9/da/de/d1a/d25/d26/f28 d2/d9/da/de/d1a/f2b 0
2026-03-09T20:47:19.537 INFO:tasks.workunit.client.1.vm10.stdout:0/132: stat d2/db/l18 0
2026-03-09T20:47:19.537 INFO:tasks.workunit.client.1.vm10.stdout:0/133: dread - d2/d9/da/d11/f1f zero size
2026-03-09T20:47:19.537 INFO:tasks.workunit.client.1.vm10.stdout:0/134: write d2/d9/da/d11/f1f [817040,40790] 0
2026-03-09T20:47:19.547 INFO:tasks.workunit.client.1.vm10.stdout:0/135: creat d2/d9/da/f2c x:0 0 0
2026-03-09T20:47:19.548 INFO:tasks.workunit.client.1.vm10.stdout:0/136: read d2/db/f13 [2523575,63432] 0
2026-03-09T20:47:19.549 INFO:tasks.workunit.client.1.vm10.stdout:0/137: symlink d2/d9/da/de/d1a/d25/d26/l2d 0
2026-03-09T20:47:19.550 INFO:tasks.workunit.client.0.vm07.stdout:7/225: sync
2026-03-09T20:47:19.556 INFO:tasks.workunit.client.1.vm10.stdout:0/138: dwrite d2/d9/da/de/d1a/d25/d26/f28 [0,4194304] 0
2026-03-09T20:47:19.561 INFO:tasks.workunit.client.1.vm10.stdout:0/139: mknod d2/d9/da/d11/c2e 0
2026-03-09T20:47:19.567 INFO:tasks.workunit.client.1.vm10.stdout:0/140: fdatasync d2/d9/da/fd 0
2026-03-09T20:47:19.567 INFO:tasks.workunit.client.1.vm10.stdout:0/141: creat d2/d9/da/f2f x:0 0 0
2026-03-09T20:47:19.567 INFO:tasks.workunit.client.1.vm10.stdout:0/142: mkdir d2/d9/da/de/d1a/d25/d26/d30 0
2026-03-09T20:47:19.567 INFO:tasks.workunit.client.1.vm10.stdout:0/143: stat d2/d9/da/de/d1a/f21 0
2026-03-09T20:47:19.567 INFO:tasks.workunit.client.1.vm10.stdout:0/144: chown d2/d9/da/de/l1d 1606413 1
2026-03-09T20:47:19.569 INFO:tasks.workunit.client.1.vm10.stdout:0/145: rename d2/d9/da/f2c to d2/d9/da/de/d1a/f31 0
2026-03-09T20:47:19.571 INFO:tasks.workunit.client.1.vm10.stdout:0/146: mknod d2/d9/c32 0
2026-03-09T20:47:19.579 INFO:tasks.workunit.client.0.vm07.stdout:7/226: sync
2026-03-09T20:47:19.580 INFO:tasks.workunit.client.0.vm07.stdout:7/227: write d3/da/f47 [64999,17916] 0
2026-03-09T20:47:19.581 INFO:tasks.workunit.client.0.vm07.stdout:7/228: write d3/da/db/d14/d1f/d2b/f49 [124689,29742] 0
2026-03-09T20:47:19.584 INFO:tasks.workunit.client.0.vm07.stdout:7/229: symlink d3/da/db/d32/d3e/l51 0
2026-03-09T20:47:19.587 INFO:tasks.workunit.client.0.vm07.stdout:7/230: dread d3/da/db/f12 [0,4194304] 0
2026-03-09T20:47:19.587 INFO:tasks.workunit.client.0.vm07.stdout:7/231: read - d3/f4f zero size
2026-03-09T20:47:19.589 INFO:tasks.workunit.client.0.vm07.stdout:7/232: mkdir d3/da/db/d14/d1f/d2b/d52 0
2026-03-09T20:47:19.591 INFO:tasks.workunit.client.0.vm07.stdout:7/233: mkdir d3/da/d53 0
2026-03-09T20:47:19.594 INFO:tasks.workunit.client.0.vm07.stdout:7/234: mknod d3/da/d53/c54 0
2026-03-09T20:47:19.595 INFO:tasks.workunit.client.0.vm07.stdout:7/235: chown d3/da 63 1
2026-03-09T20:47:19.595 INFO:tasks.workunit.client.0.vm07.stdout:3/245: write d1/d5/d9/f1c [135528,42400] 0
2026-03-09T20:47:19.599 INFO:tasks.workunit.client.0.vm07.stdout:7/236: symlink d3/da/db/d14/d1f/d2b/l55 0
2026-03-09T20:47:19.599 INFO:tasks.workunit.client.0.vm07.stdout:7/237: readlink d3/da/db/l20 0
2026-03-09T20:47:19.603 INFO:tasks.workunit.client.0.vm07.stdout:7/238: dwrite d3/da/f3b [0,4194304] 0
2026-03-09T20:47:19.614 INFO:tasks.workunit.client.0.vm07.stdout:7/239: rmdir d3/da/db 39
2026-03-09T20:47:19.618 INFO:tasks.workunit.client.0.vm07.stdout:3/246: rmdir d1/d5/d9/d11/d1f 39
2026-03-09T20:47:19.618 INFO:tasks.workunit.client.0.vm07.stdout:7/240: chown d3/da/db/f27 187933109 1
2026-03-09T20:47:19.618 INFO:tasks.workunit.client.0.vm07.stdout:7/241: dread - d3/da/db/d14/d1f/f37 zero size
2026-03-09T20:47:19.618 INFO:tasks.workunit.client.0.vm07.stdout:7/242: write d3/da/f47 [257916,86293] 0
2026-03-09T20:47:19.622 INFO:tasks.workunit.client.0.vm07.stdout:3/247: rename d1/d5/d9/d2f/d3d/d51 to d1/d5/d10/d43/d54 0
2026-03-09T20:47:19.623 INFO:tasks.workunit.client.0.vm07.stdout:3/248: write d1/d5/d9/f15 [5970886,95648] 0
2026-03-09T20:47:19.625 INFO:tasks.workunit.client.0.vm07.stdout:3/249: link d1/d5/d9/fe d1/d5/d10/f55 0
2026-03-09T20:47:19.634 INFO:tasks.workunit.client.0.vm07.stdout:7/243: sync
2026-03-09T20:47:19.635 INFO:tasks.workunit.client.0.vm07.stdout:7/244: stat d3/da/f47 0
2026-03-09T20:47:19.636 INFO:tasks.workunit.client.0.vm07.stdout:7/245: symlink d3/da/db/d14/d1f/l56 0
2026-03-09T20:47:19.641 INFO:tasks.workunit.client.0.vm07.stdout:7/246: dwrite d3/da/db/d14/d1f/d2b/f2c [0,4194304] 0
2026-03-09T20:47:19.648 INFO:tasks.workunit.client.0.vm07.stdout:7/247: dwrite d3/da/f26 [0,4194304] 0
2026-03-09T20:47:19.658 INFO:tasks.workunit.client.0.vm07.stdout:7/248: link l1 d3/da/d53/l57 0
2026-03-09T20:47:19.661 INFO:tasks.workunit.client.0.vm07.stdout:7/249: dread d3/f18 [0,4194304] 0
2026-03-09T20:47:19.661 INFO:tasks.workunit.client.0.vm07.stdout:7/250: stat d3/c10 0
2026-03-09T20:47:19.666 INFO:tasks.workunit.client.0.vm07.stdout:7/251: dwrite d3/f4f [0,4194304] 0
2026-03-09T20:47:19.669 INFO:tasks.workunit.client.0.vm07.stdout:8/195: truncate d1/dc/d14/f18 2395420 0
2026-03-09T20:47:19.669 INFO:tasks.workunit.client.0.vm07.stdout:8/196: dread - d1/dc/d16/d26/f37 zero size
2026-03-09T20:47:19.670 INFO:tasks.workunit.client.0.vm07.stdout:8/197: fsync d1/dc/d16/d26/f36 0
2026-03-09T20:47:19.672 INFO:tasks.workunit.client.0.vm07.stdout:7/252: mkdir d3/d58 0
2026-03-09T20:47:19.672 INFO:tasks.workunit.client.1.vm10.stdout:2/157: write d5/f7 [1162001,123732] 0
2026-03-09T20:47:19.673 INFO:tasks.workunit.client.1.vm10.stdout:2/158: chown d5/d18/d1b/c21 9 1
2026-03-09T20:47:19.679 INFO:tasks.workunit.client.0.vm07.stdout:8/198: symlink d1/dc/d16/d31/l41 0
2026-03-09T20:47:19.683 INFO:tasks.workunit.client.0.vm07.stdout:7/253: creat d3/f59 x:0 0 0
2026-03-09T20:47:19.684 INFO:tasks.workunit.client.0.vm07.stdout:8/199: creat d1/dc/f42 x:0 0 0
2026-03-09T20:47:19.697 INFO:tasks.workunit.client.0.vm07.stdout:7/254: symlink d3/da/db/d14/l5a 0
2026-03-09T20:47:19.697 INFO:tasks.workunit.client.0.vm07.stdout:7/255: chown d3/da/db/d14/f2a 1 1
2026-03-09T20:47:19.699 INFO:tasks.workunit.client.0.vm07.stdout:7/256: stat d3/da/db/d14/l17 0
2026-03-09T20:47:19.720 INFO:tasks.workunit.client.1.vm10.stdout:2/159: rmdir d5 39
2026-03-09T20:47:19.722 INFO:tasks.workunit.client.1.vm10.stdout:2/160: mkdir d5/d2b 0
2026-03-09T20:47:19.726 INFO:tasks.workunit.client.1.vm10.stdout:2/161: creat d5/d18/f2c x:0 0 0
2026-03-09T20:47:19.727 INFO:tasks.workunit.client.1.vm10.stdout:2/162: mkdir d5/d18/d2d 0
2026-03-09T20:47:19.737 INFO:tasks.workunit.client.1.vm10.stdout:2/163: dread - d5/d18/d27/f29 zero size
2026-03-09T20:47:19.737 INFO:tasks.workunit.client.1.vm10.stdout:2/164: read d5/f16 [123665,107645] 0
2026-03-09T20:47:19.737 INFO:tasks.workunit.client.1.vm10.stdout:2/165: unlink d5/l8 0
2026-03-09T20:47:19.737 INFO:tasks.workunit.client.1.vm10.stdout:2/166: creat d5/d18/d1b/f2e x:0 0 0
2026-03-09T20:47:19.737 INFO:tasks.workunit.client.1.vm10.stdout:2/167: rename d5/d18/d1b/c21 to d5/c2f 0
2026-03-09T20:47:19.737 INFO:tasks.workunit.client.1.vm10.stdout:2/168: mknod d5/c30 0
2026-03-09T20:47:19.737 INFO:tasks.workunit.client.1.vm10.stdout:2/169: creat d5/d18/d2d/f31 x:0 0 0
2026-03-09T20:47:19.743 INFO:tasks.workunit.client.1.vm10.stdout:2/170: dwrite d5/f7 [0,4194304] 0
2026-03-09T20:47:19.752 INFO:tasks.workunit.client.1.vm10.stdout:2/171: mkdir d5/d2b/d32 0
2026-03-09T20:47:19.753 INFO:tasks.workunit.client.1.vm10.stdout:2/172: symlink d5/d2b/d32/l33 0
2026-03-09T20:47:19.754 INFO:tasks.workunit.client.1.vm10.stdout:2/173: stat d5/l17 0
2026-03-09T20:47:19.754 INFO:tasks.workunit.client.1.vm10.stdout:2/174: readlink l4 0
2026-03-09T20:47:19.755 INFO:tasks.workunit.client.1.vm10.stdout:2/175: dread - d5/d18/d27/f29 zero size
2026-03-09T20:47:19.760 INFO:tasks.workunit.client.1.vm10.stdout:2/176: dwrite d5/d18/f1a [0,4194304] 0
2026-03-09T20:47:19.793 INFO:tasks.workunit.client.0.vm07.stdout:6/267: dwrite d8/f46 [0,4194304] 0
2026-03-09T20:47:19.808 INFO:tasks.workunit.client.0.vm07.stdout:6/268: dwrite d8/f15 [4194304,4194304] 0
2026-03-09T20:47:19.810 INFO:tasks.workunit.client.0.vm07.stdout:6/269: stat d8/f29 0
2026-03-09T20:47:19.836 INFO:tasks.workunit.client.1.vm10.stdout:9/205: fsync d2/d3/f2f 0
2026-03-09T20:47:19.841 INFO:tasks.workunit.client.0.vm07.stdout:5/245: dwrite d5/d19/f20 [0,4194304] 0
2026-03-09T20:47:19.841 INFO:tasks.workunit.client.1.vm10.stdout:9/206: stat d2/c25 0
2026-03-09T20:47:19.843 INFO:tasks.workunit.client.0.vm07.stdout:5/246: dread d5/df/d13/d30/f36 [0,4194304] 0
2026-03-09T20:47:19.856 INFO:tasks.workunit.client.0.vm07.stdout:5/247: mkdir d5/df/d13/d55 0
2026-03-09T20:47:19.857 INFO:tasks.workunit.client.0.vm07.stdout:5/248: mkdir d5/df/d13/d30/d56 0
2026-03-09T20:47:19.874 INFO:tasks.workunit.client.0.vm07.stdout:5/249: dread d5/df/d13/f1f [0,4194304] 0
2026-03-09T20:47:19.985 INFO:tasks.workunit.client.0.vm07.stdout:9/258: fdatasync d4/d8/dc/ff 0
2026-03-09T20:47:19.992 INFO:tasks.workunit.client.0.vm07.stdout:9/259: dwrite d4/d8/dc/f21 [4194304,4194304] 0
2026-03-09T20:47:19.993 INFO:tasks.workunit.client.0.vm07.stdout:9/260: chown d4/d8/d19/d26 3828572 1
2026-03-09T20:47:19.993 INFO:tasks.workunit.client.0.vm07.stdout:9/261: dread - d4/d16/d29/d24/d37/f51 zero size
2026-03-09T20:47:19.998 INFO:tasks.workunit.client.0.vm07.stdout:4/210: dwrite d2/d1f/d2d/f2f [4194304,4194304] 0
2026-03-09T20:47:20.009 INFO:tasks.workunit.client.0.vm07.stdout:4/211: link d2/l30 d2/df/l39 0
2026-03-09T20:47:20.009 INFO:tasks.workunit.client.0.vm07.stdout:4/212: stat d2/cb 0
2026-03-09T20:47:20.010 INFO:tasks.workunit.client.0.vm07.stdout:4/213: chown d2/f7 257 1
2026-03-09T20:47:20.011 INFO:tasks.workunit.client.0.vm07.stdout:4/214: mknod d2/d1f/c3a 0
2026-03-09T20:47:20.070 INFO:tasks.workunit.client.1.vm10.stdout:1/204: write d2/da/f3d [2040372,17074] 0
2026-03-09T20:47:20.070 INFO:tasks.workunit.client.1.vm10.stdout:4/125: dread d1/d8/d1c/f1f [0,4194304] 0
2026-03-09T20:47:20.074 INFO:tasks.workunit.client.1.vm10.stdout:6/130: fsync d3/d12/f28 0
2026-03-09T20:47:20.074 INFO:tasks.workunit.client.1.vm10.stdout:1/205: fdatasync d2/da/f22 0
2026-03-09T20:47:20.080 INFO:tasks.workunit.client.1.vm10.stdout:6/131: rename d3/c8 to d3/da/c2c 0
2026-03-09T20:47:20.081 INFO:tasks.workunit.client.1.vm10.stdout:9/207: rmdir d2 39
2026-03-09T20:47:20.081 INFO:tasks.workunit.client.1.vm10.stdout:8/210: write d0/f6 [1342815,120291] 0
2026-03-09T20:47:20.081 INFO:tasks.workunit.client.1.vm10.stdout:6/132: stat d3/d12/f2b 0
2026-03-09T20:47:20.081 INFO:tasks.workunit.client.1.vm10.stdout:3/124: truncate dc/dd/f13 3885172 0
2026-03-09T20:47:20.082 INFO:tasks.workunit.client.1.vm10.stdout:1/206: dread d2/da/f35 [0,4194304] 0
2026-03-09T20:47:20.082 INFO:tasks.workunit.client.1.vm10.stdout:8/211: write d0/f11 [4034460,86986] 0
2026-03-09T20:47:20.087 INFO:tasks.workunit.client.1.vm10.stdout:8/212: rename d0/l2a to d0/d22/d2f/d3d/l48 0
2026-03-09T20:47:20.089 INFO:tasks.workunit.client.1.vm10.stdout:1/207: link d2/f19 d2/da/d25/f48 0
2026-03-09T20:47:20.089 INFO:tasks.workunit.client.1.vm10.stdout:8/213: fdatasync d0/f13 0
2026-03-09T20:47:20.090 INFO:tasks.workunit.client.1.vm10.stdout:8/214: chown d0/c1f 10 1
2026-03-09T20:47:20.091 INFO:tasks.workunit.client.1.vm10.stdout:1/208: write d2/da/d25/f48 [3655074,59159] 0
2026-03-09T20:47:20.094 INFO:tasks.workunit.client.1.vm10.stdout:9/208: dwrite d2/f30 [0,4194304] 0
2026-03-09T20:47:20.097 INFO:tasks.workunit.client.1.vm10.stdout:6/133: rmdir d3/d12 39
2026-03-09T20:47:20.098 INFO:tasks.workunit.client.1.vm10.stdout:1/209: dread - d2/da/d25/d3e/f41 zero size
2026-03-09T20:47:20.113 INFO:tasks.workunit.client.1.vm10.stdout:1/210: write d2/da/f3d [1342127,103640] 0
2026-03-09T20:47:20.113 INFO:tasks.workunit.client.1.vm10.stdout:6/134: dwrite d3/fe [0,4194304] 0
2026-03-09T20:47:20.114 INFO:tasks.workunit.client.1.vm10.stdout:4/126: dread d1/f26 [0,4194304] 0
2026-03-09T20:47:20.123 INFO:tasks.workunit.client.0.vm07.stdout:2/292: dread d2/db/d28/f34 [0,4194304] 0
2026-03-09T20:47:20.133 INFO:tasks.workunit.client.0.vm07.stdout:2/293: dwrite d2/d11/f36 [0,4194304] 0
2026-03-09T20:47:20.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:19 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:47:20.145 INFO:tasks.workunit.client.1.vm10.stdout:1/211: unlink d2/da/f20 0
2026-03-09T20:47:20.145 INFO:tasks.workunit.client.1.vm10.stdout:0/147: getdents d2/d9/da 0
2026-03-09T20:47:20.146 INFO:tasks.workunit.client.1.vm10.stdout:9/209: symlink d2/d28/d47/l48 0
2026-03-09T20:47:20.147 INFO:tasks.workunit.client.0.vm07.stdout:2/294: creat d2/db/d1c/d4a/f55 x:0 0 0
2026-03-09T20:47:20.147 INFO:tasks.workunit.client.1.vm10.stdout:0/148: rename d2/db to d2/db/d33 22
2026-03-09T20:47:20.147 INFO:tasks.workunit.client.0.vm07.stdout:2/295: dread - d2/d11/f52 zero size
2026-03-09T20:47:20.153 INFO:tasks.workunit.client.1.vm10.stdout:4/127: mknod d1/d8/c28 0
2026-03-09T20:47:20.153 INFO:tasks.workunit.client.0.vm07.stdout:2/296: unlink d2/c12 0
2026-03-09T20:47:20.154 INFO:tasks.workunit.client.1.vm10.stdout:4/128: dread - d1/d8/d1c/f1d zero size
2026-03-09T20:47:20.156 INFO:tasks.workunit.client.1.vm10.stdout:1/212: symlink d2/da/d25/l49 0
2026-03-09T20:47:20.157 INFO:tasks.workunit.client.0.vm07.stdout:2/297: mkdir d2/d11/d56 0
2026-03-09T20:47:20.162 INFO:tasks.workunit.client.1.vm10.stdout:0/149: mkdir d2/d9/da/de/d1a/d25/d34 0
2026-03-09T20:47:20.164 INFO:tasks.workunit.client.0.vm07.stdout:3/250: dwrite d1/d5/d9/d2f/d34/f40 [0,4194304] 0
2026-03-09T20:47:20.165 INFO:tasks.workunit.client.1.vm10.stdout:6/135: symlink d3/d12/d24/l2d 0
2026-03-09T20:47:20.169 INFO:tasks.workunit.client.1.vm10.stdout:4/129: dwrite d1/d2/f1a [0,4194304] 0
2026-03-09T20:47:20.170 INFO:tasks.workunit.client.1.vm10.stdout:6/136: readlink d3/d12/d24/l2d 0
2026-03-09T20:47:20.170 INFO:tasks.workunit.client.1.vm10.stdout:6/137: chown d3/d12/f18 402 1
2026-03-09T20:47:20.170 INFO:tasks.workunit.client.0.vm07.stdout:0/272: write d1/f31 [4327541,23111] 0
2026-03-09T20:47:20.171 INFO:tasks.workunit.client.0.vm07.stdout:0/273: chown d1/d2/l36 386 1
2026-03-09T20:47:20.175 INFO:tasks.workunit.client.1.vm10.stdout:1/213: symlink d2/da/d25/d3e/l4a 0
2026-03-09T20:47:20.185 INFO:tasks.workunit.client.0.vm07.stdout:8/200: write d1/dc/d14/f18 [1887524,116488] 0
2026-03-09T20:47:20.189 INFO:tasks.workunit.client.0.vm07.stdout:0/274: rename d1/d1f/d20/c4a to d1/d1f/d53/c5b 0
2026-03-09T20:47:20.193 INFO:tasks.workunit.client.0.vm07.stdout:0/275: chown d1/d2/d33/d35/f46 27 1
2026-03-09T20:47:20.193 INFO:tasks.workunit.client.1.vm10.stdout:6/138: symlink d3/da/l2e 0
2026-03-09T20:47:20.193 INFO:tasks.workunit.client.1.vm10.stdout:0/150: dwrite d2/d9/da/de/d1a/d25/d26/f28 [4194304,4194304] 0
2026-03-09T20:47:20.194 INFO:tasks.workunit.client.1.vm10.stdout:6/139: read - d3/da/d11/f1d zero size
2026-03-09T20:47:20.195 INFO:tasks.workunit.client.1.vm10.stdout:6/140: chown d3/fe 1 1
2026-03-09T20:47:20.200 INFO:tasks.workunit.client.1.vm10.stdout:4/130: rename d1/d2/f1a to d1/d8/f29 0
2026-03-09T20:47:20.200 INFO:tasks.workunit.client.1.vm10.stdout:4/131: chown d1 2 1
2026-03-09T20:47:20.200 INFO:tasks.workunit.client.1.vm10.stdout:0/151: write d2/d9/da/d11/f1f [1420009,35874] 0
2026-03-09T20:47:20.200 INFO:tasks.workunit.client.1.vm10.stdout:1/214: dwrite d2/da/f32 [0,4194304] 0
2026-03-09T20:47:20.201 INFO:tasks.workunit.client.0.vm07.stdout:3/251: mknod d1/d35/c56 0
2026-03-09T20:47:20.201 INFO:tasks.workunit.client.0.vm07.stdout:3/252: stat d1/d5/d9/d2f/d3d 0
2026-03-09T20:47:20.206 INFO:tasks.workunit.client.0.vm07.stdout:7/257: dwrite d3/f3f [0,4194304] 0
2026-03-09T20:47:20.207 INFO:tasks.workunit.client.0.vm07.stdout:0/276: creat d1/d2/d33/d35/f5c x:0 0 0
2026-03-09T20:47:20.207 INFO:tasks.workunit.client.1.vm10.stdout:9/210: link d2/d3/l9 d2/d3/de/l49 0
2026-03-09T20:47:20.212 INFO:tasks.workunit.client.0.vm07.stdout:0/277: symlink d1/d1f/d20/l5d 0
2026-03-09T20:47:20.213 INFO:tasks.workunit.client.1.vm10.stdout:4/132: dwrite d1/d8/f16 [0,4194304] 0
2026-03-09T20:47:20.213 INFO:tasks.workunit.client.1.vm10.stdout:9/211: dread - d2/d12/f31 zero size
2026-03-09T20:47:20.214 INFO:tasks.workunit.client.1.vm10.stdout:1/215: fdatasync d2/da/f26 0
2026-03-09T20:47:20.214 INFO:tasks.workunit.client.0.vm07.stdout:0/278: write d1/d2/f1b [3376354,85904] 0
2026-03-09T20:47:20.216 INFO:tasks.workunit.client.1.vm10.stdout:0/152: fdatasync d2/d9/da/de/d1a/f31 0
2026-03-09T20:47:20.218 INFO:tasks.workunit.client.0.vm07.stdout:7/258: rename d3/da/c1d to d3/c5b 0
2026-03-09T20:47:20.221 INFO:tasks.workunit.client.0.vm07.stdout:0/279: fdatasync d1/d2/dc/d17/f23 0
2026-03-09T20:47:20.222 INFO:tasks.workunit.client.0.vm07.stdout:0/280: write d1/d2/dc/f10 [1603379,10404] 0
2026-03-09T20:47:20.228 INFO:tasks.workunit.client.1.vm10.stdout:1/216: dread d2/da/f35 [0,4194304] 0
2026-03-09T20:47:20.231 INFO:tasks.workunit.client.0.vm07.stdout:0/281: creat d1/d2/f5e x:0 0 0
2026-03-09T20:47:20.231 INFO:tasks.workunit.client.1.vm10.stdout:6/141: creat d3/f2f x:0 0 0
2026-03-09T20:47:20.234 INFO:tasks.workunit.client.1.vm10.stdout:9/212: rename d2/f1a to d2/d28/f4a 0
2026-03-09T20:47:20.234 INFO:tasks.workunit.client.0.vm07.stdout:0/282: rename d1/d1f/d30/l41 to d1/d2/l5f 0
2026-03-09T20:47:20.238 INFO:tasks.workunit.client.0.vm07.stdout:0/283: getdents d1/d2/d4b 0
2026-03-09T20:47:20.240 INFO:tasks.workunit.client.1.vm10.stdout:0/153: rename d2/d9/da/de/d1a/d25/d26 to d2/d9/da/d35 0
2026-03-09T20:47:20.241 INFO:tasks.workunit.client.1.vm10.stdout:9/213: rmdir d2/d3 39
2026-03-09T20:47:20.243 INFO:tasks.workunit.client.1.vm10.stdout:4/133: dwrite d1/d2/f7 [4194304,4194304] 0
2026-03-09T20:47:20.256 INFO:tasks.workunit.client.1.vm10.stdout:9/214: readlink d2/l2d 0
2026-03-09T20:47:20.256 INFO:tasks.workunit.client.1.vm10.stdout:6/142: getdents d3/da/d11 0
2026-03-09T20:47:20.256 INFO:tasks.workunit.client.1.vm10.stdout:9/215: readlink d2/d12/l22 0
2026-03-09T20:47:20.256 INFO:tasks.workunit.client.1.vm10.stdout:6/143: chown d3/d12 12055 1
2026-03-09T20:47:20.256 INFO:tasks.workunit.client.1.vm10.stdout:9/216: write d2/f46 [72887,94874] 0
2026-03-09T20:47:20.256 INFO:tasks.workunit.client.1.vm10.stdout:9/217: rename d2/d28 to d2/d28/d4b 22
2026-03-09T20:47:20.257 INFO:tasks.workunit.client.1.vm10.stdout:0/154: rmdir d2/d9/da/de/d1a/d25 39
2026-03-09T20:47:20.258 INFO:tasks.workunit.client.1.vm10.stdout:0/155: write d2/d9/da/d35/f28 [2949207,89974] 0
2026-03-09T20:47:20.259 INFO:tasks.workunit.client.1.vm10.stdout:0/156: stat d2/db/l1c 0
2026-03-09T20:47:20.263 INFO:tasks.workunit.client.1.vm10.stdout:9/218: link d2/d3/f5 d2/d33/d37/f4c 0
2026-03-09T20:47:20.267 INFO:tasks.workunit.client.1.vm10.stdout:0/157: write d2/d9/da/de/d1a/f2b [7562403,109764] 0
2026-03-09T20:47:20.273 INFO:tasks.workunit.client.1.vm10.stdout:9/219: creat d2/d3/de/d35/d44/f4d x:0 0 0
2026-03-09T20:47:20.284 INFO:tasks.workunit.client.0.vm07.stdout:6/270: dwrite d8/f29 [0,4194304] 0
2026-03-09T20:47:20.286 INFO:tasks.workunit.client.1.vm10.stdout:5/148: dwrite f1 [0,4194304] 0
2026-03-09T20:47:20.291 INFO:tasks.workunit.client.1.vm10.stdout:5/149: chown d2/f3c 1 1
2026-03-09T20:47:20.295 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:19 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:47:20.295 INFO:tasks.workunit.client.1.vm10.stdout:2/177: dwrite d5/f16 [0,4194304] 0
2026-03-09T20:47:20.296 INFO:tasks.workunit.client.1.vm10.stdout:9/220: dwrite d2/d3/f1c [0,4194304] 0
2026-03-09T20:47:20.296 INFO:tasks.workunit.client.1.vm10.stdout:5/150: stat d2/f3c 0
2026-03-09T20:47:20.296 INFO:tasks.workunit.client.1.vm10.stdout:9/221: readlink d2/d12/l23 0
2026-03-09T20:47:20.296 INFO:tasks.workunit.client.0.vm07.stdout:3/253: dread d1/d5/d9/f15 [0,4194304] 0
2026-03-09T20:47:20.299 INFO:tasks.workunit.client.1.vm10.stdout:5/151: stat d2/d1b 0
2026-03-09T20:47:20.300 INFO:tasks.workunit.client.0.vm07.stdout:5/250: write d5/d50/f52 [336971,35353] 0
2026-03-09T20:47:20.301 INFO:tasks.workunit.client.0.vm07.stdout:3/254: dread d1/d5/d9/d2f/d34/f3f [0,4194304] 0
2026-03-09T20:47:20.303 INFO:tasks.workunit.client.0.vm07.stdout:1/218: truncate d3/fc 1779878 0
2026-03-09T20:47:20.306 INFO:tasks.workunit.client.0.vm07.stdout:1/219: dread d3/f9 [0,4194304] 0
2026-03-09T20:47:20.307 INFO:tasks.workunit.client.1.vm10.stdout:5/152: chown d2/f11 643 1
2026-03-09T20:47:20.308 INFO:tasks.workunit.client.1.vm10.stdout:9/222: write d2/d33/f3f [1009113,49265] 0
2026-03-09T20:47:20.312 INFO:tasks.workunit.client.1.vm10.stdout:2/178: rename c2 to d5/d18/c34 0
2026-03-09T20:47:20.312 INFO:tasks.workunit.client.0.vm07.stdout:1/220: mknod d3/d23/c3d 0
2026-03-09T20:47:20.313 INFO:tasks.workunit.client.0.vm07.stdout:1/221: fsync d3/f5 0
2026-03-09T20:47:20.314 INFO:tasks.workunit.client.1.vm10.stdout:5/153: rename d2/ca to d2/c42 0
2026-03-09T20:47:20.316 INFO:tasks.workunit.client.1.vm10.stdout:9/223: unlink d2/f41 0
2026-03-09T20:47:20.317 INFO:tasks.workunit.client.0.vm07.stdout:6/271: creat d8/f52 x:0 0 0
2026-03-09T20:47:20.318 INFO:tasks.workunit.client.0.vm07.stdout:6/272: write d8/d16/d22/f2c [79923,33866] 0
2026-03-09T20:47:20.326 INFO:tasks.workunit.client.0.vm07.stdout:5/251: symlink d5/df/d13/l57 0
2026-03-09T20:47:20.328 INFO:tasks.workunit.client.1.vm10.stdout:5/154: mkdir d2/d43 0
2026-03-09T20:47:20.334 INFO:tasks.workunit.client.1.vm10.stdout:2/179: getdents d5/d18/d1b/d22 0
2026-03-09T20:47:20.334 INFO:tasks.workunit.client.1.vm10.stdout:5/155: symlink d2/d27/l44 0
2026-03-09T20:47:20.336 INFO:tasks.workunit.client.0.vm07.stdout:6/273: link d8/d16/d22/d3a/f39 d8/d16/d22/d24/d2b/f53 0
2026-03-09T20:47:20.338 INFO:tasks.workunit.client.0.vm07.stdout:5/252: mknod d5/df/d13/c58 0
2026-03-09T20:47:20.338 INFO:tasks.workunit.client.1.vm10.stdout:9/224: dwrite d2/d3/de/f34 [0,4194304] 0
2026-03-09T20:47:20.342 INFO:tasks.workunit.client.1.vm10.stdout:6/144: sync
2026-03-09T20:47:20.348 INFO:tasks.workunit.client.0.vm07.stdout:6/274: rename d8/d16/d22/d24/c47 to d8/d16/d22/d3a/c54 0
2026-03-09T20:47:20.348 INFO:tasks.workunit.client.1.vm10.stdout:5/156: rename d2/l24 to d2/d27/d3a/l45 0
2026-03-09T20:47:20.348 INFO:tasks.workunit.client.1.vm10.stdout:2/180: dread d5/fe [0,4194304] 0
2026-03-09T20:47:20.348 INFO:tasks.workunit.client.1.vm10.stdout:6/145: truncate d3/fe 5011440 0
2026-03-09T20:47:20.348 INFO:tasks.workunit.client.1.vm10.stdout:9/225: symlink d2/d33/l4e 0
2026-03-09T20:47:20.349 INFO:tasks.workunit.client.0.vm07.stdout:6/275: creat d8/d50/f55 x:0 0 0
2026-03-09T20:47:20.349 INFO:tasks.workunit.client.1.vm10.stdout:5/157: truncate d2/fd 220456 0
2026-03-09T20:47:20.350 INFO:tasks.workunit.client.1.vm10.stdout:2/181: mknod d5/d18/d1b/c35 0
2026-03-09T20:47:20.351 INFO:tasks.workunit.client.0.vm07.stdout:6/276: symlink d8/d16/d22/d3a/l56 0
2026-03-09T20:47:20.351 INFO:tasks.workunit.client.0.vm07.stdout:6/277: dread - d8/d26/d2a/f41 zero size
2026-03-09T20:47:20.353 INFO:tasks.workunit.client.0.vm07.stdout:5/253: link d5/d19/c45 d5/df/d13/d3e/d47/c59 0
2026-03-09T20:47:20.353 INFO:tasks.workunit.client.1.vm10.stdout:2/182: write d5/f1d [47773,102883] 0
2026-03-09T20:47:20.353 INFO:tasks.workunit.client.1.vm10.stdout:5/158: mkdir d2/d27/d37/d46 0
2026-03-09T20:47:20.354 INFO:tasks.workunit.client.0.vm07.stdout:5/254: creat d5/d33/f5a x:0 0 0
2026-03-09T20:47:20.359 INFO:tasks.workunit.client.1.vm10.stdout:6/146: dread d3/d12/f25 [0,4194304] 0
2026-03-09T20:47:20.359 INFO:tasks.workunit.client.0.vm07.stdout:6/278: rename d8/c1d to d8/d26/d2a/d40/c57 0
2026-03-09T20:47:20.359 INFO:tasks.workunit.client.0.vm07.stdout:5/255: chown d5/df/d13/c18 82680 1
2026-03-09T20:47:20.359 INFO:tasks.workunit.client.0.vm07.stdout:6/279: symlink d8/d16/l58 0
2026-03-09T20:47:20.363 INFO:tasks.workunit.client.1.vm10.stdout:9/226: dwrite d2/fc [0,4194304] 0
2026-03-09T20:47:20.364 INFO:tasks.workunit.client.0.vm07.stdout:6/280: symlink d8/d26/d2a/d40/l59 0
2026-03-09T20:47:20.364 INFO:tasks.workunit.client.0.vm07.stdout:6/281: dread - d8/d26/f4d zero size
2026-03-09T20:47:20.365 INFO:tasks.workunit.client.1.vm10.stdout:6/147: write d3/d12/d24/f27 [95137,46962] 0
2026-03-09T20:47:20.365 INFO:tasks.workunit.client.0.vm07.stdout:6/282: dread - d8/f52 zero size
2026-03-09T20:47:20.366 INFO:tasks.workunit.client.0.vm07.stdout:5/256: link d5/f51 d5/df/d13/f5b 0
2026-03-09T20:47:20.368 INFO:tasks.workunit.client.1.vm10.stdout:5/159: link d2/d27/d37/f38 d2/d43/f47 0
2026-03-09T20:47:20.375 INFO:tasks.workunit.client.1.vm10.stdout:6/148: dread d3/f1f [0,4194304] 0
2026-03-09T20:47:20.375 INFO:tasks.workunit.client.0.vm07.stdout:6/283: chown d8/d16/d22/d24/d2b/l4c 647846547 1
2026-03-09T20:47:20.375 INFO:tasks.workunit.client.0.vm07.stdout:6/284: unlink l1 0
2026-03-09T20:47:20.375 INFO:tasks.workunit.client.0.vm07.stdout:7/259: sync
2026-03-09T20:47:20.375 INFO:tasks.workunit.client.0.vm07.stdout:0/284: sync
2026-03-09T20:47:20.375 INFO:tasks.workunit.client.0.vm07.stdout:3/255: sync
2026-03-09T20:47:20.376 INFO:tasks.workunit.client.1.vm10.stdout:5/160: symlink d2/l48 0
2026-03-09T20:47:20.376 INFO:tasks.workunit.client.1.vm10.stdout:6/149: dread f0 [0,4194304] 0
2026-03-09T20:47:20.379 INFO:tasks.workunit.client.0.vm07.stdout:6/285: dwrite d8/d16/d22/d24/d2b/f4a [4194304,4194304] 0
2026-03-09T20:47:20.380 INFO:tasks.workunit.client.1.vm10.stdout:2/183: getdents d5/d2b 0
2026-03-09T20:47:20.381 INFO:tasks.workunit.client.0.vm07.stdout:5/257: rename d5/c42 to d5/df/d13/c5c 0
2026-03-09T20:47:20.382 INFO:tasks.workunit.client.0.vm07.stdout:6/286: write f5 [5026409,103342] 0
2026-03-09T20:47:20.386 INFO:tasks.workunit.client.1.vm10.stdout:9/227: dread d2/d3/de/d35/f38 [0,4194304] 0
2026-03-09T20:47:20.386 INFO:tasks.workunit.client.0.vm07.stdout:0/285: mkdir d1/d2/d33/d35/d60 0
2026-03-09T20:47:20.386 INFO:tasks.workunit.client.0.vm07.stdout:0/286: write d1/d2/f5e [231057,35799] 0
2026-03-09T20:47:20.386 INFO:tasks.workunit.client.0.vm07.stdout:7/260: mkdir d3/da/db/d32/d3e/d5c 0
2026-03-09T20:47:20.386 INFO:tasks.workunit.client.0.vm07.stdout:7/261: chown d3/da/db/d32/d3e 0 1
2026-03-09T20:47:20.386 INFO:tasks.workunit.client.0.vm07.stdout:7/262: fdatasync d3/da/f26 0
2026-03-09T20:47:20.387 INFO:tasks.workunit.client.0.vm07.stdout:7/263: chown d3/da/db/d32/d3e 199 1
2026-03-09T20:47:20.391 INFO:tasks.workunit.client.0.vm07.stdout:0/287: read - d1/f3b zero size
2026-03-09T20:47:20.392 INFO:tasks.workunit.client.1.vm10.stdout:2/184: creat d5/d2b/f36 x:0 0 0
2026-03-09T20:47:20.394 INFO:tasks.workunit.client.1.vm10.stdout:6/150: mkdir d3/d30 0
2026-03-09T20:47:20.395 INFO:tasks.workunit.client.1.vm10.stdout:6/151: dread - d3/f2f zero size
2026-03-09T20:47:20.395 INFO:tasks.workunit.client.0.vm07.stdout:7/264: creat d3/da/db/d14/d1f/f5d x:0 0 0
2026-03-09T20:47:20.396 INFO:tasks.workunit.client.1.vm10.stdout:6/152: write d3/da/d11/d26/f2a [809436,91125] 0
2026-03-09T20:47:20.401 INFO:tasks.workunit.client.0.vm07.stdout:0/288: chown d1/l1c 20714 1
2026-03-09T20:47:20.401 INFO:tasks.workunit.client.1.vm10.stdout:2/185: symlink d5/d18/d27/l37 0
2026-03-09T20:47:20.402 INFO:tasks.workunit.client.1.vm10.stdout:6/153: chown d3/d12/f16 1503932 1
2026-03-09T20:47:20.402 INFO:tasks.workunit.client.1.vm10.stdout:2/186: truncate d5/d2b/f36 436009 0
2026-03-09T20:47:20.403 INFO:tasks.workunit.client.0.vm07.stdout:6/287: dwrite d8/d16/f23 [0,4194304] 0
2026-03-09T20:47:20.406 INFO:tasks.workunit.client.0.vm07.stdout:5/258: creat d5/f5d x:0 0 0
2026-03-09T20:47:20.408 INFO:tasks.workunit.client.0.vm07.stdout:0/289: creat d1/d2/d4b/f61 x:0 0 0
2026-03-09T20:47:20.408 INFO:tasks.workunit.client.1.vm10.stdout:9/228: dwrite d2/fc [4194304,4194304] 0
2026-03-09T20:47:20.408 INFO:tasks.workunit.client.0.vm07.stdout:7/265: creat d3/da/db/d14/d1f/d2b/d52/f5e x:0 0 0
2026-03-09T20:47:20.409 INFO:tasks.workunit.client.0.vm07.stdout:7/266: chown d3/da/f47 0 1
2026-03-09T20:47:20.414 INFO:tasks.workunit.client.0.vm07.stdout:6/288: rename d8/d16/d22/d24/f3f to d8/d16/d22/d24/d2b/f5a 0
2026-03-09T20:47:20.419 INFO:tasks.workunit.client.0.vm07.stdout:6/289: fsync d8/d16/d22/d24/f43 0
2026-03-09T20:47:20.419 INFO:tasks.workunit.client.0.vm07.stdout:0/290: creat d1/d2/d33/f62 x:0 0 0
2026-03-09T20:47:20.419 INFO:tasks.workunit.client.0.vm07.stdout:7/267: symlink d3/da/d53/l5f 0
2026-03-09T20:47:20.420 INFO:tasks.workunit.client.1.vm10.stdout:2/187: mkdir d5/d18/d27/d38 0
2026-03-09T20:47:20.420 INFO:tasks.workunit.client.0.vm07.stdout:6/290: chown d8/d16/f1f 1 1
2026-03-09T20:47:20.420 INFO:tasks.workunit.client.1.vm10.stdout:9/229: rmdir d2/d33 39
2026-03-09T20:47:20.421 INFO:tasks.workunit.client.1.vm10.stdout:6/154: dwrite d3/d12/d24/f27 [0,4194304] 0
2026-03-09T20:47:20.422 INFO:tasks.workunit.client.1.vm10.stdout:9/230: dread - d2/d3/de/d35/d44/f4d zero size
2026-03-09T20:47:20.422 INFO:tasks.workunit.client.0.vm07.stdout:7/268: creat d3/d58/f60 x:0 0 0
2026-03-09T20:47:20.425 INFO:tasks.workunit.client.0.vm07.stdout:5/259: sync
2026-03-09T20:47:20.432 INFO:tasks.workunit.client.1.vm10.stdout:2/188: symlink d5/d2b/d32/l39 0
2026-03-09T20:47:20.434 INFO:tasks.workunit.client.1.vm10.stdout:6/155: mkdir d3/da/d11/d31 0
2026-03-09T20:47:20.435 INFO:tasks.workunit.client.0.vm07.stdout:5/260: mkdir d5/df/d13/d3e/d5e 0
2026-03-09T20:47:20.435 INFO:tasks.workunit.client.1.vm10.stdout:2/189: symlink d5/d18/d2d/l3a 0
2026-03-09T20:47:20.435 INFO:tasks.workunit.client.0.vm07.stdout:5/261: chown d5/df/d13/d3e 1067648 1
2026-03-09T20:47:20.436 INFO:tasks.workunit.client.0.vm07.stdout:0/291: creat d1/d1f/f63 x:0 0 0
2026-03-09T20:47:20.438 INFO:tasks.workunit.client.0.vm07.stdout:0/292: creat d1/d2/d33/d35/f64 x:0 0 0
2026-03-09T20:47:20.439 INFO:tasks.workunit.client.1.vm10.stdout:2/190: chown d5/c12 2013425 1
2026-03-09T20:47:20.439 INFO:tasks.workunit.client.0.vm07.stdout:5/262: creat d5/df/d13/d55/f5f x:0 0 0
2026-03-09T20:47:20.440 INFO:tasks.workunit.client.0.vm07.stdout:5/263: dread d5/df/d13/d30/f36 [0,4194304] 0
2026-03-09T20:47:20.441 INFO:tasks.workunit.client.0.vm07.stdout:0/293: creat d1/d1f/d53/f65 x:0 0 0
2026-03-09T20:47:20.445 INFO:tasks.workunit.client.0.vm07.stdout:0/294: dwrite d1/d2/dc/d17/f23 [4194304,4194304] 0
2026-03-09T20:47:20.451 INFO:tasks.workunit.client.0.vm07.stdout:4/215: rmdir d2 39
2026-03-09T20:47:20.452 INFO:tasks.workunit.client.0.vm07.stdout:0/295: dwrite d1/d2/dc/f10 [0,4194304] 0
2026-03-09T20:47:20.458 INFO:tasks.workunit.client.1.vm10.stdout:6/156: dwrite d3/d12/f2b [0,4194304] 0
2026-03-09T20:47:20.460 INFO:tasks.workunit.client.0.vm07.stdout:0/296: dwrite d1/d1f/d20/f43 [0,4194304] 0
2026-03-09T20:47:20.463 INFO:tasks.workunit.client.0.vm07.stdout:0/297: write d1/d2/d33/d35/f64 [239346,107981] 0
2026-03-09T20:47:20.467 INFO:tasks.workunit.client.1.vm10.stdout:6/157: sync
2026-03-09T20:47:20.473 INFO:tasks.workunit.client.0.vm07.stdout:4/216: rename d2/df/c27 to d2/df/d17/c3b 0
2026-03-09T20:47:20.474 INFO:tasks.workunit.client.1.vm10.stdout:6/158: rename d3/da/l14 to d3/d12/d24/l32 0
2026-03-09T20:47:20.476 INFO:tasks.workunit.client.0.vm07.stdout:0/298: link d1/d1f/d20/c5a d1/d2/c66 0
2026-03-09T20:47:20.480 INFO:tasks.workunit.client.0.vm07.stdout:0/299: dwrite d1/d2/dc/f56 [0,4194304] 0
2026-03-09T20:47:20.481 INFO:tasks.workunit.client.0.vm07.stdout:0/300: chown f0 244141 1
2026-03-09T20:47:20.483 INFO:tasks.workunit.client.1.vm10.stdout:6/159: write d3/d12/f28 [546185,69865] 0
2026-03-09T20:47:20.484 INFO:tasks.workunit.client.1.vm10.stdout:6/160: truncate d3/da/d11/f1d 355598 0
2026-03-09T20:47:20.484 INFO:tasks.workunit.client.1.vm10.stdout:6/161: stat d3/d12/d24 0
2026-03-09T20:47:20.485 INFO:tasks.workunit.client.0.vm07.stdout:0/301: mknod d1/d1f/d30/c67 0
2026-03-09T20:47:20.485 INFO:tasks.workunit.client.0.vm07.stdout:0/302: write d1/d2/f1b [819043,47105] 0
2026-03-09T20:47:20.488 INFO:tasks.workunit.client.0.vm07.stdout:0/303: symlink d1/d2/d33/d35/l68 0
2026-03-09T20:47:20.499 INFO:tasks.workunit.client.0.vm07.stdout:0/304: dread d1/d2/ff [0,4194304] 0
2026-03-09T20:47:20.499 INFO:tasks.workunit.client.0.vm07.stdout:0/305: truncate d1/d2/d4b/f61 1040764 0
2026-03-09T20:47:20.501 INFO:tasks.workunit.client.0.vm07.stdout:0/306: truncate d1/f3b 318755 0
2026-03-09T20:47:20.503 INFO:tasks.workunit.client.0.vm07.stdout:0/307: getdents d1 0
2026-03-09T20:47:20.506 INFO:tasks.workunit.client.1.vm10.stdout:0/158: dread d2/d9/da/d11/f1f [0,4194304] 0
2026-03-09T20:47:20.511 INFO:tasks.workunit.client.1.vm10.stdout:0/159: write d2/d9/da/de/d1a/f2b [4682896,54222] 0
2026-03-09T20:47:20.514 INFO:tasks.workunit.client.1.vm10.stdout:0/160: mknod d2/d9/c36 0
2026-03-09T20:47:20.515 INFO:tasks.workunit.client.1.vm10.stdout:0/161: write d2/d9/f20 [1793235,9448] 0
2026-03-09T20:47:20.519 INFO:tasks.workunit.client.0.vm07.stdout:9/262: fdatasync d4/d8/dc/f21 0
2026-03-09T20:47:20.520 INFO:tasks.workunit.client.0.vm07.stdout:9/263: creat d4/d8/d59/f66 x:0 0 0
2026-03-09T20:47:20.522 INFO:tasks.workunit.client.0.vm07.stdout:9/264: rename d4/d11/d23/c3f to d4/d16/d29/d24/c67 0
2026-03-09T20:47:20.525 INFO:tasks.workunit.client.1.vm10.stdout:0/162: dread d2/d9/da/fd [0,4194304] 0
2026-03-09T20:47:20.526 INFO:tasks.workunit.client.1.vm10.stdout:0/163: stat d2/d9/da/d35/f28 0
2026-03-09T20:47:20.526 INFO:tasks.workunit.client.1.vm10.stdout:0/164: dread - d2/d9/da/d11/f15 zero size
2026-03-09T20:47:20.528 INFO:tasks.workunit.client.0.vm07.stdout:9/265: read f2 [3782355,101398] 0
2026-03-09T20:47:20.529 INFO:tasks.workunit.client.0.vm07.stdout:9/266: stat d4/d16/d29/d24/c2d 0
2026-03-09T20:47:20.530 INFO:tasks.workunit.client.0.vm07.stdout:9/267: creat d4/d8/dc/f68 x:0 0 0
2026-03-09T20:47:20.531 INFO:tasks.workunit.client.1.vm10.stdout:0/165: dread d2/db/f13 [0,4194304] 0
2026-03-09T20:47:20.532 INFO:tasks.workunit.client.1.vm10.stdout:0/166: write d2/d9/da/f2f [829933,4154] 0
2026-03-09T20:47:20.540 INFO:tasks.workunit.client.1.vm10.stdout:0/167: dread d2/d9/f12 [0,4194304] 0
2026-03-09T20:47:20.543 INFO:tasks.workunit.client.1.vm10.stdout:0/168: dread d2/d9/da/fd [0,4194304] 0
2026-03-09T20:47:20.545 INFO:tasks.workunit.client.1.vm10.stdout:0/169: dread d2/d9/da/d11/f1f [0,4194304] 0
2026-03-09T20:47:20.551 INFO:tasks.workunit.client.1.vm10.stdout:0/170: dwrite d2/d9/da/f2f [0,4194304] 0
2026-03-09T20:47:20.553 INFO:tasks.workunit.client.1.vm10.stdout:0/171: stat d2/d9/da/d11/l24 0
2026-03-09T20:47:20.555 INFO:tasks.workunit.client.1.vm10.stdout:0/172: symlink d2/db/l37 0
2026-03-09T20:47:20.608 INFO:tasks.workunit.client.1.vm10.stdout:9/231: read d2/d33/d37/f4c [767251,83413] 0
2026-03-09T20:47:20.608 INFO:tasks.workunit.client.1.vm10.stdout:9/232: dread - d2/d28/f4a zero size
2026-03-09T20:47:20.610 INFO:tasks.workunit.client.1.vm10.stdout:9/233: mknod d2/c4f 0
2026-03-09T20:47:20.621 INFO:tasks.workunit.client.1.vm10.stdout:9/234: write d2/d3/fa [4721608,48916] 0
2026-03-09T20:47:20.621 INFO:tasks.workunit.client.1.vm10.stdout:9/235: dwrite d2/d33/d37/f4c [0,4194304] 0
2026-03-09T20:47:20.626 INFO:tasks.workunit.client.1.vm10.stdout:9/236: dwrite d2/f2b [4194304,4194304] 0
2026-03-09T20:47:20.627 INFO:tasks.workunit.client.1.vm10.stdout:9/237: write d2/d3/de/f34 [1304618,59558] 0
2026-03-09T20:47:20.629 INFO:tasks.workunit.client.1.vm10.stdout:9/238: mkdir d2/d28/d47/d50 0
2026-03-09T20:47:20.635 INFO:tasks.workunit.client.1.vm10.stdout:9/239: dwrite d2/d28/f29 [0,4194304] 0
2026-03-09T20:47:20.638 INFO:tasks.workunit.client.1.vm10.stdout:9/240: rename d2/d3/fd to d2/d28/f51 0
2026-03-09T20:47:20.640 INFO:tasks.workunit.client.1.vm10.stdout:9/241: creat d2/d28/d47/f52 x:0 0 0
2026-03-09T20:47:20.643 INFO:tasks.workunit.client.1.vm10.stdout:9/242: symlink d2/d33/d37/l53 0
2026-03-09T20:47:20.643 INFO:tasks.workunit.client.1.vm10.stdout:9/243: stat d2/d33/d37/f4c 0
2026-03-09T20:47:20.646 INFO:tasks.workunit.client.1.vm10.stdout:9/244: creat d2/d3/f54 x:0 0 0
2026-03-09T20:47:20.647 INFO:tasks.workunit.client.1.vm10.stdout:9/245: symlink d2/d28/d47/l55 0
2026-03-09T20:47:20.648 INFO:tasks.workunit.client.1.vm10.stdout:9/246: symlink d2/d28/l56 0
2026-03-09T20:47:20.656 INFO:tasks.workunit.client.1.vm10.stdout:9/247: symlink d2/d28/d47/d50/l57 0
2026-03-09T20:47:20.661 INFO:tasks.workunit.client.1.vm10.stdout:9/248: creat d2/d28/d47/f58 x:0 0 0
2026-03-09T20:47:20.670 INFO:tasks.workunit.client.1.vm10.stdout:9/249: write d2/d3/f54 [127921,49700] 0
2026-03-09T20:47:20.672 INFO:tasks.workunit.client.1.vm10.stdout:9/250: truncate d2/d3/f54 235867 0
2026-03-09T20:47:20.675 INFO:tasks.workunit.client.1.vm10.stdout:9/251: unlink d2/c4f 0
2026-03-09T20:47:20.677 INFO:tasks.workunit.client.1.vm10.stdout:9/252: chown d2/d28/d47/d50/l57 27 1
2026-03-09T20:47:20.687 INFO:tasks.workunit.client.1.vm10.stdout:9/253: rename d2/d33/f3b to d2/d28/d47/d50/f59 0
2026-03-09T20:47:20.696 INFO:tasks.workunit.client.1.vm10.stdout:9/254: mkdir d2/d12/d5a 0
2026-03-09T20:47:20.701 INFO:tasks.workunit.client.1.vm10.stdout:9/255: chown d2/d33 398745 1
2026-03-09T20:47:20.757 INFO:tasks.workunit.client.1.vm10.stdout:3/125: dwrite dc/dd/f13 [4194304,4194304] 0
2026-03-09T20:47:20.769 INFO:tasks.workunit.client.1.vm10.stdout:3/126: dread dc/dd/f1b [0,4194304] 0
2026-03-09T20:47:20.770 INFO:tasks.workunit.client.1.vm10.stdout:3/127: truncate dc/dd/f18 547736 0
2026-03-09T20:47:20.775 INFO:tasks.workunit.client.1.vm10.stdout:3/128: mknod dc/dd/c28 0
2026-03-09T20:47:20.779 INFO:tasks.workunit.client.1.vm10.stdout:3/129: dread f7 [4194304,4194304] 0
2026-03-09T20:47:20.780 INFO:tasks.workunit.client.1.vm10.stdout:3/130: write dc/dd/f13 [199464,74826] 0
2026-03-09T20:47:20.790 INFO:tasks.workunit.client.1.vm10.stdout:8/215: dwrite d0/d22/d25/d2e/f33 [0,4194304] 0
2026-03-09T20:47:20.796 INFO:tasks.workunit.client.1.vm10.stdout:8/216: chown d0/d22/f29 14683957 1
2026-03-09T20:47:20.796 INFO:tasks.workunit.client.1.vm10.stdout:8/217: fdatasync d0/d22/d2c/f3f 0
2026-03-09T20:47:20.801 INFO:tasks.workunit.client.1.vm10.stdout:8/218: rename d0/fa to d0/d22/d2f/d3d/f49 0
2026-03-09T20:47:20.807 INFO:tasks.workunit.client.1.vm10.stdout:8/219: mknod d0/d22/d25/d2e/c4a 0
2026-03-09T20:47:20.824 INFO:tasks.workunit.client.1.vm10.stdout:8/220: dread d0/f21 [0,4194304] 0
2026-03-09T20:47:20.830 INFO:tasks.workunit.client.1.vm10.stdout:8/221: link d0/f17 d0/d22/d25/d2e/d41/d47/f4b 0
2026-03-09T20:47:20.831 INFO:tasks.workunit.client.1.vm10.stdout:8/222: symlink d0/d22/d2c/l4c 0
2026-03-09T20:47:20.834 INFO:tasks.workunit.client.1.vm10.stdout:8/223: link d0/d22/d2c/l4c d0/d22/d25/d2e/d41/l4d 0
2026-03-09T20:47:20.898 INFO:tasks.workunit.client.0.vm07.stdout:7/269: dread d3/da/f47 [0,4194304] 0
2026-03-09T20:47:20.899 INFO:tasks.workunit.client.0.vm07.stdout:7/270: fdatasync d3/da/db/d32/d3e/f40 0
2026-03-09T20:47:20.902 INFO:tasks.workunit.client.0.vm07.stdout:2/298: write d2/f7 [3619722,41454] 0
2026-03-09T20:47:20.907 INFO:tasks.workunit.client.0.vm07.stdout:2/299: dwrite d2/f33 [4194304,4194304] 0
2026-03-09T20:47:20.934 INFO:tasks.workunit.client.0.vm07.stdout:7/271: sync
2026-03-09T20:47:20.935 INFO:tasks.workunit.client.0.vm07.stdout:7/272: fdatasync d3/da/db/d14/f3a 0
2026-03-09T20:47:20.937 INFO:tasks.workunit.client.0.vm07.stdout:7/273: write d3/da/db/d14/d1f/f46 [32176,22146] 0
2026-03-09T20:47:20.940 INFO:tasks.workunit.client.0.vm07.stdout:7/274: dread d3/da/db/d14/f1a [0,4194304] 0
2026-03-09T20:47:20.950 INFO:tasks.workunit.client.0.vm07.stdout:7/275: sync
2026-03-09T20:47:20.954 INFO:tasks.workunit.client.0.vm07.stdout:7/276: creat d3/f61 x:0 0 0
2026-03-09T20:47:20.958 INFO:tasks.workunit.client.0.vm07.stdout:7/277: dwrite d3/da/db/d32/d3e/f40 [0,4194304] 0
2026-03-09T20:47:20.959 INFO:tasks.workunit.client.0.vm07.stdout:7/278: stat d3/c2e 0
2026-03-09T20:47:20.961 INFO:tasks.workunit.client.0.vm07.stdout:7/279: write d3/da/db/d14/f2a [2776451,90123] 0
2026-03-09T20:47:20.968 INFO:tasks.workunit.client.0.vm07.stdout:7/280: mkdir d3/da/db/d14/d43/d62 0
2026-03-09T20:47:20.972 INFO:tasks.workunit.client.0.vm07.stdout:7/281: fsync d3/da/f45 0
2026-03-09T20:47:20.972 INFO:tasks.workunit.client.0.vm07.stdout:7/282: truncate d3/f61 972929 0
2026-03-09T20:47:20.972 INFO:tasks.workunit.client.0.vm07.stdout:7/283: creat d3/da/db/d14/d1f/f63 x:0 0 0
2026-03-09T20:47:20.974 INFO:tasks.workunit.client.0.vm07.stdout:7/284: creat d3/da/db/d32/d3e/d5c/f64 x:0 0 0
2026-03-09T20:47:20.976 INFO:tasks.workunit.client.0.vm07.stdout:7/285: read d3/da/db/f1e [3137821,51813] 0
2026-03-09T20:47:20.977 INFO:tasks.workunit.client.0.vm07.stdout:7/286: chown d3/da/d53/c54 0 1
2026-03-09T20:47:20.977 INFO:tasks.workunit.client.0.vm07.stdout:7/287: stat d3/da/db/f12 0
2026-03-09T20:47:20.977 INFO:tasks.workunit.client.0.vm07.stdout:7/288: truncate d3/da/f38 1012122 0
2026-03-09T20:47:20.981 INFO:tasks.workunit.client.0.vm07.stdout:7/289: dwrite d3/d58/f60 [0,4194304] 0
2026-03-09T20:47:20.986 INFO:tasks.workunit.client.0.vm07.stdout:7/290: dread - d3/da/db/d14/d1f/f37 zero size
2026-03-09T20:47:20.989 INFO:tasks.workunit.client.0.vm07.stdout:7/291: creat d3/da/db/d32/d3e/f65 x:0 0 0
2026-03-09T20:47:20.991 INFO:tasks.workunit.client.0.vm07.stdout:8/201: write d1/fb [3990479,54856] 0
2026-03-09T20:47:20.995 INFO:tasks.workunit.client.0.vm07.stdout:7/292: creat d3/da/db/d14/d1f/d2b/d52/f66 x:0 0 0
2026-03-09T20:47:20.999 INFO:tasks.workunit.client.0.vm07.stdout:8/202: rmdir d1/dc/d16/d26 39
2026-03-09T20:47:20.999 INFO:tasks.workunit.client.0.vm07.stdout:8/203: stat d1/dc/d16/d26/f37 0
2026-03-09T20:47:20.999
INFO:tasks.workunit.client.0.vm07.stdout:8/204: read - d1/f33 zero size 2026-03-09T20:47:20.999 INFO:tasks.workunit.client.0.vm07.stdout:8/205: chown d1/dc/l12 236 1 2026-03-09T20:47:20.999 INFO:tasks.workunit.client.0.vm07.stdout:8/206: read - d1/d3b/f3e zero size 2026-03-09T20:47:21.000 INFO:tasks.workunit.client.0.vm07.stdout:8/207: fsync d1/dc/d16/d26/f36 0 2026-03-09T20:47:21.004 INFO:tasks.workunit.client.0.vm07.stdout:8/208: unlink d1/dc/d16/l23 0 2026-03-09T20:47:21.005 INFO:tasks.workunit.client.0.vm07.stdout:8/209: mknod d1/dc/d14/d2f/c43 0 2026-03-09T20:47:21.007 INFO:tasks.workunit.client.0.vm07.stdout:8/210: mknod d1/dc/d14/c44 0 2026-03-09T20:47:21.013 INFO:tasks.workunit.client.0.vm07.stdout:8/211: dwrite d1/dc/d16/d26/f2d [0,4194304] 0 2026-03-09T20:47:21.014 INFO:tasks.workunit.client.1.vm10.stdout:4/134: truncate d1/f1e 79078 0 2026-03-09T20:47:21.015 INFO:tasks.workunit.client.0.vm07.stdout:8/212: chown d1/dc/d16/d26/f2b 21 1 2026-03-09T20:47:21.024 INFO:tasks.workunit.client.0.vm07.stdout:1/222: getdents d3/d23 0 2026-03-09T20:47:21.025 INFO:tasks.workunit.client.0.vm07.stdout:1/223: read - d3/d23/f37 zero size 2026-03-09T20:47:21.025 INFO:tasks.workunit.client.1.vm10.stdout:4/135: creat d1/d2/f2a x:0 0 0 2026-03-09T20:47:21.026 INFO:tasks.workunit.client.0.vm07.stdout:8/213: stat d1/c1b 0 2026-03-09T20:47:21.037 INFO:tasks.workunit.client.1.vm10.stdout:4/136: getdents d1 0 2026-03-09T20:47:21.038 INFO:tasks.workunit.client.1.vm10.stdout:4/137: stat d1/d8/d1b/c22 0 2026-03-09T20:47:21.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:20 vm10.local ceph-mon[57011]: pgmap v150: 65 pgs: 65 active+clean; 782 MiB data, 3.6 GiB used, 116 GiB / 120 GiB avail; 5.9 MiB/s rd, 90 MiB/s wr, 176 op/s 2026-03-09T20:47:21.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:20 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:21.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 
20:47:20 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:21.038 INFO:tasks.workunit.client.0.vm07.stdout:1/224: mkdir d3/d14/d35/d3e 0 2026-03-09T20:47:21.038 INFO:tasks.workunit.client.0.vm07.stdout:1/225: fsync d3/d14/d35/f38 0 2026-03-09T20:47:21.038 INFO:tasks.workunit.client.0.vm07.stdout:8/214: dwrite d1/fb [0,4194304] 0 2026-03-09T20:47:21.038 INFO:tasks.workunit.client.0.vm07.stdout:8/215: mknod d1/d3b/c45 0 2026-03-09T20:47:21.038 INFO:tasks.workunit.client.0.vm07.stdout:1/226: rename d3/d23/f3c to d3/f3f 0 2026-03-09T20:47:21.040 INFO:tasks.workunit.client.1.vm10.stdout:4/138: mkdir d1/d8/d1c/d2b 0 2026-03-09T20:47:21.045 INFO:tasks.workunit.client.1.vm10.stdout:4/139: mknod d1/d8/c2c 0 2026-03-09T20:47:21.046 INFO:tasks.workunit.client.0.vm07.stdout:1/227: symlink d3/d23/l40 0 2026-03-09T20:47:21.050 INFO:tasks.workunit.client.0.vm07.stdout:1/228: mkdir d3/d14/d41 0 2026-03-09T20:47:21.059 INFO:tasks.workunit.client.1.vm10.stdout:4/140: fdatasync d1/d8/f29 0 2026-03-09T20:47:21.059 INFO:tasks.workunit.client.1.vm10.stdout:4/141: read d1/d2/f7 [6332470,52437] 0 2026-03-09T20:47:21.059 INFO:tasks.workunit.client.1.vm10.stdout:5/161: link d2/d27/d3a/l45 d2/d27/l49 0 2026-03-09T20:47:21.059 INFO:tasks.workunit.client.0.vm07.stdout:8/216: link d1/dc/d16/d31/l41 d1/d3b/l46 0 2026-03-09T20:47:21.059 INFO:tasks.workunit.client.0.vm07.stdout:8/217: chown d1/c5 4487 1 2026-03-09T20:47:21.059 INFO:tasks.workunit.client.0.vm07.stdout:8/218: chown d1/dc/d14/f30 60 1 2026-03-09T20:47:21.065 INFO:tasks.workunit.client.0.vm07.stdout:1/229: rmdir d3/d14/d41 0 2026-03-09T20:47:21.065 INFO:tasks.workunit.client.1.vm10.stdout:4/142: dwrite d1/f26 [0,4194304] 0 2026-03-09T20:47:21.066 INFO:tasks.workunit.client.1.vm10.stdout:5/162: dwrite d2/d27/f34 [0,4194304] 0 2026-03-09T20:47:21.068 INFO:tasks.workunit.client.0.vm07.stdout:1/230: mknod d3/d14/c42 0 2026-03-09T20:47:21.072 
INFO:tasks.workunit.client.1.vm10.stdout:4/143: truncate d1/d8/f29 56539 0 2026-03-09T20:47:21.073 INFO:tasks.workunit.client.1.vm10.stdout:4/144: read - d1/d8/f25 zero size 2026-03-09T20:47:21.073 INFO:tasks.workunit.client.0.vm07.stdout:3/256: write d1/d5/d10/f1a [1667201,54370] 0 2026-03-09T20:47:21.075 INFO:tasks.workunit.client.1.vm10.stdout:4/145: creat d1/d2/f2d x:0 0 0 2026-03-09T20:47:21.075 INFO:tasks.workunit.client.0.vm07.stdout:3/257: symlink d1/d5/d9/d2f/d3d/l57 0 2026-03-09T20:47:21.081 INFO:tasks.workunit.client.1.vm10.stdout:4/146: creat d1/d2/f2e x:0 0 0 2026-03-09T20:47:21.085 INFO:tasks.workunit.client.0.vm07.stdout:6/291: write d8/f12 [5169705,85636] 0 2026-03-09T20:47:21.088 INFO:tasks.workunit.client.0.vm07.stdout:3/258: rmdir d1/d5/d10/d43/d54 39 2026-03-09T20:47:21.090 INFO:tasks.workunit.client.0.vm07.stdout:5/264: getdents d5/df/d13/d55 0 2026-03-09T20:47:21.092 INFO:tasks.workunit.client.1.vm10.stdout:2/191: truncate d5/d18/f1a 4037349 0 2026-03-09T20:47:21.092 INFO:tasks.workunit.client.0.vm07.stdout:3/259: unlink d1/d5/d9/d2f/d3d/c42 0 2026-03-09T20:47:21.093 INFO:tasks.workunit.client.0.vm07.stdout:5/265: creat d5/df/d13/d55/f60 x:0 0 0 2026-03-09T20:47:21.094 INFO:tasks.workunit.client.0.vm07.stdout:5/266: dread - d5/df/d13/d55/f60 zero size 2026-03-09T20:47:21.095 INFO:tasks.workunit.client.0.vm07.stdout:5/267: write d5/df/d13/d55/f5f [398445,49083] 0 2026-03-09T20:47:21.096 INFO:tasks.workunit.client.0.vm07.stdout:6/292: creat d8/d16/f5b x:0 0 0 2026-03-09T20:47:21.097 INFO:tasks.workunit.client.0.vm07.stdout:6/293: chown d8/f46 7886822 1 2026-03-09T20:47:21.098 INFO:tasks.workunit.client.0.vm07.stdout:5/268: rename d5/f5d to d5/d50/f61 0 2026-03-09T20:47:21.109 INFO:tasks.workunit.client.0.vm07.stdout:3/260: creat d1/d5/d9/d11/f58 x:0 0 0 2026-03-09T20:47:21.112 INFO:tasks.workunit.client.0.vm07.stdout:4/217: truncate d2/df/d17/f2a 454729 0 2026-03-09T20:47:21.112 INFO:tasks.workunit.client.0.vm07.stdout:0/308: write d1/d2/dc/f56 
[4528790,95204] 0 2026-03-09T20:47:21.116 INFO:tasks.workunit.client.0.vm07.stdout:0/309: dwrite d1/d1f/d30/f50 [0,4194304] 0 2026-03-09T20:47:21.121 INFO:tasks.workunit.client.1.vm10.stdout:6/162: getdents d3/da 0 2026-03-09T20:47:21.121 INFO:tasks.workunit.client.0.vm07.stdout:9/268: write d4/d11/d2a/f39 [5181109,45224] 0 2026-03-09T20:47:21.124 INFO:tasks.workunit.client.0.vm07.stdout:6/294: mknod d8/d16/d22/c5c 0 2026-03-09T20:47:21.126 INFO:tasks.workunit.client.1.vm10.stdout:6/163: write f1 [2804284,59027] 0 2026-03-09T20:47:21.127 INFO:tasks.workunit.client.0.vm07.stdout:5/269: mkdir d5/df/d62 0 2026-03-09T20:47:21.131 INFO:tasks.workunit.client.0.vm07.stdout:5/270: dwrite d5/d33/f5a [0,4194304] 0 2026-03-09T20:47:21.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:20 vm07.local ceph-mon[49120]: pgmap v150: 65 pgs: 65 active+clean; 782 MiB data, 3.6 GiB used, 116 GiB / 120 GiB avail; 5.9 MiB/s rd, 90 MiB/s wr, 176 op/s 2026-03-09T20:47:21.147 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:20 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:21.147 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:20 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:21.147 INFO:tasks.workunit.client.1.vm10.stdout:0/173: dwrite d2/d9/da/de/d1a/f21 [0,4194304] 0 2026-03-09T20:47:21.147 INFO:tasks.workunit.client.1.vm10.stdout:2/192: sync 2026-03-09T20:47:21.152 INFO:tasks.workunit.client.1.vm10.stdout:2/193: dread d5/d2b/f36 [0,4194304] 0 2026-03-09T20:47:21.164 INFO:tasks.workunit.client.1.vm10.stdout:0/174: dwrite d2/d9/da/de/d1a/f31 [0,4194304] 0 2026-03-09T20:47:21.164 INFO:tasks.workunit.client.1.vm10.stdout:6/164: write f2 [657744,113501] 0 2026-03-09T20:47:21.165 INFO:tasks.workunit.client.0.vm07.stdout:9/269: sync 2026-03-09T20:47:21.165 INFO:tasks.workunit.client.0.vm07.stdout:6/295: sync 2026-03-09T20:47:21.168 
INFO:tasks.workunit.client.0.vm07.stdout:4/218: dwrite d2/d1f/f26 [4194304,4194304] 0 2026-03-09T20:47:21.168 INFO:tasks.workunit.client.1.vm10.stdout:6/165: fdatasync d3/da/d11/d26/f2a 0 2026-03-09T20:47:21.169 INFO:tasks.workunit.client.0.vm07.stdout:4/219: readlink d2/df/l18 0 2026-03-09T20:47:21.169 INFO:tasks.workunit.client.1.vm10.stdout:6/166: fsync d3/d12/f16 0 2026-03-09T20:47:21.173 INFO:tasks.workunit.client.1.vm10.stdout:6/167: write d3/da/d11/d26/f2a [1662658,27360] 0 2026-03-09T20:47:21.179 INFO:tasks.workunit.client.0.vm07.stdout:0/310: write d1/d2/ff [1920801,99499] 0 2026-03-09T20:47:21.181 INFO:tasks.workunit.client.0.vm07.stdout:0/311: dread d1/d1f/d20/f21 [0,4194304] 0 2026-03-09T20:47:21.182 INFO:tasks.workunit.client.1.vm10.stdout:2/194: chown d5/d18/c34 366055039 1 2026-03-09T20:47:21.182 INFO:tasks.workunit.client.1.vm10.stdout:6/168: dwrite f0 [0,4194304] 0 2026-03-09T20:47:21.186 INFO:tasks.workunit.client.0.vm07.stdout:3/261: dread d1/d5/d9/fe [0,4194304] 0 2026-03-09T20:47:21.189 INFO:tasks.workunit.client.0.vm07.stdout:3/262: write d1/f36 [4042036,23289] 0 2026-03-09T20:47:21.191 INFO:tasks.workunit.client.0.vm07.stdout:5/271: creat d5/d33/d3b/f63 x:0 0 0 2026-03-09T20:47:21.191 INFO:tasks.workunit.client.0.vm07.stdout:5/272: chown d5/df/d13/f38 397 1 2026-03-09T20:47:21.195 INFO:tasks.workunit.client.1.vm10.stdout:0/175: rename d2/d9/da/de/d1a/f31 to d2/db/f38 0 2026-03-09T20:47:21.196 INFO:tasks.workunit.client.0.vm07.stdout:9/270: write d4/d11/f13 [1755148,61822] 0 2026-03-09T20:47:21.199 INFO:tasks.workunit.client.1.vm10.stdout:7/174: dread db/d21/d23/f34 [0,4194304] 0 2026-03-09T20:47:21.211 INFO:tasks.workunit.client.1.vm10.stdout:2/195: write d5/fe [2367893,14410] 0 2026-03-09T20:47:21.222 INFO:tasks.workunit.client.0.vm07.stdout:3/263: rmdir d1/d35 39 2026-03-09T20:47:21.223 INFO:tasks.workunit.client.0.vm07.stdout:0/312: dread d1/d2/d33/d35/f45 [0,4194304] 0 2026-03-09T20:47:21.224 
INFO:tasks.workunit.client.0.vm07.stdout:0/313: write d1/d2/d4b/f61 [605547,59534] 0 2026-03-09T20:47:21.226 INFO:tasks.workunit.client.1.vm10.stdout:0/176: creat d2/f39 x:0 0 0 2026-03-09T20:47:21.226 INFO:tasks.workunit.client.0.vm07.stdout:5/273: creat d5/df/d13/d30/f64 x:0 0 0 2026-03-09T20:47:21.226 INFO:tasks.workunit.client.0.vm07.stdout:5/274: chown d5/d33/f5a 190518418 1 2026-03-09T20:47:21.227 INFO:tasks.workunit.client.0.vm07.stdout:5/275: chown d5/d33/c48 0 1 2026-03-09T20:47:21.232 INFO:tasks.workunit.client.1.vm10.stdout:7/175: mknod db/d28/c3a 0 2026-03-09T20:47:21.233 INFO:tasks.workunit.client.1.vm10.stdout:7/176: readlink db/d28/d2b/d36/l17 0 2026-03-09T20:47:21.233 INFO:tasks.workunit.client.1.vm10.stdout:1/217: dread d2/f17 [0,4194304] 0 2026-03-09T20:47:21.234 INFO:tasks.workunit.client.0.vm07.stdout:6/296: mkdir d8/d5d 0 2026-03-09T20:47:21.236 INFO:tasks.workunit.client.0.vm07.stdout:9/271: chown d4/d16/c58 258517 1 2026-03-09T20:47:21.238 INFO:tasks.workunit.client.1.vm10.stdout:2/196: creat d5/d2b/d32/f3b x:0 0 0 2026-03-09T20:47:21.238 INFO:tasks.workunit.client.1.vm10.stdout:7/177: dwrite db/d21/d23/f14 [0,4194304] 0 2026-03-09T20:47:21.240 INFO:tasks.workunit.client.0.vm07.stdout:5/276: mknod d5/d33/d39/c65 0 2026-03-09T20:47:21.242 INFO:tasks.workunit.client.1.vm10.stdout:2/197: chown d5/d18/d27/f29 70045570 1 2026-03-09T20:47:21.247 INFO:tasks.workunit.client.1.vm10.stdout:2/198: chown d5/l17 352284247 1 2026-03-09T20:47:21.247 INFO:tasks.workunit.client.0.vm07.stdout:5/277: dread d5/df/d13/f38 [0,4194304] 0 2026-03-09T20:47:21.247 INFO:tasks.workunit.client.0.vm07.stdout:5/278: dread - d5/df/d13/f41 zero size 2026-03-09T20:47:21.251 INFO:tasks.workunit.client.1.vm10.stdout:7/178: sync 2026-03-09T20:47:21.254 INFO:tasks.workunit.client.1.vm10.stdout:0/177: sync 2026-03-09T20:47:21.254 INFO:tasks.workunit.client.1.vm10.stdout:1/218: sync 2026-03-09T20:47:21.254 INFO:tasks.workunit.client.1.vm10.stdout:7/179: dread - db/d1f/f37 zero size 
2026-03-09T20:47:21.256 INFO:tasks.workunit.client.1.vm10.stdout:7/180: chown db/d21/d23/f29 0 1 2026-03-09T20:47:21.258 INFO:tasks.workunit.client.1.vm10.stdout:2/199: dread d5/d18/f24 [0,4194304] 0 2026-03-09T20:47:21.258 INFO:tasks.workunit.client.1.vm10.stdout:3/131: dread dc/dd/f13 [0,4194304] 0 2026-03-09T20:47:21.261 INFO:tasks.workunit.client.0.vm07.stdout:7/293: fsync d3/d58/f60 0 2026-03-09T20:47:21.262 INFO:tasks.workunit.client.0.vm07.stdout:7/294: readlink d3/da/db/l3c 0 2026-03-09T20:47:21.262 INFO:tasks.workunit.client.0.vm07.stdout:7/295: dread - d3/da/db/d32/d3e/f65 zero size 2026-03-09T20:47:21.269 INFO:tasks.workunit.client.1.vm10.stdout:2/200: dread d5/fe [0,4194304] 0 2026-03-09T20:47:21.269 INFO:tasks.workunit.client.0.vm07.stdout:5/279: symlink d5/d19/l66 0 2026-03-09T20:47:21.269 INFO:tasks.workunit.client.1.vm10.stdout:0/178: creat d2/d9/da/d35/f3a x:0 0 0 2026-03-09T20:47:21.270 INFO:tasks.workunit.client.1.vm10.stdout:7/181: mkdir db/d28/d2b/d36/d3b 0 2026-03-09T20:47:21.271 INFO:tasks.workunit.client.1.vm10.stdout:7/182: readlink db/l1b 0 2026-03-09T20:47:21.271 INFO:tasks.workunit.client.1.vm10.stdout:7/183: fdatasync f3 0 2026-03-09T20:47:21.272 INFO:tasks.workunit.client.1.vm10.stdout:9/256: write d2/f6 [656564,72957] 0 2026-03-09T20:47:21.274 INFO:tasks.workunit.client.0.vm07.stdout:5/280: mknod d5/df/d13/d4f/c67 0 2026-03-09T20:47:21.275 INFO:tasks.workunit.client.1.vm10.stdout:0/179: write d2/f39 [808250,104920] 0 2026-03-09T20:47:21.276 INFO:tasks.workunit.client.0.vm07.stdout:5/281: dread d5/f25 [0,4194304] 0 2026-03-09T20:47:21.284 INFO:tasks.workunit.client.0.vm07.stdout:5/282: symlink d5/d33/l68 0 2026-03-09T20:47:21.284 INFO:tasks.workunit.client.0.vm07.stdout:5/283: chown d5/df/d13/d3e/c49 52886 1 2026-03-09T20:47:21.286 INFO:tasks.workunit.client.1.vm10.stdout:3/132: mkdir dc/d14/d26/d29 0 2026-03-09T20:47:21.286 INFO:tasks.workunit.client.0.vm07.stdout:5/284: dread d5/df/f2b [0,4194304] 0 2026-03-09T20:47:21.291 
INFO:tasks.workunit.client.0.vm07.stdout:5/285: dwrite d5/df/f4a [0,4194304] 0 2026-03-09T20:47:21.294 INFO:tasks.workunit.client.0.vm07.stdout:5/286: fsync d5/df/d13/f41 0 2026-03-09T20:47:21.296 INFO:tasks.workunit.client.0.vm07.stdout:4/220: dread d2/df/d17/f37 [0,4194304] 0 2026-03-09T20:47:21.297 INFO:tasks.workunit.client.0.vm07.stdout:0/314: dread d1/f1a [0,4194304] 0 2026-03-09T20:47:21.309 INFO:tasks.workunit.client.0.vm07.stdout:7/296: creat d3/f67 x:0 0 0 2026-03-09T20:47:21.310 INFO:tasks.workunit.client.1.vm10.stdout:8/224: fsync d0/f14 0 2026-03-09T20:47:21.310 INFO:tasks.workunit.client.1.vm10.stdout:0/180: fsync d2/db/f13 0 2026-03-09T20:47:21.312 INFO:tasks.workunit.client.0.vm07.stdout:2/300: truncate d2/f10 218515 0 2026-03-09T20:47:21.313 INFO:tasks.workunit.client.1.vm10.stdout:7/184: unlink db/d28/c3a 0 2026-03-09T20:47:21.314 INFO:tasks.workunit.client.0.vm07.stdout:4/221: rename d2/d1f/d2d/f2f to d2/d1f/f3c 0 2026-03-09T20:47:21.314 INFO:tasks.workunit.client.1.vm10.stdout:7/185: chown db/cc 11033 1 2026-03-09T20:47:21.314 INFO:tasks.workunit.client.1.vm10.stdout:8/225: write d0/d22/d25/d2e/f33 [2392473,116936] 0 2026-03-09T20:47:21.316 INFO:tasks.workunit.client.0.vm07.stdout:0/315: symlink d1/d2/d33/l69 0 2026-03-09T20:47:21.318 INFO:tasks.workunit.client.1.vm10.stdout:8/226: readlink d0/lb 0 2026-03-09T20:47:21.319 INFO:tasks.workunit.client.0.vm07.stdout:5/287: mkdir d5/d69 0 2026-03-09T20:47:21.324 INFO:tasks.workunit.client.1.vm10.stdout:8/227: fsync d0/d22/d25/f37 0 2026-03-09T20:47:21.325 INFO:tasks.workunit.client.0.vm07.stdout:0/316: dread d1/f2f [0,4194304] 0 2026-03-09T20:47:21.329 INFO:tasks.workunit.client.1.vm10.stdout:9/257: link d2/d3/f7 d2/d28/d47/d50/f5b 0 2026-03-09T20:47:21.336 INFO:tasks.workunit.client.0.vm07.stdout:2/301: mkdir d2/db/d28/d57 0 2026-03-09T20:47:21.340 INFO:tasks.workunit.client.1.vm10.stdout:7/186: write db/d21/d23/f34 [2420245,15276] 0 2026-03-09T20:47:21.343 
INFO:tasks.workunit.client.1.vm10.stdout:3/133: mkdir dc/d14/d26/d29/d2a 0 2026-03-09T20:47:21.344 INFO:tasks.workunit.client.1.vm10.stdout:0/181: symlink d2/d9/l3b 0 2026-03-09T20:47:21.345 INFO:tasks.workunit.client.0.vm07.stdout:4/222: dwrite d2/f3 [0,4194304] 0 2026-03-09T20:47:21.346 INFO:tasks.workunit.client.1.vm10.stdout:0/182: write d2/f39 [48879,26862] 0 2026-03-09T20:47:21.351 INFO:tasks.workunit.client.1.vm10.stdout:8/228: symlink d0/d22/l4e 0 2026-03-09T20:47:21.355 INFO:tasks.workunit.client.1.vm10.stdout:7/187: dread db/d21/d23/f29 [0,4194304] 0 2026-03-09T20:47:21.356 INFO:tasks.workunit.client.1.vm10.stdout:7/188: stat db/d21/d23/f1e 0 2026-03-09T20:47:21.356 INFO:tasks.workunit.client.0.vm07.stdout:2/302: dread d2/f17 [0,4194304] 0 2026-03-09T20:47:21.361 INFO:tasks.workunit.client.0.vm07.stdout:5/288: mknod d5/df/d13/d55/c6a 0 2026-03-09T20:47:21.362 INFO:tasks.workunit.client.1.vm10.stdout:9/258: mknod d2/d3/de/c5c 0 2026-03-09T20:47:21.365 INFO:tasks.workunit.client.0.vm07.stdout:8/219: write d1/f13 [2455830,130253] 0 2026-03-09T20:47:21.367 INFO:tasks.workunit.client.0.vm07.stdout:8/220: chown d1/c3d 1 1 2026-03-09T20:47:21.367 INFO:tasks.workunit.client.0.vm07.stdout:8/221: chown d1/dc/l12 15 1 2026-03-09T20:47:21.367 INFO:tasks.workunit.client.0.vm07.stdout:8/222: write d1/dc/d16/d26/f2d [2663367,15211] 0 2026-03-09T20:47:21.369 INFO:tasks.workunit.client.0.vm07.stdout:1/231: truncate d3/f9 360186 0 2026-03-09T20:47:21.369 INFO:tasks.workunit.client.0.vm07.stdout:1/232: truncate d3/f3f 268763 0 2026-03-09T20:47:21.371 INFO:tasks.workunit.client.1.vm10.stdout:5/163: write d2/d27/f34 [5035780,11100] 0 2026-03-09T20:47:21.371 INFO:tasks.workunit.client.1.vm10.stdout:9/259: dread d2/d33/f3f [0,4194304] 0 2026-03-09T20:47:21.372 INFO:tasks.workunit.client.1.vm10.stdout:3/134: symlink dc/d14/d26/l2b 0 2026-03-09T20:47:21.375 INFO:tasks.workunit.client.1.vm10.stdout:3/135: chown l8 47997230 1 2026-03-09T20:47:21.379 
INFO:tasks.workunit.client.1.vm10.stdout:8/229: mknod d0/d22/d2f/d38/c4f 0 2026-03-09T20:47:21.381 INFO:tasks.workunit.client.0.vm07.stdout:4/223: unlink d2/d1f/d2d/f31 0 2026-03-09T20:47:21.383 INFO:tasks.workunit.client.0.vm07.stdout:2/303: creat d2/db/d28/f58 x:0 0 0 2026-03-09T20:47:21.385 INFO:tasks.workunit.client.0.vm07.stdout:5/289: symlink d5/df/d13/d30/l6b 0 2026-03-09T20:47:21.385 INFO:tasks.workunit.client.0.vm07.stdout:5/290: write d5/d33/d3b/f63 [1006663,16893] 0 2026-03-09T20:47:21.389 INFO:tasks.workunit.client.0.vm07.stdout:5/291: dread d5/df/d13/f5b [0,4194304] 0 2026-03-09T20:47:21.392 INFO:tasks.workunit.client.1.vm10.stdout:5/164: symlink d2/d27/d37/l4a 0 2026-03-09T20:47:21.393 INFO:tasks.workunit.client.0.vm07.stdout:1/233: mkdir d3/d14/d35/d43 0 2026-03-09T20:47:21.394 INFO:tasks.workunit.client.1.vm10.stdout:5/165: dread - d2/f21 zero size 2026-03-09T20:47:21.397 INFO:tasks.workunit.client.1.vm10.stdout:6/169: truncate f2 639247 0 2026-03-09T20:47:21.397 INFO:tasks.workunit.client.0.vm07.stdout:3/264: write d1/d5/d10/f30 [1209166,72226] 0 2026-03-09T20:47:21.401 INFO:tasks.workunit.client.0.vm07.stdout:4/224: mknod d2/df/d17/c3d 0 2026-03-09T20:47:21.402 INFO:tasks.workunit.client.0.vm07.stdout:6/297: write d8/d16/d22/d24/d2b/f53 [1320001,54946] 0 2026-03-09T20:47:21.402 INFO:tasks.workunit.client.1.vm10.stdout:6/170: dwrite d3/da/d11/f1d [0,4194304] 0 2026-03-09T20:47:21.403 INFO:tasks.workunit.client.0.vm07.stdout:6/298: write d8/d50/f55 [698917,105779] 0 2026-03-09T20:47:21.404 INFO:tasks.workunit.client.0.vm07.stdout:2/304: read d2/f40 [158240,41059] 0 2026-03-09T20:47:21.407 INFO:tasks.workunit.client.0.vm07.stdout:9/272: dwrite d4/d8/dc/d15/f18 [0,4194304] 0 2026-03-09T20:47:21.410 INFO:tasks.workunit.client.0.vm07.stdout:2/305: dwrite d2/db/d1c/f22 [0,4194304] 0 2026-03-09T20:47:21.410 INFO:tasks.workunit.client.1.vm10.stdout:8/230: read d0/d22/d2f/d3d/f49 [7410551,72147] 0 2026-03-09T20:47:21.419 
INFO:tasks.workunit.client.1.vm10.stdout:1/219: dwrite d2/da/f1e [0,4194304] 0 2026-03-09T20:47:21.425 INFO:tasks.workunit.client.1.vm10.stdout:9/260: link d2/d12/f26 d2/d3/de/f5d 0 2026-03-09T20:47:21.449 INFO:tasks.workunit.client.1.vm10.stdout:4/147: truncate d1/d2/d3/f18 456133 0 2026-03-09T20:47:21.453 INFO:tasks.workunit.client.0.vm07.stdout:7/297: dwrite d3/da/db/f27 [0,4194304] 0 2026-03-09T20:47:21.454 INFO:tasks.workunit.client.0.vm07.stdout:7/298: write d3/f4f [1798961,30669] 0 2026-03-09T20:47:21.475 INFO:tasks.workunit.client.0.vm07.stdout:0/317: dwrite d1/f2f [0,4194304] 0 2026-03-09T20:47:21.478 INFO:tasks.workunit.client.1.vm10.stdout:9/261: dread d2/d28/f51 [0,4194304] 0 2026-03-09T20:47:21.478 INFO:tasks.workunit.client.0.vm07.stdout:6/299: fsync d8/d16/d22/d24/d2b/f5a 0 2026-03-09T20:47:21.478 INFO:tasks.workunit.client.1.vm10.stdout:9/262: fsync d2/f46 0 2026-03-09T20:47:21.490 INFO:tasks.workunit.client.1.vm10.stdout:0/183: symlink d2/d9/da/de/d1a/d25/d34/l3c 0 2026-03-09T20:47:21.492 INFO:tasks.workunit.client.0.vm07.stdout:2/306: chown d2/d11/l24 57065 1 2026-03-09T20:47:21.492 INFO:tasks.workunit.client.1.vm10.stdout:8/231: truncate d0/d22/f27 201635 0 2026-03-09T20:47:21.492 INFO:tasks.workunit.client.0.vm07.stdout:5/292: mkdir d5/df/d13/d6c 0 2026-03-09T20:47:21.493 INFO:tasks.workunit.client.0.vm07.stdout:5/293: chown d5/df/d13/l57 106 1 2026-03-09T20:47:21.494 INFO:tasks.workunit.client.0.vm07.stdout:1/234: mknod d3/d14/d35/d3e/c44 0 2026-03-09T20:47:21.494 INFO:tasks.workunit.client.0.vm07.stdout:1/235: dread - d3/d23/f39 zero size 2026-03-09T20:47:21.495 INFO:tasks.workunit.client.0.vm07.stdout:1/236: write d3/d14/d35/f20 [1823211,72084] 0 2026-03-09T20:47:21.495 INFO:tasks.workunit.client.0.vm07.stdout:1/237: write d3/f10 [1374517,7105] 0 2026-03-09T20:47:21.496 INFO:tasks.workunit.client.1.vm10.stdout:2/201: dwrite d5/f7 [0,4194304] 0 2026-03-09T20:47:21.498 INFO:tasks.workunit.client.1.vm10.stdout:5/166: mkdir d2/d39/d4b 0 
2026-03-09T20:47:21.498 INFO:tasks.workunit.client.1.vm10.stdout:4/148: rename d1/d2/d3/c20 to d1/d8/d1c/d2b/c2f 0 2026-03-09T20:47:21.499 INFO:tasks.workunit.client.1.vm10.stdout:5/167: stat d2/d1b/l29 0 2026-03-09T20:47:21.502 INFO:tasks.workunit.client.1.vm10.stdout:5/168: write d2/d27/f2a [815262,11161] 0 2026-03-09T20:47:21.507 INFO:tasks.workunit.client.1.vm10.stdout:9/263: mknod d2/d33/c5e 0 2026-03-09T20:47:21.508 INFO:tasks.workunit.client.0.vm07.stdout:6/300: mkdir d8/d50/d5e 0 2026-03-09T20:47:21.508 INFO:tasks.workunit.client.1.vm10.stdout:9/264: chown d2/d3/de/f5d 0 1 2026-03-09T20:47:21.508 INFO:tasks.workunit.client.1.vm10.stdout:3/136: link lb dc/d14/d20/d21/l2c 0 2026-03-09T20:47:21.509 INFO:tasks.workunit.client.1.vm10.stdout:6/171: mkdir d3/d30/d33 0 2026-03-09T20:47:21.510 INFO:tasks.workunit.client.0.vm07.stdout:5/294: symlink d5/df/d13/d3e/l6d 0 2026-03-09T20:47:21.513 INFO:tasks.workunit.client.0.vm07.stdout:5/295: dwrite d5/d19/f43 [0,4194304] 0 2026-03-09T20:47:21.514 INFO:tasks.workunit.client.1.vm10.stdout:9/265: dread d2/d3/de/d35/f38 [0,4194304] 0 2026-03-09T20:47:21.516 INFO:tasks.workunit.client.1.vm10.stdout:8/232: chown d0/d22/d2c/c3c 3256425 1 2026-03-09T20:47:21.516 INFO:tasks.workunit.client.0.vm07.stdout:1/238: chown d3/cd 113232139 1 2026-03-09T20:47:21.516 INFO:tasks.workunit.client.1.vm10.stdout:8/233: chown d0/f14 6607607 1 2026-03-09T20:47:21.517 INFO:tasks.workunit.client.0.vm07.stdout:4/225: link d2/f7 d2/df/d17/f3e 0 2026-03-09T20:47:21.518 INFO:tasks.workunit.client.0.vm07.stdout:5/296: dread d5/d19/f20 [0,4194304] 0 2026-03-09T20:47:21.519 INFO:tasks.workunit.client.1.vm10.stdout:5/169: rmdir d2/d39 39 2026-03-09T20:47:21.519 INFO:tasks.workunit.client.0.vm07.stdout:2/307: symlink d2/l59 0 2026-03-09T20:47:21.521 INFO:tasks.workunit.client.0.vm07.stdout:1/239: creat d3/d23/f45 x:0 0 0 2026-03-09T20:47:21.524 INFO:tasks.workunit.client.1.vm10.stdout:6/172: symlink d3/d12/l34 0 2026-03-09T20:47:21.524 
INFO:tasks.workunit.client.0.vm07.stdout:5/297: dread d5/df/f2b [0,4194304] 0 2026-03-09T20:47:21.525 INFO:tasks.workunit.client.1.vm10.stdout:3/137: symlink dc/d14/d26/d29/d2a/l2d 0 2026-03-09T20:47:21.525 INFO:tasks.workunit.client.1.vm10.stdout:9/266: dwrite d2/d12/f31 [0,4194304] 0 2026-03-09T20:47:21.525 INFO:tasks.workunit.client.0.vm07.stdout:1/240: mknod d3/d14/d35/d3e/c46 0 2026-03-09T20:47:21.526 INFO:tasks.workunit.client.1.vm10.stdout:5/170: rename d2/f16 to d2/d27/d37/d46/f4c 0 2026-03-09T20:47:21.527 INFO:tasks.workunit.client.1.vm10.stdout:4/149: dread d1/d2/f7 [4194304,4194304] 0 2026-03-09T20:47:21.527 INFO:tasks.workunit.client.0.vm07.stdout:6/301: creat d8/f5f x:0 0 0 2026-03-09T20:47:21.528 INFO:tasks.workunit.client.0.vm07.stdout:5/298: write d5/d50/f61 [109731,24176] 0 2026-03-09T20:47:21.529 INFO:tasks.workunit.client.0.vm07.stdout:2/308: creat d2/d11/d56/f5a x:0 0 0 2026-03-09T20:47:21.531 INFO:tasks.workunit.client.0.vm07.stdout:5/299: creat d5/d33/d39/f6e x:0 0 0 2026-03-09T20:47:21.533 INFO:tasks.workunit.client.0.vm07.stdout:1/241: rmdir d3/d14/d35/d43 0 2026-03-09T20:47:21.533 INFO:tasks.workunit.client.0.vm07.stdout:1/242: read - d3/d23/f39 zero size 2026-03-09T20:47:21.534 INFO:tasks.workunit.client.0.vm07.stdout:5/300: mknod d5/c6f 0 2026-03-09T20:47:21.534 INFO:tasks.workunit.client.0.vm07.stdout:1/243: rmdir d3/d14 39 2026-03-09T20:47:21.535 INFO:tasks.workunit.client.1.vm10.stdout:1/220: getdents d2/da 0 2026-03-09T20:47:21.535 INFO:tasks.workunit.client.1.vm10.stdout:8/234: truncate d0/f13 960458 0 2026-03-09T20:47:21.535 INFO:tasks.workunit.client.1.vm10.stdout:3/138: mkdir dc/d14/d20/d2e 0 2026-03-09T20:47:21.535 INFO:tasks.workunit.client.1.vm10.stdout:4/150: mkdir d1/d8/d1b/d30 0 2026-03-09T20:47:21.537 INFO:tasks.workunit.client.0.vm07.stdout:1/244: dwrite d3/d23/f37 [0,4194304] 0 2026-03-09T20:47:21.537 INFO:tasks.workunit.client.0.vm07.stdout:7/299: sync 2026-03-09T20:47:21.542 
INFO:tasks.workunit.client.1.vm10.stdout:2/202: sync 2026-03-09T20:47:21.542 INFO:tasks.workunit.client.1.vm10.stdout:2/203: read d5/d18/f1a [1257687,97711] 0 2026-03-09T20:47:21.542 INFO:tasks.workunit.client.1.vm10.stdout:2/204: write d5/d18/d27/f2a [159703,1312] 0 2026-03-09T20:47:21.543 INFO:tasks.workunit.client.0.vm07.stdout:7/300: dread d3/da/db/f12 [0,4194304] 0 2026-03-09T20:47:21.545 INFO:tasks.workunit.client.1.vm10.stdout:5/171: symlink d2/l4d 0 2026-03-09T20:47:21.545 INFO:tasks.workunit.client.0.vm07.stdout:5/301: chown d5/le 942 1 2026-03-09T20:47:21.549 INFO:tasks.workunit.client.0.vm07.stdout:0/318: sync 2026-03-09T20:47:21.549 INFO:tasks.workunit.client.0.vm07.stdout:2/309: sync 2026-03-09T20:47:21.550 INFO:tasks.workunit.client.1.vm10.stdout:4/151: creat d1/d2/d3/f31 x:0 0 0 2026-03-09T20:47:21.550 INFO:tasks.workunit.client.1.vm10.stdout:6/173: sync 2026-03-09T20:47:21.550 INFO:tasks.workunit.client.1.vm10.stdout:8/235: sync 2026-03-09T20:47:21.550 INFO:tasks.workunit.client.1.vm10.stdout:5/172: write d2/f40 [595246,3988] 0 2026-03-09T20:47:21.551 INFO:tasks.workunit.client.1.vm10.stdout:4/152: dread - d1/d8/d1b/f24 zero size 2026-03-09T20:47:21.551 INFO:tasks.workunit.client.1.vm10.stdout:6/174: chown d3/d12/d24/f27 1055 1 2026-03-09T20:47:21.552 INFO:tasks.workunit.client.1.vm10.stdout:8/236: read d0/f6 [218082,32815] 0 2026-03-09T20:47:21.552 INFO:tasks.workunit.client.0.vm07.stdout:5/302: dwrite d5/f25 [0,4194304] 0 2026-03-09T20:47:21.556 INFO:tasks.workunit.client.0.vm07.stdout:0/319: dread d1/f48 [0,4194304] 0 2026-03-09T20:47:21.562 INFO:tasks.workunit.client.0.vm07.stdout:5/303: dread d5/d33/f5a [0,4194304] 0 2026-03-09T20:47:21.565 INFO:tasks.workunit.client.1.vm10.stdout:8/237: dwrite d0/d22/d2f/d38/f43 [0,4194304] 0 2026-03-09T20:47:21.566 INFO:tasks.workunit.client.1.vm10.stdout:8/238: stat d0/d22/f29 0 2026-03-09T20:47:21.567 INFO:tasks.workunit.client.1.vm10.stdout:3/139: rename dc/dd/c1e to dc/d14/d20/d21/c2f 0 
2026-03-09T20:47:21.576 INFO:tasks.workunit.client.0.vm07.stdout:1/245: chown d3/d14/d35/d3e/c44 14423 1 2026-03-09T20:47:21.577 INFO:tasks.workunit.client.0.vm07.stdout:1/246: chown d3/d23/l2f 261185 1 2026-03-09T20:47:21.579 INFO:tasks.workunit.client.1.vm10.stdout:9/267: dread d2/d3/fa [0,4194304] 0 2026-03-09T20:47:21.587 INFO:tasks.workunit.client.0.vm07.stdout:2/310: dread d2/db/d1c/f45 [0,4194304] 0 2026-03-09T20:47:21.594 INFO:tasks.workunit.client.1.vm10.stdout:5/173: fdatasync d2/f8 0 2026-03-09T20:47:21.594 INFO:tasks.workunit.client.1.vm10.stdout:4/153: chown d1/d8/c10 183 1 2026-03-09T20:47:21.594 INFO:tasks.workunit.client.1.vm10.stdout:4/154: chown d1/d2/d3 1055642499 1 2026-03-09T20:47:21.600 INFO:tasks.workunit.client.0.vm07.stdout:5/304: dwrite d5/d19/f20 [4194304,4194304] 0 2026-03-09T20:47:21.600 INFO:tasks.workunit.client.1.vm10.stdout:4/155: read d1/d8/d1c/f1f [268326,83408] 0 2026-03-09T20:47:21.606 INFO:tasks.workunit.client.1.vm10.stdout:4/156: write d1/d2/f2d [397267,31272] 0 2026-03-09T20:47:21.608 INFO:tasks.workunit.client.0.vm07.stdout:1/247: write d3/d14/d35/f32 [165688,59453] 0 2026-03-09T20:47:21.608 INFO:tasks.workunit.client.0.vm07.stdout:1/248: chown d3/d23/c3d 375 1 2026-03-09T20:47:21.608 INFO:tasks.workunit.client.0.vm07.stdout:1/249: chown d3/f10 39 1 2026-03-09T20:47:21.613 INFO:tasks.workunit.client.1.vm10.stdout:3/140: creat dc/d14/d26/d29/f30 x:0 0 0 2026-03-09T20:47:21.614 INFO:tasks.workunit.client.0.vm07.stdout:1/250: mkdir d3/d23/d47 0 2026-03-09T20:47:21.614 INFO:tasks.workunit.client.0.vm07.stdout:1/251: stat d3/d14 0 2026-03-09T20:47:21.625 INFO:tasks.workunit.client.1.vm10.stdout:2/205: mknod d5/d18/c3c 0 2026-03-09T20:47:21.625 INFO:tasks.workunit.client.1.vm10.stdout:6/175: creat d3/d30/d33/f35 x:0 0 0 2026-03-09T20:47:21.625 INFO:tasks.workunit.client.1.vm10.stdout:6/176: dread - d3/d12/f1e zero size 2026-03-09T20:47:21.625 INFO:tasks.workunit.client.1.vm10.stdout:2/206: chown d5/f15 8932 1 
2026-03-09T20:47:21.625 INFO:tasks.workunit.client.1.vm10.stdout:8/239: symlink d0/d22/d25/d2e/d41/d47/l50 0 2026-03-09T20:47:21.625 INFO:tasks.workunit.client.0.vm07.stdout:1/252: dread - d3/f2b zero size 2026-03-09T20:47:21.625 INFO:tasks.workunit.client.0.vm07.stdout:1/253: mknod d3/d14/c48 0 2026-03-09T20:47:21.625 INFO:tasks.workunit.client.0.vm07.stdout:1/254: stat d3/d14/d35/d3e/c44 0 2026-03-09T20:47:21.625 INFO:tasks.workunit.client.0.vm07.stdout:1/255: dread - d3/d14/d35/f38 zero size 2026-03-09T20:47:21.625 INFO:tasks.workunit.client.0.vm07.stdout:1/256: creat d3/d23/f49 x:0 0 0 2026-03-09T20:47:21.625 INFO:tasks.workunit.client.0.vm07.stdout:1/257: dwrite d3/d23/f49 [0,4194304] 0 2026-03-09T20:47:21.626 INFO:tasks.workunit.client.1.vm10.stdout:2/207: chown d5/d18/f24 15 1 2026-03-09T20:47:21.626 INFO:tasks.workunit.client.0.vm07.stdout:1/258: fdatasync d3/f3f 0 2026-03-09T20:47:21.627 INFO:tasks.workunit.client.0.vm07.stdout:1/259: write d3/d23/f49 [2337763,80192] 0 2026-03-09T20:47:21.632 INFO:tasks.workunit.client.1.vm10.stdout:6/177: mkdir d3/d12/d36 0 2026-03-09T20:47:21.637 INFO:tasks.workunit.client.0.vm07.stdout:1/260: creat d3/d14/d35/d3e/f4a x:0 0 0 2026-03-09T20:47:21.640 INFO:tasks.workunit.client.1.vm10.stdout:5/174: creat d2/d39/d4b/f4e x:0 0 0 2026-03-09T20:47:21.649 INFO:tasks.workunit.client.1.vm10.stdout:3/141: dwrite dc/dd/f18 [0,4194304] 0 2026-03-09T20:47:21.649 INFO:tasks.workunit.client.1.vm10.stdout:6/178: creat d3/d30/d33/f37 x:0 0 0 2026-03-09T20:47:21.650 INFO:tasks.workunit.client.1.vm10.stdout:3/142: creat dc/d14/d26/f31 x:0 0 0 2026-03-09T20:47:21.653 INFO:tasks.workunit.client.1.vm10.stdout:3/143: creat dc/d14/d20/d2e/f32 x:0 0 0 2026-03-09T20:47:21.653 INFO:tasks.workunit.client.1.vm10.stdout:6/179: mknod d3/d30/c38 0 2026-03-09T20:47:21.654 INFO:tasks.workunit.client.1.vm10.stdout:2/208: link d5/c25 d5/d18/d27/d28/c3d 0 2026-03-09T20:47:21.658 INFO:tasks.workunit.client.1.vm10.stdout:6/180: write d3/da/d11/f1d 
[2358329,78479] 0 2026-03-09T20:47:21.669 INFO:tasks.workunit.client.1.vm10.stdout:6/181: mkdir d3/d12/d24/d39 0 2026-03-09T20:47:21.669 INFO:tasks.workunit.client.1.vm10.stdout:6/182: creat d3/d30/d33/f3a x:0 0 0 2026-03-09T20:47:21.669 INFO:tasks.workunit.client.1.vm10.stdout:6/183: mknod d3/da/d11/d31/c3b 0 2026-03-09T20:47:21.669 INFO:tasks.workunit.client.1.vm10.stdout:5/175: dwrite f1 [0,4194304] 0 2026-03-09T20:47:21.669 INFO:tasks.workunit.client.1.vm10.stdout:3/144: dread f6 [0,4194304] 0 2026-03-09T20:47:21.669 INFO:tasks.workunit.client.1.vm10.stdout:6/184: chown d3/f21 20 1 2026-03-09T20:47:21.669 INFO:tasks.workunit.client.1.vm10.stdout:2/209: dwrite d5/d18/f2c [0,4194304] 0 2026-03-09T20:47:21.670 INFO:tasks.workunit.client.1.vm10.stdout:5/176: symlink d2/d27/d37/l4f 0 2026-03-09T20:47:21.670 INFO:tasks.workunit.client.1.vm10.stdout:6/185: truncate d3/d30/d33/f3a 958485 0 2026-03-09T20:47:21.690 INFO:tasks.workunit.client.1.vm10.stdout:5/177: dwrite d2/f3c [0,4194304] 0 2026-03-09T20:47:21.693 INFO:tasks.workunit.client.1.vm10.stdout:2/210: getdents d5 0 2026-03-09T20:47:21.693 INFO:tasks.workunit.client.1.vm10.stdout:5/178: mknod d2/d39/d4b/c50 0 2026-03-09T20:47:21.696 INFO:tasks.workunit.client.1.vm10.stdout:5/179: chown d2/f23 7400 1 2026-03-09T20:47:21.701 INFO:tasks.workunit.client.1.vm10.stdout:2/211: creat d5/d18/d2d/f3e x:0 0 0 2026-03-09T20:47:21.701 INFO:tasks.workunit.client.1.vm10.stdout:5/180: chown d2/d1b/l22 69 1 2026-03-09T20:47:21.701 INFO:tasks.workunit.client.1.vm10.stdout:2/212: creat d5/d2b/f3f x:0 0 0 2026-03-09T20:47:21.702 INFO:tasks.workunit.client.1.vm10.stdout:2/213: symlink d5/l40 0 2026-03-09T20:47:21.713 INFO:tasks.workunit.client.1.vm10.stdout:2/214: dread d5/d18/f2c [0,4194304] 0 2026-03-09T20:47:21.717 INFO:tasks.workunit.client.1.vm10.stdout:2/215: mkdir d5/d18/d27/d28/d41 0 2026-03-09T20:47:21.717 INFO:tasks.workunit.client.1.vm10.stdout:2/216: write d5/d18/d1b/f23 [920921,11733] 0 2026-03-09T20:47:21.718 
INFO:tasks.workunit.client.1.vm10.stdout:2/217: dread - d5/d2b/d32/f3b zero size 2026-03-09T20:47:21.724 INFO:tasks.workunit.client.1.vm10.stdout:2/218: rename d5/d18/d1b/c1c to d5/d18/c42 0 2026-03-09T20:47:21.724 INFO:tasks.workunit.client.1.vm10.stdout:2/219: chown d5/f16 13563656 1 2026-03-09T20:47:21.728 INFO:tasks.workunit.client.1.vm10.stdout:2/220: dwrite d5/f7 [0,4194304] 0 2026-03-09T20:47:21.738 INFO:tasks.workunit.client.1.vm10.stdout:2/221: dwrite d5/d18/f2c [0,4194304] 0 2026-03-09T20:47:21.750 INFO:tasks.workunit.client.1.vm10.stdout:2/222: rmdir d5/d18/d27/d28 39 2026-03-09T20:47:21.751 INFO:tasks.workunit.client.1.vm10.stdout:2/223: read d5/d18/f24 [54268,109761] 0 2026-03-09T20:47:21.751 INFO:tasks.workunit.client.1.vm10.stdout:2/224: dread - d5/d18/d1b/f26 zero size 2026-03-09T20:47:21.753 INFO:tasks.workunit.client.1.vm10.stdout:2/225: creat d5/d18/d27/d38/f43 x:0 0 0 2026-03-09T20:47:21.755 INFO:tasks.workunit.client.1.vm10.stdout:2/226: mknod d5/d18/c44 0 2026-03-09T20:47:21.757 INFO:tasks.workunit.client.1.vm10.stdout:2/227: dwrite d5/d18/d2d/f31 [0,4194304] 0 2026-03-09T20:47:21.770 INFO:tasks.workunit.client.1.vm10.stdout:2/228: unlink d5/d2b/d32/f3b 0 2026-03-09T20:47:21.782 INFO:tasks.workunit.client.0.vm07.stdout:8/223: write d1/dc/d14/d2f/f34 [855362,27822] 0 2026-03-09T20:47:21.783 INFO:tasks.workunit.client.0.vm07.stdout:8/224: truncate d1/f13 3294265 0 2026-03-09T20:47:21.784 INFO:tasks.workunit.client.0.vm07.stdout:8/225: write d1/dc/d16/d26/f2d [876412,117363] 0 2026-03-09T20:47:21.789 INFO:tasks.workunit.client.0.vm07.stdout:8/226: dwrite d1/dc/d16/d26/f2b [0,4194304] 0 2026-03-09T20:47:21.789 INFO:tasks.workunit.client.1.vm10.stdout:6/186: dwrite f2 [0,4194304] 0 2026-03-09T20:47:21.792 INFO:tasks.workunit.client.0.vm07.stdout:8/227: stat d1/f20 0 2026-03-09T20:47:21.793 INFO:tasks.workunit.client.0.vm07.stdout:8/228: truncate d1/dc/f42 917330 0 2026-03-09T20:47:21.803 INFO:tasks.workunit.client.1.vm10.stdout:7/189: truncate 
db/d28/d2b/d36/f35 1611475 0 2026-03-09T20:47:21.804 INFO:tasks.workunit.client.0.vm07.stdout:8/229: rmdir d1/dc/d16 39 2026-03-09T20:47:21.806 INFO:tasks.workunit.client.1.vm10.stdout:7/190: creat db/d28/d2b/d36/f3c x:0 0 0 2026-03-09T20:47:21.807 INFO:tasks.workunit.client.0.vm07.stdout:3/265: write d1/d5/d10/f22 [1103644,114782] 0 2026-03-09T20:47:21.807 INFO:tasks.workunit.client.1.vm10.stdout:7/191: truncate db/d28/f31 121860 0 2026-03-09T20:47:21.808 INFO:tasks.workunit.client.1.vm10.stdout:6/187: link d3/da/d11/d31/c3b d3/da/d11/d26/c3c 0 2026-03-09T20:47:21.810 INFO:tasks.workunit.client.0.vm07.stdout:8/230: creat d1/dc/d16/d31/f47 x:0 0 0 2026-03-09T20:47:21.812 INFO:tasks.workunit.client.1.vm10.stdout:6/188: readlink d3/d12/d24/l32 0 2026-03-09T20:47:21.813 INFO:tasks.workunit.client.0.vm07.stdout:8/231: creat d1/dc/d16/d26/f48 x:0 0 0 2026-03-09T20:47:21.813 INFO:tasks.workunit.client.0.vm07.stdout:8/232: chown d1/dc/l28 148 1 2026-03-09T20:47:21.815 INFO:tasks.workunit.client.0.vm07.stdout:3/266: dwrite d1/d5/d9/d2f/d34/f4b [4194304,4194304] 0 2026-03-09T20:47:21.817 INFO:tasks.workunit.client.0.vm07.stdout:3/267: truncate d1/d5/d9/d2f/d34/f4b 8885608 0 2026-03-09T20:47:21.820 INFO:tasks.workunit.client.1.vm10.stdout:6/189: symlink d3/d30/l3d 0 2026-03-09T20:47:21.824 INFO:tasks.workunit.client.0.vm07.stdout:8/233: fsync d1/dc/d14/f30 0 2026-03-09T20:47:21.825 INFO:tasks.workunit.client.0.vm07.stdout:8/234: dwrite d1/dc/d16/d26/f2b [0,4194304] 0 2026-03-09T20:47:21.834 INFO:tasks.workunit.client.0.vm07.stdout:8/235: truncate d1/dc/fd 2647089 0 2026-03-09T20:47:21.835 INFO:tasks.workunit.client.1.vm10.stdout:6/190: mknod d3/d12/d24/d39/c3e 0 2026-03-09T20:47:21.835 INFO:tasks.workunit.client.0.vm07.stdout:8/236: readlink d1/dc/d16/l3a 0 2026-03-09T20:47:21.836 INFO:tasks.workunit.client.1.vm10.stdout:6/191: write d3/f1f [798421,81043] 0 2026-03-09T20:47:21.842 INFO:tasks.workunit.client.1.vm10.stdout:2/229: dread d5/d18/d1b/f23 [0,4194304] 0 
2026-03-09T20:47:21.843 INFO:tasks.workunit.client.1.vm10.stdout:6/192: link d3/d12/d24/l2d d3/d30/d33/l3f 0 2026-03-09T20:47:21.843 INFO:tasks.workunit.client.1.vm10.stdout:6/193: dread - d3/f21 zero size 2026-03-09T20:47:21.843 INFO:tasks.workunit.client.1.vm10.stdout:6/194: chown d3/d30/c38 0 1 2026-03-09T20:47:21.858 INFO:tasks.workunit.client.1.vm10.stdout:2/230: dread f1 [0,4194304] 0 2026-03-09T20:47:21.859 INFO:tasks.workunit.client.1.vm10.stdout:2/231: creat d5/d18/d27/d38/f45 x:0 0 0 2026-03-09T20:47:21.871 INFO:tasks.workunit.client.0.vm07.stdout:3/268: sync 2026-03-09T20:47:21.882 INFO:tasks.workunit.client.0.vm07.stdout:3/269: dwrite d1/d5/d9/d11/f26 [0,4194304] 0 2026-03-09T20:47:21.885 INFO:tasks.workunit.client.0.vm07.stdout:3/270: mkdir d1/d5/d10/d59 0 2026-03-09T20:47:21.888 INFO:tasks.workunit.client.0.vm07.stdout:3/271: getdents d1/d35 0 2026-03-09T20:47:21.889 INFO:tasks.workunit.client.0.vm07.stdout:3/272: rmdir d1/d5/d9/d2f/d34/d46 39 2026-03-09T20:47:21.890 INFO:tasks.workunit.client.0.vm07.stdout:3/273: write d1/d5/d9/f3c [299990,58231] 0 2026-03-09T20:47:21.897 INFO:tasks.workunit.client.0.vm07.stdout:3/274: dread d1/d5/d10/f30 [0,4194304] 0 2026-03-09T20:47:21.897 INFO:tasks.workunit.client.0.vm07.stdout:3/275: truncate d1/d5/d9/d11/d1f/f27 825612 0 2026-03-09T20:47:21.898 INFO:tasks.workunit.client.0.vm07.stdout:3/276: creat d1/d35/f5a x:0 0 0 2026-03-09T20:47:21.898 INFO:tasks.workunit.client.0.vm07.stdout:3/277: chown d1/d5/d9/d2f/d3d/l57 44590 1 2026-03-09T20:47:21.899 INFO:tasks.workunit.client.0.vm07.stdout:3/278: symlink d1/d35/l5b 0 2026-03-09T20:47:21.900 INFO:tasks.workunit.client.0.vm07.stdout:3/279: stat d1/d5/d9/f1c 0 2026-03-09T20:47:21.905 INFO:tasks.workunit.client.0.vm07.stdout:3/280: dwrite d1/d5/d9/d2f/d34/f40 [0,4194304] 0 2026-03-09T20:47:21.908 INFO:tasks.workunit.client.0.vm07.stdout:3/281: truncate d1/d5/d9/d11/d1f/f27 1507931 0 2026-03-09T20:47:21.909 INFO:tasks.workunit.client.0.vm07.stdout:3/282: fsync 
d1/d5/d9/f1b 0 2026-03-09T20:47:21.917 INFO:tasks.workunit.client.0.vm07.stdout:3/283: creat d1/d5/d9/d2f/d34/f5c x:0 0 0 2026-03-09T20:47:21.917 INFO:tasks.workunit.client.0.vm07.stdout:3/284: fsync d1/d5/d10/f30 0 2026-03-09T20:47:21.925 INFO:tasks.workunit.client.0.vm07.stdout:3/285: dwrite d1/d5/d9/d2f/d34/f5c [0,4194304] 0 2026-03-09T20:47:21.966 INFO:tasks.workunit.client.0.vm07.stdout:9/273: rename d4/d11/d2a/f60 to d4/d8/d19/f69 0 2026-03-09T20:47:21.969 INFO:tasks.workunit.client.0.vm07.stdout:6/302: rename d8/d16/f5b to d8/d16/d22/d33/f60 0 2026-03-09T20:47:21.970 INFO:tasks.workunit.client.0.vm07.stdout:9/274: unlink d4/d8/d19/d26/f3d 0 2026-03-09T20:47:21.972 INFO:tasks.workunit.client.0.vm07.stdout:7/301: rename d3/da/db/f12 to d3/da/db/d14/d43/f68 0 2026-03-09T20:47:21.975 INFO:tasks.workunit.client.0.vm07.stdout:0/320: rename d1/d1f/d20/l5d to d1/d2/d33/d35/d60/l6a 0 2026-03-09T20:47:21.977 INFO:tasks.workunit.client.1.vm10.stdout:4/157: truncate d1/f1e 575504 0 2026-03-09T20:47:21.978 INFO:tasks.workunit.client.0.vm07.stdout:6/303: mkdir d8/d16/d61 0 2026-03-09T20:47:21.980 INFO:tasks.workunit.client.1.vm10.stdout:4/158: rmdir d1/d8/d1b 39 2026-03-09T20:47:21.983 INFO:tasks.workunit.client.0.vm07.stdout:8/237: rename d1/l4 to d1/dc/d16/d26/l49 0 2026-03-09T20:47:21.989 INFO:tasks.workunit.client.1.vm10.stdout:4/159: dread d1/fe [0,4194304] 0 2026-03-09T20:47:22.004 INFO:tasks.workunit.client.1.vm10.stdout:3/145: getdents dc/d14/d26/d29/d2a 0 2026-03-09T20:47:22.007 INFO:tasks.workunit.client.1.vm10.stdout:5/181: getdents d2/d27/d37/d46 0 2026-03-09T20:47:22.007 INFO:tasks.workunit.client.0.vm07.stdout:2/311: rmdir d2/d11/d56 39 2026-03-09T20:47:22.007 INFO:tasks.workunit.client.0.vm07.stdout:4/226: dwrite d2/f21 [0,4194304] 0 2026-03-09T20:47:22.009 INFO:tasks.workunit.client.0.vm07.stdout:2/312: creat d2/db/d28/f5b x:0 0 0 2026-03-09T20:47:22.010 INFO:tasks.workunit.client.0.vm07.stdout:4/227: chown d2/l30 215 1 2026-03-09T20:47:22.012 
INFO:tasks.workunit.client.0.vm07.stdout:4/228: write d2/f5 [3988809,74406] 0 2026-03-09T20:47:22.012 INFO:tasks.workunit.client.0.vm07.stdout:4/229: fsync d2/d1f/f2c 0 2026-03-09T20:47:22.014 INFO:tasks.workunit.client.0.vm07.stdout:4/230: chown d2/df/d17/l24 455507 1 2026-03-09T20:47:22.015 INFO:tasks.workunit.client.0.vm07.stdout:2/313: mkdir d2/db/d28/d5c 0 2026-03-09T20:47:22.016 INFO:tasks.workunit.client.0.vm07.stdout:5/305: write d5/df/d13/f1f [4599085,93979] 0 2026-03-09T20:47:22.017 INFO:tasks.workunit.client.1.vm10.stdout:9/268: dwrite d2/f30 [4194304,4194304] 0 2026-03-09T20:47:22.018 INFO:tasks.workunit.client.0.vm07.stdout:0/321: sync 2026-03-09T20:47:22.019 INFO:tasks.workunit.client.0.vm07.stdout:0/322: fsync d1/d2/f1b 0 2026-03-09T20:47:22.024 INFO:tasks.workunit.client.0.vm07.stdout:4/231: chown d2/df/c10 1 1 2026-03-09T20:47:22.031 INFO:tasks.workunit.client.0.vm07.stdout:0/323: creat d1/d2/d33/d35/d60/f6b x:0 0 0 2026-03-09T20:47:22.033 INFO:tasks.workunit.client.0.vm07.stdout:5/306: dwrite d5/f51 [0,4194304] 0 2026-03-09T20:47:22.035 INFO:tasks.workunit.client.0.vm07.stdout:5/307: chown d5/df/d13/d30 31 1 2026-03-09T20:47:22.039 INFO:tasks.workunit.client.1.vm10.stdout:5/182: dread d2/f23 [0,4194304] 0 2026-03-09T20:47:22.047 INFO:tasks.workunit.client.0.vm07.stdout:5/308: dwrite d5/df/d13/f5b [0,4194304] 0 2026-03-09T20:47:22.057 INFO:tasks.workunit.client.0.vm07.stdout:5/309: fsync d5/df/d13/f3d 0 2026-03-09T20:47:22.074 INFO:tasks.workunit.client.0.vm07.stdout:7/302: dread d3/da/db/d14/d1f/d2b/f2c [0,4194304] 0 2026-03-09T20:47:22.074 INFO:tasks.workunit.client.1.vm10.stdout:5/183: dread d2/f2c [0,4194304] 0 2026-03-09T20:47:22.077 INFO:tasks.workunit.client.0.vm07.stdout:2/314: dread d2/d11/f38 [0,4194304] 0 2026-03-09T20:47:22.078 INFO:tasks.workunit.client.0.vm07.stdout:2/315: chown d2/db/d1c/c4b 5650209 1 2026-03-09T20:47:22.081 INFO:tasks.workunit.client.1.vm10.stdout:9/269: dread d2/d3/f54 [0,4194304] 0 2026-03-09T20:47:22.085 
INFO:tasks.workunit.client.1.vm10.stdout:9/270: mknod d2/d33/d37/c5f 0 2026-03-09T20:47:22.090 INFO:tasks.workunit.client.1.vm10.stdout:9/271: dwrite d2/d3/f1c [0,4194304] 0 2026-03-09T20:47:22.091 INFO:tasks.workunit.client.1.vm10.stdout:9/272: chown d2/d3/de/f42 6278 1 2026-03-09T20:47:22.092 INFO:tasks.workunit.client.1.vm10.stdout:9/273: dread - d2/d3/de/d35/d44/f4d zero size 2026-03-09T20:47:22.093 INFO:tasks.workunit.client.0.vm07.stdout:2/316: creat d2/d11/f5d x:0 0 0 2026-03-09T20:47:22.094 INFO:tasks.workunit.client.1.vm10.stdout:9/274: write d2/d3/de/f24 [1156714,72573] 0 2026-03-09T20:47:22.141 INFO:tasks.workunit.client.1.vm10.stdout:8/240: truncate d0/fe 462195 0 2026-03-09T20:47:22.141 INFO:tasks.workunit.client.0.vm07.stdout:1/261: dwrite d3/d14/f19 [0,4194304] 0 2026-03-09T20:47:22.146 INFO:tasks.workunit.client.0.vm07.stdout:1/262: creat d3/d14/d35/f4b x:0 0 0 2026-03-09T20:47:22.146 INFO:tasks.workunit.client.0.vm07.stdout:1/263: fsync d3/f28 0 2026-03-09T20:47:22.147 INFO:tasks.workunit.client.1.vm10.stdout:8/241: truncate d0/f10 459842 0 2026-03-09T20:47:22.161 INFO:tasks.workunit.client.1.vm10.stdout:2/232: getdents d5/d18/d27/d38 0 2026-03-09T20:47:22.164 INFO:tasks.workunit.client.0.vm07.stdout:3/286: rename d1/d35 to d1/d5/d9/d2f/d34/d46/d5d 0 2026-03-09T20:47:22.166 INFO:tasks.workunit.client.0.vm07.stdout:6/304: rename d8/d16/d22/d24/f44 to d8/d16/d22/d3a/f62 0 2026-03-09T20:47:22.167 INFO:tasks.workunit.client.1.vm10.stdout:2/233: dwrite d5/d18/d1b/f23 [0,4194304] 0 2026-03-09T20:47:22.168 INFO:tasks.workunit.client.0.vm07.stdout:3/287: creat d1/d5/d9/d11/d1f/f5e x:0 0 0 2026-03-09T20:47:22.172 INFO:tasks.workunit.client.0.vm07.stdout:7/303: rename d3/da/db/d14/d1f/l21 to d3/da/db/d14/d43/l69 0 2026-03-09T20:47:22.172 INFO:tasks.workunit.client.0.vm07.stdout:7/304: read d3/f3f [2230016,46170] 0 2026-03-09T20:47:22.178 INFO:tasks.workunit.client.0.vm07.stdout:6/305: fsync d8/f1c 0 2026-03-09T20:47:22.180 
INFO:tasks.workunit.client.0.vm07.stdout:2/317: rename d2/db/d1c/c4b to d2/db/d28/c5e 0 2026-03-09T20:47:22.180 INFO:tasks.workunit.client.1.vm10.stdout:1/221: symlink d2/l4b 0 2026-03-09T20:47:22.180 INFO:tasks.workunit.client.0.vm07.stdout:2/318: dread - d2/db/d1c/d4a/f55 zero size 2026-03-09T20:47:22.183 INFO:tasks.workunit.client.1.vm10.stdout:2/234: mknod d5/d18/d1b/d22/c46 0 2026-03-09T20:47:22.184 INFO:tasks.workunit.client.0.vm07.stdout:2/319: symlink d2/db/d1c/l5f 0 2026-03-09T20:47:22.185 INFO:tasks.workunit.client.1.vm10.stdout:1/222: readlink d2/l15 0 2026-03-09T20:47:22.185 INFO:tasks.workunit.client.1.vm10.stdout:1/223: chown d2/l4b 1461 1 2026-03-09T20:47:22.186 INFO:tasks.workunit.client.0.vm07.stdout:3/288: link d1/d5/d9/d2f/d34/l44 d1/d5/d9/d2f/d34/d46/d5d/l5f 0 2026-03-09T20:47:22.186 INFO:tasks.workunit.client.1.vm10.stdout:2/235: rmdir d5/d18/d27 39 2026-03-09T20:47:22.186 INFO:tasks.workunit.client.0.vm07.stdout:3/289: chown d1/d5/d9/d11/d1f/l24 595106 1 2026-03-09T20:47:22.193 INFO:tasks.workunit.client.1.vm10.stdout:1/224: stat d2/da/d25/d3e/l4a 0 2026-03-09T20:47:22.193 INFO:tasks.workunit.client.1.vm10.stdout:2/236: dread d5/fe [0,4194304] 0 2026-03-09T20:47:22.193 INFO:tasks.workunit.client.0.vm07.stdout:7/305: link d3/c2e d3/da/db/d14/d1f/c6a 0 2026-03-09T20:47:22.193 INFO:tasks.workunit.client.0.vm07.stdout:7/306: dwrite d3/f67 [0,4194304] 0 2026-03-09T20:47:22.193 INFO:tasks.workunit.client.0.vm07.stdout:9/275: unlink d4/d11/l1e 0 2026-03-09T20:47:22.193 INFO:tasks.workunit.client.0.vm07.stdout:9/276: chown l3 2027 1 2026-03-09T20:47:22.195 INFO:tasks.workunit.client.0.vm07.stdout:7/307: dwrite d3/f4f [0,4194304] 0 2026-03-09T20:47:22.206 INFO:tasks.workunit.client.1.vm10.stdout:6/195: creat d3/f40 x:0 0 0 2026-03-09T20:47:22.207 INFO:tasks.workunit.client.0.vm07.stdout:3/290: read - d1/d5/d9/d11/f4d zero size 2026-03-09T20:47:22.207 INFO:tasks.workunit.client.0.vm07.stdout:3/291: chown d1/d5/d9/c16 1 1 2026-03-09T20:47:22.211 
INFO:tasks.workunit.client.0.vm07.stdout:6/306: link d8/d16/d22/c5c d8/d16/d22/d33/c63 0 2026-03-09T20:47:22.220 INFO:tasks.workunit.client.0.vm07.stdout:6/307: write d8/f52 [368008,61733] 0 2026-03-09T20:47:22.220 INFO:tasks.workunit.client.0.vm07.stdout:7/308: mknod d3/c6b 0 2026-03-09T20:47:22.220 INFO:tasks.workunit.client.1.vm10.stdout:3/146: unlink l0 0 2026-03-09T20:47:22.220 INFO:tasks.workunit.client.1.vm10.stdout:2/237: chown d5/d18/d27/f29 0 1 2026-03-09T20:47:22.220 INFO:tasks.workunit.client.1.vm10.stdout:2/238: fdatasync d5/d18/f1a 0 2026-03-09T20:47:22.220 INFO:tasks.workunit.client.1.vm10.stdout:6/196: dwrite d3/fc [0,4194304] 0 2026-03-09T20:47:22.220 INFO:tasks.workunit.client.1.vm10.stdout:3/147: mknod dc/d14/c33 0 2026-03-09T20:47:22.220 INFO:tasks.workunit.client.1.vm10.stdout:1/225: rmdir d2/da/d25 39 2026-03-09T20:47:22.220 INFO:tasks.workunit.client.1.vm10.stdout:2/239: mkdir d5/d18/d2d/d47 0 2026-03-09T20:47:22.225 INFO:tasks.workunit.client.0.vm07.stdout:1/264: chown d3/f9 28 1 2026-03-09T20:47:22.227 INFO:tasks.workunit.client.1.vm10.stdout:0/184: rmdir d2 39 2026-03-09T20:47:22.231 INFO:tasks.workunit.client.1.vm10.stdout:4/160: truncate d1/d2/f2d 210158 0 2026-03-09T20:47:22.232 INFO:tasks.workunit.client.0.vm07.stdout:7/309: fdatasync d3/da/db/f1e 0 2026-03-09T20:47:22.232 INFO:tasks.workunit.client.0.vm07.stdout:1/265: symlink d3/d14/d35/l4c 0 2026-03-09T20:47:22.233 INFO:tasks.workunit.client.1.vm10.stdout:3/148: creat dc/d14/d26/f34 x:0 0 0 2026-03-09T20:47:22.233 INFO:tasks.workunit.client.1.vm10.stdout:2/240: unlink d5/fe 0 2026-03-09T20:47:22.235 INFO:tasks.workunit.client.0.vm07.stdout:1/266: creat d3/d14/f4d x:0 0 0 2026-03-09T20:47:22.246 INFO:tasks.workunit.client.1.vm10.stdout:0/185: fdatasync d2/f39 0 2026-03-09T20:47:22.246 INFO:tasks.workunit.client.1.vm10.stdout:8/242: symlink d0/d22/l51 0 2026-03-09T20:47:22.246 INFO:tasks.workunit.client.1.vm10.stdout:2/241: rmdir d5/d18/d1b/d22 39 2026-03-09T20:47:22.246 
INFO:tasks.workunit.client.0.vm07.stdout:1/267: symlink d3/d23/l4e 0 2026-03-09T20:47:22.246 INFO:tasks.workunit.client.0.vm07.stdout:0/324: rmdir d1/d2/d33/d35/d60 39 2026-03-09T20:47:22.246 INFO:tasks.workunit.client.0.vm07.stdout:0/325: write d1/d2/dc/f56 [2799755,69236] 0 2026-03-09T20:47:22.246 INFO:tasks.workunit.client.0.vm07.stdout:0/326: read - d1/d2/d33/d35/f5c zero size 2026-03-09T20:47:22.246 INFO:tasks.workunit.client.0.vm07.stdout:4/232: truncate d2/f5 1217346 0 2026-03-09T20:47:22.246 INFO:tasks.workunit.client.0.vm07.stdout:0/327: unlink d1/d2/dc/d17/c2b 0 2026-03-09T20:47:22.248 INFO:tasks.workunit.client.0.vm07.stdout:7/310: getdents d3/da/db/d32/d3e/d5c 0 2026-03-09T20:47:22.251 INFO:tasks.workunit.client.0.vm07.stdout:1/268: mknod d3/d23/d47/c4f 0 2026-03-09T20:47:22.253 INFO:tasks.workunit.client.1.vm10.stdout:5/184: write d2/fd [449988,112775] 0 2026-03-09T20:47:22.255 INFO:tasks.workunit.client.0.vm07.stdout:3/292: sync 2026-03-09T20:47:22.255 INFO:tasks.workunit.client.0.vm07.stdout:0/328: rename d1/d2/d33/f62 to d1/d2/dc/d17/f6c 0 2026-03-09T20:47:22.259 INFO:tasks.workunit.client.0.vm07.stdout:1/269: dread d3/f3f [0,4194304] 0 2026-03-09T20:47:22.259 INFO:tasks.workunit.client.1.vm10.stdout:2/242: dwrite d5/d18/f1a [0,4194304] 0 2026-03-09T20:47:22.259 INFO:tasks.workunit.client.0.vm07.stdout:1/270: dread - d3/d23/f2c zero size 2026-03-09T20:47:22.260 INFO:tasks.workunit.client.0.vm07.stdout:1/271: readlink d3/l21 0 2026-03-09T20:47:22.263 INFO:tasks.workunit.client.0.vm07.stdout:3/293: dwrite d1/d5/d9/d11/f58 [0,4194304] 0 2026-03-09T20:47:22.267 INFO:tasks.workunit.client.0.vm07.stdout:0/329: dwrite d1/d2/d33/f4e [0,4194304] 0 2026-03-09T20:47:22.279 INFO:tasks.workunit.client.1.vm10.stdout:1/226: dread d2/da/d25/f2e [0,4194304] 0 2026-03-09T20:47:22.283 INFO:tasks.workunit.client.1.vm10.stdout:0/186: rename d2/d9/da/de/cf to d2/db/c3d 0 2026-03-09T20:47:22.284 INFO:tasks.workunit.client.0.vm07.stdout:5/310: truncate d5/df/d13/d55/f5f 
59102 0 2026-03-09T20:47:22.285 INFO:tasks.workunit.client.0.vm07.stdout:5/311: chown d5/df/d13/d4f/c53 773440 1 2026-03-09T20:47:22.286 INFO:tasks.workunit.client.0.vm07.stdout:5/312: write d5/d50/f52 [71007,127964] 0 2026-03-09T20:47:22.289 INFO:tasks.workunit.client.0.vm07.stdout:5/313: dwrite d5/df/d13/f5b [0,4194304] 0 2026-03-09T20:47:22.290 INFO:tasks.workunit.client.1.vm10.stdout:5/185: dread d2/f8 [0,4194304] 0 2026-03-09T20:47:22.292 INFO:tasks.workunit.client.0.vm07.stdout:5/314: stat d5/d33/d39/l44 0 2026-03-09T20:47:22.293 INFO:tasks.workunit.client.0.vm07.stdout:5/315: fdatasync d5/d33/d39/f6e 0 2026-03-09T20:47:22.294 INFO:tasks.workunit.client.1.vm10.stdout:8/243: fsync d0/d22/d2f/f31 0 2026-03-09T20:47:22.297 INFO:tasks.workunit.client.0.vm07.stdout:5/316: dread d5/d19/f20 [0,4194304] 0 2026-03-09T20:47:22.297 INFO:tasks.workunit.client.1.vm10.stdout:8/244: write d0/f17 [7961279,26499] 0 2026-03-09T20:47:22.298 INFO:tasks.workunit.client.1.vm10.stdout:8/245: write d0/f17 [11204894,48572] 0 2026-03-09T20:47:22.302 INFO:tasks.workunit.client.0.vm07.stdout:5/317: dwrite d5/d19/f2c [0,4194304] 0 2026-03-09T20:47:22.305 INFO:tasks.workunit.client.0.vm07.stdout:7/311: rename d3/da/db/d14/d1f/d2b/c44 to d3/da/db/d14/d1f/d2b/d52/c6c 0 2026-03-09T20:47:22.306 INFO:tasks.workunit.client.0.vm07.stdout:7/312: read d3/d58/f60 [2121277,123464] 0 2026-03-09T20:47:22.313 INFO:tasks.workunit.client.0.vm07.stdout:5/318: dread d5/d33/d3b/f63 [0,4194304] 0 2026-03-09T20:47:22.319 INFO:tasks.workunit.client.1.vm10.stdout:8/246: dwrite d0/d22/d25/f2d [0,4194304] 0 2026-03-09T20:47:22.321 INFO:tasks.workunit.client.1.vm10.stdout:0/187: mkdir d2/d9/da/de/d1a/d25/d3e 0 2026-03-09T20:47:22.332 INFO:tasks.workunit.client.0.vm07.stdout:1/272: mknod d3/d23/d47/c50 0 2026-03-09T20:47:22.333 INFO:tasks.workunit.client.0.vm07.stdout:1/273: chown d3/f28 89883 1 2026-03-09T20:47:22.343 INFO:tasks.workunit.client.1.vm10.stdout:5/186: rename d2/f15 to d2/d39/d4b/f51 0 
2026-03-09T20:47:22.344 INFO:tasks.workunit.client.1.vm10.stdout:5/187: chown d2/d1b/l29 65355043 1 2026-03-09T20:47:22.344 INFO:tasks.workunit.client.1.vm10.stdout:5/188: fsync d2/d39/d4b/f4e 0 2026-03-09T20:47:22.358 INFO:tasks.workunit.client.1.vm10.stdout:8/247: write d0/d22/d25/f2b [3971177,110039] 0 2026-03-09T20:47:22.361 INFO:tasks.workunit.client.1.vm10.stdout:0/188: mkdir d2/d9/da/de/d1a/d25/d34/d3f 0 2026-03-09T20:47:22.366 INFO:tasks.workunit.client.0.vm07.stdout:5/319: fdatasync d5/d33/f5a 0 2026-03-09T20:47:22.375 INFO:tasks.workunit.client.1.vm10.stdout:5/189: dwrite d2/f5 [0,4194304] 0 2026-03-09T20:47:22.376 INFO:tasks.workunit.client.0.vm07.stdout:3/294: mkdir d1/d5/d9/d11/d60 0 2026-03-09T20:47:22.380 INFO:tasks.workunit.client.0.vm07.stdout:0/330: link d1/d2/d33/d35/f64 d1/d2/dc/f6d 0 2026-03-09T20:47:22.390 INFO:tasks.workunit.client.1.vm10.stdout:0/189: mknod d2/d9/da/d11/c40 0 2026-03-09T20:47:22.390 INFO:tasks.workunit.client.1.vm10.stdout:1/227: getdents d2/da/d25/d3e 0 2026-03-09T20:47:22.398 INFO:tasks.workunit.client.0.vm07.stdout:5/320: mknod d5/d50/c70 0 2026-03-09T20:47:22.401 INFO:tasks.workunit.client.1.vm10.stdout:1/228: truncate d2/f1c 269042 0 2026-03-09T20:47:22.407 INFO:tasks.workunit.client.0.vm07.stdout:3/295: rename d1/f19 to d1/d5/d10/d59/f61 0 2026-03-09T20:47:22.410 INFO:tasks.workunit.client.0.vm07.stdout:3/296: dread - d1/d5/d9/d11/d1f/f4a zero size 2026-03-09T20:47:22.410 INFO:tasks.workunit.client.0.vm07.stdout:0/331: creat d1/d2/dc/d17/f6e x:0 0 0 2026-03-09T20:47:22.410 INFO:tasks.workunit.client.1.vm10.stdout:8/248: link d0/c1d d0/d22/d25/c52 0 2026-03-09T20:47:22.411 INFO:tasks.workunit.client.1.vm10.stdout:8/249: readlink d0/d22/d25/d2e/d41/l45 0 2026-03-09T20:47:22.416 INFO:tasks.workunit.client.0.vm07.stdout:7/313: creat d3/da/db/d14/f6d x:0 0 0 2026-03-09T20:47:22.416 INFO:tasks.workunit.client.0.vm07.stdout:7/314: chown d3/da/l4c 1150595 1 2026-03-09T20:47:22.418 
INFO:tasks.workunit.client.0.vm07.stdout:0/332: dread d1/d2/f14 [0,4194304] 0 2026-03-09T20:47:22.419 INFO:tasks.workunit.client.0.vm07.stdout:0/333: chown d1/d2/d33/d35/f5c 1037 1 2026-03-09T20:47:22.419 INFO:tasks.workunit.client.0.vm07.stdout:0/334: dread - d1/d1f/d20/f4d zero size 2026-03-09T20:47:22.422 INFO:tasks.workunit.client.0.vm07.stdout:5/321: mknod d5/d19/c71 0 2026-03-09T20:47:22.424 INFO:tasks.workunit.client.1.vm10.stdout:0/190: truncate d2/db/f13 3501321 0 2026-03-09T20:47:22.425 INFO:tasks.workunit.client.0.vm07.stdout:1/274: creat d3/f51 x:0 0 0 2026-03-09T20:47:22.432 INFO:tasks.workunit.client.1.vm10.stdout:0/191: rename d2/f5 to d2/d9/da/de/d1a/d25/d3e/f41 0 2026-03-09T20:47:22.434 INFO:tasks.workunit.client.0.vm07.stdout:7/315: fdatasync d3/da/f47 0 2026-03-09T20:47:22.439 INFO:tasks.workunit.client.0.vm07.stdout:7/316: dwrite d3/da/db/d32/d3e/d5c/f64 [0,4194304] 0 2026-03-09T20:47:22.443 INFO:tasks.workunit.client.0.vm07.stdout:0/335: fsync d1/d2/dc/f12 0 2026-03-09T20:47:22.453 INFO:tasks.workunit.client.0.vm07.stdout:1/275: mkdir d3/d23/d52 0 2026-03-09T20:47:22.453 INFO:tasks.workunit.client.0.vm07.stdout:1/276: dread - d3/d14/d35/f38 zero size 2026-03-09T20:47:22.453 INFO:tasks.workunit.client.1.vm10.stdout:0/192: creat d2/d9/da/d11/f42 x:0 0 0 2026-03-09T20:47:22.459 INFO:tasks.workunit.client.0.vm07.stdout:5/322: creat d5/df/d13/d30/d56/f72 x:0 0 0 2026-03-09T20:47:22.463 INFO:tasks.workunit.client.0.vm07.stdout:1/277: creat d3/d23/d47/f53 x:0 0 0 2026-03-09T20:47:22.465 INFO:tasks.workunit.client.0.vm07.stdout:5/323: mkdir d5/d19/d73 0 2026-03-09T20:47:22.467 INFO:tasks.workunit.client.1.vm10.stdout:4/161: symlink d1/d2/l32 0 2026-03-09T20:47:22.472 INFO:tasks.workunit.client.1.vm10.stdout:7/192: rmdir db/d21 39 2026-03-09T20:47:22.472 INFO:tasks.workunit.client.1.vm10.stdout:9/275: symlink d2/d3/l60 0 2026-03-09T20:47:22.473 INFO:tasks.workunit.client.1.vm10.stdout:4/162: dread - d1/d8/d1b/f24 zero size 2026-03-09T20:47:22.474 
INFO:tasks.workunit.client.0.vm07.stdout:8/238: dwrite d1/dc/fd [0,4194304] 0 2026-03-09T20:47:22.474 INFO:tasks.workunit.client.1.vm10.stdout:7/193: write db/d21/d26/f2f [118880,3529] 0 2026-03-09T20:47:22.476 INFO:tasks.workunit.client.0.vm07.stdout:7/317: sync 2026-03-09T20:47:22.485 INFO:tasks.workunit.client.0.vm07.stdout:0/336: getdents d1/d1f 0 2026-03-09T20:47:22.486 INFO:tasks.workunit.client.1.vm10.stdout:9/276: symlink d2/d3/de/l61 0 2026-03-09T20:47:22.486 INFO:tasks.workunit.client.0.vm07.stdout:0/337: dread d1/f3b [0,4194304] 0 2026-03-09T20:47:22.487 INFO:tasks.workunit.client.0.vm07.stdout:0/338: fsync d1/d2/d33/f4e 0 2026-03-09T20:47:22.488 INFO:tasks.workunit.client.1.vm10.stdout:7/194: rename db/d21/d23/f10 to db/d28/d2b/d36/d3b/f3d 0 2026-03-09T20:47:22.489 INFO:tasks.workunit.client.1.vm10.stdout:9/277: chown d2/f46 24 1 2026-03-09T20:47:22.491 INFO:tasks.workunit.client.0.vm07.stdout:7/318: link d3/da/db/d32/d3e/f40 d3/da/db/d14/d1f/d2b/d52/f6e 0 2026-03-09T20:47:22.499 INFO:tasks.workunit.client.0.vm07.stdout:0/339: sync 2026-03-09T20:47:22.500 INFO:tasks.workunit.client.0.vm07.stdout:9/277: read d4/d8/fd [2241527,49449] 0 2026-03-09T20:47:22.506 INFO:tasks.workunit.client.0.vm07.stdout:0/340: getdents d1/d2/d4b 0 2026-03-09T20:47:22.507 INFO:tasks.workunit.client.0.vm07.stdout:0/341: write d1/d2/f5e [208315,12587] 0 2026-03-09T20:47:22.507 INFO:tasks.workunit.client.0.vm07.stdout:0/342: chown d1/d2/d33/d35 4125 1 2026-03-09T20:47:22.509 INFO:tasks.workunit.client.0.vm07.stdout:0/343: mknod d1/d1f/d20/c6f 0 2026-03-09T20:47:22.513 INFO:tasks.workunit.client.0.vm07.stdout:0/344: dwrite d1/d2/f1b [0,4194304] 0 2026-03-09T20:47:22.517 INFO:tasks.workunit.client.0.vm07.stdout:0/345: fsync d1/d2/dc/f56 0 2026-03-09T20:47:22.523 INFO:tasks.workunit.client.0.vm07.stdout:0/346: dwrite d1/d1f/d20/f2c [0,4194304] 0 2026-03-09T20:47:22.524 INFO:tasks.workunit.client.0.vm07.stdout:0/347: write d1/d2/d33/d35/f5c [442572,17866] 0 2026-03-09T20:47:22.534 
INFO:tasks.workunit.client.0.vm07.stdout:0/348: creat d1/d2/d4b/f70 x:0 0 0 2026-03-09T20:47:22.535 INFO:tasks.workunit.client.0.vm07.stdout:0/349: write d1/d2/d33/d35/f5c [65026,24779] 0 2026-03-09T20:47:22.535 INFO:tasks.workunit.client.0.vm07.stdout:0/350: stat d1/d2/l36 0 2026-03-09T20:47:22.535 INFO:tasks.workunit.client.0.vm07.stdout:0/351: readlink d1/d2/d33/d35/l68 0 2026-03-09T20:47:22.540 INFO:tasks.workunit.client.0.vm07.stdout:2/320: unlink d2/db/d28/c5e 0 2026-03-09T20:47:22.544 INFO:tasks.workunit.client.1.vm10.stdout:3/149: getdents dc/d14 0 2026-03-09T20:47:22.545 INFO:tasks.workunit.client.0.vm07.stdout:0/352: rename d1/d1f/d30/c67 to d1/d2/c71 0 2026-03-09T20:47:22.547 INFO:tasks.workunit.client.0.vm07.stdout:2/321: sync 2026-03-09T20:47:22.548 INFO:tasks.workunit.client.1.vm10.stdout:3/150: mknod dc/d14/d20/d2e/c35 0 2026-03-09T20:47:22.548 INFO:tasks.workunit.client.0.vm07.stdout:6/308: truncate d8/f15 1885042 0 2026-03-09T20:47:22.551 INFO:tasks.workunit.client.1.vm10.stdout:3/151: write dc/d14/d26/f34 [935739,127428] 0 2026-03-09T20:47:22.552 INFO:tasks.workunit.client.0.vm07.stdout:0/353: rename d1/d2/d33/d35/d60 to d1/d1f/d53/d72 0 2026-03-09T20:47:22.552 INFO:tasks.workunit.client.0.vm07.stdout:0/354: write d1/d2/f47 [3792055,43747] 0 2026-03-09T20:47:22.554 INFO:tasks.workunit.client.0.vm07.stdout:1/278: rename d3/d14/d35 to d3/d14/d54 0 2026-03-09T20:47:22.557 INFO:tasks.workunit.client.0.vm07.stdout:1/279: dread d3/d23/f49 [0,4194304] 0 2026-03-09T20:47:22.564 INFO:tasks.workunit.client.0.vm07.stdout:7/319: truncate d3/da/db/f1e 5168025 0 2026-03-09T20:47:22.564 INFO:tasks.workunit.client.0.vm07.stdout:7/320: readlink d3/da/db/d32/d3e/l51 0 2026-03-09T20:47:22.565 INFO:tasks.workunit.client.0.vm07.stdout:7/321: dread - d3/da/db/d14/d1f/f37 zero size 2026-03-09T20:47:22.567 INFO:tasks.workunit.client.1.vm10.stdout:3/152: rename dc/dd/f1b to dc/d14/d20/d21/f36 0 2026-03-09T20:47:22.567 INFO:tasks.workunit.client.1.vm10.stdout:6/197: write 
d3/da/f15 [114900,31215] 0 2026-03-09T20:47:22.568 INFO:tasks.workunit.client.0.vm07.stdout:0/355: creat d1/d1f/d20/f73 x:0 0 0 2026-03-09T20:47:22.569 INFO:tasks.workunit.client.0.vm07.stdout:0/356: write d1/d2/f5e [111833,33019] 0 2026-03-09T20:47:22.572 INFO:tasks.workunit.client.0.vm07.stdout:2/322: creat d2/d11/f60 x:0 0 0 2026-03-09T20:47:22.573 INFO:tasks.workunit.client.0.vm07.stdout:2/323: chown d2/db/d1c/f42 1914528867 1 2026-03-09T20:47:22.574 INFO:tasks.workunit.client.1.vm10.stdout:3/153: mkdir dc/d14/d26/d37 0 2026-03-09T20:47:22.574 INFO:tasks.workunit.client.1.vm10.stdout:6/198: symlink d3/da/d11/d31/l41 0 2026-03-09T20:47:22.575 INFO:tasks.workunit.client.0.vm07.stdout:1/280: mkdir d3/d23/d55 0 2026-03-09T20:47:22.577 INFO:tasks.workunit.client.0.vm07.stdout:7/322: symlink d3/da/db/d14/d1f/l6f 0 2026-03-09T20:47:22.579 INFO:tasks.workunit.client.1.vm10.stdout:8/250: dread d0/f10 [0,4194304] 0 2026-03-09T20:47:22.580 INFO:tasks.workunit.client.1.vm10.stdout:3/154: creat dc/d14/d20/d2e/f38 x:0 0 0 2026-03-09T20:47:22.582 INFO:tasks.workunit.client.0.vm07.stdout:0/357: rename d1/d2/dc/c28 to d1/d1f/c74 0 2026-03-09T20:47:22.583 INFO:tasks.workunit.client.1.vm10.stdout:3/155: dread - dc/f1f zero size 2026-03-09T20:47:22.584 INFO:tasks.workunit.client.1.vm10.stdout:6/199: dwrite d3/da/f1b [0,4194304] 0 2026-03-09T20:47:22.585 INFO:tasks.workunit.client.0.vm07.stdout:2/324: symlink d2/db/d1c/d4a/l61 0 2026-03-09T20:47:22.586 INFO:tasks.workunit.client.0.vm07.stdout:2/325: truncate d2/f3e 5158589 0 2026-03-09T20:47:22.587 INFO:tasks.workunit.client.1.vm10.stdout:3/156: dread - dc/d14/d20/d2e/f32 zero size 2026-03-09T20:47:22.587 INFO:tasks.workunit.client.0.vm07.stdout:2/326: chown d2/l18 60895 1 2026-03-09T20:47:22.590 INFO:tasks.workunit.client.1.vm10.stdout:8/251: dwrite d0/d22/d25/f2d [0,4194304] 0 2026-03-09T20:47:22.594 INFO:tasks.workunit.client.1.vm10.stdout:6/200: dread d3/da/fd [0,4194304] 0 2026-03-09T20:47:22.596 
INFO:tasks.workunit.client.0.vm07.stdout:4/233: dwrite d2/d1f/f25 [0,4194304] 0 2026-03-09T20:47:22.599 INFO:tasks.workunit.client.1.vm10.stdout:6/201: fsync d3/da/fd 0 2026-03-09T20:47:22.599 INFO:tasks.workunit.client.1.vm10.stdout:6/202: chown d3 2012 1 2026-03-09T20:47:22.606 INFO:tasks.workunit.client.0.vm07.stdout:1/281: mkdir d3/d23/d55/d56 0 2026-03-09T20:47:22.616 INFO:tasks.workunit.client.0.vm07.stdout:1/282: unlink d3/f8 0 2026-03-09T20:47:22.628 INFO:tasks.workunit.client.1.vm10.stdout:6/203: rmdir d3/da/d11/d26 39 2026-03-09T20:47:22.628 INFO:tasks.workunit.client.1.vm10.stdout:2/243: truncate d5/f16 256017 0 2026-03-09T20:47:22.628 INFO:tasks.workunit.client.1.vm10.stdout:2/244: write d5/f1d [602400,114468] 0 2026-03-09T20:47:22.628 INFO:tasks.workunit.client.1.vm10.stdout:2/245: write d5/d18/f2c [3179684,72399] 0 2026-03-09T20:47:22.628 INFO:tasks.workunit.client.0.vm07.stdout:1/283: readlink d3/l3b 0 2026-03-09T20:47:22.628 INFO:tasks.workunit.client.0.vm07.stdout:1/284: dread - d3/d14/d54/f4b zero size 2026-03-09T20:47:22.628 INFO:tasks.workunit.client.0.vm07.stdout:1/285: write d3/d14/f4d [448787,43148] 0 2026-03-09T20:47:22.628 INFO:tasks.workunit.client.0.vm07.stdout:1/286: rename d3/ce to d3/d14/c57 0 2026-03-09T20:47:22.628 INFO:tasks.workunit.client.0.vm07.stdout:1/287: chown d3/d23/f2c 0 1 2026-03-09T20:47:22.630 INFO:tasks.workunit.client.1.vm10.stdout:2/246: write d5/d18/d2d/f31 [2094876,78548] 0 2026-03-09T20:47:22.635 INFO:tasks.workunit.client.1.vm10.stdout:6/204: creat d3/da/f42 x:0 0 0 2026-03-09T20:47:22.637 INFO:tasks.workunit.client.1.vm10.stdout:6/205: dread - d3/f21 zero size 2026-03-09T20:47:22.640 INFO:tasks.workunit.client.0.vm07.stdout:4/234: dread d2/f5 [0,4194304] 0 2026-03-09T20:47:22.641 INFO:tasks.workunit.client.1.vm10.stdout:6/206: write d3/d12/d24/f27 [3778429,744] 0 2026-03-09T20:47:22.642 INFO:tasks.workunit.client.1.vm10.stdout:6/207: readlink d3/d12/l34 0 2026-03-09T20:47:22.648 
INFO:tasks.workunit.client.1.vm10.stdout:2/247: mknod d5/d18/d27/d38/c48 0 2026-03-09T20:47:22.648 INFO:tasks.workunit.client.1.vm10.stdout:5/190: truncate d2/d1b/f2f 610432 0 2026-03-09T20:47:22.652 INFO:tasks.workunit.client.0.vm07.stdout:4/235: sync 2026-03-09T20:47:22.653 INFO:tasks.workunit.client.1.vm10.stdout:5/191: mknod d2/d1b/c52 0 2026-03-09T20:47:22.659 INFO:tasks.workunit.client.1.vm10.stdout:6/208: link d3/da/d11/f1d d3/d30/f43 0 2026-03-09T20:47:22.662 INFO:tasks.workunit.client.0.vm07.stdout:5/324: getdents d5/df/d13/d30/d56 0 2026-03-09T20:47:22.665 INFO:tasks.workunit.client.1.vm10.stdout:0/193: dread d2/db/f13 [0,4194304] 0 2026-03-09T20:47:22.665 INFO:tasks.workunit.client.0.vm07.stdout:5/325: dread d5/df/d13/f5b [0,4194304] 0 2026-03-09T20:47:22.666 INFO:tasks.workunit.client.0.vm07.stdout:5/326: chown d5/d19/f20 108927 1 2026-03-09T20:47:22.672 INFO:tasks.workunit.client.1.vm10.stdout:6/209: mknod d3/da/c44 0 2026-03-09T20:47:22.672 INFO:tasks.workunit.client.1.vm10.stdout:6/210: dread - d3/f21 zero size 2026-03-09T20:47:22.672 INFO:tasks.workunit.client.1.vm10.stdout:2/248: dwrite d5/d18/d1b/f2e [0,4194304] 0 2026-03-09T20:47:22.674 INFO:tasks.workunit.client.0.vm07.stdout:5/327: getdents d5/df/d13/d30/d56 0 2026-03-09T20:47:22.675 INFO:tasks.workunit.client.1.vm10.stdout:6/211: readlink d3/da/l2e 0 2026-03-09T20:47:22.678 INFO:tasks.workunit.client.0.vm07.stdout:8/239: truncate d1/dc/fd 1468734 0 2026-03-09T20:47:22.681 INFO:tasks.workunit.client.0.vm07.stdout:8/240: creat d1/dc/d16/f4a x:0 0 0 2026-03-09T20:47:22.682 INFO:tasks.workunit.client.1.vm10.stdout:7/195: read db/d21/d23/f34 [1556265,52902] 0 2026-03-09T20:47:22.683 INFO:tasks.workunit.client.1.vm10.stdout:5/192: dwrite d2/d1b/f28 [0,4194304] 0 2026-03-09T20:47:22.688 INFO:tasks.workunit.client.1.vm10.stdout:7/196: chown db/d28/d2b/d36/d3b/f3d 691137 1 2026-03-09T20:47:22.692 INFO:tasks.workunit.client.1.vm10.stdout:5/193: dread d2/f23 [0,4194304] 0 2026-03-09T20:47:22.701 
INFO:tasks.workunit.client.1.vm10.stdout:0/194: dwrite d2/d9/da/d11/f42 [0,4194304] 0 2026-03-09T20:47:22.703 INFO:tasks.workunit.client.1.vm10.stdout:8/252: dwrite d0/fe [0,4194304] 0 2026-03-09T20:47:22.705 INFO:tasks.workunit.client.1.vm10.stdout:2/249: link d5/f15 d5/d18/d27/d28/d41/f49 0 2026-03-09T20:47:22.705 INFO:tasks.workunit.client.1.vm10.stdout:0/195: stat d2/db/f38 0 2026-03-09T20:47:22.708 INFO:tasks.workunit.client.1.vm10.stdout:2/250: write d5/d18/d27/f29 [474227,85285] 0 2026-03-09T20:47:22.709 INFO:tasks.workunit.client.1.vm10.stdout:4/163: dwrite d1/d2/f7 [0,4194304] 0 2026-03-09T20:47:22.718 INFO:tasks.workunit.client.1.vm10.stdout:2/251: write d5/f1d [1425870,115042] 0 2026-03-09T20:47:22.722 INFO:tasks.workunit.client.1.vm10.stdout:8/253: mknod d0/d22/c53 0 2026-03-09T20:47:22.724 INFO:tasks.workunit.client.1.vm10.stdout:4/164: dread d1/fe [0,4194304] 0 2026-03-09T20:47:22.732 INFO:tasks.workunit.client.1.vm10.stdout:8/254: chown d0/d22/d25/f34 9 1 2026-03-09T20:47:22.736 INFO:tasks.workunit.client.1.vm10.stdout:4/165: chown d1/d2/d3/l19 5852734 1 2026-03-09T20:47:22.736 INFO:tasks.workunit.client.1.vm10.stdout:8/255: dwrite d0/d22/d2c/f36 [0,4194304] 0 2026-03-09T20:47:22.745 INFO:tasks.workunit.client.1.vm10.stdout:8/256: dwrite d0/d22/d2f/d38/f43 [0,4194304] 0 2026-03-09T20:47:22.746 INFO:tasks.workunit.client.1.vm10.stdout:7/197: read db/d21/d23/f1e [401678,46336] 0 2026-03-09T20:47:22.753 INFO:tasks.workunit.client.1.vm10.stdout:4/166: creat d1/f33 x:0 0 0 2026-03-09T20:47:22.753 INFO:tasks.workunit.client.1.vm10.stdout:6/212: link d3/da/d11/d26/c3c d3/d12/d36/c45 0 2026-03-09T20:47:22.757 INFO:tasks.workunit.client.1.vm10.stdout:7/198: symlink db/d28/l3e 0 2026-03-09T20:47:22.760 INFO:tasks.workunit.client.1.vm10.stdout:4/167: rename d1/d8/c28 to d1/c34 0 2026-03-09T20:47:22.762 INFO:tasks.workunit.client.1.vm10.stdout:4/168: write d1/f33 [31440,78000] 0 2026-03-09T20:47:22.763 INFO:tasks.workunit.client.0.vm07.stdout:9/278: write 
d4/d16/d29/d24/f36 [191703,115799] 0 2026-03-09T20:47:22.763 INFO:tasks.workunit.client.0.vm07.stdout:9/279: chown d4/f17 72118258 1 2026-03-09T20:47:22.769 INFO:tasks.workunit.client.1.vm10.stdout:9/278: dwrite d2/d28/d47/d50/f5b [4194304,4194304] 0 2026-03-09T20:47:22.775 INFO:tasks.workunit.client.1.vm10.stdout:4/169: read d1/f9 [3482966,120575] 0 2026-03-09T20:47:22.775 INFO:tasks.workunit.client.1.vm10.stdout:9/279: chown d2/d3/f7 192103664 1 2026-03-09T20:47:22.777 INFO:tasks.workunit.client.1.vm10.stdout:7/199: dwrite db/d28/f31 [0,4194304] 0 2026-03-09T20:47:22.786 INFO:tasks.workunit.client.1.vm10.stdout:6/213: read d3/fe [4882509,32868] 0 2026-03-09T20:47:22.788 INFO:tasks.workunit.client.1.vm10.stdout:9/280: creat d2/d12/f62 x:0 0 0 2026-03-09T20:47:22.789 INFO:tasks.workunit.client.1.vm10.stdout:6/214: write f2 [238719,87367] 0 2026-03-09T20:47:22.791 INFO:tasks.workunit.client.1.vm10.stdout:7/200: mkdir db/d28/d2b/d36/d3f 0 2026-03-09T20:47:22.794 INFO:tasks.workunit.client.1.vm10.stdout:1/229: dread d2/da/f10 [0,4194304] 0 2026-03-09T20:47:22.796 INFO:tasks.workunit.client.1.vm10.stdout:7/201: dread db/d21/d23/f14 [0,4194304] 0 2026-03-09T20:47:22.796 INFO:tasks.workunit.client.1.vm10.stdout:6/215: write d3/da/f1b [2589009,63746] 0 2026-03-09T20:47:22.800 INFO:tasks.workunit.client.1.vm10.stdout:1/230: chown d2/da/d25/d3e/f44 12570 1 2026-03-09T20:47:22.800 INFO:tasks.workunit.client.1.vm10.stdout:8/257: getdents d0/d22/d2f/d3d 0 2026-03-09T20:47:22.801 INFO:tasks.workunit.client.1.vm10.stdout:8/258: write d0/d22/d2c/f3f [660910,100220] 0 2026-03-09T20:47:22.803 INFO:tasks.workunit.client.1.vm10.stdout:1/231: readlink d2/l15 0 2026-03-09T20:47:22.804 INFO:tasks.workunit.client.1.vm10.stdout:9/281: link d2/d3/f7 d2/d28/f63 0 2026-03-09T20:47:22.806 INFO:tasks.workunit.client.1.vm10.stdout:9/282: fdatasync d2/d3/f2f 0 2026-03-09T20:47:22.806 INFO:tasks.workunit.client.1.vm10.stdout:9/283: chown d2/d28/d43 863089390 1 2026-03-09T20:47:22.813 
INFO:tasks.workunit.client.0.vm07.stdout:0/358: write d1/d1f/d20/f4d [406545,7709] 0 2026-03-09T20:47:22.817 INFO:tasks.workunit.client.0.vm07.stdout:0/359: mknod d1/d2/d33/d35/c75 0 2026-03-09T20:47:22.817 INFO:tasks.workunit.client.0.vm07.stdout:0/360: fdatasync d1/f57 0 2026-03-09T20:47:22.818 INFO:tasks.workunit.client.1.vm10.stdout:9/284: creat d2/d28/d47/d50/f64 x:0 0 0 2026-03-09T20:47:22.820 INFO:tasks.workunit.client.1.vm10.stdout:6/216: mkdir d3/d46 0 2026-03-09T20:47:22.820 INFO:tasks.workunit.client.0.vm07.stdout:0/361: link d1/d1f/d30/l4f d1/d2/dc/d17/l76 0 2026-03-09T20:47:22.824 INFO:tasks.workunit.client.0.vm07.stdout:0/362: symlink d1/d2/dc/l77 0 2026-03-09T20:47:22.824 INFO:tasks.workunit.client.1.vm10.stdout:9/285: rmdir d2/d3/de/d35/d44 39 2026-03-09T20:47:22.824 INFO:tasks.workunit.client.0.vm07.stdout:0/363: readlink d1/d2/dc/l25 0 2026-03-09T20:47:22.826 INFO:tasks.workunit.client.0.vm07.stdout:0/364: readlink d1/d2/l5f 0 2026-03-09T20:47:22.826 INFO:tasks.workunit.client.1.vm10.stdout:9/286: creat d2/d12/f65 x:0 0 0 2026-03-09T20:47:22.832 INFO:tasks.workunit.client.1.vm10.stdout:9/287: stat d2/d12/l23 0 2026-03-09T20:47:22.832 INFO:tasks.workunit.client.1.vm10.stdout:9/288: creat d2/d33/d37/f66 x:0 0 0 2026-03-09T20:47:22.832 INFO:tasks.workunit.client.1.vm10.stdout:9/289: mkdir d2/d28/d47/d67 0 2026-03-09T20:47:22.833 INFO:tasks.workunit.client.0.vm07.stdout:0/365: link d1/d2/dc/l18 d1/d2/d33/d35/l78 0 2026-03-09T20:47:22.835 INFO:tasks.workunit.client.1.vm10.stdout:6/217: write d3/d12/f2b [2147145,11393] 0 2026-03-09T20:47:22.837 INFO:tasks.workunit.client.0.vm07.stdout:0/366: creat d1/d1f/d53/f79 x:0 0 0 2026-03-09T20:47:22.838 INFO:tasks.workunit.client.0.vm07.stdout:0/367: write d1/d1f/d53/f65 [158075,87178] 0 2026-03-09T20:47:22.845 INFO:tasks.workunit.client.0.vm07.stdout:0/368: rename d1/d1f/d20/f73 to d1/d1f/d30/f7a 0 2026-03-09T20:47:22.850 INFO:tasks.workunit.client.0.vm07.stdout:0/369: dwrite d1/d1f/d53/f65 [0,4194304] 0 
2026-03-09T20:47:22.857 INFO:tasks.workunit.client.0.vm07.stdout:5/328: dread d5/df/d13/f17 [0,4194304] 0 2026-03-09T20:47:22.857 INFO:tasks.workunit.client.1.vm10.stdout:6/218: dwrite d3/d12/f18 [0,4194304] 0 2026-03-09T20:47:22.873 INFO:tasks.workunit.client.1.vm10.stdout:3/157: truncate dc/dd/f18 1030670 0 2026-03-09T20:47:22.873 INFO:tasks.workunit.client.0.vm07.stdout:2/327: write d2/d11/f44 [549015,25290] 0 2026-03-09T20:47:22.873 INFO:tasks.workunit.client.0.vm07.stdout:7/323: write d3/da/db/d14/d1f/d2b/f33 [782659,79300] 0 2026-03-09T20:47:22.874 INFO:tasks.workunit.client.1.vm10.stdout:3/158: chown c5 3750436 1 2026-03-09T20:47:22.881 INFO:tasks.workunit.client.0.vm07.stdout:1/288: dwrite d3/f9 [0,4194304] 0 2026-03-09T20:47:22.882 INFO:tasks.workunit.client.1.vm10.stdout:6/219: mkdir d3/da/d11/d31/d47 0 2026-03-09T20:47:22.885 INFO:tasks.workunit.client.0.vm07.stdout:3/297: write d1/d5/d10/d59/f61 [1671582,127853] 0 2026-03-09T20:47:22.891 INFO:tasks.workunit.client.0.vm07.stdout:3/298: dwrite d1/d5/d9/d2f/d34/d46/d5d/f5a [0,4194304] 0 2026-03-09T20:47:22.894 INFO:tasks.workunit.client.1.vm10.stdout:3/159: creat dc/d14/f39 x:0 0 0 2026-03-09T20:47:22.895 INFO:tasks.workunit.client.0.vm07.stdout:1/289: creat d3/d23/f58 x:0 0 0 2026-03-09T20:47:22.899 INFO:tasks.workunit.client.0.vm07.stdout:7/324: sync 2026-03-09T20:47:22.899 INFO:tasks.workunit.client.0.vm07.stdout:2/328: symlink d2/d11/l62 0 2026-03-09T20:47:22.899 INFO:tasks.workunit.client.1.vm10.stdout:6/220: creat d3/d46/f48 x:0 0 0 2026-03-09T20:47:22.902 INFO:tasks.workunit.client.1.vm10.stdout:3/160: rename f7 to dc/d14/d26/d37/f3a 0 2026-03-09T20:47:22.911 INFO:tasks.workunit.client.0.vm07.stdout:1/290: creat d3/d14/d54/d3e/f59 x:0 0 0 2026-03-09T20:47:22.911 INFO:tasks.workunit.client.0.vm07.stdout:1/291: chown d3/d23/f39 1 1 2026-03-09T20:47:22.912 INFO:tasks.workunit.client.0.vm07.stdout:7/325: rename d3/c10 to d3/da/db/d32/d3e/c70 0 2026-03-09T20:47:22.912 
INFO:tasks.workunit.client.1.vm10.stdout:6/221: creat d3/d46/f49 x:0 0 0 2026-03-09T20:47:22.913 INFO:tasks.workunit.client.0.vm07.stdout:3/299: mknod d1/d5/d9/d11/c62 0 2026-03-09T20:47:22.915 INFO:tasks.workunit.client.0.vm07.stdout:1/292: rename d3/d23/l29 to d3/d23/d55/l5a 0 2026-03-09T20:47:22.916 INFO:tasks.workunit.client.0.vm07.stdout:7/326: symlink d3/da/db/d14/d1f/l71 0 2026-03-09T20:47:22.916 INFO:tasks.workunit.client.0.vm07.stdout:3/300: readlink d1/l8 0 2026-03-09T20:47:22.918 INFO:tasks.workunit.client.0.vm07.stdout:2/329: creat d2/f63 x:0 0 0 2026-03-09T20:47:22.919 INFO:tasks.workunit.client.0.vm07.stdout:1/293: unlink d3/d14/d54/f38 0 2026-03-09T20:47:22.920 INFO:tasks.workunit.client.0.vm07.stdout:1/294: write d3/f5 [1298784,106358] 0 2026-03-09T20:47:22.921 INFO:tasks.workunit.client.0.vm07.stdout:7/327: rename d3/da/db/d14/d43/l4d to d3/d58/l72 0 2026-03-09T20:47:22.924 INFO:tasks.workunit.client.1.vm10.stdout:7/202: dread db/f19 [0,4194304] 0 2026-03-09T20:47:22.925 INFO:tasks.workunit.client.1.vm10.stdout:7/203: fsync db/d21/d26/f2f 0 2026-03-09T20:47:22.926 INFO:tasks.workunit.client.0.vm07.stdout:3/301: unlink d1/d5/d9/d2f/d34/d46/d5d/f5a 0 2026-03-09T20:47:22.926 INFO:tasks.workunit.client.0.vm07.stdout:7/328: dwrite d3/f59 [0,4194304] 0 2026-03-09T20:47:22.929 INFO:tasks.workunit.client.1.vm10.stdout:3/161: dread dc/d14/d26/f34 [0,4194304] 0 2026-03-09T20:47:22.929 INFO:tasks.workunit.client.0.vm07.stdout:7/329: write d3/da/f38 [1388906,23271] 0 2026-03-09T20:47:22.934 INFO:tasks.workunit.client.0.vm07.stdout:2/330: rename d2/db/d1c/d4a/f55 to d2/db/d49/f64 0 2026-03-09T20:47:22.934 INFO:tasks.workunit.client.1.vm10.stdout:7/204: mkdir db/d28/d2b/d36/d40 0 2026-03-09T20:47:22.934 INFO:tasks.workunit.client.0.vm07.stdout:2/331: chown d2/d11/f5d 278332 1 2026-03-09T20:47:22.936 INFO:tasks.workunit.client.0.vm07.stdout:3/302: creat d1/d5/d10/f63 x:0 0 0 2026-03-09T20:47:22.936 INFO:tasks.workunit.client.0.vm07.stdout:3/303: stat d1/d5/d9/f33 
0 2026-03-09T20:47:22.942 INFO:tasks.workunit.client.1.vm10.stdout:7/205: dread db/d28/f31 [0,4194304] 0 2026-03-09T20:47:22.944 INFO:tasks.workunit.client.0.vm07.stdout:3/304: dwrite d1/d5/d10/f1a [0,4194304] 0 2026-03-09T20:47:22.947 INFO:tasks.workunit.client.1.vm10.stdout:8/259: dread d0/d22/d25/f34 [0,4194304] 0 2026-03-09T20:47:22.950 INFO:tasks.workunit.client.0.vm07.stdout:2/332: unlink d2/db/f1b 0 2026-03-09T20:47:22.951 INFO:tasks.workunit.client.1.vm10.stdout:8/260: write d0/d22/d2f/d38/f43 [4958111,95011] 0 2026-03-09T20:47:22.951 INFO:tasks.workunit.client.1.vm10.stdout:8/261: fsync d0/d22/d2c/f3f 0 2026-03-09T20:47:22.952 INFO:tasks.workunit.client.0.vm07.stdout:3/305: write d1/d5/d9/d11/f21 [3559922,118305] 0 2026-03-09T20:47:22.954 INFO:tasks.workunit.client.1.vm10.stdout:7/206: link db/f19 db/d28/f41 0 2026-03-09T20:47:22.954 INFO:tasks.workunit.client.1.vm10.stdout:7/207: readlink db/d28/d2b/d36/l17 0 2026-03-09T20:47:22.957 INFO:tasks.workunit.client.0.vm07.stdout:1/295: dread d3/f4 [0,4194304] 0 2026-03-09T20:47:22.958 INFO:tasks.workunit.client.1.vm10.stdout:7/208: chown db/f39 25113 1 2026-03-09T20:47:22.959 INFO:tasks.workunit.client.1.vm10.stdout:8/262: fdatasync d0/f21 0 2026-03-09T20:47:22.959 INFO:tasks.workunit.client.0.vm07.stdout:3/306: fdatasync d1/d5/d10/f55 0 2026-03-09T20:47:22.959 INFO:tasks.workunit.client.1.vm10.stdout:8/263: read d0/d22/d25/d2e/f33 [2636362,114208] 0 2026-03-09T20:47:22.960 INFO:tasks.workunit.client.0.vm07.stdout:1/296: symlink d3/d23/d47/l5b 0 2026-03-09T20:47:22.961 INFO:tasks.workunit.client.0.vm07.stdout:2/333: creat d2/db/d28/d57/f65 x:0 0 0 2026-03-09T20:47:22.961 INFO:tasks.workunit.client.0.vm07.stdout:2/334: chown d2/db/d1c/d4a 319 1 2026-03-09T20:47:22.962 INFO:tasks.workunit.client.0.vm07.stdout:2/335: truncate d2/d11/f44 1586313 0 2026-03-09T20:47:22.965 INFO:tasks.workunit.client.0.vm07.stdout:1/297: creat d3/f5c x:0 0 0 2026-03-09T20:47:22.985 INFO:tasks.workunit.client.1.vm10.stdout:8/264: mkdir 
d0/d54 0 2026-03-09T20:47:22.985 INFO:tasks.workunit.client.1.vm10.stdout:8/265: write d0/fe [408564,23973] 0 2026-03-09T20:47:22.985 INFO:tasks.workunit.client.1.vm10.stdout:7/209: truncate db/d1f/f2a 87984 0 2026-03-09T20:47:22.985 INFO:tasks.workunit.client.1.vm10.stdout:7/210: link db/d1f/f2c db/d28/d2b/d36/d3b/f42 0 2026-03-09T20:47:22.985 INFO:tasks.workunit.client.1.vm10.stdout:7/211: creat db/d21/d26/f43 x:0 0 0 2026-03-09T20:47:22.988 INFO:tasks.workunit.client.1.vm10.stdout:7/212: dread db/d28/f41 [0,4194304] 0 2026-03-09T20:47:22.989 INFO:tasks.workunit.client.1.vm10.stdout:7/213: dread - db/d1f/f2c zero size 2026-03-09T20:47:22.993 INFO:tasks.workunit.client.1.vm10.stdout:7/214: link db/d28/d2b/d36/d3b/f42 db/d28/d2b/d36/d40/f44 0 2026-03-09T20:47:22.996 INFO:tasks.workunit.client.1.vm10.stdout:7/215: rename f1 to db/f45 0 2026-03-09T20:47:22.996 INFO:tasks.workunit.client.1.vm10.stdout:7/216: truncate db/d28/d2b/d36/f3c 855565 0 2026-03-09T20:47:23.000 INFO:tasks.workunit.client.1.vm10.stdout:7/217: unlink db/d1f/f2c 0 2026-03-09T20:47:23.011 INFO:tasks.workunit.client.1.vm10.stdout:3/162: sync 2026-03-09T20:47:23.011 INFO:tasks.workunit.client.1.vm10.stdout:8/266: sync 2026-03-09T20:47:23.012 INFO:tasks.workunit.client.0.vm07.stdout:4/236: truncate d2/d1f/f3c 3805993 0 2026-03-09T20:47:23.012 INFO:tasks.workunit.client.0.vm07.stdout:4/237: chown d2/f28 23349 1 2026-03-09T20:47:23.012 INFO:tasks.workunit.client.1.vm10.stdout:3/163: chown dc/d14/f1a 10 1 2026-03-09T20:47:23.012 INFO:tasks.workunit.client.1.vm10.stdout:3/164: dread - dc/dd/f23 zero size 2026-03-09T20:47:23.012 INFO:tasks.workunit.client.1.vm10.stdout:8/267: sync 2026-03-09T20:47:23.013 INFO:tasks.workunit.client.1.vm10.stdout:3/165: sync 2026-03-09T20:47:23.014 INFO:tasks.workunit.client.1.vm10.stdout:8/268: fdatasync d0/f19 0 2026-03-09T20:47:23.018 INFO:tasks.workunit.client.1.vm10.stdout:8/269: truncate d0/f21 1378272 0 2026-03-09T20:47:23.019 
INFO:tasks.workunit.client.1.vm10.stdout:8/270: sync 2026-03-09T20:47:23.059 INFO:tasks.workunit.client.1.vm10.stdout:1/232: dread d2/da/fe [0,4194304] 0 2026-03-09T20:47:23.060 INFO:tasks.workunit.client.1.vm10.stdout:1/233: fsync d2/da/d25/f48 0 2026-03-09T20:47:23.060 INFO:tasks.workunit.client.1.vm10.stdout:8/271: dread d0/d22/d2f/d3d/f49 [0,4194304] 0 2026-03-09T20:47:23.061 INFO:tasks.workunit.client.1.vm10.stdout:1/234: creat d2/f4c x:0 0 0 2026-03-09T20:47:23.066 INFO:tasks.workunit.client.1.vm10.stdout:1/235: truncate d2/da/f10 4355008 0 2026-03-09T20:47:23.090 INFO:tasks.workunit.client.0.vm07.stdout:8/241: rmdir d1/dc/d16 39 2026-03-09T20:47:23.091 INFO:tasks.workunit.client.1.vm10.stdout:5/194: write d2/f8 [2301962,83581] 0 2026-03-09T20:47:23.091 INFO:tasks.workunit.client.1.vm10.stdout:2/252: truncate d5/f15 4181791 0 2026-03-09T20:47:23.093 INFO:tasks.workunit.client.1.vm10.stdout:2/253: write d5/d18/d27/d38/f45 [456361,83157] 0 2026-03-09T20:47:23.098 INFO:tasks.workunit.client.1.vm10.stdout:5/195: unlink d2/d27/d37/d46/f4c 0 2026-03-09T20:47:23.100 INFO:tasks.workunit.client.1.vm10.stdout:2/254: getdents d5/d18/d27/d38 0 2026-03-09T20:47:23.102 INFO:tasks.workunit.client.1.vm10.stdout:5/196: dwrite f1 [4194304,4194304] 0 2026-03-09T20:47:23.104 INFO:tasks.workunit.client.1.vm10.stdout:2/255: link d5/l40 d5/d18/d2d/d47/l4a 0 2026-03-09T20:47:23.106 INFO:tasks.workunit.client.1.vm10.stdout:2/256: readlink d5/d18/d2d/d47/l4a 0 2026-03-09T20:47:23.106 INFO:tasks.workunit.client.1.vm10.stdout:5/197: link d2/l6 d2/d1b/l53 0 2026-03-09T20:47:23.108 INFO:tasks.workunit.client.1.vm10.stdout:5/198: chown d2/d39/d4b/f51 433731127 1 2026-03-09T20:47:23.109 INFO:tasks.workunit.client.1.vm10.stdout:5/199: dread - d2/f3e zero size 2026-03-09T20:47:23.109 INFO:tasks.workunit.client.1.vm10.stdout:2/257: creat d5/d18/d27/d28/d41/f4b x:0 0 0 2026-03-09T20:47:23.112 INFO:tasks.workunit.client.1.vm10.stdout:5/200: stat d2/c25 0 2026-03-09T20:47:23.112 
INFO:tasks.workunit.client.1.vm10.stdout:2/258: creat d5/d18/d2d/f4c x:0 0 0 2026-03-09T20:47:23.118 INFO:tasks.workunit.client.0.vm07.stdout:7/330: dread d3/da/db/d14/f2a [0,4194304] 0 2026-03-09T20:47:23.122 INFO:tasks.workunit.client.0.vm07.stdout:7/331: creat d3/da/db/d14/d1f/d2b/d52/f73 x:0 0 0 2026-03-09T20:47:23.122 INFO:tasks.workunit.client.1.vm10.stdout:5/201: dwrite d2/f3e [0,4194304] 0 2026-03-09T20:47:23.122 INFO:tasks.workunit.client.0.vm07.stdout:7/332: read d3/da/db/d14/f2a [3449256,83890] 0 2026-03-09T20:47:23.123 INFO:tasks.workunit.client.0.vm07.stdout:7/333: fsync d3/da/db/d14/d1f/f46 0 2026-03-09T20:47:23.124 INFO:tasks.workunit.client.0.vm07.stdout:7/334: write d3/da/db/f27 [3815898,117145] 0 2026-03-09T20:47:23.129 INFO:tasks.workunit.client.1.vm10.stdout:0/196: truncate d2/d9/da/de/d1a/f2b 6781004 0 2026-03-09T20:47:23.132 INFO:tasks.workunit.client.1.vm10.stdout:4/170: getdents d1/d8 0 2026-03-09T20:47:23.133 INFO:tasks.workunit.client.0.vm07.stdout:6/309: write d8/f15 [1712604,110240] 0 2026-03-09T20:47:23.134 INFO:tasks.workunit.client.1.vm10.stdout:8/272: dwrite d0/d22/d2c/f36 [0,4194304] 0 2026-03-09T20:47:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:22 vm07.local ceph-mon[49120]: pgmap v151: 65 pgs: 65 active+clean; 943 MiB data, 3.9 GiB used, 116 GiB / 120 GiB avail; 11 MiB/s rd, 114 MiB/s wr, 232 op/s 2026-03-09T20:47:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:22 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:23.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:22 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:23.136 INFO:tasks.workunit.client.0.vm07.stdout:9/280: dwrite d4/d8/d19/f69 [0,4194304] 0 2026-03-09T20:47:23.140 INFO:tasks.workunit.client.0.vm07.stdout:9/281: write d4/d8/dc/f21 [4508406,71952] 0 2026-03-09T20:47:23.141 
INFO:tasks.workunit.client.0.vm07.stdout:6/310: dwrite d8/d16/d22/d24/d2b/f3c [0,4194304] 0 2026-03-09T20:47:23.153 INFO:tasks.workunit.client.0.vm07.stdout:8/242: read d1/f13 [959092,83641] 0 2026-03-09T20:47:23.158 INFO:tasks.workunit.client.0.vm07.stdout:9/282: dwrite d4/d11/d2a/f5d [0,4194304] 0 2026-03-09T20:47:23.164 INFO:tasks.workunit.client.1.vm10.stdout:0/197: mknod d2/d9/da/de/d1a/d25/d3e/c43 0 2026-03-09T20:47:23.166 INFO:tasks.workunit.client.1.vm10.stdout:5/202: dread d2/d27/d37/f38 [0,4194304] 0 2026-03-09T20:47:23.172 INFO:tasks.workunit.client.0.vm07.stdout:7/335: creat d3/da/db/d14/d1f/d2b/d52/f74 x:0 0 0 2026-03-09T20:47:23.172 INFO:tasks.workunit.client.1.vm10.stdout:4/171: mknod d1/d8/d1b/c35 0 2026-03-09T20:47:23.175 INFO:tasks.workunit.client.1.vm10.stdout:8/273: symlink d0/d22/d2f/d38/l55 0 2026-03-09T20:47:23.176 INFO:tasks.workunit.client.0.vm07.stdout:6/311: write d8/d26/f3d [591947,51483] 0 2026-03-09T20:47:23.176 INFO:tasks.workunit.client.1.vm10.stdout:9/290: dwrite d2/d12/f26 [0,4194304] 0 2026-03-09T20:47:23.181 INFO:tasks.workunit.client.0.vm07.stdout:5/329: write d5/d33/f5a [337761,9121] 0 2026-03-09T20:47:23.186 INFO:tasks.workunit.client.0.vm07.stdout:0/370: dwrite f0 [0,4194304] 0 2026-03-09T20:47:23.197 INFO:tasks.workunit.client.1.vm10.stdout:0/198: symlink d2/d9/da/de/d1a/d25/d3e/l44 0 2026-03-09T20:47:23.202 INFO:tasks.workunit.client.1.vm10.stdout:9/291: dread d2/d12/f2a [0,4194304] 0 2026-03-09T20:47:23.205 INFO:tasks.workunit.client.0.vm07.stdout:7/336: mknod d3/da/d53/c75 0 2026-03-09T20:47:23.205 INFO:tasks.workunit.client.1.vm10.stdout:5/203: mkdir d2/d1b/d54 0 2026-03-09T20:47:23.206 INFO:tasks.workunit.client.0.vm07.stdout:7/337: write d3/da/f47 [1270524,32802] 0 2026-03-09T20:47:23.211 INFO:tasks.workunit.client.1.vm10.stdout:8/274: unlink d0/f6 0 2026-03-09T20:47:23.221 INFO:tasks.workunit.client.1.vm10.stdout:0/199: sync 2026-03-09T20:47:23.228 INFO:tasks.workunit.client.0.vm07.stdout:0/371: symlink d1/d2/d4b/l7b 
0 2026-03-09T20:47:23.232 INFO:tasks.workunit.client.1.vm10.stdout:6/222: truncate d3/d12/f28 460956 0 2026-03-09T20:47:23.233 INFO:tasks.workunit.client.0.vm07.stdout:7/338: chown d3/c5b 263 1 2026-03-09T20:47:23.235 INFO:tasks.workunit.client.0.vm07.stdout:6/312: getdents d8/d5d 0 2026-03-09T20:47:23.236 INFO:tasks.workunit.client.1.vm10.stdout:6/223: dwrite d3/da/f15 [0,4194304] 0 2026-03-09T20:47:23.238 INFO:tasks.workunit.client.0.vm07.stdout:6/313: dwrite d8/f46 [0,4194304] 0 2026-03-09T20:47:23.245 INFO:tasks.workunit.client.1.vm10.stdout:4/172: unlink d1/f1e 0 2026-03-09T20:47:23.253 INFO:tasks.workunit.client.0.vm07.stdout:3/307: rename d1/d5/d10 to d1/d5/d9/d2f/d3d/d64 0 2026-03-09T20:47:23.255 INFO:tasks.workunit.client.0.vm07.stdout:0/372: unlink d1/d2/c26 0 2026-03-09T20:47:23.258 INFO:tasks.workunit.client.1.vm10.stdout:8/275: mknod d0/d22/d2f/c56 0 2026-03-09T20:47:23.269 INFO:tasks.workunit.client.0.vm07.stdout:9/283: rmdir d4/d8/dc/d63 0 2026-03-09T20:47:23.269 INFO:tasks.workunit.client.0.vm07.stdout:9/284: fdatasync d4/d8/d59/f66 0 2026-03-09T20:47:23.270 INFO:tasks.workunit.client.0.vm07.stdout:1/298: write d3/f4 [488331,38810] 0 2026-03-09T20:47:23.271 INFO:tasks.workunit.client.0.vm07.stdout:1/299: chown d3/d23/f39 11 1 2026-03-09T20:47:23.273 INFO:tasks.workunit.client.0.vm07.stdout:7/339: symlink d3/da/db/d14/d43/l76 0 2026-03-09T20:47:23.275 INFO:tasks.workunit.client.1.vm10.stdout:9/292: symlink d2/d28/d43/l68 0 2026-03-09T20:47:23.277 INFO:tasks.workunit.client.0.vm07.stdout:7/340: dwrite d3/f59 [0,4194304] 0 2026-03-09T20:47:23.282 INFO:tasks.workunit.client.1.vm10.stdout:6/224: mkdir d3/d12/d4a 0 2026-03-09T20:47:23.289 INFO:tasks.workunit.client.0.vm07.stdout:2/336: rename d2/db/d28/f5b to d2/db/d28/d5c/f66 0 2026-03-09T20:47:23.289 INFO:tasks.workunit.client.1.vm10.stdout:7/218: dwrite db/d21/d23/f29 [0,4194304] 0 2026-03-09T20:47:23.290 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:22 vm10.local ceph-mon[57011]: pgmap v151: 
65 pgs: 65 active+clean; 943 MiB data, 3.9 GiB used, 116 GiB / 120 GiB avail; 11 MiB/s rd, 114 MiB/s wr, 232 op/s 2026-03-09T20:47:23.290 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:22 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:23.290 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:22 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:23.290 INFO:tasks.workunit.client.1.vm10.stdout:6/225: dread d3/da/f15 [0,4194304] 0 2026-03-09T20:47:23.290 INFO:tasks.workunit.client.1.vm10.stdout:5/204: symlink d2/d1b/d54/l55 0 2026-03-09T20:47:23.290 INFO:tasks.workunit.client.1.vm10.stdout:6/226: dread - d3/d46/f48 zero size 2026-03-09T20:47:23.291 INFO:tasks.workunit.client.0.vm07.stdout:4/238: dwrite d2/f33 [0,4194304] 0 2026-03-09T20:47:23.293 INFO:tasks.workunit.client.1.vm10.stdout:4/173: creat d1/d8/d1c/d2b/f36 x:0 0 0 2026-03-09T20:47:23.301 INFO:tasks.workunit.client.0.vm07.stdout:0/373: rmdir d1/d1f 39 2026-03-09T20:47:23.308 INFO:tasks.workunit.client.0.vm07.stdout:0/374: dwrite d1/d2/dc/d17/f6e [0,4194304] 0 2026-03-09T20:47:23.311 INFO:tasks.workunit.client.0.vm07.stdout:0/375: readlink d1/d2/l15 0 2026-03-09T20:47:23.323 INFO:tasks.workunit.client.1.vm10.stdout:1/236: dwrite d2/f14 [0,4194304] 0 2026-03-09T20:47:23.328 INFO:tasks.workunit.client.0.vm07.stdout:8/243: link d1/f33 d1/dc/d16/f4b 0 2026-03-09T20:47:23.332 INFO:tasks.workunit.client.0.vm07.stdout:8/244: dwrite d1/f25 [0,4194304] 0 2026-03-09T20:47:23.347 INFO:tasks.workunit.client.0.vm07.stdout:8/245: stat d1/d3b/f3e 0 2026-03-09T20:47:23.347 INFO:tasks.workunit.client.0.vm07.stdout:8/246: write d1/dc/d16/d26/f48 [640564,123014] 0 2026-03-09T20:47:23.347 INFO:tasks.workunit.client.0.vm07.stdout:9/285: dread - d4/d11/f2c zero size 2026-03-09T20:47:23.350 INFO:tasks.workunit.client.0.vm07.stdout:1/300: creat d3/d23/f5d x:0 0 0 2026-03-09T20:47:23.355 
INFO:tasks.workunit.client.1.vm10.stdout:9/293: creat d2/d12/f69 x:0 0 0 2026-03-09T20:47:23.365 INFO:tasks.workunit.client.0.vm07.stdout:7/341: mkdir d3/d58/d77 0 2026-03-09T20:47:23.365 INFO:tasks.workunit.client.1.vm10.stdout:9/294: fdatasync d2/d3/f2f 0 2026-03-09T20:47:23.369 INFO:tasks.workunit.client.1.vm10.stdout:6/227: rename d3/d12/f1e to d3/d12/d4a/f4b 0 2026-03-09T20:47:23.369 INFO:tasks.workunit.client.1.vm10.stdout:4/174: unlink d1/d2/d3/cf 0 2026-03-09T20:47:23.372 INFO:tasks.workunit.client.0.vm07.stdout:9/286: chown d4/d8/dc/l12 3901252 1 2026-03-09T20:47:23.372 INFO:tasks.workunit.client.1.vm10.stdout:1/237: truncate d2/da/fe 1357685 0 2026-03-09T20:47:23.372 INFO:tasks.workunit.client.0.vm07.stdout:9/287: chown d4/d8/dc 1436 1 2026-03-09T20:47:23.373 INFO:tasks.workunit.client.0.vm07.stdout:9/288: fsync d4/d8/dc/f68 0 2026-03-09T20:47:23.373 INFO:tasks.workunit.client.0.vm07.stdout:1/301: symlink d3/d14/d54/l5e 0 2026-03-09T20:47:23.375 INFO:tasks.workunit.client.0.vm07.stdout:9/289: write d4/d11/d31/f5b [867738,15680] 0 2026-03-09T20:47:23.379 INFO:tasks.workunit.client.0.vm07.stdout:1/302: dwrite d3/d23/f45 [0,4194304] 0 2026-03-09T20:47:23.379 INFO:tasks.workunit.client.1.vm10.stdout:9/295: dwrite d2/d12/f2a [0,4194304] 0 2026-03-09T20:47:23.379 INFO:tasks.workunit.client.1.vm10.stdout:9/296: dread - d2/d28/f32 zero size 2026-03-09T20:47:23.380 INFO:tasks.workunit.client.1.vm10.stdout:7/219: mkdir db/d46 0 2026-03-09T20:47:23.388 INFO:tasks.workunit.client.1.vm10.stdout:5/205: symlink d2/d3d/l56 0 2026-03-09T20:47:23.388 INFO:tasks.workunit.client.1.vm10.stdout:5/206: fsync d2/d27/f34 0 2026-03-09T20:47:23.388 INFO:tasks.workunit.client.0.vm07.stdout:6/314: rename d8/c38 to d8/d16/d4b/c64 0 2026-03-09T20:47:23.388 INFO:tasks.workunit.client.0.vm07.stdout:3/308: creat d1/f65 x:0 0 0 2026-03-09T20:47:23.394 INFO:tasks.workunit.client.0.vm07.stdout:6/315: dwrite d8/d50/f55 [0,4194304] 0 2026-03-09T20:47:23.398 
INFO:tasks.workunit.client.1.vm10.stdout:6/228: mkdir d3/da/d11/d31/d4c 0 2026-03-09T20:47:23.400 INFO:tasks.workunit.client.1.vm10.stdout:4/175: rmdir d1/d8 39 2026-03-09T20:47:23.403 INFO:tasks.workunit.client.1.vm10.stdout:9/297: mkdir d2/d28/d47/d6a 0 2026-03-09T20:47:23.404 INFO:tasks.workunit.client.0.vm07.stdout:9/290: creat d4/d16/d29/d24/f6a x:0 0 0 2026-03-09T20:47:23.410 INFO:tasks.workunit.client.0.vm07.stdout:7/342: mknod d3/da/db/d14/c78 0 2026-03-09T20:47:23.415 INFO:tasks.workunit.client.1.vm10.stdout:2/259: dwrite d5/d2b/f36 [0,4194304] 0 2026-03-09T20:47:23.416 INFO:tasks.workunit.client.0.vm07.stdout:7/343: fsync d3/da/db/d14/d1f/d2b/d52/f5e 0 2026-03-09T20:47:23.422 INFO:tasks.workunit.client.1.vm10.stdout:1/238: fdatasync d2/da/f10 0 2026-03-09T20:47:23.424 INFO:tasks.workunit.client.0.vm07.stdout:6/316: unlink d8/d26/f51 0 2026-03-09T20:47:23.424 INFO:tasks.workunit.client.1.vm10.stdout:1/239: read d2/f21 [870826,12427] 0 2026-03-09T20:47:23.425 INFO:tasks.workunit.client.1.vm10.stdout:1/240: stat d2/da/d25/f2e 0 2026-03-09T20:47:23.426 INFO:tasks.workunit.client.1.vm10.stdout:7/220: creat db/d46/f47 x:0 0 0 2026-03-09T20:47:23.426 INFO:tasks.workunit.client.0.vm07.stdout:8/247: creat d1/dc/f4c x:0 0 0 2026-03-09T20:47:23.443 INFO:tasks.workunit.client.1.vm10.stdout:6/229: creat d3/f4d x:0 0 0 2026-03-09T20:47:23.444 INFO:tasks.workunit.client.1.vm10.stdout:6/230: write d3/d30/d33/f3a [908454,122145] 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.0.vm07.stdout:7/344: mkdir d3/da/db/d79 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.0.vm07.stdout:7/345: write d3/da/db/d14/d1f/d2b/d52/f5e [495897,32033] 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.0.vm07.stdout:6/317: creat d8/d26/d2a/d40/f65 x:0 0 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.0.vm07.stdout:6/318: write d8/d26/f4d [531124,107027] 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.0.vm07.stdout:0/376: creat d1/d1f/f7c x:0 0 0 2026-03-09T20:47:23.445 
INFO:tasks.workunit.client.0.vm07.stdout:0/377: write d1/d2/f5e [812963,102042] 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.0.vm07.stdout:0/378: write d1/d2/d33/d35/f59 [800146,89313] 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.0.vm07.stdout:8/248: mkdir d1/dc/d14/d2f/d4d 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.0.vm07.stdout:9/291: symlink d4/d8/dc/l6b 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.0.vm07.stdout:1/303: link d3/d14/d54/l36 d3/d23/d55/l5f 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.0.vm07.stdout:7/346: mkdir d3/da/db/d32/d7a 0 2026-03-09T20:47:23.445 INFO:tasks.workunit.client.1.vm10.stdout:7/221: creat db/d28/d2b/d36/d40/f48 x:0 0 0 2026-03-09T20:47:23.446 INFO:tasks.workunit.client.0.vm07.stdout:0/379: mknod d1/d2/d33/c7d 0 2026-03-09T20:47:23.448 INFO:tasks.workunit.client.0.vm07.stdout:8/249: creat d1/dc/d16/d26/f4e x:0 0 0 2026-03-09T20:47:23.449 INFO:tasks.workunit.client.1.vm10.stdout:6/231: creat d3/d30/d33/f4e x:0 0 0 2026-03-09T20:47:23.449 INFO:tasks.workunit.client.0.vm07.stdout:9/292: symlink d4/d11/d2a/l6c 0 2026-03-09T20:47:23.450 INFO:tasks.workunit.client.0.vm07.stdout:9/293: truncate d4/d8/dc/ff 4513105 0 2026-03-09T20:47:23.451 INFO:tasks.workunit.client.1.vm10.stdout:1/241: creat d2/da/f4d x:0 0 0 2026-03-09T20:47:23.454 INFO:tasks.workunit.client.0.vm07.stdout:1/304: rename d3/d23/d47 to d3/d23/d55/d56/d60 0 2026-03-09T20:47:23.454 INFO:tasks.workunit.client.1.vm10.stdout:7/222: symlink db/d1f/l49 0 2026-03-09T20:47:23.454 INFO:tasks.workunit.client.1.vm10.stdout:2/260: link l3 d5/d18/d2d/l4d 0 2026-03-09T20:47:23.457 INFO:tasks.workunit.client.1.vm10.stdout:1/242: symlink d2/l4e 0 2026-03-09T20:47:23.460 INFO:tasks.workunit.client.0.vm07.stdout:1/305: mknod d3/d23/d55/c61 0 2026-03-09T20:47:23.460 INFO:tasks.workunit.client.1.vm10.stdout:1/243: write d2/f2a [920347,1581] 0 2026-03-09T20:47:23.461 INFO:tasks.workunit.client.0.vm07.stdout:8/250: dwrite d1/dc/d16/f4b [0,4194304] 0 
2026-03-09T20:47:23.465 INFO:tasks.workunit.client.0.vm07.stdout:1/306: write d3/f10 [542245,114544] 0 2026-03-09T20:47:23.468 INFO:tasks.workunit.client.0.vm07.stdout:1/307: stat d3/l21 0 2026-03-09T20:47:23.469 INFO:tasks.workunit.client.0.vm07.stdout:6/319: getdents d8/d16/d22/d24 0 2026-03-09T20:47:23.469 INFO:tasks.workunit.client.0.vm07.stdout:8/251: creat d1/dc/d16/d26/f4f x:0 0 0 2026-03-09T20:47:23.471 INFO:tasks.workunit.client.0.vm07.stdout:8/252: write d1/dc/d16/f4a [548851,87796] 0 2026-03-09T20:47:23.476 INFO:tasks.workunit.client.1.vm10.stdout:6/232: sync 2026-03-09T20:47:23.477 INFO:tasks.workunit.client.1.vm10.stdout:6/233: write d3/d12/f2b [4097029,41314] 0 2026-03-09T20:47:23.477 INFO:tasks.workunit.client.1.vm10.stdout:6/234: chown d3/da/d11/d31/d4c 1331691 1 2026-03-09T20:47:23.479 INFO:tasks.workunit.client.0.vm07.stdout:7/347: rename d3/c2e to d3/da/db/d14/c7b 0 2026-03-09T20:47:23.481 INFO:tasks.workunit.client.1.vm10.stdout:6/235: dwrite d3/da/f1b [4194304,4194304] 0 2026-03-09T20:47:23.482 INFO:tasks.workunit.client.1.vm10.stdout:7/223: mknod db/d28/d2b/d36/d3f/c4a 0 2026-03-09T20:47:23.493 INFO:tasks.workunit.client.1.vm10.stdout:7/224: readlink db/l24 0 2026-03-09T20:47:23.493 INFO:tasks.workunit.client.0.vm07.stdout:6/320: rename d8/d16/d22/f2c to d8/d16/d22/d33/f66 0 2026-03-09T20:47:23.493 INFO:tasks.workunit.client.0.vm07.stdout:6/321: truncate d8/d26/d2a/f37 5127712 0 2026-03-09T20:47:23.493 INFO:tasks.workunit.client.0.vm07.stdout:6/322: chown d8/d26/d2a/d40/f65 988362 1 2026-03-09T20:47:23.493 INFO:tasks.workunit.client.0.vm07.stdout:7/348: symlink d3/da/db/d32/d3e/d5c/l7c 0 2026-03-09T20:47:23.493 INFO:tasks.workunit.client.0.vm07.stdout:7/349: dread - d3/da/db/d32/d3e/f65 zero size 2026-03-09T20:47:23.495 INFO:tasks.workunit.client.1.vm10.stdout:6/236: unlink d3/da/l19 0 2026-03-09T20:47:23.513 INFO:tasks.workunit.client.1.vm10.stdout:1/244: mknod d2/da/c4f 0 2026-03-09T20:47:23.514 
INFO:tasks.workunit.client.0.vm07.stdout:6/323: mkdir d8/d26/d2a/d40/d67 0 2026-03-09T20:47:23.514 INFO:tasks.workunit.client.0.vm07.stdout:6/324: dread d8/d16/d22/d24/d2b/f3c [0,4194304] 0 2026-03-09T20:47:23.514 INFO:tasks.workunit.client.0.vm07.stdout:8/253: link d1/c1b d1/d3b/c50 0 2026-03-09T20:47:23.514 INFO:tasks.workunit.client.0.vm07.stdout:8/254: truncate d1/dc/d16/f4a 1138744 0 2026-03-09T20:47:23.514 INFO:tasks.workunit.client.0.vm07.stdout:6/325: truncate d8/d16/d22/d33/f60 10609 0 2026-03-09T20:47:23.515 INFO:tasks.workunit.client.0.vm07.stdout:8/255: chown d1/dc/fd 28479606 1 2026-03-09T20:47:23.515 INFO:tasks.workunit.client.0.vm07.stdout:7/350: mknod d3/d58/c7d 0 2026-03-09T20:47:23.515 INFO:tasks.workunit.client.0.vm07.stdout:7/351: chown d3/da/db/d14 156295900 1 2026-03-09T20:47:23.515 INFO:tasks.workunit.client.0.vm07.stdout:7/352: readlink d3/da/db/d14/d1f/d2b/l55 0 2026-03-09T20:47:23.519 INFO:tasks.workunit.client.0.vm07.stdout:7/353: dwrite d3/da/f47 [0,4194304] 0 2026-03-09T20:47:23.529 INFO:tasks.workunit.client.0.vm07.stdout:6/326: dread d8/f14 [0,4194304] 0 2026-03-09T20:47:23.532 INFO:tasks.workunit.client.0.vm07.stdout:8/256: creat d1/dc/d14/d2f/f51 x:0 0 0 2026-03-09T20:47:23.532 INFO:tasks.workunit.client.0.vm07.stdout:6/327: fdatasync d8/d16/d22/d24/d2b/f53 0 2026-03-09T20:47:23.532 INFO:tasks.workunit.client.0.vm07.stdout:6/328: write d8/f29 [717304,120923] 0 2026-03-09T20:47:23.535 INFO:tasks.workunit.client.1.vm10.stdout:6/237: creat d3/d12/d36/f4f x:0 0 0 2026-03-09T20:47:23.536 INFO:tasks.workunit.client.1.vm10.stdout:1/245: unlink d2/da/c16 0 2026-03-09T20:47:23.537 INFO:tasks.workunit.client.1.vm10.stdout:6/238: mkdir d3/d12/d24/d39/d50 0 2026-03-09T20:47:23.538 INFO:tasks.workunit.client.1.vm10.stdout:6/239: mkdir d3/d12/d51 0 2026-03-09T20:47:23.539 INFO:tasks.workunit.client.1.vm10.stdout:6/240: fsync d3/d12/f25 0 2026-03-09T20:47:23.540 INFO:tasks.workunit.client.0.vm07.stdout:8/257: creat d1/dc/d16/d31/f52 x:0 0 0 
2026-03-09T20:47:23.540 INFO:tasks.workunit.client.1.vm10.stdout:6/241: fdatasync d3/da/d11/d26/f2a 0 2026-03-09T20:47:23.540 INFO:tasks.workunit.client.0.vm07.stdout:7/354: mknod d3/d58/d77/c7e 0 2026-03-09T20:47:23.541 INFO:tasks.workunit.client.0.vm07.stdout:8/258: read d1/dc/fd [639359,36145] 0 2026-03-09T20:47:23.541 INFO:tasks.workunit.client.1.vm10.stdout:6/242: write d3/d12/d24/f27 [994263,7408] 0 2026-03-09T20:47:23.542 INFO:tasks.workunit.client.0.vm07.stdout:7/355: write d3/f4f [4641614,42956] 0 2026-03-09T20:47:23.548 INFO:tasks.workunit.client.0.vm07.stdout:1/308: sync 2026-03-09T20:47:23.551 INFO:tasks.workunit.client.0.vm07.stdout:8/259: dwrite d1/dc/d16/d31/f52 [0,4194304] 0 2026-03-09T20:47:23.551 INFO:tasks.workunit.client.1.vm10.stdout:1/246: rename d2/da/f22 to d2/da/f50 0 2026-03-09T20:47:23.564 INFO:tasks.workunit.client.0.vm07.stdout:1/309: rename d3/d14/f2d to d3/d14/d54/f62 0 2026-03-09T20:47:23.564 INFO:tasks.workunit.client.0.vm07.stdout:8/260: mkdir d1/dc/d14/d2f/d53 0 2026-03-09T20:47:23.566 INFO:tasks.workunit.client.0.vm07.stdout:7/356: chown d3/da/db/d14/d1f/c6a 0 1 2026-03-09T20:47:23.567 INFO:tasks.workunit.client.0.vm07.stdout:0/380: read d1/d2/d33/d35/f59 [760377,4385] 0 2026-03-09T20:47:23.567 INFO:tasks.workunit.client.0.vm07.stdout:7/357: read d3/da/db/d14/d1f/d2b/f2c [2739790,56940] 0 2026-03-09T20:47:23.569 INFO:tasks.workunit.client.0.vm07.stdout:1/310: symlink d3/d23/d55/d56/d60/l63 0 2026-03-09T20:47:23.569 INFO:tasks.workunit.client.1.vm10.stdout:6/243: creat d3/f52 x:0 0 0 2026-03-09T20:47:23.569 INFO:tasks.workunit.client.1.vm10.stdout:6/244: stat d3/d30/d33/f37 0 2026-03-09T20:47:23.570 INFO:tasks.workunit.client.0.vm07.stdout:0/381: unlink d1/l58 0 2026-03-09T20:47:23.571 INFO:tasks.workunit.client.0.vm07.stdout:0/382: chown d1/d2/dc/d17/f23 2683 1 2026-03-09T20:47:23.571 INFO:tasks.workunit.client.0.vm07.stdout:7/358: creat d3/da/db/d32/d3e/d5c/f7f x:0 0 0 2026-03-09T20:47:23.572 
INFO:tasks.workunit.client.0.vm07.stdout:7/359: dread - d3/da/db/d14/d1f/f5d zero size 2026-03-09T20:47:23.574 INFO:tasks.workunit.client.1.vm10.stdout:6/245: creat d3/d12/d51/f53 x:0 0 0 2026-03-09T20:47:23.577 INFO:tasks.workunit.client.0.vm07.stdout:7/360: dwrite d3/da/f47 [0,4194304] 0 2026-03-09T20:47:23.579 INFO:tasks.workunit.client.0.vm07.stdout:8/261: link d1/dc/d16/f4a d1/dc/d16/d31/f54 0 2026-03-09T20:47:23.584 INFO:tasks.workunit.client.0.vm07.stdout:8/262: mkdir d1/dc/d14/d2f/d4d/d55 0 2026-03-09T20:47:23.587 INFO:tasks.workunit.client.1.vm10.stdout:9/298: dread d2/fc [0,4194304] 0 2026-03-09T20:47:23.587 INFO:tasks.workunit.client.0.vm07.stdout:7/361: rename d3/da/db/d14/d1f/d2b/l55 to d3/da/db/d14/d1f/d2b/d52/l80 0 2026-03-09T20:47:23.587 INFO:tasks.workunit.client.1.vm10.stdout:9/299: mknod d2/d33/d37/c6b 0 2026-03-09T20:47:23.589 INFO:tasks.workunit.client.0.vm07.stdout:7/362: symlink d3/da/db/d14/d1f/d2b/d52/l81 0 2026-03-09T20:47:23.590 INFO:tasks.workunit.client.1.vm10.stdout:9/300: dwrite d2/d12/f69 [0,4194304] 0 2026-03-09T20:47:23.591 INFO:tasks.workunit.client.1.vm10.stdout:6/246: sync 2026-03-09T20:47:23.595 INFO:tasks.workunit.client.0.vm07.stdout:8/263: dwrite d1/dc/d16/d26/f2d [4194304,4194304] 0 2026-03-09T20:47:23.597 INFO:tasks.workunit.client.1.vm10.stdout:6/247: dwrite d3/da/f15 [0,4194304] 0 2026-03-09T20:47:23.598 INFO:tasks.workunit.client.1.vm10.stdout:6/248: write d3/d12/d51/f53 [774527,38350] 0 2026-03-09T20:47:23.600 INFO:tasks.workunit.client.0.vm07.stdout:8/264: readlink d1/la 0 2026-03-09T20:47:23.603 INFO:tasks.workunit.client.1.vm10.stdout:6/249: rmdir d3/da/d11/d31 39 2026-03-09T20:47:23.603 INFO:tasks.workunit.client.1.vm10.stdout:6/250: stat d3/d12 0 2026-03-09T20:47:23.608 INFO:tasks.workunit.client.1.vm10.stdout:6/251: dwrite d3/d12/d24/f27 [0,4194304] 0 2026-03-09T20:47:23.610 INFO:tasks.workunit.client.0.vm07.stdout:8/265: symlink d1/dc/d16/d26/l56 0 2026-03-09T20:47:23.612 
INFO:tasks.workunit.client.1.vm10.stdout:9/301: creat d2/d3/f6c x:0 0 0 2026-03-09T20:47:23.615 INFO:tasks.workunit.client.0.vm07.stdout:1/311: dread d3/d14/d54/f13 [0,4194304] 0 2026-03-09T20:47:23.629 INFO:tasks.workunit.client.1.vm10.stdout:6/252: mknod d3/da/d11/c54 0 2026-03-09T20:47:23.629 INFO:tasks.workunit.client.1.vm10.stdout:6/253: write d3/da/d11/d26/f2a [2481961,100851] 0 2026-03-09T20:47:23.629 INFO:tasks.workunit.client.1.vm10.stdout:4/176: dread d1/d8/f16 [0,4194304] 0 2026-03-09T20:47:23.629 INFO:tasks.workunit.client.1.vm10.stdout:6/254: link d3/da/f15 d3/d46/f55 0 2026-03-09T20:47:23.629 INFO:tasks.workunit.client.0.vm07.stdout:8/266: truncate d1/dc/f29 3631426 0 2026-03-09T20:47:23.629 INFO:tasks.workunit.client.0.vm07.stdout:8/267: mknod d1/dc/d16/d26/c57 0 2026-03-09T20:47:23.629 INFO:tasks.workunit.client.0.vm07.stdout:1/312: rename d3/d23/d55/l5a to d3/d23/l64 0 2026-03-09T20:47:23.629 INFO:tasks.workunit.client.0.vm07.stdout:8/268: symlink d1/dc/d14/d2f/d4d/d55/l58 0 2026-03-09T20:47:23.629 INFO:tasks.workunit.client.0.vm07.stdout:1/313: truncate d3/fc 306054 0 2026-03-09T20:47:23.632 INFO:tasks.workunit.client.0.vm07.stdout:8/269: creat d1/dc/d16/d26/f59 x:0 0 0 2026-03-09T20:47:23.636 INFO:tasks.workunit.client.0.vm07.stdout:1/314: symlink d3/d23/d55/d56/d60/l65 0 2026-03-09T20:47:23.637 INFO:tasks.workunit.client.0.vm07.stdout:1/315: chown d3/d14/d54/l5e 512308960 1 2026-03-09T20:47:23.640 INFO:tasks.workunit.client.1.vm10.stdout:1/247: fsync d2/f14 0 2026-03-09T20:47:23.656 INFO:tasks.workunit.client.1.vm10.stdout:1/248: chown d2/da/f32 1212645150 1 2026-03-09T20:47:23.656 INFO:tasks.workunit.client.1.vm10.stdout:1/249: write d2/da/f32 [4809847,105730] 0 2026-03-09T20:47:23.656 INFO:tasks.workunit.client.0.vm07.stdout:1/316: mkdir d3/d66 0 2026-03-09T20:47:23.656 INFO:tasks.workunit.client.0.vm07.stdout:1/317: mkdir d3/d23/d67 0 2026-03-09T20:47:23.656 INFO:tasks.workunit.client.0.vm07.stdout:1/318: chown d3/d14/d54/c27 546 1 
2026-03-09T20:47:23.656 INFO:tasks.workunit.client.0.vm07.stdout:1/319: rename d3/f51 to d3/f68 0 2026-03-09T20:47:23.677 INFO:tasks.workunit.client.1.vm10.stdout:3/166: write dc/dd/f18 [2065808,3782] 0 2026-03-09T20:47:23.677 INFO:tasks.workunit.client.1.vm10.stdout:3/167: readlink dc/d14/d26/l2b 0 2026-03-09T20:47:23.678 INFO:tasks.workunit.client.0.vm07.stdout:8/270: sync 2026-03-09T20:47:23.682 INFO:tasks.workunit.client.0.vm07.stdout:5/330: dwrite d5/df/d13/d30/f36 [0,4194304] 0 2026-03-09T20:47:23.684 INFO:tasks.workunit.client.0.vm07.stdout:8/271: dread d1/dc/d16/d31/f54 [0,4194304] 0 2026-03-09T20:47:23.693 INFO:tasks.workunit.client.1.vm10.stdout:5/207: read d2/d1b/f2f [303292,116708] 0 2026-03-09T20:47:23.693 INFO:tasks.workunit.client.0.vm07.stdout:8/272: dwrite d1/dc/d16/d31/f47 [0,4194304] 0 2026-03-09T20:47:23.696 INFO:tasks.workunit.client.1.vm10.stdout:5/208: dread d2/f5 [0,4194304] 0 2026-03-09T20:47:23.696 INFO:tasks.workunit.client.0.vm07.stdout:5/331: truncate d5/d19/f20 6149843 0 2026-03-09T20:47:23.696 INFO:tasks.workunit.client.1.vm10.stdout:3/168: mkdir dc/d14/d20/d21/d3b 0 2026-03-09T20:47:23.697 INFO:tasks.workunit.client.1.vm10.stdout:3/169: read - dc/f1f zero size 2026-03-09T20:47:23.700 INFO:tasks.workunit.client.0.vm07.stdout:8/273: dread d1/f33 [0,4194304] 0 2026-03-09T20:47:23.702 INFO:tasks.workunit.client.0.vm07.stdout:5/332: mknod d5/df/d13/d3e/d47/c74 0 2026-03-09T20:47:23.702 INFO:tasks.workunit.client.0.vm07.stdout:5/333: truncate d5/df/d13/f41 868688 0 2026-03-09T20:47:23.703 INFO:tasks.workunit.client.1.vm10.stdout:8/276: dwrite d0/f19 [0,4194304] 0 2026-03-09T20:47:23.703 INFO:tasks.workunit.client.1.vm10.stdout:5/209: creat d2/d27/d37/f57 x:0 0 0 2026-03-09T20:47:23.703 INFO:tasks.workunit.client.1.vm10.stdout:8/277: stat d0/d22/d25/f2d 0 2026-03-09T20:47:23.704 INFO:tasks.workunit.client.0.vm07.stdout:5/334: mkdir d5/d33/d75 0 2026-03-09T20:47:23.705 INFO:tasks.workunit.client.0.vm07.stdout:5/335: mknod d5/df/d13/d3e/c76 0 
2026-03-09T20:47:23.705 INFO:tasks.workunit.client.0.vm07.stdout:8/274: link d1/dc/d14/c44 d1/dc/c5a 0 2026-03-09T20:47:23.705 INFO:tasks.workunit.client.0.vm07.stdout:5/336: read - d5/df/d13/d30/d56/f72 zero size 2026-03-09T20:47:23.707 INFO:tasks.workunit.client.0.vm07.stdout:8/275: chown d1/l9 33038894 1 2026-03-09T20:47:23.709 INFO:tasks.workunit.client.0.vm07.stdout:5/337: rename d5/df/f24 to d5/df/d13/d6c/f77 0 2026-03-09T20:47:23.709 INFO:tasks.workunit.client.0.vm07.stdout:8/276: symlink d1/dc/d14/l5b 0 2026-03-09T20:47:23.710 INFO:tasks.workunit.client.0.vm07.stdout:8/277: chown d1/dc/d16/d31/f52 151 1 2026-03-09T20:47:23.710 INFO:tasks.workunit.client.0.vm07.stdout:5/338: mknod d5/d69/c78 0 2026-03-09T20:47:23.711 INFO:tasks.workunit.client.0.vm07.stdout:8/278: write d1/dc/d16/d26/f2b [1148453,75149] 0 2026-03-09T20:47:23.712 INFO:tasks.workunit.client.0.vm07.stdout:5/339: write d5/df/d13/f1f [4540358,81074] 0 2026-03-09T20:47:23.713 INFO:tasks.workunit.client.0.vm07.stdout:5/340: fdatasync d5/df/d13/f3d 0 2026-03-09T20:47:23.715 INFO:tasks.workunit.client.1.vm10.stdout:5/210: mkdir d2/d58 0 2026-03-09T20:47:23.717 INFO:tasks.workunit.client.1.vm10.stdout:5/211: mknod d2/d39/d4b/c59 0 2026-03-09T20:47:23.717 INFO:tasks.workunit.client.0.vm07.stdout:5/341: creat d5/df/d13/d6c/f79 x:0 0 0 2026-03-09T20:47:23.717 INFO:tasks.workunit.client.1.vm10.stdout:5/212: write d2/f3e [1412595,4120] 0 2026-03-09T20:47:23.718 INFO:tasks.workunit.client.0.vm07.stdout:5/342: mknod d5/d19/c7a 0 2026-03-09T20:47:23.720 INFO:tasks.workunit.client.0.vm07.stdout:5/343: symlink d5/d33/d39/l7b 0 2026-03-09T20:47:23.721 INFO:tasks.workunit.client.0.vm07.stdout:5/344: chown d5/df/d13 22632766 1 2026-03-09T20:47:23.722 INFO:tasks.workunit.client.1.vm10.stdout:5/213: mkdir d2/d1b/d54/d5a 0 2026-03-09T20:47:23.722 INFO:tasks.workunit.client.0.vm07.stdout:5/345: creat d5/df/d13/d3e/d5e/f7c x:0 0 0 2026-03-09T20:47:23.723 INFO:tasks.workunit.client.1.vm10.stdout:5/214: mknod 
d2/d1b/d54/c5b 0 2026-03-09T20:47:23.723 INFO:tasks.workunit.client.0.vm07.stdout:5/346: rmdir d5/df/d13/d3e/d47 39 2026-03-09T20:47:23.724 INFO:tasks.workunit.client.0.vm07.stdout:5/347: symlink d5/d69/l7d 0 2026-03-09T20:47:23.724 INFO:tasks.workunit.client.1.vm10.stdout:5/215: creat d2/d1b/f5c x:0 0 0 2026-03-09T20:47:23.724 INFO:tasks.workunit.client.0.vm07.stdout:5/348: symlink d5/df/d13/d4f/l7e 0 2026-03-09T20:47:23.725 INFO:tasks.workunit.client.1.vm10.stdout:5/216: rmdir d2/d1b/d54 39 2026-03-09T20:47:23.725 INFO:tasks.workunit.client.0.vm07.stdout:5/349: symlink d5/d69/l7f 0 2026-03-09T20:47:23.726 INFO:tasks.workunit.client.1.vm10.stdout:5/217: mkdir d2/d27/d37/d46/d5d 0 2026-03-09T20:47:23.727 INFO:tasks.workunit.client.0.vm07.stdout:5/350: rmdir d5/df/d13/d4f 39 2026-03-09T20:47:23.728 INFO:tasks.workunit.client.1.vm10.stdout:5/218: mknod d2/d43/c5e 0 2026-03-09T20:47:23.728 INFO:tasks.workunit.client.0.vm07.stdout:5/351: rename d5/df/d13/c3f to d5/d19/c80 0 2026-03-09T20:47:23.729 INFO:tasks.workunit.client.1.vm10.stdout:5/219: write d2/d1b/f41 [4051395,125499] 0 2026-03-09T20:47:23.730 INFO:tasks.workunit.client.0.vm07.stdout:5/352: symlink d5/df/d13/l81 0 2026-03-09T20:47:23.730 INFO:tasks.workunit.client.0.vm07.stdout:5/353: creat d5/d69/f82 x:0 0 0 2026-03-09T20:47:23.731 INFO:tasks.workunit.client.0.vm07.stdout:5/354: chown d5/df/d13/c35 22262 1 2026-03-09T20:47:23.732 INFO:tasks.workunit.client.0.vm07.stdout:5/355: mknod d5/df/d13/d30/c83 0 2026-03-09T20:47:23.732 INFO:tasks.workunit.client.0.vm07.stdout:5/356: truncate d5/d69/f82 242237 0 2026-03-09T20:47:23.735 INFO:tasks.workunit.client.0.vm07.stdout:5/357: creat d5/df/d13/d30/d56/f84 x:0 0 0 2026-03-09T20:47:23.762 INFO:tasks.workunit.client.0.vm07.stdout:5/358: sync 2026-03-09T20:47:23.763 INFO:tasks.workunit.client.0.vm07.stdout:5/359: write d5/d19/f4d [463562,103145] 0 2026-03-09T20:47:23.764 INFO:tasks.workunit.client.0.vm07.stdout:5/360: chown d5/df/d13/d55 22365246 1 
2026-03-09T20:47:23.764 INFO:tasks.workunit.client.0.vm07.stdout:5/361: stat d5/d33/d3b/f63 0 2026-03-09T20:47:23.768 INFO:tasks.workunit.client.0.vm07.stdout:5/362: rename d5/df/d13/c5c to d5/df/d62/c85 0 2026-03-09T20:47:23.769 INFO:tasks.workunit.client.0.vm07.stdout:5/363: write d5/df/d13/d6c/f79 [8955,72819] 0 2026-03-09T20:47:23.783 INFO:tasks.workunit.client.0.vm07.stdout:5/364: dread d5/df/f2b [0,4194304] 0 2026-03-09T20:47:23.784 INFO:tasks.workunit.client.0.vm07.stdout:5/365: truncate d5/df/d13/f38 1147943 0 2026-03-09T20:47:23.787 INFO:tasks.workunit.client.0.vm07.stdout:5/366: rename d5/df/d13/d4f/c67 to d5/d69/c86 0 2026-03-09T20:47:23.788 INFO:tasks.workunit.client.0.vm07.stdout:5/367: mknod d5/df/d13/d4f/c87 0 2026-03-09T20:47:23.790 INFO:tasks.workunit.client.0.vm07.stdout:5/368: rmdir d5/df/d13/d55 39 2026-03-09T20:47:23.791 INFO:tasks.workunit.client.0.vm07.stdout:5/369: rmdir d5/df/d13/d4f 39 2026-03-09T20:47:23.791 INFO:tasks.workunit.client.0.vm07.stdout:5/370: symlink d5/d69/l88 0 2026-03-09T20:47:23.792 INFO:tasks.workunit.client.0.vm07.stdout:5/371: dread - d5/df/d13/d30/d56/f84 zero size 2026-03-09T20:47:23.915 INFO:tasks.workunit.client.1.vm10.stdout:0/200: truncate d2/d9/da/de/d1a/d25/d3e/f41 3459732 0 2026-03-09T20:47:23.916 INFO:tasks.workunit.client.1.vm10.stdout:0/201: symlink d2/d9/da/d11/l45 0 2026-03-09T20:47:23.967 INFO:tasks.workunit.client.0.vm07.stdout:2/337: dwrite d2/db/f48 [0,4194304] 0 2026-03-09T20:47:23.967 INFO:tasks.workunit.client.1.vm10.stdout:4/177: getdents d1/d2/d3 0 2026-03-09T20:47:23.975 INFO:tasks.workunit.client.0.vm07.stdout:3/309: dwrite d1/d5/d9/f33 [0,4194304] 0 2026-03-09T20:47:23.977 INFO:tasks.workunit.client.0.vm07.stdout:3/310: fsync d1/d5/d9/d11/f21 0 2026-03-09T20:47:23.983 INFO:tasks.workunit.client.0.vm07.stdout:3/311: mkdir d1/d5/d9/d2f/d66 0 2026-03-09T20:47:23.986 INFO:tasks.workunit.client.0.vm07.stdout:3/312: dread d1/d5/d9/f33 [0,4194304] 0 2026-03-09T20:47:23.988 
INFO:tasks.workunit.client.0.vm07.stdout:2/338: creat d2/db/f67 x:0 0 0 2026-03-09T20:47:23.992 INFO:tasks.workunit.client.0.vm07.stdout:6/329: fsync d8/d26/d2a/d40/f65 0 2026-03-09T20:47:23.996 INFO:tasks.workunit.client.0.vm07.stdout:2/339: read d2/db/d1c/f45 [3442543,34962] 0 2026-03-09T20:47:24.000 INFO:tasks.workunit.client.0.vm07.stdout:3/313: link d1/d5/d9/d2f/d34/d46/c49 d1/d5/d9/d11/d60/c67 0 2026-03-09T20:47:24.002 INFO:tasks.workunit.client.0.vm07.stdout:3/314: write d1/d5/d9/f1b [2043719,119745] 0 2026-03-09T20:47:24.003 INFO:tasks.workunit.client.0.vm07.stdout:6/330: sync 2026-03-09T20:47:24.005 INFO:tasks.workunit.client.0.vm07.stdout:0/383: dread d1/d1f/d20/f43 [0,4194304] 0 2026-03-09T20:47:24.007 INFO:tasks.workunit.client.0.vm07.stdout:2/340: creat d2/db/d28/d57/f68 x:0 0 0 2026-03-09T20:47:24.010 INFO:tasks.workunit.client.0.vm07.stdout:3/315: dread - d1/d5/d9/d11/f2a zero size 2026-03-09T20:47:24.010 INFO:tasks.workunit.client.0.vm07.stdout:3/316: read - d1/d5/d9/d11/d1f/f4a zero size 2026-03-09T20:47:24.012 INFO:tasks.workunit.client.0.vm07.stdout:2/341: dread d2/db/f48 [0,4194304] 0 2026-03-09T20:47:24.013 INFO:tasks.workunit.client.0.vm07.stdout:2/342: readlink d2/d46/l53 0 2026-03-09T20:47:24.023 INFO:tasks.workunit.client.0.vm07.stdout:6/331: creat d8/d16/d61/f68 x:0 0 0 2026-03-09T20:47:24.023 INFO:tasks.workunit.client.0.vm07.stdout:9/294: dread d4/d16/f33 [0,4194304] 0 2026-03-09T20:47:24.025 INFO:tasks.workunit.client.0.vm07.stdout:3/317: rmdir d1/d5/d9/d2f/d3d/d64/d43/d54 39 2026-03-09T20:47:24.026 INFO:tasks.workunit.client.0.vm07.stdout:3/318: dread - d1/d5/d9/d11/f2a zero size 2026-03-09T20:47:24.028 INFO:tasks.workunit.client.1.vm10.stdout:2/261: write d5/d18/f24 [967875,35042] 0 2026-03-09T20:47:24.028 INFO:tasks.workunit.client.0.vm07.stdout:6/332: write d8/d16/d22/d33/f60 [673719,75055] 0 2026-03-09T20:47:24.029 INFO:tasks.workunit.client.1.vm10.stdout:2/262: write d5/d18/d27/d38/f45 [171664,34401] 0 2026-03-09T20:47:24.033 
INFO:tasks.workunit.client.0.vm07.stdout:9/295: mknod d4/d8/dc/d4e/c6d 0 2026-03-09T20:47:24.039 INFO:tasks.workunit.client.0.vm07.stdout:6/333: dwrite d8/f5f [0,4194304] 0 2026-03-09T20:47:24.047 INFO:tasks.workunit.client.1.vm10.stdout:7/225: write db/d28/d2b/d36/f35 [1022361,81047] 0 2026-03-09T20:47:24.054 INFO:tasks.workunit.client.1.vm10.stdout:2/263: mknod d5/c4e 0 2026-03-09T20:47:24.055 INFO:tasks.workunit.client.1.vm10.stdout:7/226: symlink db/d21/l4b 0 2026-03-09T20:47:24.058 INFO:tasks.workunit.client.0.vm07.stdout:6/334: rename d8/d16/d22/d3a to d8/d26/d2a/d40/d69 0 2026-03-09T20:47:24.058 INFO:tasks.workunit.client.0.vm07.stdout:6/335: readlink d8/d16/d22/d24/l2e 0 2026-03-09T20:47:24.060 INFO:tasks.workunit.client.1.vm10.stdout:7/227: stat db/l2d 0 2026-03-09T20:47:24.060 INFO:tasks.workunit.client.0.vm07.stdout:2/343: link d2/db/d28/l39 d2/l69 0 2026-03-09T20:47:24.060 INFO:tasks.workunit.client.0.vm07.stdout:2/344: chown d2/d11/l15 119146 1 2026-03-09T20:47:24.061 INFO:tasks.workunit.client.0.vm07.stdout:2/345: chown d2/db/d1c/d4a/l61 83792112 1 2026-03-09T20:47:24.062 INFO:tasks.workunit.client.1.vm10.stdout:2/264: creat d5/d18/d1b/d22/f4f x:0 0 0 2026-03-09T20:47:24.063 INFO:tasks.workunit.client.1.vm10.stdout:2/265: readlink d5/l17 0 2026-03-09T20:47:24.068 INFO:tasks.workunit.client.1.vm10.stdout:7/228: dread - db/d28/d2b/d36/d3b/f42 zero size 2026-03-09T20:47:24.069 INFO:tasks.workunit.client.1.vm10.stdout:7/229: stat db/d28/d2b/d36/d3f/c4a 0 2026-03-09T20:47:24.070 INFO:tasks.workunit.client.1.vm10.stdout:7/230: truncate db/d46/f47 213603 0 2026-03-09T20:47:24.070 INFO:tasks.workunit.client.1.vm10.stdout:7/231: chown db/l2d 3070862 1 2026-03-09T20:47:24.071 INFO:tasks.workunit.client.1.vm10.stdout:7/232: write db/d21/d23/f34 [1770345,26430] 0 2026-03-09T20:47:24.076 INFO:tasks.workunit.client.0.vm07.stdout:7/363: chown d3/da/db/d14/d1f/d2b/d52/l80 144787898 1 2026-03-09T20:47:24.077 INFO:tasks.workunit.client.1.vm10.stdout:7/233: dwrite 
db/d21/d26/f43 [0,4194304] 0 2026-03-09T20:47:24.081 INFO:tasks.workunit.client.0.vm07.stdout:7/364: dwrite d3/da/f11 [0,4194304] 0 2026-03-09T20:47:24.083 INFO:tasks.workunit.client.0.vm07.stdout:7/365: fsync d3/da/db/d14/d1f/d2b/f49 0 2026-03-09T20:47:24.084 INFO:tasks.workunit.client.1.vm10.stdout:2/266: link l4 d5/d18/d27/d28/d41/l50 0 2026-03-09T20:47:24.084 INFO:tasks.workunit.client.0.vm07.stdout:7/366: fsync d3/da/db/d14/f3a 0 2026-03-09T20:47:24.090 INFO:tasks.workunit.client.0.vm07.stdout:7/367: dwrite d3/da/db/d14/d1f/f46 [0,4194304] 0 2026-03-09T20:47:24.092 INFO:tasks.workunit.client.1.vm10.stdout:7/234: mkdir db/d28/d4c 0 2026-03-09T20:47:24.092 INFO:tasks.workunit.client.1.vm10.stdout:0/202: stat d2/d9/da/d35/f28 0 2026-03-09T20:47:24.093 INFO:tasks.workunit.client.1.vm10.stdout:7/235: write db/d28/d2b/d36/d40/f48 [746857,86238] 0 2026-03-09T20:47:24.094 INFO:tasks.workunit.client.1.vm10.stdout:2/267: readlink d5/l40 0 2026-03-09T20:47:24.098 INFO:tasks.workunit.client.1.vm10.stdout:7/236: rmdir db/d1f 39 2026-03-09T20:47:24.100 INFO:tasks.workunit.client.1.vm10.stdout:9/302: truncate d2/d3/f1c 2998931 0 2026-03-09T20:47:24.101 INFO:tasks.workunit.client.0.vm07.stdout:3/319: creat d1/d5/d9/d2f/d34/f68 x:0 0 0 2026-03-09T20:47:24.105 INFO:tasks.workunit.client.0.vm07.stdout:3/320: dread d1/d5/d9/d2f/d3d/d64/f30 [0,4194304] 0 2026-03-09T20:47:24.107 INFO:tasks.workunit.client.1.vm10.stdout:0/203: dread d2/db/f38 [0,4194304] 0 2026-03-09T20:47:24.112 INFO:tasks.workunit.client.0.vm07.stdout:2/346: mknod d2/db/d28/d57/c6a 0 2026-03-09T20:47:24.112 INFO:tasks.workunit.client.1.vm10.stdout:2/268: mknod d5/c51 0 2026-03-09T20:47:24.112 INFO:tasks.workunit.client.1.vm10.stdout:6/255: write d3/da/d11/f17 [900270,14811] 0 2026-03-09T20:47:24.112 INFO:tasks.workunit.client.1.vm10.stdout:0/204: dread d2/db/f13 [0,4194304] 0 2026-03-09T20:47:24.114 INFO:tasks.workunit.client.1.vm10.stdout:6/256: truncate d3/da/f15 4614054 0 2026-03-09T20:47:24.115 
INFO:tasks.workunit.client.1.vm10.stdout:2/269: symlink d5/d18/d27/d38/l52 0 2026-03-09T20:47:24.120 INFO:tasks.workunit.client.1.vm10.stdout:8/278: dread d0/d22/d25/d2e/d41/d47/f4b [8388608,4194304] 0 2026-03-09T20:47:24.124 INFO:tasks.workunit.client.1.vm10.stdout:7/237: stat db/d1f/f2a 0 2026-03-09T20:47:24.125 INFO:tasks.workunit.client.1.vm10.stdout:0/205: creat d2/d9/da/de/d1a/d25/d34/f46 x:0 0 0 2026-03-09T20:47:24.126 INFO:tasks.workunit.client.1.vm10.stdout:0/206: chown d2/d9/da/de/d1a/f21 11058377 1 2026-03-09T20:47:24.127 INFO:tasks.workunit.client.1.vm10.stdout:0/207: write d2/d9/da/de/d1a/d25/d34/f46 [860983,96383] 0 2026-03-09T20:47:24.130 INFO:tasks.workunit.client.0.vm07.stdout:7/368: fdatasync d3/da/db/d14/d1f/d2b/f2c 0 2026-03-09T20:47:24.142 INFO:tasks.workunit.client.1.vm10.stdout:6/257: unlink d3/d12/d24/d39/c3e 0 2026-03-09T20:47:24.142 INFO:tasks.workunit.client.1.vm10.stdout:7/238: unlink db/l2d 0 2026-03-09T20:47:24.142 INFO:tasks.workunit.client.1.vm10.stdout:3/170: dwrite dc/dd/f15 [0,4194304] 0 2026-03-09T20:47:24.142 INFO:tasks.workunit.client.1.vm10.stdout:3/171: fsync dc/d14/f39 0 2026-03-09T20:47:24.142 INFO:tasks.workunit.client.0.vm07.stdout:3/321: creat d1/d5/d9/d2f/d3d/d64/d59/f69 x:0 0 0 2026-03-09T20:47:24.142 INFO:tasks.workunit.client.0.vm07.stdout:2/347: chown d2/db/c1f 0 1 2026-03-09T20:47:24.142 INFO:tasks.workunit.client.0.vm07.stdout:2/348: readlink d2/db/d1c/l5f 0 2026-03-09T20:47:24.143 INFO:tasks.workunit.client.0.vm07.stdout:2/349: readlink d2/l18 0 2026-03-09T20:47:24.144 INFO:tasks.workunit.client.1.vm10.stdout:8/279: sync 2026-03-09T20:47:24.148 INFO:tasks.workunit.client.1.vm10.stdout:3/172: dwrite dc/dd/f15 [4194304,4194304] 0 2026-03-09T20:47:24.148 INFO:tasks.workunit.client.0.vm07.stdout:7/369: mkdir d3/d58/d82 0 2026-03-09T20:47:24.149 INFO:tasks.workunit.client.1.vm10.stdout:3/173: truncate dc/dd/f23 768987 0 2026-03-09T20:47:24.153 INFO:tasks.workunit.client.1.vm10.stdout:6/258: write d3/d46/f55 
[3618505,16220] 0 2026-03-09T20:47:24.157 INFO:tasks.workunit.client.1.vm10.stdout:2/270: creat d5/d18/d2d/d47/f53 x:0 0 0 2026-03-09T20:47:24.164 INFO:tasks.workunit.client.1.vm10.stdout:2/271: dwrite d5/d18/d27/d28/d41/f4b [0,4194304] 0 2026-03-09T20:47:24.164 INFO:tasks.workunit.client.1.vm10.stdout:2/272: write d5/d18/d1b/f2e [3459229,68610] 0 2026-03-09T20:47:24.167 INFO:tasks.workunit.client.1.vm10.stdout:0/208: mkdir d2/d9/d47 0 2026-03-09T20:47:24.173 INFO:tasks.workunit.client.0.vm07.stdout:3/322: rename d1/d5/d9/d2f/d34/d46/d5d/l5b to d1/l6a 0 2026-03-09T20:47:24.178 INFO:tasks.workunit.client.1.vm10.stdout:8/280: creat d0/d22/d2c/f57 x:0 0 0 2026-03-09T20:47:24.178 INFO:tasks.workunit.client.0.vm07.stdout:8/279: truncate d1/dc/d16/d26/f2d 6856239 0 2026-03-09T20:47:24.178 INFO:tasks.workunit.client.0.vm07.stdout:8/280: dread - d1/d3b/f3e zero size 2026-03-09T20:47:24.181 INFO:tasks.workunit.client.1.vm10.stdout:3/174: creat dc/d14/d27/f3c x:0 0 0 2026-03-09T20:47:24.181 INFO:tasks.workunit.client.1.vm10.stdout:6/259: creat d3/d12/d36/f56 x:0 0 0 2026-03-09T20:47:24.181 INFO:tasks.workunit.client.1.vm10.stdout:8/281: dwrite d0/d22/d2c/f36 [0,4194304] 0 2026-03-09T20:47:24.182 INFO:tasks.workunit.client.1.vm10.stdout:6/260: write d3/d12/f25 [1934099,66556] 0 2026-03-09T20:47:24.182 INFO:tasks.workunit.client.0.vm07.stdout:3/323: symlink d1/d5/d9/d2f/d3d/d64/d59/l6b 0 2026-03-09T20:47:24.182 INFO:tasks.workunit.client.1.vm10.stdout:6/261: fdatasync d3/da/d11/f17 0 2026-03-09T20:47:24.183 INFO:tasks.workunit.client.1.vm10.stdout:3/175: dread - dc/d14/d26/d29/f30 zero size 2026-03-09T20:47:24.187 INFO:tasks.workunit.client.1.vm10.stdout:3/176: dread - dc/d14/f39 zero size 2026-03-09T20:47:24.188 INFO:tasks.workunit.client.0.vm07.stdout:8/281: dwrite d1/dc/f42 [0,4194304] 0 2026-03-09T20:47:24.191 INFO:tasks.workunit.client.1.vm10.stdout:5/220: dwrite d2/d27/f2d [0,4194304] 0 2026-03-09T20:47:24.193 INFO:tasks.workunit.client.1.vm10.stdout:5/221: chown 
d2/d27/d37 102771836 1 2026-03-09T20:47:24.200 INFO:tasks.workunit.client.1.vm10.stdout:7/239: truncate f3 967395 0 2026-03-09T20:47:24.203 INFO:tasks.workunit.client.0.vm07.stdout:2/350: rename d2/f10 to d2/db/d49/f6b 0 2026-03-09T20:47:24.210 INFO:tasks.workunit.client.1.vm10.stdout:2/273: readlink l4 0 2026-03-09T20:47:24.210 INFO:tasks.workunit.client.0.vm07.stdout:5/372: dwrite d5/d50/f52 [0,4194304] 0 2026-03-09T20:47:24.218 INFO:tasks.workunit.client.1.vm10.stdout:0/209: mkdir d2/d9/da/d48 0 2026-03-09T20:47:24.218 INFO:tasks.workunit.client.1.vm10.stdout:1/250: write d2/da/fe [1406155,20097] 0 2026-03-09T20:47:24.221 INFO:tasks.workunit.client.0.vm07.stdout:2/351: unlink d2/db/d1c/f22 0 2026-03-09T20:47:24.222 INFO:tasks.workunit.client.1.vm10.stdout:0/210: dread d2/db/f38 [0,4194304] 0 2026-03-09T20:47:24.222 INFO:tasks.workunit.client.1.vm10.stdout:4/178: dwrite d1/d2/d3/f18 [0,4194304] 0 2026-03-09T20:47:24.224 INFO:tasks.workunit.client.1.vm10.stdout:8/282: mkdir d0/d22/d25/d2e/d58 0 2026-03-09T20:47:24.224 INFO:tasks.workunit.client.0.vm07.stdout:0/384: dwrite d1/d2/f14 [0,4194304] 0 2026-03-09T20:47:24.233 INFO:tasks.workunit.client.0.vm07.stdout:5/373: symlink d5/d33/l89 0 2026-03-09T20:47:24.239 INFO:tasks.workunit.client.0.vm07.stdout:0/385: truncate d1/d2/dc/f40 917542 0 2026-03-09T20:47:24.241 INFO:tasks.workunit.client.0.vm07.stdout:5/374: symlink d5/df/d13/l8a 0 2026-03-09T20:47:24.241 INFO:tasks.workunit.client.1.vm10.stdout:3/177: readlink dc/d14/d20/d21/l2c 0 2026-03-09T20:47:24.241 INFO:tasks.workunit.client.1.vm10.stdout:3/178: fsync dc/dd/f23 0 2026-03-09T20:47:24.242 INFO:tasks.workunit.client.1.vm10.stdout:7/240: mkdir db/d28/d4d 0 2026-03-09T20:47:24.243 INFO:tasks.workunit.client.0.vm07.stdout:5/375: symlink d5/df/d13/d4f/l8b 0 2026-03-09T20:47:24.247 INFO:tasks.workunit.client.1.vm10.stdout:4/179: unlink d1/d8/c10 0 2026-03-09T20:47:24.258 INFO:tasks.workunit.client.1.vm10.stdout:1/251: chown d2/da/c3f 17 1 2026-03-09T20:47:24.259 
INFO:tasks.workunit.client.0.vm07.stdout:5/376: dwrite d5/df/d13/d3e/d5e/f7c [0,4194304] 0 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.0.vm07.stdout:5/377: read d5/df/d13/d6c/f79 [15438,125017] 0 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.0.vm07.stdout:5/378: dwrite d5/d19/f2c [0,4194304] 0 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.0.vm07.stdout:5/379: truncate d5/d50/f52 4687518 0 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.1.vm10.stdout:3/179: dwrite dc/d14/d20/d2e/f32 [0,4194304] 0 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.1.vm10.stdout:6/262: mknod d3/d12/d24/d39/d50/c57 0 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.1.vm10.stdout:8/283: mknod d0/d22/d25/d2e/d41/d47/c59 0 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.1.vm10.stdout:0/211: rename d2/d9/da/de/d1a to d2/d9/da/de/d1a/d49 22 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.1.vm10.stdout:0/212: mkdir d2/d4a 0 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.1.vm10.stdout:8/284: unlink d0/d22/d25/d2e/d41/d47/c59 0 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.1.vm10.stdout:3/180: mkdir dc/d14/d20/d21/d3d 0 2026-03-09T20:47:24.259 INFO:tasks.workunit.client.1.vm10.stdout:1/252: dwrite d2/da/d25/f48 [4194304,4194304] 0 2026-03-09T20:47:24.262 INFO:tasks.workunit.client.0.vm07.stdout:5/380: symlink d5/df/l8c 0 2026-03-09T20:47:24.263 INFO:tasks.workunit.client.1.vm10.stdout:5/222: rmdir d2/d1b/d54/d5a 0 2026-03-09T20:47:24.264 INFO:tasks.workunit.client.1.vm10.stdout:3/181: write dc/f10 [3544375,99899] 0 2026-03-09T20:47:24.265 INFO:tasks.workunit.client.1.vm10.stdout:8/285: unlink d0/d22/d25/f37 0 2026-03-09T20:47:24.267 INFO:tasks.workunit.client.0.vm07.stdout:5/381: mkdir d5/d33/d39/d8d 0 2026-03-09T20:47:24.271 INFO:tasks.workunit.client.0.vm07.stdout:5/382: creat d5/d33/d39/d8d/f8e x:0 0 0 2026-03-09T20:47:24.273 INFO:tasks.workunit.client.0.vm07.stdout:5/383: mknod d5/df/d13/c8f 0 2026-03-09T20:47:24.274 
INFO:tasks.workunit.client.0.vm07.stdout:5/384: stat d5/df/d13/d30/f36 0 2026-03-09T20:47:24.277 INFO:tasks.workunit.client.1.vm10.stdout:6/263: getdents d3/da/d11/d26 0 2026-03-09T20:47:24.283 INFO:tasks.workunit.client.0.vm07.stdout:5/385: dwrite d5/d50/f52 [0,4194304] 0 2026-03-09T20:47:24.284 INFO:tasks.workunit.client.1.vm10.stdout:5/223: mkdir d2/d27/d37/d46/d5d/d5f 0 2026-03-09T20:47:24.284 INFO:tasks.workunit.client.1.vm10.stdout:1/253: mkdir d2/da/d25/d46/d51 0 2026-03-09T20:47:24.284 INFO:tasks.workunit.client.0.vm07.stdout:5/386: dread - d5/d33/d39/d8d/f8e zero size 2026-03-09T20:47:24.291 INFO:tasks.workunit.client.1.vm10.stdout:3/182: sync 2026-03-09T20:47:24.292 INFO:tasks.workunit.client.1.vm10.stdout:0/213: rename d2/d9/da/de/d1a/d25/d34/d3f to d2/d9/d4b 0 2026-03-09T20:47:24.297 INFO:tasks.workunit.client.0.vm07.stdout:5/387: mknod d5/df/d62/c90 0 2026-03-09T20:47:24.297 INFO:tasks.workunit.client.0.vm07.stdout:5/388: fsync d5/df/f4a 0 2026-03-09T20:47:24.300 INFO:tasks.workunit.client.1.vm10.stdout:5/224: link d2/d27/f2a d2/d39/d4b/f60 0 2026-03-09T20:47:24.301 INFO:tasks.workunit.client.1.vm10.stdout:5/225: chown d2/d43/f47 439 1 2026-03-09T20:47:24.302 INFO:tasks.workunit.client.1.vm10.stdout:8/286: getdents d0 0 2026-03-09T20:47:24.302 INFO:tasks.workunit.client.1.vm10.stdout:3/183: creat dc/d14/d26/d37/f3e x:0 0 0 2026-03-09T20:47:24.303 INFO:tasks.workunit.client.1.vm10.stdout:3/184: chown dc/d14/d20/d21/d3d 0 1 2026-03-09T20:47:24.303 INFO:tasks.workunit.client.1.vm10.stdout:0/214: rename d2/l1b to d2/d9/da/d48/l4c 0 2026-03-09T20:47:24.304 INFO:tasks.workunit.client.0.vm07.stdout:5/389: dwrite d5/df/d13/f41 [0,4194304] 0 2026-03-09T20:47:24.306 INFO:tasks.workunit.client.1.vm10.stdout:6/264: getdents d3/d12 0 2026-03-09T20:47:24.307 INFO:tasks.workunit.client.0.vm07.stdout:5/390: fsync d5/df/d13/f3d 0 2026-03-09T20:47:24.316 INFO:tasks.workunit.client.0.vm07.stdout:5/391: write d5/d50/f61 [1156216,37096] 0 2026-03-09T20:47:24.316 
INFO:tasks.workunit.client.0.vm07.stdout:5/392: fsync d5/d33/f5a 0 2026-03-09T20:47:24.316 INFO:tasks.workunit.client.1.vm10.stdout:8/287: dread d0/d22/d2f/d3d/f49 [4194304,4194304] 0 2026-03-09T20:47:24.316 INFO:tasks.workunit.client.1.vm10.stdout:8/288: chown d0/d22/d25/d2e/d41/d47 0 1 2026-03-09T20:47:24.316 INFO:tasks.workunit.client.1.vm10.stdout:5/226: link d2/d1b/f41 d2/d27/d37/d46/d5d/d5f/f61 0 2026-03-09T20:47:24.316 INFO:tasks.workunit.client.1.vm10.stdout:5/227: chown d2/d39/d4b/c59 10448 1 2026-03-09T20:47:24.320 INFO:tasks.workunit.client.1.vm10.stdout:3/185: getdents dc/d14/d22 0 2026-03-09T20:47:24.320 INFO:tasks.workunit.client.1.vm10.stdout:0/215: symlink d2/d9/da/d48/l4d 0 2026-03-09T20:47:24.322 INFO:tasks.workunit.client.1.vm10.stdout:6/265: rmdir d3/da/d11/d31 39 2026-03-09T20:47:24.326 INFO:tasks.workunit.client.1.vm10.stdout:0/216: fsync d2/db/f13 0 2026-03-09T20:47:24.327 INFO:tasks.workunit.client.1.vm10.stdout:8/289: fsync d0/d22/f27 0 2026-03-09T20:47:24.327 INFO:tasks.workunit.client.1.vm10.stdout:0/217: mkdir d2/d4e 0 2026-03-09T20:47:24.327 INFO:tasks.workunit.client.1.vm10.stdout:0/218: chown d2/d9/da/d35/f3a 64475 1 2026-03-09T20:47:24.328 INFO:tasks.workunit.client.0.vm07.stdout:0/386: sync 2026-03-09T20:47:24.333 INFO:tasks.workunit.client.1.vm10.stdout:3/186: sync 2026-03-09T20:47:24.335 INFO:tasks.workunit.client.1.vm10.stdout:6/266: dwrite d3/d12/d36/f4f [0,4194304] 0 2026-03-09T20:47:24.336 INFO:tasks.workunit.client.1.vm10.stdout:8/290: dread d0/f21 [0,4194304] 0 2026-03-09T20:47:24.342 INFO:tasks.workunit.client.1.vm10.stdout:6/267: dwrite d3/d12/d51/f53 [0,4194304] 0 2026-03-09T20:47:24.368 INFO:tasks.workunit.client.1.vm10.stdout:3/187: fsync dc/d14/d26/f34 0 2026-03-09T20:47:24.369 INFO:tasks.workunit.client.1.vm10.stdout:3/188: chown c5 0 1 2026-03-09T20:47:24.369 INFO:tasks.workunit.client.1.vm10.stdout:3/189: chown dc/d14/d20/d21/d3d 199 1 2026-03-09T20:47:24.375 INFO:tasks.workunit.client.1.vm10.stdout:8/291: write 
d0/d22/d25/d2e/d41/d47/f4b [7507684,82351] 0 2026-03-09T20:47:24.376 INFO:tasks.workunit.client.1.vm10.stdout:3/190: getdents dc/d14/d20/d2e 0 2026-03-09T20:47:24.376 INFO:tasks.workunit.client.1.vm10.stdout:3/191: stat f6 0 2026-03-09T20:47:24.376 INFO:tasks.workunit.client.1.vm10.stdout:8/292: creat d0/d22/d25/d2e/d41/d47/f5a x:0 0 0 2026-03-09T20:47:24.377 INFO:tasks.workunit.client.1.vm10.stdout:3/192: creat dc/d14/d27/f3f x:0 0 0 2026-03-09T20:47:24.378 INFO:tasks.workunit.client.1.vm10.stdout:8/293: mknod d0/d22/d2c/c5b 0 2026-03-09T20:47:24.379 INFO:tasks.workunit.client.1.vm10.stdout:3/193: chown dc/d14/d20/d21/c2f 196568641 1 2026-03-09T20:47:24.380 INFO:tasks.workunit.client.1.vm10.stdout:3/194: mkdir dc/d14/d26/d29/d40 0 2026-03-09T20:47:24.383 INFO:tasks.workunit.client.1.vm10.stdout:3/195: rename dc/d14/f39 to dc/d14/d20/d21/f41 0 2026-03-09T20:47:24.383 INFO:tasks.workunit.client.1.vm10.stdout:3/196: write dc/d14/f1a [667109,102365] 0 2026-03-09T20:47:24.385 INFO:tasks.workunit.client.1.vm10.stdout:3/197: rename dc/d14/c33 to dc/d14/d20/c42 0 2026-03-09T20:47:24.385 INFO:tasks.workunit.client.1.vm10.stdout:3/198: stat c5 0 2026-03-09T20:47:24.386 INFO:tasks.workunit.client.1.vm10.stdout:3/199: mknod dc/d14/d22/c43 0 2026-03-09T20:47:24.393 INFO:tasks.workunit.client.1.vm10.stdout:3/200: getdents dc/d14/d26/d29/d2a 0 2026-03-09T20:47:24.393 INFO:tasks.workunit.client.1.vm10.stdout:3/201: stat f6 0 2026-03-09T20:47:24.393 INFO:tasks.workunit.client.1.vm10.stdout:3/202: mknod dc/d14/d27/c44 0 2026-03-09T20:47:24.393 INFO:tasks.workunit.client.1.vm10.stdout:3/203: creat dc/d14/d26/f45 x:0 0 0 2026-03-09T20:47:24.393 INFO:tasks.workunit.client.1.vm10.stdout:3/204: creat dc/d14/d26/d29/d40/f46 x:0 0 0 2026-03-09T20:47:24.402 INFO:tasks.workunit.client.1.vm10.stdout:3/205: sync 2026-03-09T20:47:24.402 INFO:tasks.workunit.client.1.vm10.stdout:3/206: write dc/d14/d20/d2e/f38 [298299,6123] 0 2026-03-09T20:47:24.406 
INFO:tasks.workunit.client.1.vm10.stdout:3/207: mknod dc/d14/d26/d37/c47 0 2026-03-09T20:47:24.414 INFO:tasks.workunit.client.1.vm10.stdout:3/208: dwrite dc/d14/d27/f3c [0,4194304] 0 2026-03-09T20:47:24.415 INFO:tasks.workunit.client.1.vm10.stdout:3/209: chown lb 1818225 1 2026-03-09T20:47:24.423 INFO:tasks.workunit.client.1.vm10.stdout:3/210: dwrite dc/ff [0,4194304] 0 2026-03-09T20:47:24.424 INFO:tasks.workunit.client.1.vm10.stdout:8/294: dread d0/d22/f35 [0,4194304] 0 2026-03-09T20:47:24.433 INFO:tasks.workunit.client.1.vm10.stdout:8/295: dwrite d0/fe [0,4194304] 0 2026-03-09T20:47:24.442 INFO:tasks.workunit.client.1.vm10.stdout:8/296: link d0/d22/d2c/f3f d0/d22/d25/d2e/d58/f5c 0 2026-03-09T20:47:24.448 INFO:tasks.workunit.client.1.vm10.stdout:8/297: symlink d0/d22/d25/d2e/l5d 0 2026-03-09T20:47:24.452 INFO:tasks.workunit.client.1.vm10.stdout:8/298: dread d0/d22/d25/f2d [0,4194304] 0 2026-03-09T20:47:24.459 INFO:tasks.workunit.client.1.vm10.stdout:8/299: creat d0/d22/d25/d40/f5e x:0 0 0 2026-03-09T20:47:24.521 INFO:tasks.workunit.client.1.vm10.stdout:2/274: getdents d5/d18/d1b/d22 0 2026-03-09T20:47:24.527 INFO:tasks.workunit.client.1.vm10.stdout:2/275: symlink d5/d18/l54 0 2026-03-09T20:47:24.529 INFO:tasks.workunit.client.1.vm10.stdout:2/276: creat d5/d18/d27/d38/f55 x:0 0 0 2026-03-09T20:47:24.529 INFO:tasks.workunit.client.1.vm10.stdout:2/277: write d5/f1d [2281984,110057] 0 2026-03-09T20:47:24.533 INFO:tasks.workunit.client.0.vm07.stdout:1/320: dwrite d3/d14/d54/f62 [0,4194304] 0 2026-03-09T20:47:24.534 INFO:tasks.workunit.client.0.vm07.stdout:1/321: stat d3/d14/f33 0 2026-03-09T20:47:24.541 INFO:tasks.workunit.client.0.vm07.stdout:1/322: creat d3/d23/d67/f69 x:0 0 0 2026-03-09T20:47:24.544 INFO:tasks.workunit.client.1.vm10.stdout:2/278: link d5/d18/d1b/d22/c46 d5/d18/c56 0 2026-03-09T20:47:24.545 INFO:tasks.workunit.client.1.vm10.stdout:2/279: mknod d5/d18/d27/d28/d41/c57 0 2026-03-09T20:47:24.546 INFO:tasks.workunit.client.1.vm10.stdout:2/280: chown 
d5/d2b/f3f 2 1 2026-03-09T20:47:24.546 INFO:tasks.workunit.client.1.vm10.stdout:2/281: chown d5/l40 1889 1 2026-03-09T20:47:24.547 INFO:tasks.workunit.client.0.vm07.stdout:1/323: dread d3/d14/f30 [0,4194304] 0 2026-03-09T20:47:24.549 INFO:tasks.workunit.client.1.vm10.stdout:2/282: dread d5/d18/d27/d28/d41/f4b [0,4194304] 0 2026-03-09T20:47:24.550 INFO:tasks.workunit.client.1.vm10.stdout:2/283: write d5/d18/d1b/f26 [757941,119912] 0 2026-03-09T20:47:24.552 INFO:tasks.workunit.client.1.vm10.stdout:2/284: unlink d5/d18/d27/d28/d41/f49 0 2026-03-09T20:47:24.554 INFO:tasks.workunit.client.1.vm10.stdout:2/285: mkdir d5/d58 0 2026-03-09T20:47:24.555 INFO:tasks.workunit.client.1.vm10.stdout:2/286: chown d5/d18/d1b/c35 68146928 1 2026-03-09T20:47:24.558 INFO:tasks.workunit.client.0.vm07.stdout:1/324: dread d3/d14/f33 [0,4194304] 0 2026-03-09T20:47:24.564 INFO:tasks.workunit.client.0.vm07.stdout:1/325: rename d3/d23/f45 to d3/d14/f6a 0 2026-03-09T20:47:24.564 INFO:tasks.workunit.client.0.vm07.stdout:7/370: fsync d3/da/f11 0 2026-03-09T20:47:24.569 INFO:tasks.workunit.client.0.vm07.stdout:7/371: dread - d3/da/db/d32/d3e/d5c/f7f zero size 2026-03-09T20:47:24.569 INFO:tasks.workunit.client.0.vm07.stdout:1/326: creat d3/d23/f6b x:0 0 0 2026-03-09T20:47:24.570 INFO:tasks.workunit.client.0.vm07.stdout:7/372: fsync d3/d58/f60 0 2026-03-09T20:47:24.570 INFO:tasks.workunit.client.0.vm07.stdout:1/327: fdatasync d3/d14/d54/f20 0 2026-03-09T20:47:24.578 INFO:tasks.workunit.client.0.vm07.stdout:7/373: read d3/da/db/f1e [3253280,52563] 0 2026-03-09T20:47:24.578 INFO:tasks.workunit.client.0.vm07.stdout:1/328: rename d3/d14/c42 to d3/d14/d54/d3e/c6c 0 2026-03-09T20:47:24.584 INFO:tasks.workunit.client.0.vm07.stdout:1/329: readlink d3/d23/l64 0 2026-03-09T20:47:24.586 INFO:tasks.workunit.client.0.vm07.stdout:1/330: fsync d3/d23/d55/d56/d60/f53 0 2026-03-09T20:47:24.595 INFO:tasks.workunit.client.0.vm07.stdout:1/331: mknod d3/d23/c6d 0 2026-03-09T20:47:24.598 
INFO:tasks.workunit.client.0.vm07.stdout:1/332: mkdir d3/d14/d54/d6e 0 2026-03-09T20:47:24.603 INFO:tasks.workunit.client.0.vm07.stdout:1/333: creat d3/f6f x:0 0 0 2026-03-09T20:47:24.604 INFO:tasks.workunit.client.0.vm07.stdout:1/334: write d3/f68 [885648,127819] 0 2026-03-09T20:47:24.605 INFO:tasks.workunit.client.0.vm07.stdout:1/335: chown d3/d23/d55/d56 3227042 1 2026-03-09T20:47:24.608 INFO:tasks.workunit.client.1.vm10.stdout:6/268: read d3/da/d11/f1d [968345,92390] 0 2026-03-09T20:47:24.619 INFO:tasks.workunit.client.0.vm07.stdout:1/336: link d3/cd d3/d14/c70 0 2026-03-09T20:47:24.623 INFO:tasks.workunit.client.0.vm07.stdout:1/337: symlink d3/d14/d54/l71 0 2026-03-09T20:47:24.626 INFO:tasks.workunit.client.1.vm10.stdout:6/269: getdents d3 0 2026-03-09T20:47:24.627 INFO:tasks.workunit.client.0.vm07.stdout:1/338: unlink d3/l21 0 2026-03-09T20:47:24.629 INFO:tasks.workunit.client.1.vm10.stdout:6/270: chown d3/d12/f16 1029980 1 2026-03-09T20:47:24.629 INFO:tasks.workunit.client.1.vm10.stdout:6/271: chown d3/d12/f2b 783273717 1 2026-03-09T20:47:24.635 INFO:tasks.workunit.client.1.vm10.stdout:6/272: dwrite d3/da/fd [0,4194304] 0 2026-03-09T20:47:24.642 INFO:tasks.workunit.client.1.vm10.stdout:6/273: chown d3/d12/d24 303908185 1 2026-03-09T20:47:24.659 INFO:tasks.workunit.client.1.vm10.stdout:6/274: unlink d3/da/c2c 0 2026-03-09T20:47:24.660 INFO:tasks.workunit.client.1.vm10.stdout:6/275: write d3/d12/f16 [44822,120182] 0 2026-03-09T20:47:24.660 INFO:tasks.workunit.client.0.vm07.stdout:8/282: read d1/dc/f29 [3473478,118785] 0 2026-03-09T20:47:24.666 INFO:tasks.workunit.client.1.vm10.stdout:6/276: creat d3/da/f58 x:0 0 0 2026-03-09T20:47:24.667 INFO:tasks.workunit.client.0.vm07.stdout:8/283: symlink d1/dc/d14/d2f/d53/l5c 0 2026-03-09T20:47:24.669 INFO:tasks.workunit.client.0.vm07.stdout:8/284: mkdir d1/d5d 0 2026-03-09T20:47:24.669 INFO:tasks.workunit.client.0.vm07.stdout:8/285: fsync d1/dc/d16/d31/f47 0 2026-03-09T20:47:24.671 
INFO:tasks.workunit.client.0.vm07.stdout:8/286: chown d1/d3b/c50 27627102 1 2026-03-09T20:47:24.673 INFO:tasks.workunit.client.0.vm07.stdout:8/287: mknod d1/dc/c5e 0 2026-03-09T20:47:24.673 INFO:tasks.workunit.client.0.vm07.stdout:8/288: stat d1/dc/d16/d26/f2b 0 2026-03-09T20:47:24.674 INFO:tasks.workunit.client.1.vm10.stdout:6/277: creat d3/da/d11/d31/d47/f59 x:0 0 0 2026-03-09T20:47:24.680 INFO:tasks.workunit.client.1.vm10.stdout:9/303: truncate d2/d28/d47/d50/f59 2685404 0 2026-03-09T20:47:24.680 INFO:tasks.workunit.client.1.vm10.stdout:6/278: symlink d3/da/l5a 0 2026-03-09T20:47:24.682 INFO:tasks.workunit.client.0.vm07.stdout:6/336: write d8/f3b [2192635,122490] 0 2026-03-09T20:47:24.685 INFO:tasks.workunit.client.0.vm07.stdout:4/239: dwrite f1 [0,4194304] 0 2026-03-09T20:47:24.686 INFO:tasks.workunit.client.0.vm07.stdout:4/240: chown d2/f5 201 1 2026-03-09T20:47:24.690 INFO:tasks.workunit.client.0.vm07.stdout:4/241: write d2/d1f/f3c [5207901,109821] 0 2026-03-09T20:47:24.695 INFO:tasks.workunit.client.0.vm07.stdout:4/242: dwrite d2/f21 [0,4194304] 0 2026-03-09T20:47:24.713 INFO:tasks.workunit.client.0.vm07.stdout:6/337: creat d8/d16/d22/d24/d2b/f6a x:0 0 0 2026-03-09T20:47:24.716 INFO:tasks.workunit.client.0.vm07.stdout:3/324: write d1/d5/f25 [3237036,108193] 0 2026-03-09T20:47:24.716 INFO:tasks.workunit.client.0.vm07.stdout:3/325: write d1/d5/d9/d2f/d3d/d64/f1a [3721928,102748] 0 2026-03-09T20:47:24.721 INFO:tasks.workunit.client.0.vm07.stdout:4/243: truncate d2/df/f2e 1049452 0 2026-03-09T20:47:24.721 INFO:tasks.workunit.client.0.vm07.stdout:2/352: dwrite d2/f4 [0,4194304] 0 2026-03-09T20:47:24.721 INFO:tasks.workunit.client.0.vm07.stdout:6/338: dwrite d8/f52 [0,4194304] 0 2026-03-09T20:47:24.735 INFO:tasks.workunit.client.1.vm10.stdout:7/241: write db/d1f/f2a [652867,44019] 0 2026-03-09T20:47:24.735 INFO:tasks.workunit.client.1.vm10.stdout:9/304: rename d2/d28/d43 to d2/d3/d6d 0 2026-03-09T20:47:24.736 INFO:tasks.workunit.client.1.vm10.stdout:9/305: write 
d2/f46 [389286,130324] 0 2026-03-09T20:47:24.752 INFO:tasks.workunit.client.0.vm07.stdout:3/326: chown d1/d5/d9/d2f/d34/c4c 3462318 1 2026-03-09T20:47:24.754 INFO:tasks.workunit.client.1.vm10.stdout:7/242: mknod db/d28/d30/c4e 0 2026-03-09T20:47:24.757 INFO:tasks.workunit.client.1.vm10.stdout:4/180: link d1/d2/d3/c5 d1/d8/d1c/c37 0 2026-03-09T20:47:24.760 INFO:tasks.workunit.client.0.vm07.stdout:6/339: mknod d8/d26/c6b 0 2026-03-09T20:47:24.763 INFO:tasks.workunit.client.0.vm07.stdout:9/296: dread d4/d11/d2a/f39 [0,4194304] 0 2026-03-09T20:47:24.764 INFO:tasks.workunit.client.1.vm10.stdout:9/306: mknod d2/d12/d5a/c6e 0 2026-03-09T20:47:24.764 INFO:tasks.workunit.client.0.vm07.stdout:4/244: mkdir d2/d1f/d2d/d3f 0 2026-03-09T20:47:24.765 INFO:tasks.workunit.client.0.vm07.stdout:4/245: stat d2/cb 0 2026-03-09T20:47:24.770 INFO:tasks.workunit.client.0.vm07.stdout:3/327: read d1/d5/d9/d2f/d34/f3f [382802,70953] 0 2026-03-09T20:47:24.771 INFO:tasks.workunit.client.1.vm10.stdout:1/254: dwrite d2/da/d25/f40 [0,4194304] 0 2026-03-09T20:47:24.771 INFO:tasks.workunit.client.1.vm10.stdout:7/243: creat db/d28/f4f x:0 0 0 2026-03-09T20:47:24.774 INFO:tasks.workunit.client.1.vm10.stdout:4/181: mkdir d1/d8/d1c/d38 0 2026-03-09T20:47:24.775 INFO:tasks.workunit.client.1.vm10.stdout:5/228: getdents d2/d39/d4b 0 2026-03-09T20:47:24.778 INFO:tasks.workunit.client.1.vm10.stdout:9/307: rename d2/f2b to d2/d33/d37/f6f 0 2026-03-09T20:47:24.778 INFO:tasks.workunit.client.1.vm10.stdout:5/229: dread d2/d1b/f28 [0,4194304] 0 2026-03-09T20:47:24.778 INFO:tasks.workunit.client.1.vm10.stdout:9/308: chown d2/d3/de/f34 172027730 1 2026-03-09T20:47:24.779 INFO:tasks.workunit.client.0.vm07.stdout:9/297: creat d4/d16/d29/f6e x:0 0 0 2026-03-09T20:47:24.779 INFO:tasks.workunit.client.1.vm10.stdout:5/230: chown d2/d39/d4b/c59 2338 1 2026-03-09T20:47:24.779 INFO:tasks.workunit.client.1.vm10.stdout:1/255: dread d2/f2a [0,4194304] 0 2026-03-09T20:47:24.780 INFO:tasks.workunit.client.1.vm10.stdout:5/231: 
write d2/d27/f2d [601404,31732] 0 2026-03-09T20:47:24.783 INFO:tasks.workunit.client.1.vm10.stdout:1/256: truncate d2/da/d25/d3e/f41 414340 0 2026-03-09T20:47:24.783 INFO:tasks.workunit.client.1.vm10.stdout:1/257: stat d2/da/d25/d3e 0 2026-03-09T20:47:24.785 INFO:tasks.workunit.client.0.vm07.stdout:6/340: mknod d8/d16/c6c 0 2026-03-09T20:47:24.787 INFO:tasks.workunit.client.0.vm07.stdout:5/393: write d5/df/d13/d55/f5f [801241,127280] 0 2026-03-09T20:47:24.795 INFO:tasks.workunit.client.1.vm10.stdout:5/232: dread d2/d1b/f41 [0,4194304] 0 2026-03-09T20:47:24.795 INFO:tasks.workunit.client.1.vm10.stdout:0/219: dwrite d2/db/f13 [0,4194304] 0 2026-03-09T20:47:24.795 INFO:tasks.workunit.client.1.vm10.stdout:0/220: chown d2/d9/da/d11/c40 29 1 2026-03-09T20:47:24.795 INFO:tasks.workunit.client.0.vm07.stdout:5/394: write d5/df/f4a [2793992,13095] 0 2026-03-09T20:47:24.795 INFO:tasks.workunit.client.0.vm07.stdout:5/395: readlink d5/le 0 2026-03-09T20:47:24.795 INFO:tasks.workunit.client.0.vm07.stdout:3/328: mknod d1/c6c 0 2026-03-09T20:47:24.795 INFO:tasks.workunit.client.1.vm10.stdout:9/309: mknod d2/d28/d47/c70 0 2026-03-09T20:47:24.801 INFO:tasks.workunit.client.0.vm07.stdout:3/329: dwrite d1/d5/d9/d11/f26 [0,4194304] 0 2026-03-09T20:47:24.802 INFO:tasks.workunit.client.1.vm10.stdout:5/233: dwrite d2/d27/f34 [4194304,4194304] 0 2026-03-09T20:47:24.812 INFO:tasks.workunit.client.0.vm07.stdout:3/330: dwrite d1/d5/d9/f1c [0,4194304] 0 2026-03-09T20:47:24.816 INFO:tasks.workunit.client.0.vm07.stdout:4/246: sync 2026-03-09T20:47:24.819 INFO:tasks.workunit.client.1.vm10.stdout:0/221: dread d2/d9/da/fd [0,4194304] 0 2026-03-09T20:47:24.822 INFO:tasks.workunit.client.0.vm07.stdout:3/331: dwrite d1/f36 [0,4194304] 0 2026-03-09T20:47:24.823 INFO:tasks.workunit.client.1.vm10.stdout:0/222: dread d2/db/f38 [0,4194304] 0 2026-03-09T20:47:24.824 INFO:tasks.workunit.client.0.vm07.stdout:3/332: dread - d1/d5/d9/d2f/d3d/d64/f63 zero size 2026-03-09T20:47:24.861 
INFO:tasks.workunit.client.0.vm07.stdout:0/387: dwrite d1/d1f/d20/f43 [4194304,4194304] 0 2026-03-09T20:47:24.863 INFO:tasks.workunit.client.0.vm07.stdout:0/388: chown d1/d2/ff 500 1 2026-03-09T20:47:24.880 INFO:tasks.workunit.client.0.vm07.stdout:6/341: dread d8/d16/d22/d24/d2b/f4a [0,4194304] 0 2026-03-09T20:47:24.896 INFO:tasks.workunit.client.1.vm10.stdout:1/258: rename d2/da/d25/c43 to d2/da/d25/c52 0 2026-03-09T20:47:24.897 INFO:tasks.workunit.client.1.vm10.stdout:1/259: chown d2/f1a 87032658 1 2026-03-09T20:47:24.897 INFO:tasks.workunit.client.1.vm10.stdout:1/260: chown d2/f14 0 1 2026-03-09T20:47:24.897 INFO:tasks.workunit.client.1.vm10.stdout:1/261: readlink d2/lb 0 2026-03-09T20:47:24.897 INFO:tasks.workunit.client.1.vm10.stdout:1/262: fdatasync d2/da/d25/f27 0 2026-03-09T20:47:24.903 INFO:tasks.workunit.client.1.vm10.stdout:4/182: mkdir d1/d8/d39 0 2026-03-09T20:47:24.907 INFO:tasks.workunit.client.0.vm07.stdout:1/339: dread d3/f10 [0,4194304] 0 2026-03-09T20:47:24.927 INFO:tasks.workunit.client.1.vm10.stdout:3/211: truncate dc/f10 50142 0 2026-03-09T20:47:24.929 INFO:tasks.workunit.client.1.vm10.stdout:8/300: truncate d0/f11 1265137 0 2026-03-09T20:47:24.931 INFO:tasks.workunit.client.1.vm10.stdout:3/212: dread dc/d14/d27/f3c [0,4194304] 0 2026-03-09T20:47:24.935 INFO:tasks.workunit.client.0.vm07.stdout:4/247: rmdir d2/df/d17 39 2026-03-09T20:47:24.935 INFO:tasks.workunit.client.0.vm07.stdout:4/248: readlink d2/df/l39 0 2026-03-09T20:47:24.937 INFO:tasks.workunit.client.1.vm10.stdout:9/310: dwrite d2/d33/d37/f6f [0,4194304] 0 2026-03-09T20:47:24.939 INFO:tasks.workunit.client.1.vm10.stdout:9/311: write d2/d3/de/f24 [1213474,66034] 0 2026-03-09T20:47:24.952 INFO:tasks.workunit.client.1.vm10.stdout:2/287: truncate d5/f1d 1936394 0 2026-03-09T20:47:24.952 INFO:tasks.workunit.client.1.vm10.stdout:1/263: creat d2/da/d25/d3e/f53 x:0 0 0 2026-03-09T20:47:24.954 INFO:tasks.workunit.client.1.vm10.stdout:8/301: sync 2026-03-09T20:47:24.957 
INFO:tasks.workunit.client.1.vm10.stdout:2/288: dwrite d5/d18/d27/f2a [0,4194304] 0 2026-03-09T20:47:24.961 INFO:tasks.workunit.client.0.vm07.stdout:5/396: symlink d5/df/d13/d3e/d47/l91 0 2026-03-09T20:47:24.963 INFO:tasks.workunit.client.0.vm07.stdout:7/374: dwrite d3/da/db/d14/f1a [0,4194304] 0 2026-03-09T20:47:24.978 INFO:tasks.workunit.client.1.vm10.stdout:4/183: unlink d1/d2/d3/c17 0 2026-03-09T20:47:24.996 INFO:tasks.workunit.client.0.vm07.stdout:1/340: creat d3/d14/d54/d3e/f72 x:0 0 0 2026-03-09T20:47:25.000 INFO:tasks.workunit.client.0.vm07.stdout:1/341: dwrite d3/d14/d54/f32 [0,4194304] 0 2026-03-09T20:47:25.008 INFO:tasks.workunit.client.0.vm07.stdout:4/249: dwrite d2/f5 [0,4194304] 0 2026-03-09T20:47:25.011 INFO:tasks.workunit.client.0.vm07.stdout:3/333: mkdir d1/d5/d9/d11/d6d 0 2026-03-09T20:47:25.021 INFO:tasks.workunit.client.0.vm07.stdout:7/375: dread d3/da/db/d32/d3e/f40 [0,4194304] 0 2026-03-09T20:47:25.021 INFO:tasks.workunit.client.0.vm07.stdout:7/376: write d3/f61 [1058268,75771] 0 2026-03-09T20:47:25.023 INFO:tasks.workunit.client.0.vm07.stdout:0/389: rmdir d1/d1f/d30 39 2026-03-09T20:47:25.027 INFO:tasks.workunit.client.0.vm07.stdout:9/298: getdents d4/d8 0 2026-03-09T20:47:25.034 INFO:tasks.workunit.client.0.vm07.stdout:9/299: rename d4/d16/d29/d24/d37/d44/d62 to d4/d16/d29/d24/d37/d44/d62/d6f 22 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: pgmap v152: 65 pgs: 65 active+clean; 1.1 GiB data, 4.6 GiB used, 115 GiB / 120 GiB avail; 15 MiB/s rd, 134 MiB/s wr, 226 op/s 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:25.037 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr fail", "who": "vm07.xjrvch"}]: dispatch 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: Activating manager daemon vm10.byqahe 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "mgr fail", "who": "vm07.xjrvch"}]': finished 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: osdmap e43: 6 total, 6 up, 6 in 2026-03-09T20:47:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:24 vm10.local ceph-mon[57011]: mgrmap e20: vm10.byqahe(active, starting, since 0.0693818s) 2026-03-09T20:47:25.037 INFO:tasks.workunit.client.1.vm10.stdout:9/312: symlink d2/d28/d47/l71 0 2026-03-09T20:47:25.037 INFO:tasks.workunit.client.1.vm10.stdout:9/313: 
chown d2/d3/de/f42 368962 1 2026-03-09T20:47:25.043 INFO:tasks.workunit.client.0.vm07.stdout:8/289: dwrite d1/dc/fe [4194304,4194304] 0 2026-03-09T20:47:25.067 INFO:tasks.workunit.client.0.vm07.stdout:0/390: dread d1/d2/d33/d35/f45 [0,4194304] 0 2026-03-09T20:47:25.068 INFO:tasks.workunit.client.0.vm07.stdout:5/397: unlink d5/d19/f43 0 2026-03-09T20:47:25.071 INFO:tasks.workunit.client.1.vm10.stdout:8/302: creat d0/d22/d25/d40/f5f x:0 0 0 2026-03-09T20:47:25.078 INFO:tasks.workunit.client.1.vm10.stdout:6/279: truncate d3/f1f 614412 0 2026-03-09T20:47:25.079 INFO:tasks.workunit.client.1.vm10.stdout:6/280: write d3/d30/d33/f3a [777277,26823] 0 2026-03-09T20:47:25.079 INFO:tasks.workunit.client.1.vm10.stdout:6/281: fdatasync d3/da/d11/d26/f2a 0 2026-03-09T20:47:25.091 INFO:tasks.workunit.client.0.vm07.stdout:2/353: dwrite d2/f40 [4194304,4194304] 0 2026-03-09T20:47:25.092 INFO:tasks.workunit.client.0.vm07.stdout:2/354: chown d2/db/d28/d57/f68 5 1 2026-03-09T20:47:25.096 INFO:tasks.workunit.client.1.vm10.stdout:9/314: rmdir d2/d3 39 2026-03-09T20:47:25.102 INFO:tasks.workunit.client.0.vm07.stdout:9/300: symlink d4/d8/dc/d15/l70 0 2026-03-09T20:47:25.103 INFO:tasks.workunit.client.1.vm10.stdout:7/244: dwrite db/d28/d2b/d36/d40/f44 [0,4194304] 0 2026-03-09T20:47:25.109 INFO:tasks.workunit.client.0.vm07.stdout:8/290: creat d1/dc/d14/d2f/d53/f5f x:0 0 0 2026-03-09T20:47:25.109 INFO:tasks.workunit.client.1.vm10.stdout:7/245: sync 2026-03-09T20:47:25.109 INFO:tasks.workunit.client.0.vm07.stdout:8/291: readlink d1/dc/d14/d2f/d4d/d55/l58 0 2026-03-09T20:47:25.115 INFO:tasks.workunit.client.1.vm10.stdout:5/234: dread d2/d27/f2a [0,4194304] 0 2026-03-09T20:47:25.117 INFO:tasks.workunit.client.1.vm10.stdout:6/282: rename d3/d46 to d3/da/d11/d26/d5b 0 2026-03-09T20:47:25.133 INFO:tasks.workunit.client.1.vm10.stdout:0/223: dwrite d2/d9/da/f2f [4194304,4194304] 0 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: pgmap v152: 
65 pgs: 65 active+clean; 1.1 GiB data, 4.6 GiB used, 115 GiB / 120 GiB avail; 15 MiB/s rd, 134 MiB/s wr, 226 op/s 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr fail", "who": "vm07.xjrvch"}]: dispatch 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: Activating manager daemon vm10.byqahe 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: from='mgr.14225 192.168.123.107:0/3830033584' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "mgr fail", "who": 
"vm07.xjrvch"}]': finished 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: osdmap e43: 6 total, 6 up, 6 in 2026-03-09T20:47:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:24 vm07.local ceph-mon[49120]: mgrmap e20: vm10.byqahe(active, starting, since 0.0693818s) 2026-03-09T20:47:25.135 INFO:tasks.workunit.client.1.vm10.stdout:2/289: dread d5/d18/f24 [0,4194304] 0 2026-03-09T20:47:25.155 INFO:tasks.workunit.client.0.vm07.stdout:6/342: dwrite d8/d16/d22/d24/f25 [0,4194304] 0 2026-03-09T20:47:25.155 INFO:tasks.workunit.client.1.vm10.stdout:4/184: dread d1/d2/f7 [4194304,4194304] 0 2026-03-09T20:47:25.162 INFO:tasks.workunit.client.1.vm10.stdout:5/235: truncate d2/fb 3722262 0 2026-03-09T20:47:25.163 INFO:tasks.workunit.client.0.vm07.stdout:1/342: rmdir d3/d14/d54 39 2026-03-09T20:47:25.163 INFO:tasks.workunit.client.1.vm10.stdout:5/236: dread d2/d27/d37/f38 [0,4194304] 0 2026-03-09T20:47:25.175 INFO:tasks.workunit.client.1.vm10.stdout:3/213: getdents dc/d14/d27 0 2026-03-09T20:47:25.178 INFO:tasks.workunit.client.1.vm10.stdout:1/264: dwrite d2/da/d25/d3e/f44 [0,4194304] 0 2026-03-09T20:47:25.180 INFO:tasks.workunit.client.1.vm10.stdout:3/214: read dc/f10 [26891,63255] 0 2026-03-09T20:47:25.183 INFO:tasks.workunit.client.1.vm10.stdout:9/315: creat d2/d28/d47/d67/f72 x:0 0 0 2026-03-09T20:47:25.187 INFO:tasks.workunit.client.1.vm10.stdout:3/215: dwrite dc/d14/f1a [0,4194304] 0 2026-03-09T20:47:25.188 INFO:tasks.workunit.client.1.vm10.stdout:3/216: chown dc/d14/d22/c43 811 1 2026-03-09T20:47:25.188 INFO:tasks.workunit.client.1.vm10.stdout:3/217: chown dc/d14 861206 1 2026-03-09T20:47:25.193 INFO:tasks.workunit.client.1.vm10.stdout:4/185: dread d1/d8/f16 [0,4194304] 0 2026-03-09T20:47:25.197 INFO:tasks.workunit.client.0.vm07.stdout:7/377: dread d3/f61 [0,4194304] 0 2026-03-09T20:47:25.201 INFO:tasks.workunit.client.1.vm10.stdout:8/303: fsync d0/f17 0 2026-03-09T20:47:25.203 
INFO:tasks.workunit.client.0.vm07.stdout:5/398: mknod d5/df/d13/d3e/d5e/c92 0 2026-03-09T20:47:25.204 INFO:tasks.workunit.client.0.vm07.stdout:5/399: write d5/df/d13/f5b [767162,28474] 0 2026-03-09T20:47:25.204 INFO:tasks.workunit.client.0.vm07.stdout:5/400: fdatasync d5/df/f4a 0 2026-03-09T20:47:25.204 INFO:tasks.workunit.client.0.vm07.stdout:5/401: fdatasync d5/df/d13/f41 0 2026-03-09T20:47:25.207 INFO:tasks.workunit.client.0.vm07.stdout:5/402: dread d5/f51 [0,4194304] 0 2026-03-09T20:47:25.210 INFO:tasks.workunit.client.1.vm10.stdout:5/237: symlink d2/d27/d37/d46/d5d/l62 0 2026-03-09T20:47:25.210 INFO:tasks.workunit.client.1.vm10.stdout:5/238: chown d2 203700881 1 2026-03-09T20:47:25.212 INFO:tasks.workunit.client.0.vm07.stdout:2/355: mkdir d2/db/d1c/d4a/d6c 0 2026-03-09T20:47:25.213 INFO:tasks.workunit.client.1.vm10.stdout:5/239: dread d2/d39/d4b/f51 [0,4194304] 0 2026-03-09T20:47:25.214 INFO:tasks.workunit.client.0.vm07.stdout:9/301: creat d4/d16/d29/d24/d37/f71 x:0 0 0 2026-03-09T20:47:25.216 INFO:tasks.workunit.client.1.vm10.stdout:5/240: dwrite d2/d27/f34 [0,4194304] 0 2026-03-09T20:47:25.218 INFO:tasks.workunit.client.0.vm07.stdout:8/292: mkdir d1/dc/d14/d2f/d53/d60 0 2026-03-09T20:47:25.221 INFO:tasks.workunit.client.0.vm07.stdout:6/343: creat d8/d16/d22/d33/f6d x:0 0 0 2026-03-09T20:47:25.233 INFO:tasks.workunit.client.1.vm10.stdout:7/246: link db/cc db/d21/d26/c50 0 2026-03-09T20:47:25.239 INFO:tasks.workunit.client.1.vm10.stdout:8/304: mknod d0/d22/d2f/c60 0 2026-03-09T20:47:25.245 INFO:tasks.workunit.client.0.vm07.stdout:5/403: mknod d5/d33/c93 0 2026-03-09T20:47:25.248 INFO:tasks.workunit.client.1.vm10.stdout:2/290: write d5/fa [4052108,6483] 0 2026-03-09T20:47:25.252 INFO:tasks.workunit.client.0.vm07.stdout:3/334: creat d1/f6e x:0 0 0 2026-03-09T20:47:25.253 INFO:tasks.workunit.client.0.vm07.stdout:2/356: unlink d2/db/d28/c3b 0 2026-03-09T20:47:25.257 INFO:tasks.workunit.client.0.vm07.stdout:0/391: write d1/f3d [598916,20873] 0 
2026-03-09T20:47:25.257 INFO:tasks.workunit.client.0.vm07.stdout:0/392: dread - d1/d2/d4b/f70 zero size
2026-03-09T20:47:25.259 INFO:tasks.workunit.client.1.vm10.stdout:0/224: write d2/d9/da/d35/f28 [488733,6033] 0
2026-03-09T20:47:25.261 INFO:tasks.workunit.client.0.vm07.stdout:9/302: dread - d4/d8/dc/d15/f57 zero size
2026-03-09T20:47:25.262 INFO:tasks.workunit.client.0.vm07.stdout:9/303: stat d4/d16/d29/f64 0
2026-03-09T20:47:25.264 INFO:tasks.workunit.client.0.vm07.stdout:8/293: creat d1/dc/d14/f61 x:0 0 0
2026-03-09T20:47:25.265 INFO:tasks.workunit.client.1.vm10.stdout:0/225: sync
2026-03-09T20:47:25.265 INFO:tasks.workunit.client.0.vm07.stdout:8/294: write d1/dc/d16/d26/f4f [258811,63791] 0
2026-03-09T20:47:25.267 INFO:tasks.workunit.client.1.vm10.stdout:7/247: creat db/d28/d2b/f51 x:0 0 0
2026-03-09T20:47:25.269 INFO:tasks.workunit.client.0.vm07.stdout:6/344: unlink d8/f14 0
2026-03-09T20:47:25.271 INFO:tasks.workunit.client.0.vm07.stdout:4/250: write d2/df/d17/f3e [591379,76189] 0
2026-03-09T20:47:25.272 INFO:tasks.workunit.client.1.vm10.stdout:3/218: getdents dc/d14/d27 0
2026-03-09T20:47:25.272 INFO:tasks.workunit.client.0.vm07.stdout:1/343: creat d3/d23/d52/f73 x:0 0 0
2026-03-09T20:47:25.274 INFO:tasks.workunit.client.0.vm07.stdout:7/378: mkdir d3/da/d83 0
2026-03-09T20:47:25.277 INFO:tasks.workunit.client.1.vm10.stdout:6/283: rename d3/d12/d24/d39/d50 to d3/d12/d36/d5c 0
2026-03-09T20:47:25.282 INFO:tasks.workunit.client.1.vm10.stdout:2/291: dread - d5/d18/f1f zero size
2026-03-09T20:47:25.284 INFO:tasks.workunit.client.1.vm10.stdout:5/241: mkdir d2/d27/d37/d46/d5d/d5f/d63 0
2026-03-09T20:47:25.285 INFO:tasks.workunit.client.0.vm07.stdout:0/393: rename d1/f2f to d1/d2/d33/f7e 0
2026-03-09T20:47:25.287 INFO:tasks.workunit.client.1.vm10.stdout:8/305: dread d0/d22/d25/d2e/f33 [0,4194304] 0
2026-03-09T20:47:25.289 INFO:tasks.workunit.client.0.vm07.stdout:0/394: dwrite d1/f3d [0,4194304] 0
2026-03-09T20:47:25.289 INFO:tasks.workunit.client.0.vm07.stdout:0/395: chown d1 185753 1
2026-03-09T20:47:25.290 INFO:tasks.workunit.client.0.vm07.stdout:0/396: chown d1/d2/dc/f6d 51 1
2026-03-09T20:47:25.291 INFO:tasks.workunit.client.0.vm07.stdout:9/304: chown d4/f5 157 1
2026-03-09T20:47:25.292 INFO:tasks.workunit.client.0.vm07.stdout:9/305: truncate d4/d16/d29/d24/f36 1348367 0
2026-03-09T20:47:25.299 INFO:tasks.workunit.client.1.vm10.stdout:9/316: mknod d2/d3/de/d35/d44/c73 0
2026-03-09T20:47:25.299 INFO:tasks.workunit.client.0.vm07.stdout:8/295: truncate d1/dc/d16/d31/f54 501001 0
2026-03-09T20:47:25.301 INFO:tasks.workunit.client.1.vm10.stdout:0/226: symlink d2/d9/da/d48/l4f 0
2026-03-09T20:47:25.301 INFO:tasks.workunit.client.0.vm07.stdout:6/345: creat d8/d26/d2a/f6e x:0 0 0
2026-03-09T20:47:25.303 INFO:tasks.workunit.client.1.vm10.stdout:3/219: chown dc/l1c 10751326 1
2026-03-09T20:47:25.303 INFO:tasks.workunit.client.0.vm07.stdout:1/344: mknod d3/d23/d55/d56/c74 0
2026-03-09T20:47:25.304 INFO:tasks.workunit.client.1.vm10.stdout:4/186: mknod d1/d8/d1c/d38/c3a 0
2026-03-09T20:47:25.305 INFO:tasks.workunit.client.0.vm07.stdout:5/404: mkdir d5/d19/d73/d94 0
2026-03-09T20:47:25.307 INFO:tasks.workunit.client.1.vm10.stdout:4/187: read d1/d2/f7 [205556,4167] 0
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.0.vm07.stdout:5/405: dwrite d5/df/d13/f41 [4194304,4194304] 0
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.0.vm07.stdout:3/335: mknod d1/c6f 0
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.0.vm07.stdout:2/357: symlink d2/db/l6d 0
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.0.vm07.stdout:0/397: mknod d1/d2/dc/c7f 0
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.1.vm10.stdout:6/284: dwrite d3/d30/d33/f35 [0,4194304] 0
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.1.vm10.stdout:1/265: getdents d2/da/d25 0
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.1.vm10.stdout:2/292: creat d5/f59 x:0 0 0
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.1.vm10.stdout:2/293: fdatasync d5/d18/d27/d38/f43 0
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.0.vm07.stdout:0/398: chown d1/d2/dc/d17 458998457 1
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.1.vm10.stdout:9/317: symlink d2/d33/l74 0
2026-03-09T20:47:25.317 INFO:tasks.workunit.client.0.vm07.stdout:0/399: chown d1/d2/d33/l49 306595173 1
2026-03-09T20:47:25.318 INFO:tasks.workunit.client.0.vm07.stdout:0/400: readlink d1/d2/l36 0
2026-03-09T20:47:25.321 INFO:tasks.workunit.client.1.vm10.stdout:0/227: unlink d2/d9/da/d35/f28 0
2026-03-09T20:47:25.324 INFO:tasks.workunit.client.1.vm10.stdout:0/228: dread d2/db/f38 [0,4194304] 0
2026-03-09T20:47:25.325 INFO:tasks.workunit.client.0.vm07.stdout:8/296: unlink d1/dc/d16/d26/f2b 0
2026-03-09T20:47:25.331 INFO:tasks.workunit.client.1.vm10.stdout:2/294: dread f1 [0,4194304] 0
2026-03-09T20:47:25.331 INFO:tasks.workunit.client.1.vm10.stdout:3/220: mkdir dc/d14/d26/d29/d40/d48 0
2026-03-09T20:47:25.333 INFO:tasks.workunit.client.0.vm07.stdout:4/251: mknod d2/c40 0
2026-03-09T20:47:25.334 INFO:tasks.workunit.client.0.vm07.stdout:4/252: chown d2/d1f/f25 392 1
2026-03-09T20:47:25.336 INFO:tasks.workunit.client.1.vm10.stdout:3/221: dread dc/d14/d20/d2e/f32 [0,4194304] 0
2026-03-09T20:47:25.339 INFO:tasks.workunit.client.1.vm10.stdout:3/222: truncate dc/d14/d26/d29/f30 161001 0
2026-03-09T20:47:25.339 INFO:tasks.workunit.client.1.vm10.stdout:0/229: dread d2/d9/da/de/d1a/f21 [0,4194304] 0
2026-03-09T20:47:25.343 INFO:tasks.workunit.client.1.vm10.stdout:6/285: mknod d3/d30/d33/c5d 0
2026-03-09T20:47:25.343 INFO:tasks.workunit.client.0.vm07.stdout:7/379: creat d3/da/d83/f84 x:0 0 0
2026-03-09T20:47:25.344 INFO:tasks.workunit.client.0.vm07.stdout:5/406: dread - d5/df/d13/f2a zero size
2026-03-09T20:47:25.344 INFO:tasks.workunit.client.1.vm10.stdout:3/223: truncate dc/d14/f1a 4599397 0
2026-03-09T20:47:25.345 INFO:tasks.workunit.client.0.vm07.stdout:5/407: write d5/df/d13/d30/d56/f84 [561522,99032] 0
2026-03-09T20:47:25.346 INFO:tasks.workunit.client.0.vm07.stdout:5/408: chown d5/d69/l7f 36 1
2026-03-09T20:47:25.348 INFO:tasks.workunit.client.1.vm10.stdout:3/224: readlink dc/d14/d20/d21/l2c 0
2026-03-09T20:47:25.348 INFO:tasks.workunit.client.1.vm10.stdout:0/230: dwrite d2/f39 [0,4194304] 0
2026-03-09T20:47:25.350 INFO:tasks.workunit.client.0.vm07.stdout:2/358: dread - d2/d11/f51 zero size
2026-03-09T20:47:25.352 INFO:tasks.workunit.client.1.vm10.stdout:6/286: dwrite d3/d12/d24/f27 [0,4194304] 0
2026-03-09T20:47:25.352 INFO:tasks.workunit.client.1.vm10.stdout:9/318: creat d2/d28/d47/d50/f75 x:0 0 0
2026-03-09T20:47:25.353 INFO:tasks.workunit.client.1.vm10.stdout:9/319: readlink d2/d3/de/l45 0
2026-03-09T20:47:25.359 INFO:tasks.workunit.client.1.vm10.stdout:9/320: stat d2/d33/d37/f6f 0
2026-03-09T20:47:25.364 INFO:tasks.workunit.client.1.vm10.stdout:7/248: link fa db/d21/d26/f52 0
2026-03-09T20:47:25.365 INFO:tasks.workunit.client.1.vm10.stdout:2/295: creat d5/d18/d27/d28/f5a x:0 0 0
2026-03-09T20:47:25.369 INFO:tasks.workunit.client.0.vm07.stdout:6/346: mknod d8/d26/d2a/d40/d67/c6f 0
2026-03-09T20:47:25.369 INFO:tasks.workunit.client.1.vm10.stdout:6/287: dread d3/d30/d33/f3a [0,4194304] 0
2026-03-09T20:47:25.370 INFO:tasks.workunit.client.0.vm07.stdout:6/347: truncate d8/d16/d22/d24/d2b/f6a 455355 0
2026-03-09T20:47:25.374 INFO:tasks.workunit.client.0.vm07.stdout:6/348: dwrite d8/d16/d22/d33/f60 [0,4194304] 0
2026-03-09T20:47:25.378 INFO:tasks.workunit.client.1.vm10.stdout:1/266: symlink d2/da/d25/d46/d51/l54 0
2026-03-09T20:47:25.378 INFO:tasks.workunit.client.0.vm07.stdout:7/380: dread - d3/da/db/d32/f3d zero size
2026-03-09T20:47:25.379 INFO:tasks.workunit.client.0.vm07.stdout:7/381: readlink d3/da/db/d14/d1f/d2b/d52/l80 0
2026-03-09T20:47:25.380 INFO:tasks.workunit.client.0.vm07.stdout:7/382: dread - d3/da/db/d14/d1f/d2b/d52/f73 zero size
2026-03-09T20:47:25.380 INFO:tasks.workunit.client.0.vm07.stdout:7/383: chown d3/da/db/d14/c4e 23031 1
2026-03-09T20:47:25.381 INFO:tasks.workunit.client.0.vm07.stdout:7/384: chown d3/da/db/d14/d1f/d2b/f33 4642 1
2026-03-09T20:47:25.385 INFO:tasks.workunit.client.0.vm07.stdout:7/385: dwrite d3/da/db/d14/d1f/d2b/d52/f74 [0,4194304] 0
2026-03-09T20:47:25.402 INFO:tasks.workunit.client.1.vm10.stdout:5/242: write d2/f5 [3657281,36546] 0
2026-03-09T20:47:25.402 INFO:tasks.workunit.client.1.vm10.stdout:5/243: write d2/f8 [1359859,33090] 0
2026-03-09T20:47:25.403 INFO:tasks.workunit.client.1.vm10.stdout:5/244: dread d2/d27/d37/f38 [0,4194304] 0
2026-03-09T20:47:25.407 INFO:tasks.workunit.client.0.vm07.stdout:2/359: mkdir d2/d46/d6e 0
2026-03-09T20:47:25.408 INFO:tasks.workunit.client.0.vm07.stdout:2/360: write d2/f17 [7427767,34071] 0
2026-03-09T20:47:25.408 INFO:tasks.workunit.client.0.vm07.stdout:2/361: fsync d2/f63 0
2026-03-09T20:47:25.408 INFO:tasks.workunit.client.0.vm07.stdout:2/362: fsync d2/f63 0
2026-03-09T20:47:25.412 INFO:tasks.workunit.client.1.vm10.stdout:0/231: symlink d2/d9/d4b/l50 0
2026-03-09T20:47:25.414 INFO:tasks.workunit.client.0.vm07.stdout:8/297: creat d1/dc/d14/d2f/d53/d60/f62 x:0 0 0
2026-03-09T20:47:25.416 INFO:tasks.workunit.client.1.vm10.stdout:7/249: dread - db/f39 zero size
2026-03-09T20:47:25.417 INFO:tasks.workunit.client.0.vm07.stdout:8/298: dread d1/dc/d16/d31/f52 [0,4194304] 0
2026-03-09T20:47:25.419 INFO:tasks.workunit.client.0.vm07.stdout:4/253: mkdir d2/d1f/d2d/d3f/d41 0
2026-03-09T20:47:25.422 INFO:tasks.workunit.client.1.vm10.stdout:2/296: mkdir d5/d5b 0
2026-03-09T20:47:25.422 INFO:tasks.workunit.client.0.vm07.stdout:4/254: fdatasync d2/d1f/f2c 0
2026-03-09T20:47:25.425 INFO:tasks.workunit.client.1.vm10.stdout:4/188: mknod d1/d8/d1b/d30/c3b 0
2026-03-09T20:47:25.426 INFO:tasks.workunit.client.1.vm10.stdout:9/321: dread d2/d28/f29 [0,4194304] 0
2026-03-09T20:47:25.434 INFO:tasks.workunit.client.1.vm10.stdout:8/306: rename d0/d22/d2f/d38/c4f to d0/d22/d25/d2e/d41/d47/c61 0
2026-03-09T20:47:25.434 INFO:tasks.workunit.client.1.vm10.stdout:8/307: chown d0/l24 4882741 1
2026-03-09T20:47:25.435 INFO:tasks.workunit.client.1.vm10.stdout:8/308: write d0/d22/d2f/d38/f43 [1169805,108227] 0
2026-03-09T20:47:25.454 INFO:tasks.workunit.client.0.vm07.stdout:9/306: getdents d4/d11/d2a 0
2026-03-09T20:47:25.456 INFO:tasks.workunit.client.0.vm07.stdout:9/307: dread d4/d16/f33 [0,4194304] 0
2026-03-09T20:47:25.457 INFO:tasks.workunit.client.0.vm07.stdout:9/308: chown d4/d16/d29/d24/d37 123 1
2026-03-09T20:47:25.461 INFO:tasks.workunit.client.0.vm07.stdout:1/345: write d3/d14/f17 [957075,76536] 0
2026-03-09T20:47:25.464 INFO:tasks.workunit.client.0.vm07.stdout:5/409: dwrite d5/df/f2b [4194304,4194304] 0
2026-03-09T20:47:25.468 INFO:tasks.workunit.client.0.vm07.stdout:5/410: dread d5/df/d13/d3e/d5e/f7c [0,4194304] 0
2026-03-09T20:47:25.474 INFO:tasks.workunit.client.1.vm10.stdout:1/267: write d2/f2a [285588,43604] 0
2026-03-09T20:47:25.479 INFO:tasks.workunit.client.0.vm07.stdout:8/299: mkdir d1/dc/d14/d2f/d4d/d63 0
2026-03-09T20:47:25.486 INFO:tasks.workunit.client.1.vm10.stdout:0/232: dread d2/d9/da/de/d1a/f21 [0,4194304] 0
2026-03-09T20:47:25.486 INFO:tasks.workunit.client.0.vm07.stdout:8/300: read - d1/dc/d14/d2f/f51 zero size
2026-03-09T20:47:25.486 INFO:tasks.workunit.client.0.vm07.stdout:8/301: dwrite d1/dc/d14/f18 [0,4194304] 0
2026-03-09T20:47:25.486 INFO:tasks.workunit.client.0.vm07.stdout:4/255: mknod d2/d1f/c42 0
2026-03-09T20:47:25.487 INFO:tasks.workunit.client.1.vm10.stdout:2/297: readlink d5/lc 0
2026-03-09T20:47:25.488 INFO:tasks.workunit.client.0.vm07.stdout:7/386: mknod d3/c85 0
2026-03-09T20:47:25.488 INFO:tasks.workunit.client.0.vm07.stdout:7/387: chown d3 31905029 1
2026-03-09T20:47:25.490 INFO:tasks.workunit.client.1.vm10.stdout:6/288: truncate d3/f7 5200215 0
2026-03-09T20:47:25.490 INFO:tasks.workunit.client.0.vm07.stdout:3/336: link d1/d5/d9/d11/d1f/l24 d1/d5/l70 0
2026-03-09T20:47:25.490 INFO:tasks.workunit.client.1.vm10.stdout:7/250: dwrite db/d21/d23/f1e [0,4194304] 0
2026-03-09T20:47:25.494 INFO:tasks.workunit.client.1.vm10.stdout:4/189: symlink d1/d2/l3c 0
2026-03-09T20:47:25.496 INFO:tasks.workunit.client.0.vm07.stdout:6/349: truncate d8/d26/d2a/f37 4158988 0
2026-03-09T20:47:25.496 INFO:tasks.workunit.client.1.vm10.stdout:4/190: dread d1/d8/f16 [0,4194304] 0
2026-03-09T20:47:25.497 INFO:tasks.workunit.client.0.vm07.stdout:2/363: rename d2/d11/l24 to d2/d11/d56/l6f 0
2026-03-09T20:47:25.498 INFO:tasks.workunit.client.1.vm10.stdout:9/322: creat d2/d33/f76 x:0 0 0
2026-03-09T20:47:25.506 INFO:tasks.workunit.client.1.vm10.stdout:3/225: link f6 dc/d14/d26/d29/d40/f49 0
2026-03-09T20:47:25.507 INFO:tasks.workunit.client.0.vm07.stdout:1/346: write d3/d14/d54/d3e/f4a [178687,102352] 0
2026-03-09T20:47:25.507 INFO:tasks.workunit.client.1.vm10.stdout:8/309: dwrite d0/d22/d25/f2d [0,4194304] 0
2026-03-09T20:47:25.514 INFO:tasks.workunit.client.0.vm07.stdout:0/401: truncate d1/d1f/d30/f50 487066 0
2026-03-09T20:47:25.519 INFO:tasks.workunit.client.1.vm10.stdout:1/268: mkdir d2/da/d25/d3e/d55 0
2026-03-09T20:47:25.522 INFO:tasks.workunit.client.0.vm07.stdout:5/411: creat d5/d19/f95 x:0 0 0
2026-03-09T20:47:25.530 INFO:tasks.workunit.client.1.vm10.stdout:8/310: dread d0/d22/d25/f2b [0,4194304] 0
2026-03-09T20:47:25.546 INFO:tasks.workunit.client.1.vm10.stdout:8/311: dwrite d0/d22/d2c/f36 [0,4194304] 0
2026-03-09T20:47:25.546 INFO:tasks.workunit.client.0.vm07.stdout:8/302: write d1/dc/f38 [1658290,129373] 0
2026-03-09T20:47:25.546 INFO:tasks.workunit.client.0.vm07.stdout:8/303: readlink d1/dc/l28 0
2026-03-09T20:47:25.546 INFO:tasks.workunit.client.0.vm07.stdout:3/337: mkdir d1/d5/d9/d2f/d3d/d71 0
2026-03-09T20:47:25.551 INFO:tasks.workunit.client.1.vm10.stdout:3/226: mkdir dc/d14/d22/d4a 0
2026-03-09T20:47:25.552 INFO:tasks.workunit.client.1.vm10.stdout:9/323: dread d2/d33/f3f [0,4194304] 0
2026-03-09T20:47:25.554 INFO:tasks.workunit.client.0.vm07.stdout:9/309: symlink d4/d8/dc/d4e/d54/l72 0
2026-03-09T20:47:25.558 INFO:tasks.workunit.client.0.vm07.stdout:1/347: creat d3/d14/d54/d3e/f75 x:0 0 0
2026-03-09T20:47:25.558 INFO:tasks.workunit.client.0.vm07.stdout:1/348: fdatasync d3/f5 0
2026-03-09T20:47:25.559 INFO:tasks.workunit.client.0.vm07.stdout:1/349: dread - d3/d14/d54/d3e/f72 zero size
2026-03-09T20:47:25.562 INFO:tasks.workunit.client.0.vm07.stdout:1/350: dwrite d3/d23/f37 [0,4194304] 0
2026-03-09T20:47:25.565 INFO:tasks.workunit.client.0.vm07.stdout:0/402: mkdir d1/d2/dc/d80 0
2026-03-09T20:47:25.565 INFO:tasks.workunit.client.0.vm07.stdout:5/412: symlink d5/d50/l96 0
2026-03-09T20:47:25.565 INFO:tasks.workunit.client.1.vm10.stdout:5/245: rename d2/f21 to d2/f64 0
2026-03-09T20:47:25.566 INFO:tasks.workunit.client.0.vm07.stdout:0/403: dread - d1/d1f/f7c zero size
2026-03-09T20:47:25.575 INFO:tasks.workunit.client.0.vm07.stdout:6/350: dread d8/f52 [0,4194304] 0
2026-03-09T20:47:25.577 INFO:tasks.workunit.client.1.vm10.stdout:7/251: symlink db/d28/d2b/d36/d3b/l53 0
2026-03-09T20:47:25.578 INFO:tasks.workunit.client.0.vm07.stdout:5/413: chown d5/df/d13/d6c/f77 9455561 1
2026-03-09T20:47:25.580 INFO:tasks.workunit.client.1.vm10.stdout:9/324: creat d2/d33/f77 x:0 0 0
2026-03-09T20:47:25.580 INFO:tasks.workunit.client.0.vm07.stdout:2/364: sync
2026-03-09T20:47:25.581 INFO:tasks.workunit.client.1.vm10.stdout:7/252: dwrite db/d21/d26/f2f [0,4194304] 0
2026-03-09T20:47:25.585 INFO:tasks.workunit.client.0.vm07.stdout:4/256: creat d2/f43 x:0 0 0
2026-03-09T20:47:25.589 INFO:tasks.workunit.client.1.vm10.stdout:5/246: dread - d2/f35 zero size
2026-03-09T20:47:25.590 INFO:tasks.workunit.client.1.vm10.stdout:5/247: write d2/d27/d37/f57 [1043590,27988] 0
2026-03-09T20:47:25.591 INFO:tasks.workunit.client.1.vm10.stdout:5/248: chown d2/d27/d37/d46/d5d/d5f/d63 11541 1
2026-03-09T20:47:25.592 INFO:tasks.workunit.client.0.vm07.stdout:6/351: chown d8/d26 194945 1
2026-03-09T20:47:25.593 INFO:tasks.workunit.client.1.vm10.stdout:2/298: rmdir d5/d58 0
2026-03-09T20:47:25.595 INFO:tasks.workunit.client.0.vm07.stdout:9/310: mkdir d4/d8/d19/d5f/d73 0
2026-03-09T20:47:25.596 INFO:tasks.workunit.client.1.vm10.stdout:6/289: creat d3/f5e x:0 0 0
2026-03-09T20:47:25.597 INFO:tasks.workunit.client.0.vm07.stdout:1/351: creat d3/d66/f76 x:0 0 0
2026-03-09T20:47:25.599 INFO:tasks.workunit.client.0.vm07.stdout:5/414: mkdir d5/d19/d73/d97 0
2026-03-09T20:47:25.601 INFO:tasks.workunit.client.0.vm07.stdout:2/365: write d2/db/d28/d5c/f66 [183377,26297] 0
2026-03-09T20:47:25.604 INFO:tasks.workunit.client.0.vm07.stdout:4/257: mkdir d2/d1f/d44 0
2026-03-09T20:47:25.605 INFO:tasks.workunit.client.0.vm07.stdout:4/258: readlink d2/df/d17/l24 0
2026-03-09T20:47:25.605 INFO:tasks.workunit.client.0.vm07.stdout:4/259: stat d2/d1f/f26 0
2026-03-09T20:47:25.609 INFO:tasks.workunit.client.1.vm10.stdout:0/233: getdents d2/d9/da/d35 0
2026-03-09T20:47:25.611 INFO:tasks.workunit.client.1.vm10.stdout:2/299: creat d5/d2b/d32/f5c x:0 0 0
2026-03-09T20:47:25.611 INFO:tasks.workunit.client.0.vm07.stdout:6/352: sync
2026-03-09T20:47:25.612 INFO:tasks.workunit.client.1.vm10.stdout:0/234: dwrite d2/db/f13 [0,4194304] 0
2026-03-09T20:47:25.612 INFO:tasks.workunit.client.0.vm07.stdout:1/352: fsync d3/f24 0
2026-03-09T20:47:25.613 INFO:tasks.workunit.client.0.vm07.stdout:1/353: chown d3/d23/f49 4982 1
2026-03-09T20:47:25.616 INFO:tasks.workunit.client.0.vm07.stdout:5/415: creat d5/df/d13/d3e/d5e/f98 x:0 0 0
2026-03-09T20:47:25.619 INFO:tasks.workunit.client.0.vm07.stdout:5/416: dread d5/df/d13/d30/f36 [0,4194304] 0
2026-03-09T20:47:25.624 INFO:tasks.workunit.client.0.vm07.stdout:5/417: dwrite d5/d69/f82 [0,4194304] 0
2026-03-09T20:47:25.631 INFO:tasks.workunit.client.0.vm07.stdout:2/366: rmdir d2/db/d28/d57 39
2026-03-09T20:47:25.631 INFO:tasks.workunit.client.0.vm07.stdout:9/311: dread d4/d11/d23/f2f [0,4194304] 0
2026-03-09T20:47:25.631 INFO:tasks.workunit.client.0.vm07.stdout:2/367: write d2/d11/f44 [1005115,41027] 0
2026-03-09T20:47:25.632 INFO:tasks.workunit.client.0.vm07.stdout:9/312: dread d4/d11/d2a/f39 [4194304,4194304] 0
2026-03-09T20:47:25.632 INFO:tasks.workunit.client.0.vm07.stdout:2/368: truncate d2/d11/f5d 357046 0
2026-03-09T20:47:25.633 INFO:tasks.workunit.client.0.vm07.stdout:2/369: dread - d2/db/f67 zero size
2026-03-09T20:47:25.635 INFO:tasks.workunit.client.0.vm07.stdout:4/260: read d2/f2b [503366,80731] 0
2026-03-09T20:47:25.636 INFO:tasks.workunit.client.0.vm07.stdout:0/404: link d1/l9 d1/d2/d33/l81 0
2026-03-09T20:47:25.644 INFO:tasks.workunit.client.0.vm07.stdout:8/304: getdents d1/dc/d14 0
2026-03-09T20:47:25.647 INFO:tasks.workunit.client.0.vm07.stdout:3/338: getdents d1/d5/d9/d11/d1f 0
2026-03-09T20:47:25.651 INFO:tasks.workunit.client.0.vm07.stdout:3/339: dread - d1/d5/d9/d2f/d34/f68 zero size
2026-03-09T20:47:25.651 INFO:tasks.workunit.client.0.vm07.stdout:3/340: write d1/d5/d9/d11/f4d [904142,82833] 0
2026-03-09T20:47:25.652 INFO:tasks.workunit.client.1.vm10.stdout:5/249: creat d2/d58/f65 x:0 0 0
2026-03-09T20:47:25.652 INFO:tasks.workunit.client.0.vm07.stdout:6/353: fsync d8/d50/f55 0
2026-03-09T20:47:25.653 INFO:tasks.workunit.client.1.vm10.stdout:2/300: creat d5/d18/d2d/f5d x:0 0 0
2026-03-09T20:47:25.654 INFO:tasks.workunit.client.0.vm07.stdout:1/354: creat d3/d23/d55/f77 x:0 0 0
2026-03-09T20:47:25.655 INFO:tasks.workunit.client.1.vm10.stdout:5/250: dread d2/d39/d4b/f60 [0,4194304] 0
2026-03-09T20:47:25.655 INFO:tasks.workunit.client.0.vm07.stdout:1/355: dread - d3/d23/f5d zero size
2026-03-09T20:47:25.656 INFO:tasks.workunit.client.1.vm10.stdout:6/290: unlink d3/d12/d24/l32 0
2026-03-09T20:47:25.658 INFO:tasks.workunit.client.1.vm10.stdout:3/227: getdents dc/d14/d26/d37 0
2026-03-09T20:47:25.659 INFO:tasks.workunit.client.1.vm10.stdout:9/325: creat d2/d3/de/d35/f78 x:0 0 0
2026-03-09T20:47:25.660 INFO:tasks.workunit.client.0.vm07.stdout:5/418: creat d5/df/d13/d6c/f99 x:0 0 0
2026-03-09T20:47:25.661 INFO:tasks.workunit.client.0.vm07.stdout:3/341: sync
2026-03-09T20:47:25.661 INFO:tasks.workunit.client.0.vm07.stdout:3/342: stat d1/f6e 0
2026-03-09T20:47:25.666 INFO:tasks.workunit.client.0.vm07.stdout:4/261: unlink d2/df/d17/c1e 0
2026-03-09T20:47:25.667 INFO:tasks.workunit.client.0.vm07.stdout:4/262: read d2/df/d17/f3e [479280,25918] 0
2026-03-09T20:47:25.676 INFO:tasks.workunit.client.0.vm07.stdout:2/370: dread d2/db/d1c/f2e [0,4194304] 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.1.vm10.stdout:0/235: creat d2/d9/da/d35/d30/f51 x:0 0 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.1.vm10.stdout:2/301: mknod d5/d2b/c5e 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.1.vm10.stdout:5/251: unlink d2/d1b/l32 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.1.vm10.stdout:9/326: rename d2/d28/f4a to d2/d28/f79 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.1.vm10.stdout:9/327: readlink d2/d12/l21 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.0.vm07.stdout:8/305: dread d1/dc/d16/d31/f54 [0,4194304] 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.0.vm07.stdout:6/354: write d8/f12 [5388283,89257] 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.0.vm07.stdout:1/356: symlink d3/d23/d67/l78 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.0.vm07.stdout:1/357: fdatasync d3/d14/d54/d3e/f59 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.0.vm07.stdout:5/419: rename d5/df/d13/d3e/d47/l91 to d5/df/d62/l9a 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.0.vm07.stdout:5/420: truncate d5/df/d13/d6c/f99 1035764 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.0.vm07.stdout:3/343: creat d1/d5/d9/d11/d1f/f72 x:0 0 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.0.vm07.stdout:9/313: mkdir d4/d16/d29/d24/d37/d44/d62/d74 0
2026-03-09T20:47:25.691 INFO:tasks.workunit.client.0.vm07.stdout:9/314: write d4/d8/d59/f66 [89403,57423] 0
2026-03-09T20:47:25.692 INFO:tasks.workunit.client.0.vm07.stdout:0/405: mkdir d1/d82 0
2026-03-09T20:47:25.696 INFO:tasks.workunit.client.1.vm10.stdout:3/228: sync
2026-03-09T20:47:25.696 INFO:tasks.workunit.client.0.vm07.stdout:0/406: dwrite d1/f3d [0,4194304] 0
2026-03-09T20:47:25.696 INFO:tasks.workunit.client.1.vm10.stdout:6/291: dwrite d3/da/f15 [4194304,4194304] 0
2026-03-09T20:47:25.698 INFO:tasks.workunit.client.1.vm10.stdout:3/229: read dc/d14/d27/f3c [1057505,44142] 0
2026-03-09T20:47:25.698 INFO:tasks.workunit.client.1.vm10.stdout:6/292: fsync f2 0
2026-03-09T20:47:25.701 INFO:tasks.workunit.client.0.vm07.stdout:0/407: dwrite d1/d2/f1b [0,4194304] 0
2026-03-09T20:47:25.702 INFO:tasks.workunit.client.1.vm10.stdout:6/293: dread d3/d12/d36/f4f [0,4194304] 0
2026-03-09T20:47:25.707 INFO:tasks.workunit.client.1.vm10.stdout:9/328: dread d2/d3/de/f24 [0,4194304] 0
2026-03-09T20:47:25.707 INFO:tasks.workunit.client.0.vm07.stdout:0/408: dread d1/d2/f14 [0,4194304] 0
2026-03-09T20:47:25.707 INFO:tasks.workunit.client.1.vm10.stdout:9/329: dread - d2/d28/d47/d50/f75 zero size
2026-03-09T20:47:25.708 INFO:tasks.workunit.client.0.vm07.stdout:4/263: dread d2/df/f2e [0,4194304] 0
2026-03-09T20:47:25.709 INFO:tasks.workunit.client.1.vm10.stdout:2/302: mkdir d5/d18/d27/d5f 0
2026-03-09T20:47:25.724 INFO:tasks.workunit.client.0.vm07.stdout:7/388: write d3/da/db/d32/d3e/f40 [2292819,53061] 0
2026-03-09T20:47:25.725 INFO:tasks.workunit.client.0.vm07.stdout:7/389: chown d3/l48 2 1
2026-03-09T20:47:25.725 INFO:tasks.workunit.client.0.vm07.stdout:7/390: fsync d3/f3f 0
2026-03-09T20:47:25.725 INFO:tasks.workunit.client.0.vm07.stdout:7/391: fdatasync d3/f3f 0
2026-03-09T20:47:25.730 INFO:tasks.workunit.client.1.vm10.stdout:4/191: truncate d1/f26 3846509 0
2026-03-09T20:47:25.743 INFO:tasks.workunit.client.1.vm10.stdout:1/269: dwrite d2/f1c [0,4194304] 0
2026-03-09T20:47:25.749 INFO:tasks.workunit.client.1.vm10.stdout:8/312: dwrite d0/f11 [0,4194304] 0
2026-03-09T20:47:25.755 INFO:tasks.workunit.client.1.vm10.stdout:8/313: chown d0/d22/d2f/d3d 423 1
2026-03-09T20:47:25.755 INFO:tasks.workunit.client.1.vm10.stdout:0/236: dread d2/d9/da/de/d1a/f21 [0,4194304] 0
2026-03-09T20:47:25.755 INFO:tasks.workunit.client.1.vm10.stdout:7/253: dwrite db/f19 [0,4194304] 0
2026-03-09T20:47:25.755 INFO:tasks.workunit.client.0.vm07.stdout:3/344: chown d1/d5/d9/d2f/c3b 1 1
2026-03-09T20:47:25.756 INFO:tasks.workunit.client.0.vm07.stdout:3/345: dread d1/d5/d9/d2f/d3d/d64/f30 [0,4194304] 0
2026-03-09T20:47:25.764 INFO:tasks.workunit.client.1.vm10.stdout:3/230: rename dc/d14/d27/c44 to dc/d14/d26/d29/c4b 0
2026-03-09T20:47:25.765 INFO:tasks.workunit.client.1.vm10.stdout:9/330: mknod d2/d33/c7a 0
2026-03-09T20:47:25.767 INFO:tasks.workunit.client.0.vm07.stdout:2/371: fsync d2/db/d49/f6b 0
2026-03-09T20:47:25.768 INFO:tasks.workunit.client.0.vm07.stdout:2/372: write d2/d11/f44 [1680839,122434] 0
2026-03-09T20:47:25.786 INFO:tasks.workunit.client.0.vm07.stdout:0/409: dread d1/d2/ff [0,4194304] 0
2026-03-09T20:47:25.787 INFO:tasks.workunit.client.0.vm07.stdout:5/421: fdatasync d5/d69/f82 0
2026-03-09T20:47:25.791 INFO:tasks.workunit.client.0.vm07.stdout:5/422: dwrite d5/df/d13/d3e/d5e/f98 [0,4194304] 0
2026-03-09T20:47:25.794 INFO:tasks.workunit.client.0.vm07.stdout:5/423: chown d5/df/d13/d4f/c53 109252088 1
2026-03-09T20:47:25.805 INFO:tasks.workunit.client.1.vm10.stdout:7/254: dread db/d21/d23/f14 [0,4194304] 0
2026-03-09T20:47:25.806 INFO:tasks.workunit.client.0.vm07.stdout:4/264: creat d2/d1f/f45 x:0 0 0
2026-03-09T20:47:25.812 INFO:tasks.workunit.client.1.vm10.stdout:4/192: dwrite d1/fe [0,4194304] 0
2026-03-09T20:47:25.812 INFO:tasks.workunit.client.0.vm07.stdout:4/265: chown d2/d1f/l29 66451 1
2026-03-09T20:47:25.812 INFO:tasks.workunit.client.0.vm07.stdout:4/266: dwrite d2/d1f/f45 [0,4194304] 0
2026-03-09T20:47:25.815 INFO:tasks.workunit.client.0.vm07.stdout:0/410: dread d1/f31 [0,4194304] 0
2026-03-09T20:47:25.831 INFO:tasks.workunit.client.0.vm07.stdout:8/306: link d1/dc/f4c d1/dc/d14/f64 0
2026-03-09T20:47:25.831 INFO:tasks.workunit.client.0.vm07.stdout:8/307: stat d1/dc/f29 0
2026-03-09T20:47:25.833 INFO:tasks.workunit.client.1.vm10.stdout:8/314: dwrite d0/d22/d2c/f32 [0,4194304] 0
2026-03-09T20:47:25.835 INFO:tasks.workunit.client.1.vm10.stdout:6/294: link d3/d12/d51/f53 d3/d12/d36/d5c/f5f 0
2026-03-09T20:47:25.835 INFO:tasks.workunit.client.1.vm10.stdout:8/315: dread d0/f10 [0,4194304] 0
2026-03-09T20:47:25.836 INFO:tasks.workunit.client.1.vm10.stdout:8/316: stat d0/d22/d2c/c3c 0
2026-03-09T20:47:25.837 INFO:tasks.workunit.client.1.vm10.stdout:1/270: rename d2/da/d25/d3e/l4a to d2/da/d25/d46/l56 0
2026-03-09T20:47:25.837 INFO:tasks.workunit.client.1.vm10.stdout:3/231: symlink dc/d14/d27/l4c 0
2026-03-09T20:47:25.842 INFO:tasks.workunit.client.1.vm10.stdout:1/271: write d2/da/d25/d3e/f53 [968724,120466] 0
2026-03-09T20:47:25.846 INFO:tasks.workunit.client.0.vm07.stdout:2/373: symlink d2/db/d1c/l70 0
2026-03-09T20:47:25.849 INFO:tasks.workunit.client.1.vm10.stdout:3/232: dwrite dc/d14/d27/f3f [0,4194304] 0
2026-03-09T20:47:25.850 INFO:tasks.workunit.client.0.vm07.stdout:5/424: creat d5/df/d13/d4f/f9b x:0 0 0
2026-03-09T20:47:25.852 INFO:tasks.workunit.client.1.vm10.stdout:3/233: write dc/d14/d26/d29/d40/f46 [602846,64337] 0
2026-03-09T20:47:25.858 INFO:tasks.workunit.client.1.vm10.stdout:0/237: rename d2/l19 to d2/d9/da/d11/l52 0
2026-03-09T20:47:25.863 INFO:tasks.workunit.client.0.vm07.stdout:4/267: creat d2/df/d17/f46 x:0 0 0
2026-03-09T20:47:25.863 INFO:tasks.workunit.client.0.vm07.stdout:0/411: unlink d1/d2/dc/c34 0
2026-03-09T20:47:25.863 INFO:tasks.workunit.client.0.vm07.stdout:4/268: dwrite d2/d1f/f26 [4194304,4194304] 0
2026-03-09T20:47:25.863 INFO:tasks.workunit.client.1.vm10.stdout:0/238: dread - d2/d9/da/d35/d30/f51 zero size
2026-03-09T20:47:25.863 INFO:tasks.workunit.client.1.vm10.stdout:0/239: chown d2/d9/da/d11/l45 410 1
2026-03-09T20:47:25.864 INFO:tasks.workunit.client.1.vm10.stdout:6/295: dwrite d3/f40 [0,4194304] 0
2026-03-09T20:47:25.876 INFO:tasks.workunit.client.1.vm10.stdout:1/272: truncate d2/f3c 693082 0
2026-03-09T20:47:25.876 INFO:tasks.workunit.client.1.vm10.stdout:1/273: chown d2/c2c 1425 1
2026-03-09T20:47:25.886 INFO:tasks.workunit.client.0.vm07.stdout:4/269: sync
2026-03-09T20:47:25.886 INFO:tasks.workunit.client.0.vm07.stdout:1/358: write d3/f34 [779131,41004] 0
2026-03-09T20:47:25.887 INFO:tasks.workunit.client.0.vm07.stdout:1/359: dread - d3/d23/d55/f77 zero size
2026-03-09T20:47:25.887 INFO:tasks.workunit.client.1.vm10.stdout:5/252: dwrite d2/d27/d37/f38 [0,4194304] 0
2026-03-09T20:47:25.892 INFO:tasks.workunit.client.1.vm10.stdout:3/234: dread dc/d14/d26/d29/d40/f49 [0,4194304] 0
2026-03-09T20:47:25.892 INFO:tasks.workunit.client.0.vm07.stdout:9/315: dwrite d4/d11/f1a [0,4194304] 0
2026-03-09T20:47:25.896 INFO:tasks.workunit.client.1.vm10.stdout:8/317: dread d0/d22/d2f/d38/f43 [0,4194304] 0
2026-03-09T20:47:25.896 INFO:tasks.workunit.client.0.vm07.stdout:9/316: dwrite d4/d16/d29/d24/f36 [0,4194304] 0
2026-03-09T20:47:25.903 INFO:tasks.workunit.client.1.vm10.stdout:7/255: mkdir db/d54 0
2026-03-09T20:47:25.903 INFO:tasks.workunit.client.0.vm07.stdout:8/308: truncate d1/f1d 3149326 0
2026-03-09T20:47:25.904 INFO:tasks.workunit.client.0.vm07.stdout:8/309: chown d1/dc/fe 1106168 1
2026-03-09T20:47:25.904 INFO:tasks.workunit.client.1.vm10.stdout:7/256: truncate db/d28/d2b/d36/d3b/f42 5221650 0
2026-03-09T20:47:25.904 INFO:tasks.workunit.client.0.vm07.stdout:6/355: getdents d8/d5d 0
2026-03-09T20:47:25.906 INFO:tasks.workunit.client.1.vm10.stdout:4/193: unlink d1/d8/d1c/c37 0
2026-03-09T20:47:25.908 INFO:tasks.workunit.client.0.vm07.stdout:2/374: mknod d2/db/d1c/d4a/c71 0
2026-03-09T20:47:25.912 INFO:tasks.workunit.client.1.vm10.stdout:9/331: link d2/c17 d2/d28/d47/d6a/c7b 0
2026-03-09T20:47:25.913 INFO:tasks.workunit.client.0.vm07.stdout:5/425: mkdir d5/d19/d73/d9c 0
2026-03-09T20:47:25.916 INFO:tasks.workunit.client.1.vm10.stdout:0/240: creat d2/d9/da/f53 x:0 0 0
2026-03-09T20:47:25.917 INFO:tasks.workunit.client.1.vm10.stdout:1/274: write d2/da/f50 [1054294,71839] 0
2026-03-09T20:47:25.919 INFO:tasks.workunit.client.0.vm07.stdout:4/270: creat d2/d1f/d2d/f47 x:0 0 0
2026-03-09T20:47:25.919 INFO:tasks.workunit.client.0.vm07.stdout:4/271: chown d2/df/l32 116 1
2026-03-09T20:47:25.926 INFO:tasks.workunit.client.1.vm10.stdout:5/253: mkdir d2/d27/d37/d46/d5d/d5f/d66 0
2026-03-09T20:47:25.926 INFO:tasks.workunit.client.1.vm10.stdout:3/235: unlink dc/d14/d26/d29/d40/f46 0
2026-03-09T20:47:25.933 INFO:tasks.workunit.client.1.vm10.stdout:7/257: unlink db/d21/d26/f43 0
2026-03-09T20:47:25.933 INFO:tasks.workunit.client.0.vm07.stdout:8/310: rmdir d1/dc/d16 39
2026-03-09T20:47:25.934 INFO:tasks.workunit.client.0.vm07.stdout:8/311: dread - d1/dc/d14/d2f/d53/f5f zero size
2026-03-09T20:47:25.934 INFO:tasks.workunit.client.1.vm10.stdout:7/258: chown db/d28/d2b/d36/d40/f48 13253807 1
2026-03-09T20:47:25.939 INFO:tasks.workunit.client.0.vm07.stdout:6/356: creat d8/d26/d2a/d40/d67/f70 x:0 0 0
2026-03-09T20:47:25.940 INFO:tasks.workunit.client.1.vm10.stdout:9/332: creat d2/d3/de/f7c x:0 0 0
2026-03-09T20:47:25.940 INFO:tasks.workunit.client.0.vm07.stdout:3/346: creat d1/d5/d9/d11/f73 x:0 0 0
2026-03-09T20:47:25.942 INFO:tasks.workunit.client.1.vm10.stdout:7/259: dread db/d28/d2b/d36/d40/f44 [4194304,4194304] 0
2026-03-09T20:47:25.942 INFO:tasks.workunit.client.1.vm10.stdout:6/296: mkdir d3/da/d11/d31/d4c/d60 0
2026-03-09T20:47:25.942 INFO:tasks.workunit.client.0.vm07.stdout:2/375: truncate d2/f2c 2646820 0
2026-03-09T20:47:25.947 INFO:tasks.workunit.client.1.vm10.stdout:7/260: dwrite db/d46/f47 [0,4194304] 0
2026-03-09T20:47:25.949 INFO:tasks.workunit.client.1.vm10.stdout:7/261: read db/d28/d2b/d36/d40/f44 [4800236,88736] 0
2026-03-09T20:47:25.949 INFO:tasks.workunit.client.0.vm07.stdout:4/272: fdatasync d2/f2b 0
2026-03-09T20:47:25.951 INFO:tasks.workunit.client.0.vm07.stdout:7/392: rename d3/c6 to d3/da/c86 0
2026-03-09T20:47:25.959 INFO:tasks.workunit.client.1.vm10.stdout:9/333: dread d2/d3/de/d35/f38 [0,4194304] 0
2026-03-09T20:47:25.961 INFO:tasks.workunit.client.1.vm10.stdout:4/194: mknod d1/d8/d39/c3d 0
2026-03-09T20:47:25.961 INFO:tasks.workunit.client.0.vm07.stdout:2/376: mkdir d2/d46/d72 0
2026-03-09T20:47:25.962 INFO:tasks.workunit.client.0.vm07.stdout:0/412: link d1/d2/dc/l77 d1/d82/l83 0
2026-03-09T20:47:25.970 INFO:tasks.workunit.client.1.vm10.stdout:0/241: rename d2/d9/da/d11/c17 to d2/c54 0
2026-03-09T20:47:25.972 INFO:tasks.workunit.client.0.vm07.stdout:1/360: rename d3/d14/d54/f20 to d3/d23/d52/f79 0
2026-03-09T20:47:25.974 INFO:tasks.workunit.client.1.vm10.stdout:7/262: creat db/d28/d2b/d36/f55 x:0 0 0
2026-03-09T20:47:25.980 INFO:tasks.workunit.client.0.vm07.stdout:6/357: mknod d8/d26/d2a/d40/d69/d4f/c71 0
2026-03-09T20:47:25.981 INFO:tasks.workunit.client.1.vm10.stdout:9/334: creat d2/d33/f7d x:0 0 0
2026-03-09T20:47:25.981 INFO:tasks.workunit.client.0.vm07.stdout:3/347: getdents d1/d5/d9/d2f/d3d/d71 0
2026-03-09T20:47:25.981 INFO:tasks.workunit.client.0.vm07.stdout:3/348: readlink d1/d5/d9/d2f/d3d/d64/d59/l6b 0
2026-03-09T20:47:25.983 INFO:tasks.workunit.client.1.vm10.stdout:4/195: creat d1/d8/d1c/f3e x:0 0 0
2026-03-09T20:47:25.984 INFO:tasks.workunit.client.0.vm07.stdout:4/273: mknod d2/c48 0
2026-03-09T20:47:25.984 INFO:tasks.workunit.client.0.vm07.stdout:4/274: readlink d2/l30 0
2026-03-09T20:47:25.987 INFO:tasks.workunit.client.1.vm10.stdout:3/236: rename dc/dd/l16 to dc/d14/d26/d37/l4d 0
2026-03-09T20:47:25.988 INFO:tasks.workunit.client.1.vm10.stdout:3/237: dread - dc/d14/d20/d21/f41 zero size
2026-03-09T20:47:25.988 INFO:tasks.workunit.client.1.vm10.stdout:3/238: dread - dc/d14/d26/f45 zero size
2026-03-09T20:47:25.989 INFO:tasks.workunit.client.0.vm07.stdout:1/361: creat d3/d23/d55/d56/d60/f7a x:0 0 0
2026-03-09T20:47:25.990 INFO:tasks.workunit.client.0.vm07.stdout:7/393: getdents d3/da/db/d79 0
2026-03-09T20:47:25.998 INFO:tasks.workunit.client.1.vm10.stdout:3/239: dwrite dc/f1f [0,4194304] 0
2026-03-09T20:47:25.999 INFO:tasks.workunit.client.1.vm10.stdout:9/335: unlink d2/d28/f29 0
2026-03-09T20:47:25.999 INFO:tasks.workunit.client.0.vm07.stdout:3/349: rmdir d1/d5/d9 39
2026-03-09T20:47:25.999 INFO:tasks.workunit.client.0.vm07.stdout:2/377: creat d2/db/d1c/d4a/d6c/f73 x:0 0 0
2026-03-09T20:47:25.999 INFO:tasks.workunit.client.0.vm07.stdout:2/378: readlink d2/db/l43 0
2026-03-09T20:47:26.005 INFO:tasks.workunit.client.1.vm10.stdout:5/254: link d2/d39/d4b/c50 d2/d1b/c67 0
2026-03-09T20:47:26.013 INFO:tasks.workunit.client.0.vm07.stdout:1/362: creat d3/d23/d55/f7b x:0 0 0
2026-03-09T20:47:26.015 INFO:tasks.workunit.client.0.vm07.stdout:7/394: symlink d3/da/db/d14/d1f/d2b/l87 0
2026-03-09T20:47:26.017 INFO:tasks.workunit.client.1.vm10.stdout:9/336: mknod d2/d28/d47/c7e 0
2026-03-09T20:47:26.017 INFO:tasks.workunit.client.1.vm10.stdout:9/337: write d2/f6 [3951186,114523] 0
2026-03-09T20:47:26.018 INFO:tasks.workunit.client.1.vm10.stdout:9/338: chown d2/d33/d37/l53 108133708 1
2026-03-09T20:47:26.018 INFO:tasks.workunit.client.0.vm07.stdout:7/395: dwrite d3/f4f [0,4194304] 0
2026-03-09T20:47:26.018 INFO:tasks.workunit.client.1.vm10.stdout:9/339: chown d2/d33/c7a 2223130 1
2026-03-09T20:47:26.020 INFO:tasks.workunit.client.0.vm07.stdout:6/358: mknod d8/d16/d22/d24/d2b/c72 0
2026-03-09T20:47:26.030 INFO:tasks.workunit.client.1.vm10.stdout:3/240: symlink dc/d14/d26/d29/d2a/l4e 0
2026-03-09T20:47:26.032 INFO:tasks.workunit.client.0.vm07.stdout:3/350: stat d1/l6 0
2026-03-09T20:47:26.036 INFO:tasks.workunit.client.1.vm10.stdout:6/297: getdents d3 0
2026-03-09T20:47:26.039 INFO:tasks.workunit.client.1.vm10.stdout:6/298: stat d3/d12/d51 0
2026-03-09T20:47:26.039 INFO:tasks.workunit.client.1.vm10.stdout:6/299: truncate d3/d12/d36/f56 879437 0
2026-03-09T20:47:26.039 INFO:tasks.workunit.client.1.vm10.stdout:4/196: rename d1/d8/f11 to d1/d8/d1c/f3f 0
2026-03-09T20:47:26.040 INFO:tasks.workunit.client.1.vm10.stdout:6/300: dwrite d3/d12/f2b [4194304,4194304] 0
2026-03-09T20:47:26.056 INFO:tasks.workunit.client.0.vm07.stdout:7/396: sync
2026-03-09T20:47:26.058 INFO:tasks.workunit.client.1.vm10.stdout:3/241: unlink dc/dd/f18 0
2026-03-09T20:47:26.060 INFO:tasks.workunit.client.0.vm07.stdout:6/359: creat d8/d16/d22/d33/f73 x:0 0 0
2026-03-09T20:47:26.061 INFO:tasks.workunit.client.1.vm10.stdout:3/242: dwrite dc/dd/f23 [0,4194304] 0
2026-03-09T20:47:26.065 INFO:tasks.workunit.client.1.vm10.stdout:3/243: dwrite dc/f1f [0,4194304] 0
2026-03-09T20:47:26.066 INFO:tasks.workunit.client.0.vm07.stdout:3/351: dread d1/d5/d9/d11/f4d [0,4194304] 0
2026-03-09T20:47:26.087 INFO:tasks.workunit.client.1.vm10.stdout:6/301: rename d3/d30/d33/c5d to d3/d12/d36/c61 0
2026-03-09T20:47:26.090 INFO:tasks.workunit.client.1.vm10.stdout:2/303: truncate d5/d18/d2d/f31 943777 0
2026-03-09T20:47:26.100 INFO:tasks.workunit.client.0.vm07.stdout:5/426: rename d5/df/d13/d3e/d47/c59 to d5/df/c9d 0
2026-03-09T20:47:26.102 INFO:tasks.workunit.client.1.vm10.stdout:7/263: getdents db/d21 0
2026-03-09T20:47:26.104 INFO:tasks.workunit.client.1.vm10.stdout:9/340: creat d2/d28/d47/d6a/f7f x:0 0 0
2026-03-09T20:47:26.104 INFO:tasks.workunit.client.1.vm10.stdout:9/341: chown d2/d3/c15 4919 1
2026-03-09T20:47:26.107 INFO:tasks.workunit.client.0.vm07.stdout:9/317: write d4/d16/d29/f4a [523590,53454] 0
2026-03-09T20:47:26.110 INFO:tasks.workunit.client.1.vm10.stdout:8/318: dwrite d0/d22/d2f/d3d/f49 [0,4194304] 0
2026-03-09T20:47:26.130 INFO:tasks.workunit.client.1.vm10.stdout:6/302: mknod d3/da/d11/d26/d5b/c62 0
2026-03-09T20:47:26.133 INFO:tasks.workunit.client.1.vm10.stdout:6/303: dwrite d3/d30/d33/f35 [0,4194304] 0
2026-03-09T20:47:26.134 INFO:tasks.workunit.client.1.vm10.stdout:2/304: rename d5/d18/d27/d38/f55 to d5/d18/d2d/f60 0
2026-03-09T20:47:26.138 INFO:tasks.workunit.client.0.vm07.stdout:8/312: truncate d1/f33 2922497 0
2026-03-09T20:47:26.140 INFO:tasks.workunit.client.1.vm10.stdout:2/305: dwrite d5/fa [0,4194304] 0
2026-03-09T20:47:26.143 INFO:tasks.workunit.client.1.vm10.stdout:6/304: dwrite d3/d12/d36/f56 [0,4194304] 0
2026-03-09T20:47:26.150 INFO:tasks.workunit.client.1.vm10.stdout:7/264: fdatasync db/d21/f27 0
2026-03-09T20:47:26.154 INFO:tasks.workunit.client.1.vm10.stdout:2/306: dwrite d5/fa [0,4194304] 0
2026-03-09T20:47:26.155 INFO:tasks.workunit.client.1.vm10.stdout:2/307: dread - d5/d2b/d32/f5c zero size
2026-03-09T20:47:26.155 INFO:tasks.workunit.client.1.vm10.stdout:2/308: stat d5/d18/f1f 0
2026-03-09T20:47:26.165 INFO:tasks.workunit.client.1.vm10.stdout:0/242: dwrite d2/d9/f20 [0,4194304] 0
2026-03-09T20:47:26.169 INFO:tasks.workunit.client.1.vm10.stdout:0/243: dread - d2/d9/da/d35/f3a zero size
2026-03-09T20:47:26.183 INFO:tasks.workunit.client.1.vm10.stdout:3/244: creat dc/d14/d20/d21/d3b/f4f x:0 0 0
2026-03-09T20:47:26.189 INFO:tasks.workunit.client.1.vm10.stdout:3/245: dwrite dc/d14/f1a [0,4194304] 0
2026-03-09T20:47:26.193 INFO:tasks.workunit.client.1.vm10.stdout:9/342: dread d2/d33/d37/f4c [0,4194304] 0
2026-03-09T20:47:26.194 INFO:tasks.workunit.client.1.vm10.stdout:9/343: write d2/d33/f76 [137707,106373] 0
2026-03-09T20:47:26.207 INFO:tasks.workunit.client.0.vm07.stdout:0/413: truncate d1/d1f/d20/f2c 1617777 0
2026-03-09T20:47:26.220 INFO:tasks.workunit.client.1.vm10.stdout:5/255: dwrite d2/d1b/f2f [0,4194304] 0
2026-03-09T20:47:26.222 INFO:tasks.workunit.client.0.vm07.stdout:3/352: write d1/d5/d9/f33 [2319106,119154] 0
2026-03-09T20:47:26.227 INFO:tasks.workunit.client.0.vm07.stdout:2/379: creat d2/db/f74 x:0 0 0
2026-03-09T20:47:26.228 INFO:tasks.workunit.client.0.vm07.stdout:2/380: stat d2/db/d49/f6b 0
2026-03-09T20:47:26.249 INFO:tasks.workunit.client.0.vm07.stdout:5/427: fsync d5/df/d13/f17 0
2026-03-09T20:47:26.253 INFO:tasks.workunit.client.0.vm07.stdout:5/428: dwrite d5/df/d13/d30/d56/f84 [0,4194304] 0
2026-03-09T20:47:26.256 INFO:tasks.workunit.client.0.vm07.stdout:5/429:
fdatasync d5/df/d13/d55/f5f 0 2026-03-09T20:47:26.269 INFO:tasks.workunit.client.1.vm10.stdout:1/275: dread d2/da/f50 [0,4194304] 0 2026-03-09T20:47:26.269 INFO:tasks.workunit.client.1.vm10.stdout:2/309: mkdir d5/d18/d27/d38/d61 0 2026-03-09T20:47:26.280 INFO:tasks.workunit.client.0.vm07.stdout:0/414: dread d1/f31 [0,4194304] 0 2026-03-09T20:47:26.285 INFO:tasks.workunit.client.0.vm07.stdout:0/415: dwrite d1/d1f/f7c [0,4194304] 0 2026-03-09T20:47:26.294 INFO:tasks.workunit.client.1.vm10.stdout:0/244: write d2/db/f38 [3652250,4454] 0 2026-03-09T20:47:26.297 INFO:tasks.workunit.client.0.vm07.stdout:3/353: creat d1/d5/d9/d2f/d3d/f74 x:0 0 0 2026-03-09T20:47:26.316 INFO:tasks.workunit.client.0.vm07.stdout:6/360: dwrite d8/f52 [0,4194304] 0 2026-03-09T20:47:26.327 INFO:tasks.workunit.client.0.vm07.stdout:9/318: write d4/d8/dc/d15/f57 [286139,5901] 0 2026-03-09T20:47:26.330 INFO:tasks.workunit.client.1.vm10.stdout:3/246: truncate dc/d14/d20/d2e/f32 4403211 0 2026-03-09T20:47:26.339 INFO:tasks.workunit.client.0.vm07.stdout:2/381: unlink d2/d11/c2f 0 2026-03-09T20:47:26.339 INFO:tasks.workunit.client.0.vm07.stdout:5/430: rmdir d5/d33/d39 39 2026-03-09T20:47:26.339 INFO:tasks.workunit.client.0.vm07.stdout:1/363: getdents d3/d14/d54 0 2026-03-09T20:47:26.339 INFO:tasks.workunit.client.1.vm10.stdout:3/247: chown dc/d14/d20/d2e 4 1 2026-03-09T20:47:26.339 INFO:tasks.workunit.client.1.vm10.stdout:3/248: fdatasync dc/dd/f15 0 2026-03-09T20:47:26.340 INFO:tasks.workunit.client.1.vm10.stdout:9/344: creat d2/d3/de/f80 x:0 0 0 2026-03-09T20:47:26.340 INFO:tasks.workunit.client.1.vm10.stdout:5/256: fdatasync d2/d27/f2a 0 2026-03-09T20:47:26.340 INFO:tasks.workunit.client.1.vm10.stdout:6/305: creat d3/da/d11/d31/d4c/d60/f63 x:0 0 0 2026-03-09T20:47:26.343 INFO:tasks.workunit.client.1.vm10.stdout:7/265: mknod db/d28/d4d/c56 0 2026-03-09T20:47:26.347 INFO:tasks.workunit.client.0.vm07.stdout:5/431: dread d5/df/d13/f17 [0,4194304] 0 2026-03-09T20:47:26.349 
INFO:tasks.workunit.client.0.vm07.stdout:5/432: dread d5/d19/f4d [0,4194304] 0 2026-03-09T20:47:26.359 INFO:tasks.workunit.client.0.vm07.stdout:5/433: dread d5/d33/f5a [0,4194304] 0 2026-03-09T20:47:26.364 INFO:tasks.workunit.client.1.vm10.stdout:1/276: creat d2/da/d25/d3e/d42/f57 x:0 0 0 2026-03-09T20:47:26.364 INFO:tasks.workunit.client.1.vm10.stdout:8/319: truncate d0/d22/d2f/d3d/f49 2395097 0 2026-03-09T20:47:26.365 INFO:tasks.workunit.client.0.vm07.stdout:2/382: sync 2026-03-09T20:47:26.383 INFO:tasks.workunit.client.1.vm10.stdout:2/310: rename d5/d18/d27/d38/c48 to d5/d18/d27/d38/d61/c62 0 2026-03-09T20:47:26.387 INFO:tasks.workunit.client.1.vm10.stdout:4/197: truncate d1/f26 3378902 0 2026-03-09T20:47:26.395 INFO:tasks.workunit.client.0.vm07.stdout:0/416: dread d1/d2/f14 [0,4194304] 0 2026-03-09T20:47:26.395 INFO:tasks.workunit.client.1.vm10.stdout:2/311: dread d5/d18/f2c [0,4194304] 0 2026-03-09T20:47:26.396 INFO:tasks.workunit.client.0.vm07.stdout:0/417: write d1/d2/f5e [1188683,126080] 0 2026-03-09T20:47:26.406 INFO:tasks.workunit.client.0.vm07.stdout:7/397: creat d3/f88 x:0 0 0 2026-03-09T20:47:26.408 INFO:tasks.workunit.client.1.vm10.stdout:3/249: creat dc/d14/d20/d21/f50 x:0 0 0 2026-03-09T20:47:26.408 INFO:tasks.workunit.client.0.vm07.stdout:3/354: creat d1/d5/d9/d2f/d3d/f75 x:0 0 0 2026-03-09T20:47:26.410 INFO:tasks.workunit.client.1.vm10.stdout:5/257: readlink d2/d1b/l53 0 2026-03-09T20:47:26.411 INFO:tasks.workunit.client.0.vm07.stdout:9/319: dread - d4/d16/d29/f64 zero size 2026-03-09T20:47:26.411 INFO:tasks.workunit.client.1.vm10.stdout:5/258: write d2/d1b/f5c [248547,36065] 0 2026-03-09T20:47:26.415 INFO:tasks.workunit.client.1.vm10.stdout:6/306: creat d3/d12/d36/f64 x:0 0 0 2026-03-09T20:47:26.417 INFO:tasks.workunit.client.0.vm07.stdout:4/275: rename d2/f1d to d2/df/f49 0 2026-03-09T20:47:26.419 INFO:tasks.workunit.client.1.vm10.stdout:7/266: rmdir db/d1f 39 2026-03-09T20:47:26.419 INFO:tasks.workunit.client.1.vm10.stdout:7/267: fdatasync fa 0 
2026-03-09T20:47:26.419 INFO:tasks.workunit.client.1.vm10.stdout:7/268: chown db/d28 19 1 2026-03-09T20:47:26.420 INFO:tasks.workunit.client.0.vm07.stdout:1/364: creat d3/d23/d55/d56/d60/f7c x:0 0 0 2026-03-09T20:47:26.428 INFO:tasks.workunit.client.1.vm10.stdout:7/269: dread db/d21/d23/f29 [0,4194304] 0 2026-03-09T20:47:26.428 INFO:tasks.workunit.client.0.vm07.stdout:5/434: read d5/d50/f52 [4045773,68035] 0 2026-03-09T20:47:26.428 INFO:tasks.workunit.client.1.vm10.stdout:1/277: dwrite d2/da/f3d [0,4194304] 0 2026-03-09T20:47:26.438 INFO:tasks.workunit.client.1.vm10.stdout:1/278: dread d2/f14 [0,4194304] 0 2026-03-09T20:47:26.439 INFO:tasks.workunit.client.1.vm10.stdout:1/279: write d2/da/d25/d3e/d42/f57 [6372,116414] 0 2026-03-09T20:47:26.503 INFO:tasks.workunit.client.0.vm07.stdout:8/313: mknod d1/dc/d16/c65 0 2026-03-09T20:47:26.509 INFO:tasks.workunit.client.1.vm10.stdout:4/198: mknod d1/d8/d1c/d2b/c40 0 2026-03-09T20:47:26.525 INFO:tasks.workunit.client.1.vm10.stdout:3/250: rmdir dc/d14/d20/d21 39 2026-03-09T20:47:26.538 INFO:tasks.workunit.client.0.vm07.stdout:4/276: fdatasync d2/f19 0 2026-03-09T20:47:26.539 INFO:tasks.workunit.client.0.vm07.stdout:4/277: write d2/fa [4597051,78544] 0 2026-03-09T20:47:26.550 INFO:tasks.workunit.client.1.vm10.stdout:8/320: read - d0/d22/d25/d2e/d41/d47/f5a zero size 2026-03-09T20:47:26.555 INFO:tasks.workunit.client.0.vm07.stdout:5/435: dread d5/df/d13/f38 [0,4194304] 0 2026-03-09T20:47:26.566 INFO:tasks.workunit.client.0.vm07.stdout:0/418: creat d1/d1f/d53/f84 x:0 0 0 2026-03-09T20:47:26.570 INFO:tasks.workunit.client.0.vm07.stdout:7/398: symlink d3/da/db/d14/d1f/d50/l89 0 2026-03-09T20:47:26.574 INFO:tasks.workunit.client.1.vm10.stdout:1/280: creat d2/da/d25/d3e/f58 x:0 0 0 2026-03-09T20:47:26.583 INFO:tasks.workunit.client.1.vm10.stdout:4/199: mkdir d1/d8/d1c/d41 0 2026-03-09T20:47:26.591 INFO:tasks.workunit.client.1.vm10.stdout:3/251: creat dc/d14/d26/d29/f51 x:0 0 0 2026-03-09T20:47:26.598 
INFO:tasks.workunit.client.1.vm10.stdout:5/259: mknod d2/d27/d37/d46/d5d/d5f/d66/c68 0 2026-03-09T20:47:26.602 INFO:tasks.workunit.client.0.vm07.stdout:4/278: mkdir d2/d1f/d2d/d3f/d4a 0 2026-03-09T20:47:26.612 INFO:tasks.workunit.client.0.vm07.stdout:4/279: dread d2/d1f/f25 [0,4194304] 0 2026-03-09T20:47:26.622 INFO:tasks.workunit.client.1.vm10.stdout:1/281: creat d2/f59 x:0 0 0 2026-03-09T20:47:26.628 INFO:tasks.workunit.client.1.vm10.stdout:4/200: creat d1/d8/d1b/f42 x:0 0 0 2026-03-09T20:47:26.631 INFO:tasks.workunit.client.0.vm07.stdout:7/399: chown d3/da/db/d14/d1f/c6a 396369 1 2026-03-09T20:47:26.632 INFO:tasks.workunit.client.0.vm07.stdout:7/400: fdatasync d3/da/f11 0 2026-03-09T20:47:26.639 INFO:tasks.workunit.client.1.vm10.stdout:3/252: dread - dc/d14/d20/d21/f41 zero size 2026-03-09T20:47:26.644 INFO:tasks.workunit.client.1.vm10.stdout:9/345: getdents d2/d3 0 2026-03-09T20:47:26.644 INFO:tasks.workunit.client.1.vm10.stdout:9/346: stat d2/d3/de/f80 0 2026-03-09T20:47:26.645 INFO:tasks.workunit.client.1.vm10.stdout:2/312: creat d5/d18/f63 x:0 0 0 2026-03-09T20:47:26.647 INFO:tasks.workunit.client.0.vm07.stdout:1/365: creat d3/f7d x:0 0 0 2026-03-09T20:47:26.647 INFO:tasks.workunit.client.0.vm07.stdout:1/366: fsync d3/f9 0 2026-03-09T20:47:26.647 INFO:tasks.workunit.client.0.vm07.stdout:1/367: chown d3/d23/c6d 7 1 2026-03-09T20:47:26.648 INFO:tasks.workunit.client.0.vm07.stdout:1/368: write d3/d14/d54/d3e/f59 [986934,15818] 0 2026-03-09T20:47:26.649 INFO:tasks.workunit.client.0.vm07.stdout:9/320: write d4/d8/f1c [289162,124436] 0 2026-03-09T20:47:26.650 INFO:tasks.workunit.client.0.vm07.stdout:9/321: chown d4/d16/d29/d24/f6a 213838 1 2026-03-09T20:47:26.656 INFO:tasks.workunit.client.0.vm07.stdout:8/314: dwrite d1/dc/f4c [0,4194304] 0 2026-03-09T20:47:26.668 INFO:tasks.workunit.client.1.vm10.stdout:5/260: rename d2/d3d to d2/d27/d37/d46/d5d/d5f/d69 0 2026-03-09T20:47:26.671 INFO:tasks.workunit.client.1.vm10.stdout:5/261: dread d2/d1b/f2f [0,4194304] 0 
2026-03-09T20:47:26.674 INFO:tasks.workunit.client.1.vm10.stdout:3/253: sync 2026-03-09T20:47:26.677 INFO:tasks.workunit.client.1.vm10.stdout:6/307: creat d3/da/d11/f65 x:0 0 0 2026-03-09T20:47:26.677 INFO:tasks.workunit.client.0.vm07.stdout:5/436: mknod d5/d19/d73/d97/c9e 0 2026-03-09T20:47:26.677 INFO:tasks.workunit.client.0.vm07.stdout:5/437: fsync d5/d69/f82 0 2026-03-09T20:47:26.677 INFO:tasks.workunit.client.0.vm07.stdout:2/383: getdents d2 0 2026-03-09T20:47:26.678 INFO:tasks.workunit.client.0.vm07.stdout:2/384: write d2/f40 [4953431,69777] 0 2026-03-09T20:47:26.682 INFO:tasks.workunit.client.0.vm07.stdout:2/385: dwrite d2/d11/d56/f5a [0,4194304] 0 2026-03-09T20:47:26.683 INFO:tasks.workunit.client.0.vm07.stdout:2/386: truncate d2/f63 328168 0 2026-03-09T20:47:26.694 INFO:tasks.workunit.client.0.vm07.stdout:7/401: mknod d3/da/db/d32/d3e/c8a 0 2026-03-09T20:47:26.694 INFO:tasks.workunit.client.0.vm07.stdout:7/402: fdatasync d3/d58/f60 0 2026-03-09T20:47:26.696 INFO:tasks.workunit.client.1.vm10.stdout:0/245: getdents d2/d9/da/de/d1a/d25 0 2026-03-09T20:47:26.696 INFO:tasks.workunit.client.0.vm07.stdout:6/361: truncate d8/f46 576400 0 2026-03-09T20:47:26.699 INFO:tasks.workunit.client.1.vm10.stdout:1/282: unlink d2/da/d25/d3e/f53 0 2026-03-09T20:47:26.700 INFO:tasks.workunit.client.0.vm07.stdout:3/355: unlink d1/f6e 0 2026-03-09T20:47:26.713 INFO:tasks.workunit.client.0.vm07.stdout:3/356: dread d1/d5/d9/d2f/d34/f5c [0,4194304] 0 2026-03-09T20:47:26.715 INFO:tasks.workunit.client.1.vm10.stdout:8/321: write d0/d22/f35 [3808967,99728] 0 2026-03-09T20:47:26.717 INFO:tasks.workunit.client.0.vm07.stdout:4/280: mkdir d2/d1f/d2d/d3f/d4a/d4b 0 2026-03-09T20:47:26.719 INFO:tasks.workunit.client.1.vm10.stdout:7/270: dwrite f3 [0,4194304] 0 2026-03-09T20:47:26.722 INFO:tasks.workunit.client.1.vm10.stdout:8/322: dread d0/d22/d2c/f3f [0,4194304] 0 2026-03-09T20:47:26.727 INFO:tasks.workunit.client.0.vm07.stdout:8/315: fsync d1/dc/d16/d31/f52 0 2026-03-09T20:47:26.730 
INFO:tasks.workunit.client.0.vm07.stdout:5/438: rename d5/df/d13/c58 to d5/d33/d75/c9f 0 2026-03-09T20:47:26.755 INFO:tasks.workunit.client.0.vm07.stdout:2/387: dwrite d2/f7 [0,4194304] 0 2026-03-09T20:47:26.760 INFO:tasks.workunit.client.0.vm07.stdout:4/281: read d2/df/f49 [3069617,70599] 0 2026-03-09T20:47:26.762 INFO:tasks.workunit.client.1.vm10.stdout:9/347: creat d2/d28/d47/d67/f81 x:0 0 0 2026-03-09T20:47:26.771 INFO:tasks.workunit.client.1.vm10.stdout:2/313: symlink d5/d18/d1b/d22/l64 0 2026-03-09T20:47:26.775 INFO:tasks.workunit.client.1.vm10.stdout:2/314: truncate d5/d18/d1b/d22/f4f 540002 0 2026-03-09T20:47:26.775 INFO:tasks.workunit.client.1.vm10.stdout:2/315: dread - d5/d18/d27/d28/f5a zero size 2026-03-09T20:47:26.776 INFO:tasks.workunit.client.0.vm07.stdout:5/439: symlink d5/d33/d3b/la0 0 2026-03-09T20:47:26.779 INFO:tasks.workunit.client.0.vm07.stdout:5/440: dread d5/df/d13/d6c/f77 [0,4194304] 0 2026-03-09T20:47:26.779 INFO:tasks.workunit.client.0.vm07.stdout:5/441: write d5/df/f2b [1116960,55209] 0 2026-03-09T20:47:26.785 INFO:tasks.workunit.client.1.vm10.stdout:5/262: creat d2/d27/d37/d46/d5d/d5f/f6a x:0 0 0 2026-03-09T20:47:26.792 INFO:tasks.workunit.client.0.vm07.stdout:7/403: rename d3/da/db/l3c to d3/da/d53/l8b 0 2026-03-09T20:47:26.792 INFO:tasks.workunit.client.0.vm07.stdout:7/404: chown d3/da/db/d14/f6d 73791 1 2026-03-09T20:47:26.792 INFO:tasks.workunit.client.0.vm07.stdout:7/405: chown d3/da/db/d14/d1f/f5d 899 1 2026-03-09T20:47:26.793 INFO:tasks.workunit.client.1.vm10.stdout:3/254: dwrite dc/d14/d20/d21/f41 [0,4194304] 0 2026-03-09T20:47:26.794 INFO:tasks.workunit.client.1.vm10.stdout:6/308: rmdir d3/da/d11/d31/d47 39 2026-03-09T20:47:26.799 INFO:tasks.workunit.client.0.vm07.stdout:0/419: getdents d1/d1f/d53 0 2026-03-09T20:47:26.803 INFO:tasks.workunit.client.0.vm07.stdout:6/362: mknod d8/c74 0 2026-03-09T20:47:26.808 INFO:tasks.workunit.client.1.vm10.stdout:1/283: rename d2/da/d25/l30 to d2/da/d25/d3e/l5a 0 2026-03-09T20:47:26.822 
INFO:tasks.workunit.client.0.vm07.stdout:3/357: mkdir d1/d5/d9/d2f/d3d/d71/d76 0 2026-03-09T20:47:26.826 INFO:tasks.workunit.client.0.vm07.stdout:3/358: dwrite d1/d5/d9/d2f/d3d/d64/d59/f69 [0,4194304] 0 2026-03-09T20:47:26.827 INFO:tasks.workunit.client.0.vm07.stdout:2/388: sync 2026-03-09T20:47:26.828 INFO:tasks.workunit.client.0.vm07.stdout:3/359: write d1/d5/d9/d2f/d3d/d64/f30 [1984174,113331] 0 2026-03-09T20:47:26.842 INFO:tasks.workunit.client.0.vm07.stdout:8/316: write d1/f13 [4340373,98817] 0 2026-03-09T20:47:26.848 INFO:tasks.workunit.client.1.vm10.stdout:4/201: write d1/f26 [321713,46949] 0 2026-03-09T20:47:26.852 INFO:tasks.workunit.client.0.vm07.stdout:5/442: dread - d5/d33/d39/d8d/f8e zero size 2026-03-09T20:47:26.852 INFO:tasks.workunit.client.1.vm10.stdout:8/323: symlink d0/d22/d2f/d38/l62 0 2026-03-09T20:47:26.861 INFO:tasks.workunit.client.0.vm07.stdout:7/406: mknod d3/da/db/c8c 0 2026-03-09T20:47:26.865 INFO:tasks.workunit.client.1.vm10.stdout:2/316: dread d5/fb [0,4194304] 0 2026-03-09T20:47:26.866 INFO:tasks.workunit.client.0.vm07.stdout:0/420: symlink d1/d2/d33/l85 0 2026-03-09T20:47:26.866 INFO:tasks.workunit.client.0.vm07.stdout:0/421: readlink d1/d2/l5f 0 2026-03-09T20:47:26.870 INFO:tasks.workunit.client.1.vm10.stdout:5/263: symlink d2/d27/d37/d46/d5d/l6b 0 2026-03-09T20:47:26.870 INFO:tasks.workunit.client.1.vm10.stdout:5/264: fdatasync d2/fd 0 2026-03-09T20:47:26.871 INFO:tasks.workunit.client.0.vm07.stdout:6/363: write d8/d26/d2a/d40/d69/f62 [772632,112963] 0 2026-03-09T20:47:26.871 INFO:tasks.workunit.client.1.vm10.stdout:5/265: write f1 [3721081,10551] 0 2026-03-09T20:47:26.880 INFO:tasks.workunit.client.0.vm07.stdout:1/369: getdents d3/d66 0 2026-03-09T20:47:26.880 INFO:tasks.workunit.client.0.vm07.stdout:1/370: fsync d3/d23/d55/f77 0 2026-03-09T20:47:26.883 INFO:tasks.workunit.client.0.vm07.stdout:1/371: dwrite d3/d14/d54/d3e/f75 [0,4194304] 0 2026-03-09T20:47:26.889 INFO:tasks.workunit.client.0.vm07.stdout:9/322: link d4/d8/dc/c5a 
d4/d16/c75 0 2026-03-09T20:47:26.889 INFO:tasks.workunit.client.0.vm07.stdout:9/323: stat d4/d11/d31 0 2026-03-09T20:47:26.889 INFO:tasks.workunit.client.1.vm10.stdout:3/255: mknod dc/d14/d26/d29/c52 0 2026-03-09T20:47:26.890 INFO:tasks.workunit.client.0.vm07.stdout:9/324: write d4/d16/d29/d24/d37/f71 [302760,101367] 0 2026-03-09T20:47:26.893 INFO:tasks.workunit.client.1.vm10.stdout:0/246: mknod d2/d4e/c55 0 2026-03-09T20:47:26.894 INFO:tasks.workunit.client.1.vm10.stdout:1/284: truncate d2/f1a 694482 0 2026-03-09T20:47:26.907 INFO:tasks.workunit.client.1.vm10.stdout:4/202: fdatasync d1/d2/d3/f31 0 2026-03-09T20:47:26.908 INFO:tasks.workunit.client.0.vm07.stdout:8/317: dwrite d1/dc/fd [0,4194304] 0 2026-03-09T20:47:26.910 INFO:tasks.workunit.client.1.vm10.stdout:0/247: sync 2026-03-09T20:47:26.921 INFO:tasks.workunit.client.1.vm10.stdout:7/271: link db/d28/d2b/d36/f35 db/d54/f57 0 2026-03-09T20:47:26.921 INFO:tasks.workunit.client.0.vm07.stdout:5/443: symlink d5/d33/d39/d8d/la1 0 2026-03-09T20:47:26.925 INFO:tasks.workunit.client.1.vm10.stdout:2/317: creat d5/d18/d2d/d47/f65 x:0 0 0 2026-03-09T20:47:26.937 INFO:tasks.workunit.client.0.vm07.stdout:1/372: creat d3/d66/f7e x:0 0 0 2026-03-09T20:47:26.937 INFO:tasks.workunit.client.0.vm07.stdout:9/325: fsync d4/d11/d23/f2f 0 2026-03-09T20:47:26.937 INFO:tasks.workunit.client.0.vm07.stdout:2/389: creat d2/db/d28/d57/f75 x:0 0 0 2026-03-09T20:47:26.937 INFO:tasks.workunit.client.1.vm10.stdout:2/318: fdatasync d5/d18/d2d/f3e 0 2026-03-09T20:47:26.937 INFO:tasks.workunit.client.1.vm10.stdout:5/266: mkdir d2/d58/d6c 0 2026-03-09T20:47:26.937 INFO:tasks.workunit.client.1.vm10.stdout:5/267: truncate d2/f3c 4201857 0 2026-03-09T20:47:26.937 INFO:tasks.workunit.client.1.vm10.stdout:6/309: symlink d3/da/d11/l66 0 2026-03-09T20:47:26.937 INFO:tasks.workunit.client.1.vm10.stdout:6/310: chown d3/d30/d33 21720584 1 2026-03-09T20:47:26.937 INFO:tasks.workunit.client.1.vm10.stdout:3/256: symlink dc/d14/d26/d29/d2a/l53 0 
2026-03-09T20:47:26.938 INFO:tasks.workunit.client.1.vm10.stdout:3/257: read dc/d14/d26/d29/f30 [39346,113916] 0 2026-03-09T20:47:26.939 INFO:tasks.workunit.client.1.vm10.stdout:6/311: dwrite d3/d12/f2b [4194304,4194304] 0 2026-03-09T20:47:26.939 INFO:tasks.workunit.client.0.vm07.stdout:2/390: dwrite d2/db/d28/d57/f68 [0,4194304] 0 2026-03-09T20:47:26.942 INFO:tasks.workunit.client.1.vm10.stdout:1/285: rename d2/da/d25/c36 to d2/da/d25/d46/d51/c5b 0 2026-03-09T20:47:26.944 INFO:tasks.workunit.client.1.vm10.stdout:1/286: write d2/da/d25/f40 [1387879,119556] 0 2026-03-09T20:47:26.954 INFO:tasks.workunit.client.0.vm07.stdout:8/318: dread d1/f33 [0,4194304] 0 2026-03-09T20:47:26.955 INFO:tasks.workunit.client.0.vm07.stdout:4/282: link d2/d1f/f26 d2/f4c 0 2026-03-09T20:47:26.955 INFO:tasks.workunit.client.0.vm07.stdout:4/283: stat d2/f5 0 2026-03-09T20:47:26.955 INFO:tasks.workunit.client.0.vm07.stdout:4/284: stat d2/df/d17/f37 0 2026-03-09T20:47:26.965 INFO:tasks.workunit.client.1.vm10.stdout:0/248: rmdir d2/d9/da/de/d1a 39 2026-03-09T20:47:26.967 INFO:tasks.workunit.client.0.vm07.stdout:7/407: symlink d3/l8d 0 2026-03-09T20:47:26.968 INFO:tasks.workunit.client.0.vm07.stdout:7/408: write d3/da/db/d14/f6d [846962,83799] 0 2026-03-09T20:47:26.974 INFO:tasks.workunit.client.0.vm07.stdout:8/319: dread d1/dc/f29 [0,4194304] 0 2026-03-09T20:47:26.975 INFO:tasks.workunit.client.0.vm07.stdout:8/320: readlink d1/dc/d14/d2f/d53/l5c 0 2026-03-09T20:47:26.976 INFO:tasks.workunit.client.0.vm07.stdout:8/321: chown d1/dc/d14/d2f/d53 56576508 1 2026-03-09T20:47:26.977 INFO:tasks.workunit.client.0.vm07.stdout:7/409: dread d3/f3f [0,4194304] 0 2026-03-09T20:47:26.978 INFO:tasks.workunit.client.0.vm07.stdout:0/422: creat d1/d82/f86 x:0 0 0 2026-03-09T20:47:26.984 INFO:tasks.workunit.client.0.vm07.stdout:6/364: fdatasync d8/d26/d2a/f37 0 2026-03-09T20:47:26.986 INFO:tasks.workunit.client.0.vm07.stdout:6/365: dwrite d8/d26/d2a/d40/f65 [0,4194304] 0 2026-03-09T20:47:26.992 
INFO:tasks.workunit.client.1.vm10.stdout:8/324: mkdir d0/d22/d25/d2e/d41/d47/d63 0 2026-03-09T20:47:26.995 INFO:tasks.workunit.client.0.vm07.stdout:1/373: fdatasync d3/d14/f33 0 2026-03-09T20:47:26.997 INFO:tasks.workunit.client.0.vm07.stdout:8/322: sync 2026-03-09T20:47:26.997 INFO:tasks.workunit.client.0.vm07.stdout:8/323: chown d1/dc/d16/d31/f52 120 1 2026-03-09T20:47:27.000 INFO:tasks.workunit.client.1.vm10.stdout:2/319: symlink d5/d2b/l66 0 2026-03-09T20:47:27.025 INFO:tasks.workunit.client.1.vm10.stdout:5/268: write d2/f35 [72421,92866] 0 2026-03-09T20:47:27.027 INFO:tasks.workunit.client.0.vm07.stdout:9/326: dwrite d4/d8/dc/d4e/f53 [0,4194304] 0 2026-03-09T20:47:27.027 INFO:tasks.workunit.client.0.vm07.stdout:9/327: readlink d4/d8/l3c 0 2026-03-09T20:47:27.031 INFO:tasks.workunit.client.0.vm07.stdout:2/391: write d2/f33 [1579136,39132] 0 2026-03-09T20:47:27.031 INFO:tasks.workunit.client.0.vm07.stdout:2/392: chown d2/db/f67 18852203 1 2026-03-09T20:47:27.033 INFO:tasks.workunit.client.1.vm10.stdout:6/312: mkdir d3/d30/d33/d67 0 2026-03-09T20:47:27.037 INFO:tasks.workunit.client.0.vm07.stdout:5/444: symlink d5/d19/d73/d9c/la2 0 2026-03-09T20:47:27.043 INFO:tasks.workunit.client.1.vm10.stdout:4/203: fsync d1/d8/f29 0 2026-03-09T20:47:27.046 INFO:tasks.workunit.client.1.vm10.stdout:0/249: creat d2/d9/da/d35/d30/f56 x:0 0 0 2026-03-09T20:47:27.050 INFO:tasks.workunit.client.1.vm10.stdout:7/272: truncate db/f19 3050850 0 2026-03-09T20:47:27.054 INFO:tasks.workunit.client.1.vm10.stdout:9/348: getdents d2/d12/d5a 0 2026-03-09T20:47:27.054 INFO:tasks.workunit.client.1.vm10.stdout:8/325: mkdir d0/d22/d2f/d38/d64 0 2026-03-09T20:47:27.056 INFO:tasks.workunit.client.1.vm10.stdout:8/326: dread d0/d22/d25/f3b [0,4194304] 0 2026-03-09T20:47:27.058 INFO:tasks.workunit.client.0.vm07.stdout:8/324: mknod d1/dc/d14/d2f/c66 0 2026-03-09T20:47:27.061 INFO:tasks.workunit.client.0.vm07.stdout:3/360: getdents d1/d5/d9/d2f/d3d 0 2026-03-09T20:47:27.062 
INFO:tasks.workunit.client.0.vm07.stdout:3/361: write d1/d5/d9/d2f/d3d/d64/f1a [2233430,67552] 0 2026-03-09T20:47:27.064 INFO:tasks.workunit.client.0.vm07.stdout:3/362: truncate d1/d5/d9/d2f/d3d/d64/f30 2746788 0 2026-03-09T20:47:27.072 INFO:tasks.workunit.client.0.vm07.stdout:3/363: sync 2026-03-09T20:47:27.076 INFO:tasks.workunit.client.0.vm07.stdout:9/328: symlink d4/d8/l76 0 2026-03-09T20:47:27.080 INFO:tasks.workunit.client.1.vm10.stdout:6/313: fsync d3/f21 0 2026-03-09T20:47:27.084 INFO:tasks.workunit.client.1.vm10.stdout:1/287: truncate d2/da/f50 2483774 0 2026-03-09T20:47:27.085 INFO:tasks.workunit.client.0.vm07.stdout:5/445: creat d5/d19/d73/fa3 x:0 0 0 2026-03-09T20:47:27.086 INFO:tasks.workunit.client.0.vm07.stdout:5/446: write d5/df/d13/f1f [4854103,21469] 0 2026-03-09T20:47:27.089 INFO:tasks.workunit.client.1.vm10.stdout:4/204: readlink d1/d8/d1c/l21 0 2026-03-09T20:47:27.091 INFO:tasks.workunit.client.1.vm10.stdout:0/250: rmdir d2/d9/da/d48 39 2026-03-09T20:47:27.094 INFO:tasks.workunit.client.1.vm10.stdout:4/205: sync 2026-03-09T20:47:27.094 INFO:tasks.workunit.client.1.vm10.stdout:0/251: sync 2026-03-09T20:47:27.096 INFO:tasks.workunit.client.1.vm10.stdout:9/349: write d2/d28/f51 [7166079,29494] 0 2026-03-09T20:47:27.097 INFO:tasks.workunit.client.1.vm10.stdout:9/350: stat d2/d33/d37/f4c 0 2026-03-09T20:47:27.102 INFO:tasks.workunit.client.1.vm10.stdout:3/258: link dc/d14/d20/d21/c2f dc/d14/d20/d21/d3b/c54 0 2026-03-09T20:47:27.107 INFO:tasks.workunit.client.1.vm10.stdout:6/314: mknod d3/da/c68 0 2026-03-09T20:47:27.112 INFO:tasks.workunit.client.1.vm10.stdout:4/206: creat d1/d2/f43 x:0 0 0 2026-03-09T20:47:27.116 INFO:tasks.workunit.client.1.vm10.stdout:4/207: dwrite d1/d8/d1c/f3e [0,4194304] 0 2026-03-09T20:47:27.116 INFO:tasks.workunit.client.1.vm10.stdout:4/208: dread - d1/d8/d1b/f42 zero size 2026-03-09T20:47:27.117 INFO:tasks.workunit.client.1.vm10.stdout:7/273: rename db/cc to db/d28/d2b/c58 0 2026-03-09T20:47:27.130 
INFO:tasks.workunit.client.1.vm10.stdout:0/252: mknod d2/c57 0 2026-03-09T20:47:27.130 INFO:tasks.workunit.client.1.vm10.stdout:1/288: dwrite d2/da/f10 [0,4194304] 0 2026-03-09T20:47:27.132 INFO:tasks.workunit.client.1.vm10.stdout:9/351: unlink d2/d33/d37/f6f 0 2026-03-09T20:47:27.136 INFO:tasks.workunit.client.1.vm10.stdout:0/253: dread d2/f39 [0,4194304] 0 2026-03-09T20:47:27.142 INFO:tasks.workunit.client.1.vm10.stdout:8/327: creat d0/d54/f65 x:0 0 0 2026-03-09T20:47:27.144 INFO:tasks.workunit.client.1.vm10.stdout:3/259: mkdir dc/d14/d26/d29/d2a/d55 0 2026-03-09T20:47:27.151 INFO:tasks.workunit.client.1.vm10.stdout:6/315: creat d3/da/d11/d31/d4c/f69 x:0 0 0 2026-03-09T20:47:27.152 INFO:tasks.workunit.client.1.vm10.stdout:8/328: dread d0/f17 [4194304,4194304] 0 2026-03-09T20:47:27.155 INFO:tasks.workunit.client.1.vm10.stdout:4/209: unlink d1/d2/d3/f31 0 2026-03-09T20:47:27.158 INFO:tasks.workunit.client.1.vm10.stdout:7/274: creat db/d28/d4d/f59 x:0 0 0 2026-03-09T20:47:27.171 INFO:tasks.workunit.client.0.vm07.stdout:3/364: mknod d1/d5/d9/d2f/d3d/c77 0 2026-03-09T20:47:27.171 INFO:tasks.workunit.client.0.vm07.stdout:3/365: fsync d1/f36 0 2026-03-09T20:47:27.171 INFO:tasks.workunit.client.0.vm07.stdout:3/366: stat d1/d5/d9/f1b 0 2026-03-09T20:47:27.173 INFO:tasks.workunit.client.1.vm10.stdout:9/352: creat d2/d12/d5a/f82 x:0 0 0 2026-03-09T20:47:27.174 INFO:tasks.workunit.client.1.vm10.stdout:2/320: getdents d5/d18/d27 0 2026-03-09T20:47:27.174 INFO:tasks.workunit.client.1.vm10.stdout:2/321: readlink d5/d2b/d32/l33 0 2026-03-09T20:47:27.174 INFO:tasks.workunit.client.0.vm07.stdout:4/285: rmdir d2/d1f/d44 0 2026-03-09T20:47:27.175 INFO:tasks.workunit.client.0.vm07.stdout:4/286: write d2/d1f/f3c [4313963,114032] 0 2026-03-09T20:47:27.176 INFO:tasks.workunit.client.0.vm07.stdout:4/287: stat d2/d1f/d2d/d3f/d41 0 2026-03-09T20:47:27.182 INFO:tasks.workunit.client.1.vm10.stdout:3/260: rename dc/dd to dc/d14/d20/d2e/d56 0 2026-03-09T20:47:27.183 
INFO:tasks.workunit.client.1.vm10.stdout:3/261: chown c5 468591883 1 2026-03-09T20:47:27.183 INFO:tasks.workunit.client.1.vm10.stdout:3/262: write dc/d14/d26/f34 [688729,2450] 0 2026-03-09T20:47:27.193 INFO:tasks.workunit.client.0.vm07.stdout:5/447: dwrite d5/f25 [0,4194304] 0 2026-03-09T20:47:27.194 INFO:tasks.workunit.client.1.vm10.stdout:5/269: getdents d2/d58 0 2026-03-09T20:47:27.195 INFO:tasks.workunit.client.0.vm07.stdout:7/410: link d3/da/db/d32/d3e/l51 d3/da/db/d79/l8e 0 2026-03-09T20:47:27.195 INFO:tasks.workunit.client.1.vm10.stdout:5/270: write d2/f3c [4377938,55649] 0 2026-03-09T20:47:27.196 INFO:tasks.workunit.client.1.vm10.stdout:5/271: chown d2/d1b/l29 14 1 2026-03-09T20:47:27.201 INFO:tasks.workunit.client.1.vm10.stdout:6/316: mkdir d3/d30/d6a 0 2026-03-09T20:47:27.202 INFO:tasks.workunit.client.1.vm10.stdout:6/317: readlink d3/da/d11/l66 0 2026-03-09T20:47:27.202 INFO:tasks.workunit.client.1.vm10.stdout:6/318: dread - d3/da/d11/f65 zero size 2026-03-09T20:47:27.202 INFO:tasks.workunit.client.0.vm07.stdout:0/423: link d1/f3b d1/d2/dc/d80/f87 0 2026-03-09T20:47:27.203 INFO:tasks.workunit.client.0.vm07.stdout:0/424: chown d1/d2/l15 515558 1 2026-03-09T20:47:27.203 INFO:tasks.workunit.client.1.vm10.stdout:6/319: write d3/d12/f16 [607703,99611] 0 2026-03-09T20:47:27.203 INFO:tasks.workunit.client.1.vm10.stdout:6/320: fdatasync d3/fc 0 2026-03-09T20:47:27.204 INFO:tasks.workunit.client.1.vm10.stdout:6/321: chown d3/da/d11/d31/d4c/d60 30017 1 2026-03-09T20:47:27.204 INFO:tasks.workunit.client.1.vm10.stdout:4/210: readlink d1/d8/lb 0 2026-03-09T20:47:27.205 INFO:tasks.workunit.client.1.vm10.stdout:6/322: write d3/d12/f2b [4309864,59769] 0 2026-03-09T20:47:27.205 INFO:tasks.workunit.client.1.vm10.stdout:4/211: write d1/d8/d1c/f3e [3457072,123788] 0 2026-03-09T20:47:27.206 INFO:tasks.workunit.client.1.vm10.stdout:6/323: truncate d3/f40 4511127 0 2026-03-09T20:47:27.209 INFO:tasks.workunit.client.0.vm07.stdout:6/366: rename d8/d16/d22/d33/f60 to 
d8/d16/d22/f75 0 2026-03-09T20:47:27.219 INFO:tasks.workunit.client.1.vm10.stdout:1/289: creat d2/da/d25/d3e/d55/f5c x:0 0 0 2026-03-09T20:47:27.228 INFO:tasks.workunit.client.0.vm07.stdout:8/325: dwrite d1/dc/d16/d26/f2a [0,4194304] 0 2026-03-09T20:47:27.228 INFO:tasks.workunit.client.1.vm10.stdout:9/353: dwrite d2/d3/de/f34 [0,4194304] 0 2026-03-09T20:47:27.232 INFO:tasks.workunit.client.0.vm07.stdout:8/326: dread d1/dc/d16/d26/f48 [0,4194304] 0 2026-03-09T20:47:27.235 INFO:tasks.workunit.client.0.vm07.stdout:8/327: dread d1/dc/f4c [0,4194304] 0 2026-03-09T20:47:27.236 INFO:tasks.workunit.client.0.vm07.stdout:2/393: creat d2/db/f76 x:0 0 0 2026-03-09T20:47:27.244 INFO:tasks.workunit.client.1.vm10.stdout:8/329: rename d0/f10 to d0/d22/f66 0 2026-03-09T20:47:27.251 INFO:tasks.workunit.client.1.vm10.stdout:3/263: creat dc/d14/d26/d29/d2a/f57 x:0 0 0 2026-03-09T20:47:27.254 INFO:tasks.workunit.client.1.vm10.stdout:3/264: dwrite dc/d14/d26/f34 [0,4194304] 0 2026-03-09T20:47:27.261 INFO:tasks.workunit.client.1.vm10.stdout:5/272: dwrite d2/f64 [0,4194304] 0 2026-03-09T20:47:27.262 INFO:tasks.workunit.client.0.vm07.stdout:0/425: symlink d1/d2/d33/l88 0 2026-03-09T20:47:27.274 INFO:tasks.workunit.client.0.vm07.stdout:6/367: symlink d8/d16/d4b/l76 0 2026-03-09T20:47:27.275 INFO:tasks.workunit.client.0.vm07.stdout:6/368: readlink d8/d16/d22/d24/d2b/l4c 0 2026-03-09T20:47:27.277 INFO:tasks.workunit.client.0.vm07.stdout:1/374: getdents d3/d14/d54 0 2026-03-09T20:47:27.280 INFO:tasks.workunit.client.0.vm07.stdout:1/375: dwrite d3/d14/d54/f32 [0,4194304] 0 2026-03-09T20:47:27.284 INFO:tasks.workunit.client.0.vm07.stdout:1/376: chown d3/d23/d52 1 1 2026-03-09T20:47:27.294 INFO:tasks.workunit.client.1.vm10.stdout:7/275: dwrite db/d28/d2b/d36/d3b/f3d [0,4194304] 0 2026-03-09T20:47:27.294 INFO:tasks.workunit.client.1.vm10.stdout:1/290: mkdir d2/da/d25/d46/d51/d5d 0 2026-03-09T20:47:27.307 INFO:tasks.workunit.client.1.vm10.stdout:9/354: mknod d2/d12/d5a/c83 0 2026-03-09T20:47:27.308 
INFO:tasks.workunit.client.0.vm07.stdout:8/328: rmdir d1/dc/d16 39 2026-03-09T20:47:27.316 INFO:tasks.workunit.client.1.vm10.stdout:0/254: unlink d2/d9/da/de/d1a/d25/d3e/f41 0 2026-03-09T20:47:27.322 INFO:tasks.workunit.client.0.vm07.stdout:4/288: mknod d2/d1f/d2d/d3f/d41/c4d 0 2026-03-09T20:47:27.323 INFO:tasks.workunit.client.1.vm10.stdout:8/330: creat d0/d22/d25/d2e/d41/f67 x:0 0 0 2026-03-09T20:47:27.324 INFO:tasks.workunit.client.0.vm07.stdout:5/448: symlink d5/d19/d73/d94/la4 0 2026-03-09T20:47:27.330 INFO:tasks.workunit.client.0.vm07.stdout:7/411: truncate d3/da/f45 611043 0 2026-03-09T20:47:27.335 INFO:tasks.workunit.client.0.vm07.stdout:7/412: dwrite d3/da/db/d32/d3e/f40 [0,4194304] 0 2026-03-09T20:47:27.340 INFO:tasks.workunit.client.0.vm07.stdout:0/426: chown d1/d2/dc/l77 118 1 2026-03-09T20:47:27.344 INFO:tasks.workunit.client.0.vm07.stdout:6/369: mknod d8/d16/d22/d24/c77 0 2026-03-09T20:47:27.345 INFO:tasks.workunit.client.0.vm07.stdout:6/370: truncate d8/d26/f4d 1523123 0 2026-03-09T20:47:27.346 INFO:tasks.workunit.client.1.vm10.stdout:4/212: dwrite d1/d2/f2e [0,4194304] 0 2026-03-09T20:47:27.351 INFO:tasks.workunit.client.0.vm07.stdout:6/371: dread d8/d26/f3d [0,4194304] 0 2026-03-09T20:47:27.352 INFO:tasks.workunit.client.0.vm07.stdout:6/372: read - d8/d16/d61/f68 zero size 2026-03-09T20:47:27.368 INFO:tasks.workunit.client.0.vm07.stdout:3/367: creat d1/f78 x:0 0 0 2026-03-09T20:47:27.368 INFO:tasks.workunit.client.0.vm07.stdout:3/368: dread - d1/d5/d9/d11/d1f/f5e zero size 2026-03-09T20:47:27.369 INFO:tasks.workunit.client.0.vm07.stdout:3/369: write d1/d5/f25 [4026117,79535] 0 2026-03-09T20:47:27.374 INFO:tasks.workunit.client.0.vm07.stdout:4/289: sync 2026-03-09T20:47:27.374 INFO:tasks.workunit.client.0.vm07.stdout:0/427: sync 2026-03-09T20:47:27.374 INFO:tasks.workunit.client.1.vm10.stdout:6/324: dwrite f1 [0,4194304] 0 2026-03-09T20:47:27.394 INFO:tasks.workunit.client.0.vm07.stdout:2/394: dwrite d2/db/d49/f6b [0,4194304] 0 
2026-03-09T20:47:27.407 INFO:tasks.workunit.client.1.vm10.stdout:3/265: truncate dc/d14/d20/d2e/f32 834731 0 2026-03-09T20:47:27.410 INFO:tasks.workunit.client.0.vm07.stdout:1/377: write d3/f24 [2406415,47719] 0 2026-03-09T20:47:27.411 INFO:tasks.workunit.client.1.vm10.stdout:3/266: dwrite dc/d14/d20/d21/f41 [0,4194304] 0 2026-03-09T20:47:27.442 INFO:tasks.workunit.client.1.vm10.stdout:7/276: rename fa to db/d46/f5a 0 2026-03-09T20:47:27.442 INFO:tasks.workunit.client.1.vm10.stdout:4/213: dread - d1/d8/d1c/f1d zero size 2026-03-09T20:47:27.442 INFO:tasks.workunit.client.1.vm10.stdout:7/277: write f3 [4015439,81160] 0 2026-03-09T20:47:27.443 INFO:tasks.workunit.client.1.vm10.stdout:4/214: read - d1/d8/d1c/f1d zero size 2026-03-09T20:47:27.444 INFO:tasks.workunit.client.1.vm10.stdout:7/278: write db/d28/d2b/d36/f55 [916772,85058] 0 2026-03-09T20:47:27.444 INFO:tasks.workunit.client.1.vm10.stdout:7/279: write db/d28/d2b/f51 [645060,69135] 0 2026-03-09T20:47:27.457 INFO:tasks.workunit.client.1.vm10.stdout:2/322: getdents d5/d18/d2d/d47 0 2026-03-09T20:47:27.486 INFO:tasks.workunit.client.0.vm07.stdout:6/373: rename d8/d16/f1f to d8/d26/d2a/d40/d69/f78 0 2026-03-09T20:47:27.500 INFO:tasks.workunit.client.1.vm10.stdout:5/273: dwrite d2/f2c [0,4194304] 0 2026-03-09T20:47:27.502 INFO:tasks.workunit.client.1.vm10.stdout:0/255: rmdir d2/d9/da/d48 39 2026-03-09T20:47:27.502 INFO:tasks.workunit.client.1.vm10.stdout:8/331: creat d0/d22/d25/d2e/d58/f68 x:0 0 0 2026-03-09T20:47:27.506 INFO:tasks.workunit.client.1.vm10.stdout:8/332: chown d0/d22/d25/f2d 286632 1 2026-03-09T20:47:27.509 INFO:tasks.workunit.client.1.vm10.stdout:0/256: truncate d2/d9/da/d35/f3a 349034 0 2026-03-09T20:47:27.511 INFO:tasks.workunit.client.1.vm10.stdout:0/257: chown d2/d9/da/de/d1a/d25/d34/l3c 5130 1 2026-03-09T20:47:27.511 INFO:tasks.workunit.client.1.vm10.stdout:0/258: stat d2/d9/da/de 0 2026-03-09T20:47:27.513 INFO:tasks.workunit.client.1.vm10.stdout:0/259: write d2/d9/da/d35/f3a [192115,61695] 0 
2026-03-09T20:47:27.514 INFO:tasks.workunit.client.1.vm10.stdout:0/260: stat d2/d9/l3b 0
2026-03-09T20:47:27.514 INFO:tasks.workunit.client.1.vm10.stdout:0/261: write d2/d9/da/f2f [5449800,42659] 0
2026-03-09T20:47:27.528 INFO:tasks.workunit.client.0.vm07.stdout:3/370: unlink d1/d5/d9/d2f/d34/f3f 0
2026-03-09T20:47:27.535 INFO:tasks.workunit.client.0.vm07.stdout:0/428: symlink d1/d1f/d20/l89 0
2026-03-09T20:47:27.536 INFO:tasks.workunit.client.0.vm07.stdout:4/290: truncate d2/f28 1159967 0
2026-03-09T20:47:27.536 INFO:tasks.workunit.client.0.vm07.stdout:9/329: link d4/d8/d19/f42 d4/d16/d29/d24/f77 0
2026-03-09T20:47:27.536 INFO:tasks.workunit.client.1.vm10.stdout:1/291: rename d2/da/d25/c45 to d2/da/d25/d3e/c5e 0
2026-03-09T20:47:27.536 INFO:tasks.workunit.client.1.vm10.stdout:1/292: chown d2/l6 2238871 1
2026-03-09T20:47:27.538 INFO:tasks.workunit.client.1.vm10.stdout:4/215: creat d1/d8/d1c/d38/f44 x:0 0 0
2026-03-09T20:47:27.538 INFO:tasks.workunit.client.1.vm10.stdout:4/216: readlink d1/d2/d3/l19 0
2026-03-09T20:47:27.552 INFO:tasks.workunit.client.1.vm10.stdout:5/274: mkdir d2/d27/d37/d46/d5d/d6d 0
2026-03-09T20:47:27.552 INFO:tasks.workunit.client.1.vm10.stdout:5/275: chown d2/d58 0 1
2026-03-09T20:47:27.552 INFO:tasks.workunit.client.1.vm10.stdout:5/276: dread d2/d39/d4b/f60 [0,4194304] 0
2026-03-09T20:47:27.552 INFO:tasks.workunit.client.0.vm07.stdout:0/429: dwrite d1/d2/d33/f4e [0,4194304] 0
2026-03-09T20:47:27.552 INFO:tasks.workunit.client.0.vm07.stdout:1/378: mknod d3/d23/d55/d56/d60/c7f 0
2026-03-09T20:47:27.552 INFO:tasks.workunit.client.0.vm07.stdout:1/379: chown d3/d14/c48 424797968 1
2026-03-09T20:47:27.552 INFO:tasks.workunit.client.0.vm07.stdout:8/329: link d1/dc/f42 d1/dc/d14/d2f/d4d/f67 0
2026-03-09T20:47:27.552 INFO:tasks.workunit.client.0.vm07.stdout:6/374: chown d8/d16/d22/d24 407 1
2026-03-09T20:47:27.553 INFO:tasks.workunit.client.0.vm07.stdout:6/375: truncate d8/d16/d22/d33/f73 636446 0
2026-03-09T20:47:27.554 INFO:tasks.workunit.client.0.vm07.stdout:3/371: truncate d1/d5/d9/f3c 289011 0
2026-03-09T20:47:27.557 INFO:tasks.workunit.client.0.vm07.stdout:9/330: mkdir d4/d16/d78 0
2026-03-09T20:47:27.559 INFO:tasks.workunit.client.0.vm07.stdout:9/331: dread d4/d11/d2a/f3b [0,4194304] 0
2026-03-09T20:47:27.563 INFO:tasks.workunit.client.0.vm07.stdout:9/332: dwrite d4/d11/d31/f5b [0,4194304] 0
2026-03-09T20:47:27.565 INFO:tasks.workunit.client.0.vm07.stdout:3/372: dread d1/d5/d9/d11/f21 [0,4194304] 0
2026-03-09T20:47:27.570 INFO:tasks.workunit.client.1.vm10.stdout:9/355: rename d2/d28/d47/f52 to d2/d3/de/f84 0
2026-03-09T20:47:27.578 INFO:tasks.workunit.client.1.vm10.stdout:1/293: mknod d2/da/d25/d46/c5f 0
2026-03-09T20:47:27.578 INFO:tasks.workunit.client.0.vm07.stdout:0/430: fdatasync d1/d2/dc/d80/f87 0
2026-03-09T20:47:27.578 INFO:tasks.workunit.client.0.vm07.stdout:0/431: chown d1/d82/f86 2604433 1
2026-03-09T20:47:27.578 INFO:tasks.workunit.client.0.vm07.stdout:0/432: dwrite d1/d2/f1b [0,4194304] 0
2026-03-09T20:47:27.585 INFO:tasks.workunit.client.1.vm10.stdout:7/280: truncate f5 224857 0
2026-03-09T20:47:27.586 INFO:tasks.workunit.client.1.vm10.stdout:7/281: write db/d28/d2b/d36/d3b/f3d [3219228,23949] 0
2026-03-09T20:47:27.588 INFO:tasks.workunit.client.1.vm10.stdout:8/333: unlink d0/c1d 0
2026-03-09T20:47:27.588 INFO:tasks.workunit.client.0.vm07.stdout:5/449: link d5/d33/c48 d5/df/d13/d30/d56/ca5 0
2026-03-09T20:47:27.589 INFO:tasks.workunit.client.0.vm07.stdout:7/413: creat d3/f8f x:0 0 0
2026-03-09T20:47:27.589 INFO:tasks.workunit.client.1.vm10.stdout:0/262: truncate d2/d9/da/de/d1a/f2b 7581575 0
2026-03-09T20:47:27.592 INFO:tasks.workunit.client.1.vm10.stdout:9/356: mkdir d2/d3/d85 0
2026-03-09T20:47:27.597 INFO:tasks.workunit.client.1.vm10.stdout:1/294: mknod d2/da/d25/d46/d51/c60 0
2026-03-09T20:47:27.597 INFO:tasks.workunit.client.1.vm10.stdout:2/323: creat d5/d18/f67 x:0 0 0
2026-03-09T20:47:27.598 INFO:tasks.workunit.client.1.vm10.stdout:5/277: creat d2/d27/d37/d46/d5d/d6d/f6e x:0 0 0
2026-03-09T20:47:27.599 INFO:tasks.workunit.client.1.vm10.stdout:1/295: dwrite d2/da/f3d [4194304,4194304] 0
2026-03-09T20:47:27.599 INFO:tasks.workunit.client.1.vm10.stdout:8/334: fdatasync d0/d22/f29 0
2026-03-09T20:47:27.600 INFO:tasks.workunit.client.0.vm07.stdout:9/333: read d4/d11/f13 [1463817,102776] 0
2026-03-09T20:47:27.600 INFO:tasks.workunit.client.0.vm07.stdout:9/334: fsync d4/d16/d29/f4a 0
2026-03-09T20:47:27.602 INFO:tasks.workunit.client.0.vm07.stdout:0/433: dread d1/d2/dc/f40 [0,4194304] 0
2026-03-09T20:47:27.608 INFO:tasks.workunit.client.0.vm07.stdout:6/376: dread d8/d16/f17 [0,4194304] 0
2026-03-09T20:47:27.611 INFO:tasks.workunit.client.1.vm10.stdout:0/263: fdatasync d2/d9/f12 0
2026-03-09T20:47:27.614 INFO:tasks.workunit.client.0.vm07.stdout:8/330: symlink d1/l68 0
2026-03-09T20:47:27.617 INFO:tasks.workunit.client.1.vm10.stdout:6/325: write d3/f2f [89915,118998] 0
2026-03-09T20:47:27.619 INFO:tasks.workunit.client.1.vm10.stdout:2/324: symlink d5/d18/d27/d38/d61/l68 0
2026-03-09T20:47:27.621 INFO:tasks.workunit.client.1.vm10.stdout:6/326: dwrite f1 [4194304,4194304] 0
2026-03-09T20:47:27.624 INFO:tasks.workunit.client.0.vm07.stdout:3/373: getdents d1/d5/d9/d11/d6d 0
2026-03-09T20:47:27.624 INFO:tasks.workunit.client.0.vm07.stdout:9/335: mknod d4/d16/c79 0
2026-03-09T20:47:27.624 INFO:tasks.workunit.client.1.vm10.stdout:8/335: rename d0/l5 to d0/d22/d2f/d38/l69 0
2026-03-09T20:47:27.625 INFO:tasks.workunit.client.1.vm10.stdout:1/296: chown d2/da/cd 258459 1
2026-03-09T20:47:27.628 INFO:tasks.workunit.client.0.vm07.stdout:0/434: rename d1/d82/l83 to d1/d1f/d20/l8a 0
2026-03-09T20:47:27.629 INFO:tasks.workunit.client.0.vm07.stdout:6/377: truncate d8/f15 2756296 0
2026-03-09T20:47:27.638 INFO:tasks.workunit.client.1.vm10.stdout:2/325: creat d5/d2b/f69 x:0 0 0
2026-03-09T20:47:27.644 INFO:tasks.workunit.client.0.vm07.stdout:7/414: mkdir d3/d58/d82/d90 0
2026-03-09T20:47:27.644 INFO:tasks.workunit.client.0.vm07.stdout:7/415: fdatasync d3/d58/f60 0
2026-03-09T20:47:27.644 INFO:tasks.workunit.client.1.vm10.stdout:8/336: creat d0/d22/d2c/f6a x:0 0 0
2026-03-09T20:47:27.644 INFO:tasks.workunit.client.1.vm10.stdout:2/326: write d5/d2b/d32/f5c [140323,82245] 0
2026-03-09T20:47:27.644 INFO:tasks.workunit.client.0.vm07.stdout:1/380: getdents d3/d23/d55 0
2026-03-09T20:47:27.645 INFO:tasks.workunit.client.1.vm10.stdout:0/264: mkdir d2/d4a/d58 0
2026-03-09T20:47:27.645 INFO:tasks.workunit.client.1.vm10.stdout:1/297: creat d2/da/d25/d46/f61 x:0 0 0
2026-03-09T20:47:27.645 INFO:tasks.workunit.client.0.vm07.stdout:8/331: mknod d1/d5d/c69 0
2026-03-09T20:47:27.645 INFO:tasks.workunit.client.0.vm07.stdout:6/378: rmdir d8/d26/d2a/d40/d69 39
2026-03-09T20:47:27.647 INFO:tasks.workunit.client.1.vm10.stdout:1/298: fdatasync d2/da/f10 0
2026-03-09T20:47:27.650 INFO:tasks.workunit.client.1.vm10.stdout:7/282: dread db/d21/d23/f1e [0,4194304] 0
2026-03-09T20:47:27.655 INFO:tasks.workunit.client.0.vm07.stdout:5/450: creat d5/fa6 x:0 0 0
2026-03-09T20:47:27.656 INFO:tasks.workunit.client.0.vm07.stdout:9/336: mknod d4/d8/c7a 0
2026-03-09T20:47:27.656 INFO:tasks.workunit.client.1.vm10.stdout:8/337: write d0/d22/d2f/d3d/f49 [2684663,113148] 0
2026-03-09T20:47:27.657 INFO:tasks.workunit.client.1.vm10.stdout:2/327: mknod d5/d18/d27/d28/c6a 0
2026-03-09T20:47:27.659 INFO:tasks.workunit.client.0.vm07.stdout:7/416: symlink d3/d58/d77/l91 0
2026-03-09T20:47:27.667 INFO:tasks.workunit.client.1.vm10.stdout:2/328: chown d5/f59 1474234 1
2026-03-09T20:47:27.667 INFO:tasks.workunit.client.0.vm07.stdout:1/381: creat d3/d14/d54/d3e/f80 x:0 0 0
2026-03-09T20:47:27.667 INFO:tasks.workunit.client.0.vm07.stdout:9/337: symlink d4/d11/d23/l7b 0
2026-03-09T20:47:27.667 INFO:tasks.workunit.client.0.vm07.stdout:1/382: readlink d3/d23/d55/l5f 0
2026-03-09T20:47:27.669 INFO:tasks.workunit.client.1.vm10.stdout:6/327: sync
2026-03-09T20:47:27.673 INFO:tasks.workunit.client.1.vm10.stdout:6/328: dwrite d3/d12/d36/f56 [0,4194304] 0
2026-03-09T20:47:27.682 INFO:tasks.workunit.client.1.vm10.stdout:1/299: readlink d2/da/l2b 0
2026-03-09T20:47:27.682 INFO:tasks.workunit.client.1.vm10.stdout:0/265: rename d2/d9/da/de/d1a/f21 to d2/d9/da/de/d1a/d25/d3e/f59 0
2026-03-09T20:47:27.682 INFO:tasks.workunit.client.1.vm10.stdout:7/283: unlink db/d28/d4d/c56 0
2026-03-09T20:47:27.682 INFO:tasks.workunit.client.0.vm07.stdout:1/383: mknod d3/d14/d54/c81 0
2026-03-09T20:47:27.682 INFO:tasks.workunit.client.0.vm07.stdout:1/384: write d3/f24 [4785409,15705] 0
2026-03-09T20:47:27.682 INFO:tasks.workunit.client.0.vm07.stdout:1/385: dread d3/d14/d54/d3e/f75 [0,4194304] 0
2026-03-09T20:47:27.682 INFO:tasks.workunit.client.0.vm07.stdout:6/379: creat d8/f79 x:0 0 0
2026-03-09T20:47:27.683 INFO:tasks.workunit.client.1.vm10.stdout:7/284: write db/d46/f47 [4784047,111182] 0
2026-03-09T20:47:27.684 INFO:tasks.workunit.client.0.vm07.stdout:8/332: rename d1/dc/d14/d2f/d53/d60 to d1/dc/d6a 0
2026-03-09T20:47:27.685 INFO:tasks.workunit.client.1.vm10.stdout:8/338: creat d0/d22/d2c/f6b x:0 0 0
2026-03-09T20:47:27.686 INFO:tasks.workunit.client.0.vm07.stdout:0/435: link d1/d2/dc/l25 d1/d2/d33/d35/l8b 0
2026-03-09T20:47:27.687 INFO:tasks.workunit.client.0.vm07.stdout:0/436: write d1/d1f/d53/f79 [514405,18409] 0
2026-03-09T20:47:27.699 INFO:tasks.workunit.client.1.vm10.stdout:1/300: creat d2/da/d25/d3e/d42/f62 x:0 0 0
2026-03-09T20:47:27.699 INFO:tasks.workunit.client.1.vm10.stdout:0/266: unlink d2/db/c10 0
2026-03-09T20:47:27.700 INFO:tasks.workunit.client.0.vm07.stdout:6/380: chown d8/d26/d2a/d40/d69/f62 900 1
2026-03-09T20:47:27.701 INFO:tasks.workunit.client.0.vm07.stdout:6/381: stat d8/d16/d61 0
2026-03-09T20:47:27.707 INFO:tasks.workunit.client.0.vm07.stdout:2/395: write d2/db/d1c/f45 [4668549,130565] 0
2026-03-09T20:47:27.708 INFO:tasks.workunit.client.0.vm07.stdout:2/396: write d2/db/f76 [911106,88545] 0
2026-03-09T20:47:27.708 INFO:tasks.workunit.client.0.vm07.stdout:2/397: chown d2/db/d1c/l70 42653 1
2026-03-09T20:47:27.712 INFO:tasks.workunit.client.1.vm10.stdout:3/267: truncate dc/d14/d26/f34 3942596 0
2026-03-09T20:47:27.715 INFO:tasks.workunit.client.0.vm07.stdout:4/291: write d2/df/f23 [8405175,8133] 0
2026-03-09T20:47:27.715 INFO:tasks.workunit.client.1.vm10.stdout:4/217: write d1/f9 [65730,73101] 0
2026-03-09T20:47:27.715 INFO:tasks.workunit.client.0.vm07.stdout:4/292: read d2/f19 [929792,32338] 0
2026-03-09T20:47:27.717 INFO:tasks.workunit.client.1.vm10.stdout:7/285: dread db/d1f/f2a [0,4194304] 0
2026-03-09T20:47:27.719 INFO:tasks.workunit.client.1.vm10.stdout:8/339: chown d0/d22/d25/d2e/d41/l4d 74235099 1
2026-03-09T20:47:27.720 INFO:tasks.workunit.client.1.vm10.stdout:8/340: truncate d0/d22/d2c/f6a 891387 0
2026-03-09T20:47:27.720 INFO:tasks.workunit.client.0.vm07.stdout:6/382: chown d8/le 6 1
2026-03-09T20:47:27.720 INFO:tasks.workunit.client.1.vm10.stdout:8/341: write d0/d22/d25/d2e/d41/d47/f5a [544900,99234] 0
2026-03-09T20:47:27.721 INFO:tasks.workunit.client.0.vm07.stdout:6/383: dread - d8/f79 zero size
2026-03-09T20:47:27.722 INFO:tasks.workunit.client.0.vm07.stdout:6/384: write d8/d26/d2a/d40/d69/f39 [550808,124378] 0
2026-03-09T20:47:27.724 INFO:tasks.workunit.client.1.vm10.stdout:0/267: unlink d2/d9/da/d35/l2d 0
2026-03-09T20:47:27.728 INFO:tasks.workunit.client.1.vm10.stdout:0/268: dwrite d2/d9/da/f53 [0,4194304] 0
2026-03-09T20:47:27.735 INFO:tasks.workunit.client.0.vm07.stdout:2/398: dwrite d2/db/d49/f64 [0,4194304] 0
2026-03-09T20:47:27.735 INFO:tasks.workunit.client.0.vm07.stdout:8/333: rename d1/dc/d16/d26/f27 to d1/dc/d16/d26/f6b 0
2026-03-09T20:47:27.735 INFO:tasks.workunit.client.0.vm07.stdout:1/386: creat d3/f82 x:0 0 0
2026-03-09T20:47:27.735 INFO:tasks.workunit.client.1.vm10.stdout:7/286: mknod db/d28/d30/c5b 0
2026-03-09T20:47:27.735 INFO:tasks.workunit.client.1.vm10.stdout:6/329: link d3/l4 d3/d12/d36/d5c/l6b 0
2026-03-09T20:47:27.735 INFO:tasks.workunit.client.1.vm10.stdout:7/287: write db/d28/d4d/f59 [484350,112304] 0
2026-03-09T20:47:27.736 INFO:tasks.workunit.client.0.vm07.stdout:1/387: dwrite d3/d23/d55/d56/d60/f7a [0,4194304] 0
2026-03-09T20:47:27.737 INFO:tasks.workunit.client.0.vm07.stdout:1/388: chown d3/d23/d52/f79 1516 1
2026-03-09T20:47:27.740 INFO:tasks.workunit.client.1.vm10.stdout:8/342: rename d0/d22/d25/d2e/d58 to d0/d22/d25/d6c 0
2026-03-09T20:47:27.740 INFO:tasks.workunit.client.0.vm07.stdout:6/385: creat d8/d26/d2a/f7a x:0 0 0
2026-03-09T20:47:27.741 INFO:tasks.workunit.client.1.vm10.stdout:8/343: truncate d0/d22/d25/d6c/f68 841604 0
2026-03-09T20:47:27.741 INFO:tasks.workunit.client.0.vm07.stdout:7/417: rename d3/f59 to d3/da/db/d14/f92 0
2026-03-09T20:47:27.741 INFO:tasks.workunit.client.1.vm10.stdout:8/344: dread - d0/d22/d25/d40/f5e zero size
2026-03-09T20:47:27.745 INFO:tasks.workunit.client.0.vm07.stdout:7/418: dwrite d3/da/d83/f84 [0,4194304] 0
2026-03-09T20:47:27.748 INFO:tasks.workunit.client.0.vm07.stdout:8/334: rmdir d1/dc/d14/d2f/d4d/d55 39
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.1.vm10.stdout:2/329: link d5/c11 d5/d5b/c6b 0
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.1.vm10.stdout:2/330: chown d5/d18/c44 600 1
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.1.vm10.stdout:0/269: creat d2/d4a/f5a x:0 0 0
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.1.vm10.stdout:7/288: fdatasync db/d21/d23/f1e 0
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.1.vm10.stdout:3/268: creat dc/f58 x:0 0 0
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.0.vm07.stdout:8/335: dread d1/dc/f4c [0,4194304] 0
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.0.vm07.stdout:6/386: rmdir d8/d26/d2a/d40/d69 39
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.0.vm07.stdout:6/387: chown d8/d16/d4b 86692 1
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.0.vm07.stdout:6/388: chown d8/d50 548299 1
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.0.vm07.stdout:1/389: rename d3/d14/c22 to d3/d23/d52/c83 0
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.0.vm07.stdout:1/390: write d3/d23/d55/d56/d60/f7c [752622,9590] 0
2026-03-09T20:47:27.760 INFO:tasks.workunit.client.0.vm07.stdout:1/391: stat d3/d23/d55/d56/d60 0
2026-03-09T20:47:27.761 INFO:tasks.workunit.client.1.vm10.stdout:1/301: sync
2026-03-09T20:47:27.768 INFO:tasks.workunit.client.0.vm07.stdout:7/419: dread d3/da/db/d14/d1f/d2b/f49 [0,4194304] 0
2026-03-09T20:47:27.770 INFO:tasks.workunit.client.1.vm10.stdout:8/345: fdatasync d0/f14 0
2026-03-09T20:47:27.771 INFO:tasks.workunit.client.1.vm10.stdout:8/346: truncate d0/d22/d2c/f57 966212 0
2026-03-09T20:47:27.777 INFO:tasks.workunit.client.0.vm07.stdout:7/420: dread d3/da/db/d14/d1f/d2b/f2c [0,4194304] 0
2026-03-09T20:47:27.783 INFO:tasks.workunit.client.0.vm07.stdout:2/399: symlink d2/db/l77 0
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.1.vm10.stdout:0/270: creat d2/d4e/f5b x:0 0 0
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.1.vm10.stdout:6/330: getdents d3/d30/d6a 0
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.1.vm10.stdout:7/289: rename db/d28/l3e to db/d28/d4c/l5c 0
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.1.vm10.stdout:1/302: creat d2/da/d25/d3e/d42/f63 x:0 0 0
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.1.vm10.stdout:2/331: dwrite d5/d18/d2d/f60 [0,4194304] 0
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.0.vm07.stdout:2/400: chown d2/db/d1c/d4a/c71 48 1
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.0.vm07.stdout:8/336: symlink d1/dc/d16/d31/l6c 0
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.0.vm07.stdout:2/401: truncate d2/db/d1c/f2e 1588825 0
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.0.vm07.stdout:2/402: chown d2/db/d1c/l29 29020779 1
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.0.vm07.stdout:8/337: rmdir d1/d5d 39
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.0.vm07.stdout:8/338: chown d1/dc/fe 110 1
2026-03-09T20:47:27.794 INFO:tasks.workunit.client.1.vm10.stdout:4/218: getdents d1/d8 0
2026-03-09T20:47:27.801 INFO:tasks.workunit.client.1.vm10.stdout:3/269: dread dc/f11 [0,4194304] 0
2026-03-09T20:47:27.804 INFO:tasks.workunit.client.1.vm10.stdout:3/270: dread dc/d14/d20/d2e/d56/f23 [0,4194304] 0
2026-03-09T20:47:27.806 INFO:tasks.workunit.client.0.vm07.stdout:6/389: rename d8/f1c to d8/d16/d22/d24/f7b 0
2026-03-09T20:47:27.808 INFO:tasks.workunit.client.1.vm10.stdout:4/219: mknod d1/d8/d1c/c45 0
2026-03-09T20:47:27.808 INFO:tasks.workunit.client.0.vm07.stdout:7/421: link d3/da/db/d32/d3e/l51 d3/da/l93 0
2026-03-09T20:47:27.808 INFO:tasks.workunit.client.1.vm10.stdout:4/220: readlink d1/d8/d1c/l27 0
2026-03-09T20:47:27.809 INFO:tasks.workunit.client.1.vm10.stdout:4/221: chown d1/d8/d39/c3d 986793507 1
2026-03-09T20:47:27.809 INFO:tasks.workunit.client.0.vm07.stdout:6/390: creat d8/d16/d61/f7c x:0 0 0
2026-03-09T20:47:27.811 INFO:tasks.workunit.client.1.vm10.stdout:1/303: symlink d2/da/d25/d46/d51/d5d/l64 0
2026-03-09T20:47:27.816 INFO:tasks.workunit.client.1.vm10.stdout:2/332: creat d5/d5b/f6c x:0 0 0
2026-03-09T20:47:27.816 INFO:tasks.workunit.client.1.vm10.stdout:3/271: symlink dc/d14/d27/l59 0
2026-03-09T20:47:27.816 INFO:tasks.workunit.client.1.vm10.stdout:4/222: creat d1/d8/d1c/d2b/f46 x:0 0 0
2026-03-09T20:47:27.817 INFO:tasks.workunit.client.1.vm10.stdout:2/333: rename d5/d18/d2d/f31 to d5/d18/d1b/d22/f6d 0
2026-03-09T20:47:27.818 INFO:tasks.workunit.client.1.vm10.stdout:1/304: dwrite d2/f21 [0,4194304] 0
2026-03-09T20:47:27.819 INFO:tasks.workunit.client.1.vm10.stdout:4/223: mkdir d1/d47 0
2026-03-09T20:47:27.820 INFO:tasks.workunit.client.1.vm10.stdout:1/305: write d2/da/d25/d3e/d42/f63 [812064,27900] 0
2026-03-09T20:47:27.821 INFO:tasks.workunit.client.1.vm10.stdout:3/272: link dc/d14/d20/d21/d3b/f4f dc/f5a 0
2026-03-09T20:47:27.830 INFO:tasks.workunit.client.1.vm10.stdout:3/273: dwrite dc/d14/d27/f3f [0,4194304] 0
2026-03-09T20:47:27.833 INFO:tasks.workunit.client.1.vm10.stdout:3/274: chown dc/d14/d20/d21/d3b 332506 1
2026-03-09T20:47:27.848 INFO:tasks.workunit.client.0.vm07.stdout:4/293: dread d2/f5 [0,4194304] 0
2026-03-09T20:47:27.853 INFO:tasks.workunit.client.0.vm07.stdout:4/294: dwrite d2/f43 [0,4194304] 0
2026-03-09T20:47:27.857 INFO:tasks.workunit.client.1.vm10.stdout:5/278: truncate d2/f5 1373520 0
2026-03-09T20:47:27.859 INFO:tasks.workunit.client.0.vm07.stdout:3/374: write d1/d5/d9/d2f/d3d/d64/f55 [551378,115874] 0
2026-03-09T20:47:27.860 INFO:tasks.workunit.client.0.vm07.stdout:3/375: write d1/d5/d9/d11/d1f/f5e [397928,51246] 0
2026-03-09T20:47:27.864 INFO:tasks.workunit.client.1.vm10.stdout:3/275: dread dc/d14/d26/d29/f30 [0,4194304] 0
2026-03-09T20:47:27.867 INFO:tasks.workunit.client.0.vm07.stdout:7/422: getdents d3/d58/d77 0
2026-03-09T20:47:27.868 INFO:tasks.workunit.client.0.vm07.stdout:3/376: fsync d1/d5/d9/d2f/d34/f5c 0
2026-03-09T20:47:27.868 INFO:tasks.workunit.client.0.vm07.stdout:3/377: chown d1/f65 76896 1
2026-03-09T20:47:27.869 INFO:tasks.workunit.client.0.vm07.stdout:2/403: dread d2/f63 [0,4194304] 0
2026-03-09T20:47:27.874 INFO:tasks.workunit.client.1.vm10.stdout:2/334: sync
2026-03-09T20:47:27.874 INFO:tasks.workunit.client.1.vm10.stdout:4/224: sync
2026-03-09T20:47:27.874 INFO:tasks.workunit.client.1.vm10.stdout:1/306: sync
2026-03-09T20:47:27.874 INFO:tasks.workunit.client.1.vm10.stdout:4/225: stat d1/d8/d1c/l27 0
2026-03-09T20:47:27.875 INFO:tasks.workunit.client.1.vm10.stdout:1/307: write d2/f8 [1071084,125285] 0
2026-03-09T20:47:27.877 INFO:tasks.workunit.client.1.vm10.stdout:5/279: symlink d2/d27/d37/d46/d5d/d5f/l6f 0
2026-03-09T20:47:27.881 INFO:tasks.workunit.client.1.vm10.stdout:5/280: chown d2/d27/d37/d46/d5d/d5f/d69/l56 617369 1
2026-03-09T20:47:27.881 INFO:tasks.workunit.client.1.vm10.stdout:3/276: mknod dc/d14/c5b 0
2026-03-09T20:47:27.881 INFO:tasks.workunit.client.0.vm07.stdout:5/451: write d5/df/d13/f2a [242938,601] 0
2026-03-09T20:47:27.882 INFO:tasks.workunit.client.1.vm10.stdout:3/277: chown f6 13240 1
2026-03-09T20:47:27.882 INFO:tasks.workunit.client.0.vm07.stdout:3/378: symlink d1/d5/d9/d2f/d3d/l79 0
2026-03-09T20:47:27.882 INFO:tasks.workunit.client.1.vm10.stdout:4/226: dwrite d1/f9 [4194304,4194304] 0
2026-03-09T20:47:27.891 INFO:tasks.workunit.client.1.vm10.stdout:5/281: dwrite d2/d27/d37/f57 [0,4194304] 0
2026-03-09T20:47:27.897 INFO:tasks.workunit.client.1.vm10.stdout:9/357: dwrite d2/d3/fa [4194304,4194304] 0
2026-03-09T20:47:27.898 INFO:tasks.workunit.client.0.vm07.stdout:9/338: dwrite d4/d8/d19/f42 [0,4194304] 0
2026-03-09T20:47:27.900 INFO:tasks.workunit.client.0.vm07.stdout:9/339: chown d4/d11/d2a/l6c 6 1
2026-03-09T20:47:27.909 INFO:tasks.workunit.client.0.vm07.stdout:1/392: getdents d3/d14/d54 0
2026-03-09T20:47:27.913 INFO:tasks.workunit.client.0.vm07.stdout:1/393: dwrite d3/d14/d54/f4b [0,4194304] 0
2026-03-09T20:47:27.926 INFO:tasks.workunit.client.1.vm10.stdout:1/308: dread d2/da/d25/d3e/f44 [0,4194304] 0
2026-03-09T20:47:27.928 INFO:tasks.workunit.client.0.vm07.stdout:2/404: creat d2/d46/d72/f78 x:0 0 0
2026-03-09T20:47:27.935 INFO:tasks.workunit.client.0.vm07.stdout:4/295: getdents d2/df/d17 0
2026-03-09T20:47:27.940 INFO:tasks.workunit.client.1.vm10.stdout:3/278: creat dc/d14/d26/d29/f5c x:0 0 0
2026-03-09T20:47:27.941 INFO:tasks.workunit.client.1.vm10.stdout:4/227: creat d1/d8/d1b/d30/f48 x:0 0 0
2026-03-09T20:47:27.942 INFO:tasks.workunit.client.0.vm07.stdout:9/340: mkdir d4/d16/d29/d24/d7c 0
2026-03-09T20:47:27.944 INFO:tasks.workunit.client.1.vm10.stdout:5/282: mknod d2/d27/c70 0
2026-03-09T20:47:27.950 INFO:tasks.workunit.client.1.vm10.stdout:0/271: truncate d2/d9/da/de/d1a/d25/d3e/f59 3808855 0
2026-03-09T20:47:27.958 INFO:tasks.workunit.client.0.vm07.stdout:5/452: truncate d5/d33/f5a 2422860 0
2026-03-09T20:47:27.958 INFO:tasks.workunit.client.1.vm10.stdout:4/228: mknod d1/d8/d1c/d2b/c49 0
2026-03-09T20:47:27.985 INFO:tasks.workunit.client.0.vm07.stdout:2/405: mkdir d2/db/d28/d79 0
2026-03-09T20:47:27.986 INFO:tasks.workunit.client.1.vm10.stdout:4/229: mkdir d1/d8/d1c/d2b/d4a 0
2026-03-09T20:47:27.990 INFO:tasks.workunit.client.0.vm07.stdout:8/339: dread d1/dc/d14/d2f/d4d/f67 [0,4194304] 0
2026-03-09T20:47:27.995 INFO:tasks.workunit.client.0.vm07.stdout:3/379: rename d1/d5/d9/d2f/d3d/c53 to d1/d5/d9/d2f/d3d/d64/d43/c7a 0
2026-03-09T20:47:28.005 INFO:tasks.workunit.client.0.vm07.stdout:4/296: symlink d2/df/l4e 0
2026-03-09T20:47:28.010 INFO:tasks.workunit.client.1.vm10.stdout:0/272: rename d2/d9/da/de/d1a/f2b to d2/d9/d47/f5c 0
2026-03-09T20:47:28.014 INFO:tasks.workunit.client.0.vm07.stdout:9/341: unlink d4/d8/dc/l12 0
2026-03-09T20:47:28.016 INFO:tasks.workunit.client.1.vm10.stdout:3/279: link dc/d14/d27/l4c dc/d14/d26/d29/d2a/d55/l5d 0
2026-03-09T20:47:28.018 INFO:tasks.workunit.client.1.vm10.stdout:4/230: creat d1/d8/d39/f4b x:0 0 0
2026-03-09T20:47:28.022 INFO:tasks.workunit.client.1.vm10.stdout:3/280: dread dc/f11 [0,4194304] 0
2026-03-09T20:47:28.028 INFO:tasks.workunit.client.1.vm10.stdout:3/281: readlink dc/d14/d20/d21/l2c 0
2026-03-09T20:47:28.028 INFO:tasks.workunit.client.1.vm10.stdout:3/282: dread dc/d14/d26/d29/f30 [0,4194304] 0
2026-03-09T20:47:28.029 INFO:tasks.workunit.client.0.vm07.stdout:3/380: creat d1/d5/d9/d2f/d3d/d64/f7b x:0 0 0
2026-03-09T20:47:28.029 INFO:tasks.workunit.client.1.vm10.stdout:3/283: dwrite dc/ff [4194304,4194304] 0
2026-03-09T20:47:28.032 INFO:tasks.workunit.client.1.vm10.stdout:4/231: link d1/d8/d1c/f3e d1/d8/d1c/d38/f4c 0
2026-03-09T20:47:28.046 INFO:tasks.workunit.client.0.vm07.stdout:9/342: creat d4/d16/d29/f7d x:0 0 0
2026-03-09T20:47:28.047 INFO:tasks.workunit.client.0.vm07.stdout:0/437: write d1/d1f/d53/d72/f6b [167084,74755] 0
2026-03-09T20:47:28.047 INFO:tasks.workunit.client.0.vm07.stdout:9/343: chown d4/d8/dc/d4e/d54/l72 174 1
2026-03-09T20:47:28.048 INFO:tasks.workunit.client.0.vm07.stdout:0/438: fdatasync d1/d2/d33/f4e 0
2026-03-09T20:47:28.052 INFO:tasks.workunit.client.0.vm07.stdout:9/344: dwrite d4/d8/dc/f68 [0,4194304] 0
2026-03-09T20:47:28.053 INFO:tasks.workunit.client.1.vm10.stdout:0/273: dread d2/d9/da/f2f [0,4194304] 0
2026-03-09T20:47:28.054 INFO:tasks.workunit.client.1.vm10.stdout:0/274: dread - d2/d9/da/d35/d30/f56 zero size
2026-03-09T20:47:28.055 INFO:tasks.workunit.client.1.vm10.stdout:0/275: write d2/d9/da/d11/f42 [4325278,92604] 0
2026-03-09T20:47:28.070 INFO:tasks.workunit.client.1.vm10.stdout:4/232: unlink d1/d8/d1c/d2b/c40 0
2026-03-09T20:47:28.076 INFO:tasks.workunit.client.0.vm07.stdout:2/406: creat d2/d46/d6e/f7a x:0 0 0
2026-03-09T20:47:28.077 INFO:tasks.workunit.client.1.vm10.stdout:3/284: link dc/d14/d26/d29/f30 dc/d14/d26/d29/d2a/f5e 0
2026-03-09T20:47:28.077 INFO:tasks.workunit.client.0.vm07.stdout:4/297: mknod d2/df/c4f 0
2026-03-09T20:47:28.078 INFO:tasks.workunit.client.1.vm10.stdout:3/285: chown dc/f10 8 1
2026-03-09T20:47:28.083 INFO:tasks.workunit.client.1.vm10.stdout:4/233: truncate d1/d8/d1b/f24 430424 0
2026-03-09T20:47:28.086 INFO:tasks.workunit.client.1.vm10.stdout:4/234: truncate d1/d2/f2a 935892 0
2026-03-09T20:47:28.091 INFO:tasks.workunit.client.1.vm10.stdout:4/235: symlink d1/d8/d1c/l4d 0
2026-03-09T20:47:28.102 INFO:tasks.workunit.client.1.vm10.stdout:4/236: dread d1/fe [0,4194304] 0
2026-03-09T20:47:28.102 INFO:tasks.workunit.client.1.vm10.stdout:4/237: stat d1 0
2026-03-09T20:47:28.102 INFO:tasks.workunit.client.1.vm10.stdout:4/238: symlink d1/d8/d39/l4e 0
2026-03-09T20:47:28.102 INFO:tasks.workunit.client.1.vm10.stdout:4/239: read d1/d8/f16 [1510534,60831] 0
2026-03-09T20:47:28.103 INFO:tasks.workunit.client.1.vm10.stdout:4/240: creat d1/d47/f4f x:0 0 0
2026-03-09T20:47:28.109 INFO:tasks.workunit.client.1.vm10.stdout:3/286: sync
2026-03-09T20:47:28.109 INFO:tasks.workunit.client.1.vm10.stdout:4/241: write d1/d2/f7 [2149867,62269] 0
2026-03-09T20:47:28.109 INFO:tasks.workunit.client.1.vm10.stdout:3/287: chown dc/d14/d20/d21/f41 1 1
2026-03-09T20:47:28.110 INFO:tasks.workunit.client.1.vm10.stdout:3/288: stat dc/l1c 0
2026-03-09T20:47:28.122 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:28 vm10.local ceph-mon[57011]: Active manager daemon vm10.byqahe restarted
2026-03-09T20:47:28.122 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:28 vm10.local ceph-mon[57011]: Activating manager daemon vm10.byqahe
2026-03-09T20:47:28.122 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:28 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/crt"}]: dispatch
2026-03-09T20:47:28.122 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:28 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-09T20:47:28.122 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:28 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/key"}]: dispatch
2026-03-09T20:47:28.122 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:28 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T20:47:28.122 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:28 vm10.local ceph-mon[57011]: osdmap e44: 6 total, 6 up, 6 in
2026-03-09T20:47:28.122 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:28 vm10.local ceph-mon[57011]: mgrmap e21: vm10.byqahe(active, starting, since 0.0496053s)
2026-03-09T20:47:28.125 INFO:tasks.workunit.client.1.vm10.stdout:4/242: truncate d1/d8/d1c/f1d 293553 0
2026-03-09T20:47:28.137 INFO:tasks.workunit.client.0.vm07.stdout:5/453: getdents d5/df/d13/d55 0
2026-03-09T20:47:28.138 INFO:tasks.workunit.client.1.vm10.stdout:3/289: mknod dc/d14/d26/d29/d40/d48/c5f 0
2026-03-09T20:47:28.142 INFO:tasks.workunit.client.0.vm07.stdout:2/407: creat d2/f7b x:0 0 0
2026-03-09T20:47:28.146 INFO:tasks.workunit.client.1.vm10.stdout:4/243: mknod d1/d8/d1c/d41/c50 0
2026-03-09T20:47:28.151 INFO:tasks.workunit.client.0.vm07.stdout:8/340: creat d1/dc/d16/f6d x:0 0 0
2026-03-09T20:47:28.153 INFO:tasks.workunit.client.1.vm10.stdout:3/290: creat dc/d14/d26/d29/f60 x:0 0 0
2026-03-09T20:47:28.153 INFO:tasks.workunit.client.1.vm10.stdout:8/347: write d0/d22/d25/d2e/f33 [513440,23502] 0
2026-03-09T20:47:28.159 INFO:tasks.workunit.client.1.vm10.stdout:3/291: dwrite dc/d14/d26/d37/f3a [4194304,4194304] 0
2026-03-09T20:47:28.160 INFO:tasks.workunit.client.1.vm10.stdout:6/331: write d3/d30/f43 [166507,7382] 0
2026-03-09T20:47:28.161 INFO:tasks.workunit.client.1.vm10.stdout:6/332: dread - d3/f52 zero size
2026-03-09T20:47:28.164 INFO:tasks.workunit.client.1.vm10.stdout:8/348: creat d0/d22/d2f/d38/f6d x:0 0 0
2026-03-09T20:47:28.166 INFO:tasks.workunit.client.1.vm10.stdout:7/290: dwrite db/d21/d23/f34 [0,4194304] 0
2026-03-09T20:47:28.179 INFO:tasks.workunit.client.0.vm07.stdout:7/423: getdents d3/da 0
2026-03-09T20:47:28.180 INFO:tasks.workunit.client.1.vm10.stdout:6/333: dwrite d3/d30/d33/f35 [4194304,4194304] 0
2026-03-09T20:47:28.180 INFO:tasks.workunit.client.1.vm10.stdout:3/292: dwrite dc/f58 [0,4194304] 0
2026-03-09T20:47:28.180 INFO:tasks.workunit.client.1.vm10.stdout:6/334: dread - d3/da/d11/d31/d4c/f69 zero size
2026-03-09T20:47:28.180 INFO:tasks.workunit.client.1.vm10.stdout:3/293: write dc/d14/d26/f45 [669870,106540] 0
2026-03-09T20:47:28.199 INFO:tasks.workunit.client.1.vm10.stdout:7/291: fsync db/d1f/f37 0
2026-03-09T20:47:28.205 INFO:tasks.workunit.client.1.vm10.stdout:7/292: dwrite db/d28/d2b/d36/f3c [0,4194304] 0
2026-03-09T20:47:28.206 INFO:tasks.workunit.client.0.vm07.stdout:9/345: creat d4/d8/d19/f7e x:0 0 0
2026-03-09T20:47:28.213 INFO:tasks.workunit.client.1.vm10.stdout:7/293: dread db/d28/d2b/d36/d3b/f3d [0,4194304] 0
2026-03-09T20:47:28.214 INFO:tasks.workunit.client.1.vm10.stdout:3/294: mknod dc/d14/d26/d29/d40/d48/c61 0
2026-03-09T20:47:28.217 INFO:tasks.workunit.client.1.vm10.stdout:2/335: fdatasync d5/d18/d1b/d22/f6d 0
2026-03-09T20:47:28.220 INFO:tasks.workunit.client.1.vm10.stdout:9/358: write d2/f30 [7119807,64683] 0
2026-03-09T20:47:28.221 INFO:tasks.workunit.client.0.vm07.stdout:7/424: symlink d3/da/db/d14/d1f/d2b/l94 0
2026-03-09T20:47:28.233 INFO:tasks.workunit.client.0.vm07.stdout:0/439: getdents d1/d1f 0
2026-03-09T20:47:28.235 INFO:tasks.workunit.client.0.vm07.stdout:7/425: dread d3/da/db/d14/d1f/d2b/d52/f74 [0,4194304] 0
2026-03-09T20:47:28.236 INFO:tasks.workunit.client.1.vm10.stdout:1/309: write d2/f3c [635790,51371] 0
2026-03-09T20:47:28.239 INFO:tasks.workunit.client.0.vm07.stdout:1/394: dwrite d3/fc [0,4194304] 0
2026-03-09T20:47:28.242 INFO:tasks.workunit.client.1.vm10.stdout:1/310: dwrite d2/da/d25/d3e/f41 [0,4194304] 0
2026-03-09T20:47:28.254 INFO:tasks.workunit.client.0.vm07.stdout:9/346: unlink d4/d16/d29/d24/c46 0
2026-03-09T20:47:28.264 INFO:tasks.workunit.client.1.vm10.stdout:3/295: truncate dc/d14/d26/f31 950121 0
2026-03-09T20:47:28.264 INFO:tasks.workunit.client.1.vm10.stdout:3/296: dread - dc/f5a zero size
2026-03-09T20:47:28.265 INFO:tasks.workunit.client.1.vm10.stdout:4/244: truncate d1/d8/d1c/f3e 3749285 0
2026-03-09T20:47:28.267 INFO:tasks.workunit.client.1.vm10.stdout:2/336: creat d5/d18/d27/d28/d41/f6e x:0 0 0
2026-03-09T20:47:28.273 INFO:tasks.workunit.client.0.vm07.stdout:1/395: dread - d3/d66/f76 zero size
2026-03-09T20:47:28.274 INFO:tasks.workunit.client.0.vm07.stdout:1/396: write d3/d14/f17 [494587,115686] 0
2026-03-09T20:47:28.275 INFO:tasks.workunit.client.0.vm07.stdout:1/397: chown d3/d23/d55/d56/d60 1084782409 1
2026-03-09T20:47:28.275 INFO:tasks.workunit.client.0.vm07.stdout:9/347: rmdir d4/d11/d23 39
2026-03-09T20:47:28.275 INFO:tasks.workunit.client.0.vm07.stdout:1/398: stat d3/d23/d55/c61 0
2026-03-09T20:47:28.276 INFO:tasks.workunit.client.0.vm07.stdout:1/399: chown d3/d23/l4e 1612119 1
2026-03-09T20:47:28.276 INFO:tasks.workunit.client.0.vm07.stdout:9/348: stat d4/d16/d29/d24/d7c 0
2026-03-09T20:47:28.279 INFO:tasks.workunit.client.1.vm10.stdout:1/311: sync
2026-03-09T20:47:28.279 INFO:tasks.workunit.client.0.vm07.stdout:1/400: dwrite d3/f34 [0,4194304] 0
2026-03-09T20:47:28.291 INFO:tasks.workunit.client.1.vm10.stdout:5/283: dread d2/d1b/f41 [0,4194304] 0
2026-03-09T20:47:28.292 INFO:tasks.workunit.client.1.vm10.stdout:5/284: fsync d2/f3c 0
2026-03-09T20:47:28.293 INFO:tasks.workunit.client.0.vm07.stdout:7/426: dread d3/da/db/d32/d3e/d5c/f64 [0,4194304] 0
2026-03-09T20:47:28.297 INFO:tasks.workunit.client.0.vm07.stdout:7/427: dwrite d3/da/db/d14/f1a [0,4194304] 0
2026-03-09T20:47:28.303 INFO:tasks.workunit.client.0.vm07.stdout:1/401: dread d3/d14/d54/f13 [0,4194304] 0
2026-03-09T20:47:28.303 INFO:tasks.workunit.client.0.vm07.stdout:1/402: dread d3/d14/d54/f4b [0,4194304] 0
2026-03-09T20:47:28.304 INFO:tasks.workunit.client.0.vm07.stdout:1/403: fsync d3/d23/f6b 0
2026-03-09T20:47:28.305 INFO:tasks.workunit.client.0.vm07.stdout:1/404: write d3/d23/f37 [2323602,24935] 0
2026-03-09T20:47:28.327 INFO:tasks.workunit.client.0.vm07.stdout:9/349: symlink d4/d11/d31/l7f 0
2026-03-09T20:47:28.327 INFO:tasks.workunit.client.1.vm10.stdout:4/245: dread d1/d2/f12 [0,4194304] 0
2026-03-09T20:47:28.330 INFO:tasks.workunit.client.0.vm07.stdout:9/350: dwrite d4/d11/d31/f5b [0,4194304] 0
2026-03-09T20:47:28.342 INFO:tasks.workunit.client.0.vm07.stdout:7/428: mknod d3/d58/d82/c95 0
2026-03-09T20:47:28.343 INFO:tasks.workunit.client.1.vm10.stdout:2/337: truncate d5/d18/d1b/f23 1093874 0
2026-03-09T20:47:28.349 INFO:tasks.workunit.client.1.vm10.stdout:6/335: getdents d3/d12/d36 0
2026-03-09T20:47:28.357 INFO:tasks.workunit.client.0.vm07.stdout:0/440: getdents d1/d1f/d53 0
2026-03-09T20:47:28.357 INFO:tasks.workunit.client.0.vm07.stdout:0/441: dread d1/d2/f14 [0,4194304] 0
2026-03-09T20:47:28.358 INFO:tasks.workunit.client.0.vm07.stdout:0/442: chown d1/d2/d33/f7e 388077 1
2026-03-09T20:47:28.367 INFO:tasks.workunit.client.0.vm07.stdout:4/298: truncate d2/df/d17/f37 1258865 0
2026-03-09T20:47:28.369 INFO:tasks.workunit.client.0.vm07.stdout:5/454: dwrite d5/df/f34 [0,4194304] 0
2026-03-09T20:47:28.369 INFO:tasks.workunit.client.0.vm07.stdout:5/455: chown d5/d19/d73/d94 7 1
2026-03-09T20:47:28.370 INFO:tasks.workunit.client.0.vm07.stdout:5/456: readlink d5/d33/d39/l7b 0
2026-03-09T20:47:28.381 INFO:tasks.workunit.client.1.vm10.stdout:0/276: write d2/d9/da/de/d1a/d25/d3e/f59 [3168986,47000] 0
2026-03-09T20:47:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:28 vm07.local ceph-mon[49120]: Active manager daemon vm10.byqahe restarted
2026-03-09T20:47:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:28 vm07.local ceph-mon[49120]: Activating manager daemon vm10.byqahe
2026-03-09T20:47:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:28 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/crt"}]: dispatch
2026-03-09T20:47:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:28 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-09T20:47:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:28 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/key"}]: dispatch
2026-03-09T20:47:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:28 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T20:47:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:28 vm07.local ceph-mon[49120]: osdmap e44: 6 total, 6 up, 6 in
2026-03-09T20:47:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:28 vm07.local ceph-mon[49120]: mgrmap e21: vm10.byqahe(active, starting, since 0.0496053s)
2026-03-09T20:47:28.385 INFO:tasks.workunit.client.0.vm07.stdout:8/341: dwrite d1/f1d [0,4194304] 0
2026-03-09T20:47:28.399 INFO:tasks.workunit.client.0.vm07.stdout:6/391: dwrite d8/d16/d22/d24/f43 [0,4194304] 0
2026-03-09T20:47:28.410 INFO:tasks.workunit.client.0.vm07.stdout:7/429: mkdir d3/da/d83/d96 0
2026-03-09T20:47:28.414 INFO:tasks.workunit.client.0.vm07.stdout:3/381: write d1/d5/d9/f3c [826209,3145] 0
2026-03-09T20:47:28.416 INFO:tasks.workunit.client.0.vm07.stdout:0/443: symlink d1/d1f/d53/l8c 0
2026-03-09T20:47:28.420 INFO:tasks.workunit.client.1.vm10.stdout:9/359: write d2/d3/f2e [1565296,23403] 0
2026-03-09T20:47:28.421 INFO:tasks.workunit.client.1.vm10.stdout:4/246: dread d1/d2/f2d [0,4194304] 0
2026-03-09T20:47:28.424
INFO:tasks.workunit.client.1.vm10.stdout:3/297: dwrite dc/d14/d26/d29/d40/f49 [0,4194304] 0 2026-03-09T20:47:28.425 INFO:tasks.workunit.client.1.vm10.stdout:4/247: truncate d1/d8/d1c/d2b/f36 346344 0 2026-03-09T20:47:28.426 INFO:tasks.workunit.client.1.vm10.stdout:4/248: truncate d1/d8/f16 4397444 0 2026-03-09T20:47:28.438 INFO:tasks.workunit.client.0.vm07.stdout:5/457: truncate d5/df/d13/d6c/f77 141293 0 2026-03-09T20:47:28.442 INFO:tasks.workunit.client.1.vm10.stdout:6/336: creat d3/d12/d24/d39/f6c x:0 0 0 2026-03-09T20:47:28.442 INFO:tasks.workunit.client.1.vm10.stdout:6/337: readlink d3/da/d11/d31/l41 0 2026-03-09T20:47:28.444 INFO:tasks.workunit.client.0.vm07.stdout:1/405: dwrite d3/d14/d54/d3e/f75 [0,4194304] 0 2026-03-09T20:47:28.448 INFO:tasks.workunit.client.1.vm10.stdout:1/312: link d2/f4c d2/da/d25/f65 0 2026-03-09T20:47:28.457 INFO:tasks.workunit.client.0.vm07.stdout:9/351: dwrite d4/d11/f4f [0,4194304] 0 2026-03-09T20:47:28.459 INFO:tasks.workunit.client.0.vm07.stdout:6/392: mkdir d8/d26/d7d 0 2026-03-09T20:47:28.460 INFO:tasks.workunit.client.0.vm07.stdout:6/393: read d8/d16/d22/f75 [1834025,16044] 0 2026-03-09T20:47:28.472 INFO:tasks.workunit.client.1.vm10.stdout:7/294: getdents db/d28/d2b/d36/d3f 0 2026-03-09T20:47:28.472 INFO:tasks.workunit.client.1.vm10.stdout:7/295: stat db/d46 0 2026-03-09T20:47:28.473 INFO:tasks.workunit.client.0.vm07.stdout:3/382: mknod d1/d5/d9/d2f/d3d/d71/c7c 0 2026-03-09T20:47:28.475 INFO:tasks.workunit.client.0.vm07.stdout:3/383: dread d1/d5/d9/f3c [0,4194304] 0 2026-03-09T20:47:28.477 INFO:tasks.workunit.client.1.vm10.stdout:9/360: mkdir d2/d33/d86 0 2026-03-09T20:47:28.478 INFO:tasks.workunit.client.0.vm07.stdout:4/299: symlink d2/l50 0 2026-03-09T20:47:28.487 INFO:tasks.workunit.client.0.vm07.stdout:5/458: mknod d5/df/d13/d6c/ca7 0 2026-03-09T20:47:28.487 INFO:tasks.workunit.client.0.vm07.stdout:5/459: stat d5/d33/d3b/la0 0 2026-03-09T20:47:28.488 INFO:tasks.workunit.client.0.vm07.stdout:5/460: readlink d5/df/l29 0 
2026-03-09T20:47:28.491 INFO:tasks.workunit.client.0.vm07.stdout:5/461: dwrite d5/df/d13/d4f/f9b [0,4194304] 0 2026-03-09T20:47:28.492 INFO:tasks.workunit.client.1.vm10.stdout:4/249: creat d1/d8/d1c/d41/f51 x:0 0 0 2026-03-09T20:47:28.492 INFO:tasks.workunit.client.0.vm07.stdout:5/462: stat d5/d19/c71 0 2026-03-09T20:47:28.496 INFO:tasks.workunit.client.1.vm10.stdout:8/349: rename d0/c1b to d0/d22/d25/c6e 0 2026-03-09T20:47:28.501 INFO:tasks.workunit.client.1.vm10.stdout:6/338: mkdir d3/d12/d36/d6d 0 2026-03-09T20:47:28.501 INFO:tasks.workunit.client.0.vm07.stdout:6/394: dread - d8/d26/d2a/f41 zero size 2026-03-09T20:47:28.503 INFO:tasks.workunit.client.1.vm10.stdout:6/339: dread d3/d30/d33/f35 [4194304,4194304] 0 2026-03-09T20:47:28.504 INFO:tasks.workunit.client.1.vm10.stdout:1/313: symlink d2/da/d25/d3e/d55/l66 0 2026-03-09T20:47:28.504 INFO:tasks.workunit.client.1.vm10.stdout:5/285: creat d2/f71 x:0 0 0 2026-03-09T20:47:28.524 INFO:tasks.workunit.client.1.vm10.stdout:4/250: truncate d1/fe 599526 0 2026-03-09T20:47:28.537 INFO:tasks.workunit.client.0.vm07.stdout:8/342: creat d1/dc/d16/f6e x:0 0 0 2026-03-09T20:47:28.537 INFO:tasks.workunit.client.0.vm07.stdout:5/463: truncate d5/d33/d3b/f63 828502 0 2026-03-09T20:47:28.537 INFO:tasks.workunit.client.0.vm07.stdout:5/464: chown d5/df/d13/d4f/c53 43044 1 2026-03-09T20:47:28.538 INFO:tasks.workunit.client.1.vm10.stdout:4/251: stat d1/d2/f2d 0 2026-03-09T20:47:28.538 INFO:tasks.workunit.client.1.vm10.stdout:8/350: dwrite d0/d22/d25/d40/f5e [0,4194304] 0 2026-03-09T20:47:28.538 INFO:tasks.workunit.client.1.vm10.stdout:4/252: dwrite d1/d2/f7 [0,4194304] 0 2026-03-09T20:47:28.544 INFO:tasks.workunit.client.1.vm10.stdout:5/286: mknod d2/d27/d37/d46/c72 0 2026-03-09T20:47:28.547 INFO:tasks.workunit.client.0.vm07.stdout:2/408: rename d2/db/d1c/d4a/d6c/f73 to d2/db/f7c 0 2026-03-09T20:47:28.548 INFO:tasks.workunit.client.0.vm07.stdout:3/384: mknod d1/d5/d9/d2f/d3d/d71/d76/c7d 0 2026-03-09T20:47:28.549 
INFO:tasks.workunit.client.0.vm07.stdout:3/385: dread - d1/d5/d9/d2f/d3d/d64/f7b zero size 2026-03-09T20:47:28.550 INFO:tasks.workunit.client.0.vm07.stdout:3/386: chown d1/l6 88275247 1 2026-03-09T20:47:28.563 INFO:tasks.workunit.client.0.vm07.stdout:1/406: link d3/d14/d54/c2e d3/d14/d54/d3e/c84 0 2026-03-09T20:47:28.563 INFO:tasks.workunit.client.0.vm07.stdout:1/407: chown d3/d14/d54/l5e 449176549 1 2026-03-09T20:47:28.564 INFO:tasks.workunit.client.0.vm07.stdout:1/408: truncate d3/d14/d54/d3e/f72 773338 0 2026-03-09T20:47:28.567 INFO:tasks.workunit.client.1.vm10.stdout:2/338: rename d5/d18/c56 to d5/d18/d27/d38/d61/c6f 0 2026-03-09T20:47:28.573 INFO:tasks.workunit.client.1.vm10.stdout:0/277: dwrite d2/d9/da/d11/f1f [0,4194304] 0 2026-03-09T20:47:28.576 INFO:tasks.workunit.client.0.vm07.stdout:7/430: dwrite d3/f67 [0,4194304] 0 2026-03-09T20:47:28.578 INFO:tasks.workunit.client.0.vm07.stdout:7/431: dread - d3/da/db/d32/d3e/f65 zero size 2026-03-09T20:47:28.584 INFO:tasks.workunit.client.0.vm07.stdout:5/465: creat d5/d33/d75/fa8 x:0 0 0 2026-03-09T20:47:28.587 INFO:tasks.workunit.client.0.vm07.stdout:5/466: dwrite d5/df/d13/f3d [0,4194304] 0 2026-03-09T20:47:28.593 INFO:tasks.workunit.client.1.vm10.stdout:8/351: dwrite d0/d22/d25/d2e/f33 [0,4194304] 0 2026-03-09T20:47:28.601 INFO:tasks.workunit.client.0.vm07.stdout:6/395: rmdir d8/d16/d22/d24/d2b 39 2026-03-09T20:47:28.611 INFO:tasks.workunit.client.1.vm10.stdout:5/287: symlink d2/d39/l73 0 2026-03-09T20:47:28.611 INFO:tasks.workunit.client.0.vm07.stdout:3/387: symlink d1/d5/d9/d2f/d3d/l7e 0 2026-03-09T20:47:28.611 INFO:tasks.workunit.client.0.vm07.stdout:1/409: write d3/f10 [2721283,35407] 0 2026-03-09T20:47:28.611 INFO:tasks.workunit.client.0.vm07.stdout:1/410: dread - d3/d23/f6b zero size 2026-03-09T20:47:28.611 INFO:tasks.workunit.client.0.vm07.stdout:4/300: sync 2026-03-09T20:47:28.611 INFO:tasks.workunit.client.0.vm07.stdout:1/411: chown d3/d14/d54/l71 2152574 1 2026-03-09T20:47:28.620 
INFO:tasks.workunit.client.1.vm10.stdout:6/340: rename d3/da/d11/f65 to d3/d12/d36/f6e 0 2026-03-09T20:47:28.620 INFO:tasks.workunit.client.0.vm07.stdout:3/388: dread d1/d5/d9/d2f/d3d/d64/f55 [0,4194304] 0 2026-03-09T20:47:28.626 INFO:tasks.workunit.client.0.vm07.stdout:8/343: rename d1/dc/d14 to d1/d5d/d6f 0 2026-03-09T20:47:28.639 INFO:tasks.workunit.client.1.vm10.stdout:0/278: mkdir d2/db/d5d 0 2026-03-09T20:47:28.639 INFO:tasks.workunit.client.0.vm07.stdout:8/344: truncate d1/dc/d6a/f62 333463 0 2026-03-09T20:47:28.639 INFO:tasks.workunit.client.0.vm07.stdout:0/444: write d1/d2/d33/d35/f46 [1084469,52022] 0 2026-03-09T20:47:28.639 INFO:tasks.workunit.client.0.vm07.stdout:0/445: chown d1/d2/dc/d17 30 1 2026-03-09T20:47:28.639 INFO:tasks.workunit.client.0.vm07.stdout:0/446: chown d1/ca 6515 1 2026-03-09T20:47:28.650 INFO:tasks.workunit.client.0.vm07.stdout:3/389: truncate d1/d5/d9/d11/f4d 1260249 0 2026-03-09T20:47:28.651 INFO:tasks.workunit.client.0.vm07.stdout:3/390: write d1/d5/d9/d2f/d3d/d64/f7b [19345,73099] 0 2026-03-09T20:47:28.657 INFO:tasks.workunit.client.1.vm10.stdout:9/361: rename d2/d28/d47/l48 to d2/d3/de/d35/d44/l87 0 2026-03-09T20:47:28.661 INFO:tasks.workunit.client.0.vm07.stdout:8/345: rmdir d1/dc/d16/d26 39 2026-03-09T20:47:28.665 INFO:tasks.workunit.client.1.vm10.stdout:6/341: unlink d3/d12/f2b 0 2026-03-09T20:47:28.666 INFO:tasks.workunit.client.0.vm07.stdout:0/447: fsync d1/d2/d33/d35/f59 0 2026-03-09T20:47:28.672 INFO:tasks.workunit.client.1.vm10.stdout:0/279: mknod d2/d9/d4b/c5e 0 2026-03-09T20:47:28.673 INFO:tasks.workunit.client.1.vm10.stdout:0/280: dread - d2/d9/da/d35/d30/f56 zero size 2026-03-09T20:47:28.674 INFO:tasks.workunit.client.0.vm07.stdout:5/467: fsync d5/d33/f5a 0 2026-03-09T20:47:28.674 INFO:tasks.workunit.client.1.vm10.stdout:0/281: truncate d2/d4a/f5a 97009 0 2026-03-09T20:47:28.674 INFO:tasks.workunit.client.0.vm07.stdout:5/468: read d5/df/f34 [2829368,43416] 0 2026-03-09T20:47:28.675 
INFO:tasks.workunit.client.0.vm07.stdout:5/469: readlink d5/d19/l66 0 2026-03-09T20:47:28.675 INFO:tasks.workunit.client.0.vm07.stdout:5/470: fsync d5/fa6 0 2026-03-09T20:47:28.686 INFO:tasks.workunit.client.1.vm10.stdout:3/298: write dc/d14/d26/f34 [642937,56851] 0 2026-03-09T20:47:28.687 INFO:tasks.workunit.client.1.vm10.stdout:3/299: stat dc/d14/d26/d29/d40/d48/c61 0 2026-03-09T20:47:28.688 INFO:tasks.workunit.client.1.vm10.stdout:3/300: truncate dc/d14/d26/d29/f60 631660 0 2026-03-09T20:47:28.689 INFO:tasks.workunit.client.1.vm10.stdout:3/301: truncate dc/d14/d26/d29/f60 1363564 0 2026-03-09T20:47:28.690 INFO:tasks.workunit.client.1.vm10.stdout:3/302: chown dc/d14/d20/d2e/d56/f19 2233748 1 2026-03-09T20:47:28.701 INFO:tasks.workunit.client.0.vm07.stdout:3/391: dread d1/d5/d9/d11/f26 [0,4194304] 0 2026-03-09T20:47:28.701 INFO:tasks.workunit.client.1.vm10.stdout:7/296: write db/d28/f41 [1257678,15725] 0 2026-03-09T20:47:28.702 INFO:tasks.workunit.client.0.vm07.stdout:3/392: stat d1/d5/d9/d11/f26 0 2026-03-09T20:47:28.705 INFO:tasks.workunit.client.1.vm10.stdout:2/339: rmdir d5/d18 39 2026-03-09T20:47:28.707 INFO:tasks.workunit.client.1.vm10.stdout:2/340: dread d5/d2b/d32/f5c [0,4194304] 0 2026-03-09T20:47:28.707 INFO:tasks.workunit.client.0.vm07.stdout:9/352: truncate d4/d8/fd 4453633 0 2026-03-09T20:47:28.709 INFO:tasks.workunit.client.0.vm07.stdout:1/412: mknod d3/d14/d54/d6e/c85 0 2026-03-09T20:47:28.710 INFO:tasks.workunit.client.0.vm07.stdout:1/413: readlink d3/d14/d54/l26 0 2026-03-09T20:47:28.711 INFO:tasks.workunit.client.1.vm10.stdout:1/314: dwrite d2/da/f50 [0,4194304] 0 2026-03-09T20:47:28.711 INFO:tasks.workunit.client.1.vm10.stdout:1/315: readlink d2/l38 0 2026-03-09T20:47:28.717 INFO:tasks.workunit.client.0.vm07.stdout:7/432: write d3/da/db/d14/d43/f68 [4369802,35208] 0 2026-03-09T20:47:28.723 INFO:tasks.workunit.client.0.vm07.stdout:3/393: dread d1/d5/d9/f1b [0,4194304] 0 2026-03-09T20:47:28.727 INFO:tasks.workunit.client.0.vm07.stdout:0/448: 
rename d1/d2/dc/f12 to d1/d82/f8d 0 2026-03-09T20:47:28.728 INFO:tasks.workunit.client.0.vm07.stdout:7/433: dread d3/da/db/d14/f3a [0,4194304] 0 2026-03-09T20:47:28.730 INFO:tasks.workunit.client.1.vm10.stdout:4/253: write d1/d8/d1c/f3e [456199,127015] 0 2026-03-09T20:47:28.731 INFO:tasks.workunit.client.1.vm10.stdout:8/352: write d0/f13 [1037486,22859] 0 2026-03-09T20:47:28.731 INFO:tasks.workunit.client.1.vm10.stdout:8/353: chown d0/d22/d2f/d3d/l48 5 1 2026-03-09T20:47:28.732 INFO:tasks.workunit.client.0.vm07.stdout:4/301: write d2/f9 [717469,99334] 0 2026-03-09T20:47:28.732 INFO:tasks.workunit.client.1.vm10.stdout:8/354: write d0/d22/d25/d2e/d41/f67 [203125,76995] 0 2026-03-09T20:47:28.732 INFO:tasks.workunit.client.0.vm07.stdout:4/302: fdatasync d2/f43 0 2026-03-09T20:47:28.738 INFO:tasks.workunit.client.0.vm07.stdout:5/471: readlink d5/l2e 0 2026-03-09T20:47:28.751 INFO:tasks.workunit.client.0.vm07.stdout:6/396: dread d8/d16/d22/d24/d2b/f3c [0,4194304] 0 2026-03-09T20:47:28.751 INFO:tasks.workunit.client.0.vm07.stdout:2/409: getdents d2/db/d49 0 2026-03-09T20:47:28.769 INFO:tasks.workunit.client.0.vm07.stdout:0/449: creat d1/d1f/d30/f8e x:0 0 0 2026-03-09T20:47:28.774 INFO:tasks.workunit.client.0.vm07.stdout:3/394: dwrite d1/d5/d9/d2f/d3d/d64/f55 [0,4194304] 0 2026-03-09T20:47:28.786 INFO:tasks.workunit.client.1.vm10.stdout:5/288: mknod d2/d27/d37/d46/d5d/d5f/d63/c74 0 2026-03-09T20:47:28.793 INFO:tasks.workunit.client.1.vm10.stdout:6/342: mknod d3/da/d11/d31/d4c/d60/c6f 0 2026-03-09T20:47:28.799 INFO:tasks.workunit.client.1.vm10.stdout:7/297: fsync db/d21/d23/f22 0 2026-03-09T20:47:28.799 INFO:tasks.workunit.client.1.vm10.stdout:7/298: write db/d28/f4f [480098,86835] 0 2026-03-09T20:47:28.803 INFO:tasks.workunit.client.0.vm07.stdout:2/410: mkdir d2/db/d49/d7d 0 2026-03-09T20:47:28.809 INFO:tasks.workunit.client.1.vm10.stdout:1/316: creat d2/da/d25/d46/d51/d5d/f67 x:0 0 0 2026-03-09T20:47:28.811 INFO:tasks.workunit.client.0.vm07.stdout:9/353: fsync 
d4/d16/d29/d24/f6a 0 2026-03-09T20:47:28.812 INFO:tasks.workunit.client.0.vm07.stdout:9/354: write d4/d16/d29/d24/f77 [2174376,66181] 0 2026-03-09T20:47:28.819 INFO:tasks.workunit.client.0.vm07.stdout:1/414: mkdir d3/d66/d86 0 2026-03-09T20:47:28.820 INFO:tasks.workunit.client.1.vm10.stdout:4/254: write d1/f33 [465839,3871] 0 2026-03-09T20:47:28.821 INFO:tasks.workunit.client.1.vm10.stdout:8/355: mknod d0/d22/d2f/d38/c6f 0 2026-03-09T20:47:28.823 INFO:tasks.workunit.client.0.vm07.stdout:8/346: creat d1/dc/d16/f70 x:0 0 0 2026-03-09T20:47:28.828 INFO:tasks.workunit.client.1.vm10.stdout:5/289: mkdir d2/d27/d75 0 2026-03-09T20:47:28.830 INFO:tasks.workunit.client.1.vm10.stdout:9/362: mkdir d2/d3/d6d/d88 0 2026-03-09T20:47:28.834 INFO:tasks.workunit.client.1.vm10.stdout:6/343: stat d3/d12/d24/l2d 0 2026-03-09T20:47:28.835 INFO:tasks.workunit.client.1.vm10.stdout:6/344: write d3/da/d11/d31/d4c/d60/f63 [150985,44762] 0 2026-03-09T20:47:28.836 INFO:tasks.workunit.client.0.vm07.stdout:0/450: dread d1/d2/dc/f56 [4194304,4194304] 0 2026-03-09T20:47:28.839 INFO:tasks.workunit.client.0.vm07.stdout:0/451: dwrite d1/d2/d33/d35/f59 [0,4194304] 0 2026-03-09T20:47:28.848 INFO:tasks.workunit.client.1.vm10.stdout:0/282: creat d2/db/d5d/f5f x:0 0 0 2026-03-09T20:47:28.849 INFO:tasks.workunit.client.0.vm07.stdout:3/395: creat d1/d5/d9/d11/d1f/f7f x:0 0 0 2026-03-09T20:47:28.849 INFO:tasks.workunit.client.0.vm07.stdout:3/396: chown d1/d5 6 1 2026-03-09T20:47:28.851 INFO:tasks.workunit.client.1.vm10.stdout:0/283: dwrite d2/d9/f20 [0,4194304] 0 2026-03-09T20:47:28.860 INFO:tasks.workunit.client.0.vm07.stdout:8/347: sync 2026-03-09T20:47:28.862 INFO:tasks.workunit.client.1.vm10.stdout:3/303: unlink dc/d14/d26/f31 0 2026-03-09T20:47:28.875 INFO:tasks.workunit.client.1.vm10.stdout:0/284: dread d2/d9/da/d11/f42 [0,4194304] 0 2026-03-09T20:47:28.875 INFO:tasks.workunit.client.1.vm10.stdout:0/285: fdatasync d2/d9/da/d35/f3a 0 2026-03-09T20:47:28.875 
INFO:tasks.workunit.client.0.vm07.stdout:5/472: mknod d5/df/d13/ca9 0 2026-03-09T20:47:28.886 INFO:tasks.workunit.client.1.vm10.stdout:1/317: symlink d2/da/d25/d3e/d55/l68 0 2026-03-09T20:47:28.899 INFO:tasks.workunit.client.0.vm07.stdout:1/415: mknod d3/d23/d67/c87 0 2026-03-09T20:47:28.903 INFO:tasks.workunit.client.1.vm10.stdout:5/290: rename d2/f11 to d2/d27/d37/d46/d5d/d5f/d69/f76 0 2026-03-09T20:47:28.905 INFO:tasks.workunit.client.0.vm07.stdout:0/452: readlink d1/d1f/d53/d72/l6a 0 2026-03-09T20:47:28.907 INFO:tasks.workunit.client.1.vm10.stdout:9/363: symlink d2/d33/l89 0 2026-03-09T20:47:28.911 INFO:tasks.workunit.client.0.vm07.stdout:7/434: dwrite d3/da/f45 [0,4194304] 0 2026-03-09T20:47:28.913 INFO:tasks.workunit.client.0.vm07.stdout:7/435: write d3/f88 [384488,14047] 0 2026-03-09T20:47:28.916 INFO:tasks.workunit.client.0.vm07.stdout:9/355: dwrite d4/d11/d2a/f3b [0,4194304] 0 2026-03-09T20:47:28.923 INFO:tasks.workunit.client.1.vm10.stdout:7/299: dwrite db/d28/d2b/d36/d3b/f3d [4194304,4194304] 0 2026-03-09T20:47:28.932 INFO:tasks.workunit.client.1.vm10.stdout:6/345: mknod d3/d12/d36/c70 0 2026-03-09T20:47:28.945 INFO:tasks.workunit.client.1.vm10.stdout:3/304: mknod dc/d14/d26/d29/d2a/c62 0 2026-03-09T20:47:28.948 INFO:tasks.workunit.client.1.vm10.stdout:3/305: stat dc/d14/d26/d29/d40/f49 0 2026-03-09T20:47:28.948 INFO:tasks.workunit.client.1.vm10.stdout:3/306: write dc/d14/d26/d37/f3e [433545,114181] 0 2026-03-09T20:47:28.956 INFO:tasks.workunit.client.1.vm10.stdout:6/346: read d3/da/f15 [2365881,55305] 0 2026-03-09T20:47:28.980 INFO:tasks.workunit.client.0.vm07.stdout:2/411: fdatasync d2/f3e 0 2026-03-09T20:47:28.981 INFO:tasks.workunit.client.0.vm07.stdout:2/412: write d2/db/d28/d5c/f66 [879356,51935] 0 2026-03-09T20:47:28.995 INFO:tasks.workunit.client.0.vm07.stdout:8/348: write d1/dc/d16/f4b [71007,45342] 0 2026-03-09T20:47:29.015 INFO:tasks.workunit.client.1.vm10.stdout:0/286: mkdir d2/d9/d47/d60 0 2026-03-09T20:47:29.016 
INFO:tasks.workunit.client.1.vm10.stdout:0/287: dread - d2/d9/da/d35/d30/f56 zero size 2026-03-09T20:47:29.035 INFO:tasks.workunit.client.1.vm10.stdout:1/318: creat d2/da/d25/d3e/f69 x:0 0 0 2026-03-09T20:47:29.040 INFO:tasks.workunit.client.0.vm07.stdout:0/453: dread d1/d1f/d20/f21 [0,4194304] 0 2026-03-09T20:47:29.063 INFO:tasks.workunit.client.1.vm10.stdout:2/341: truncate d5/d18/d2d/f60 2618925 0 2026-03-09T20:47:29.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.070+0000 7f78ed047640 1 -- 192.168.123.107:0/2810574295 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78e80719a0 msgr2=0x7f78e8071da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.070+0000 7f78ed047640 1 --2- 192.168.123.107:0/2810574295 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78e80719a0 0x7f78e8071da0 secure :-1 s=READY pgs=312 cs=0 l=1 rev1=1 crypto rx=0x7f78dc0099b0 tx=0x7f78dc02f240 comp rx=0 tx=0).stop 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.071+0000 7f78ed047640 1 -- 192.168.123.107:0/2810574295 shutdown_connections 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.071+0000 7f78ed047640 1 --2- 192.168.123.107:0/2810574295 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78e8072370 0x7f78e810c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.071+0000 7f78ed047640 1 --2- 192.168.123.107:0/2810574295 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78e80719a0 0x7f78e8071da0 secure :-1 s=CLOSED pgs=312 cs=0 l=1 rev1=1 crypto rx=0x7f78dc0099b0 tx=0x7f78dc02f240 comp rx=0 tx=0).stop 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.071+0000 7f78ed047640 1 -- 
192.168.123.107:0/2810574295 >> 192.168.123.107:0/2810574295 conn(0x7f78e806d4f0 msgr2=0x7f78e806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.072+0000 7f78ed047640 1 -- 192.168.123.107:0/2810574295 shutdown_connections 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.072+0000 7f78ed047640 1 -- 192.168.123.107:0/2810574295 wait complete. 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.072+0000 7f78ed047640 1 Processor -- start 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.072+0000 7f78ed047640 1 -- start start 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.072+0000 7f78ed047640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78e8072370 0x7f78e8115c20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.072+0000 7f78ed047640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78e8116160 0x7f78e81165e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.072+0000 7f78ed047640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f78e81175d0 con 0x7f78e8116160 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.072+0000 7f78ed047640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f78e81b5700 con 0x7f78e8072370 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.074+0000 7f78e7fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78e8072370 0x7f78e8115c20 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.074+0000 7f78e7fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78e8072370 0x7f78e8115c20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:45504/0 (socket says 192.168.123.107:45504) 2026-03-09T20:47:29.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.074+0000 7f78e7fff640 1 -- 192.168.123.107:0/3086612851 learned_addr learned my addr 192.168.123.107:0/3086612851 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:47:29.076 INFO:tasks.workunit.client.0.vm07.stdout:7/436: creat d3/da/db/d14/d1f/d2b/d52/f97 x:0 0 0 2026-03-09T20:47:29.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.076+0000 7f78e77fe640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78e8116160 0x7f78e81165e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:29.078 INFO:tasks.workunit.client.0.vm07.stdout:7/437: fsync d3/da/db/d14/d43/f68 0 2026-03-09T20:47:29.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.077+0000 7f78e7fff640 1 -- 192.168.123.107:0/3086612851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78e8116160 msgr2=0x7f78e81165e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:29.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.077+0000 7f78e7fff640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78e8116160 0x7f78e81165e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:47:29.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.077+0000 7f78e7fff640 1 -- 192.168.123.107:0/3086612851 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f78dc009660 con 0x7f78e8072370 2026-03-09T20:47:29.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.077+0000 7f78e7fff640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78e8072370 0x7f78e8115c20 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f78dc02f750 tx=0x7f78dc004520 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:29.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.079+0000 7f78e57fa640 1 -- 192.168.123.107:0/3086612851 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f78dc03d070 con 0x7f78e8072370 2026-03-09T20:47:29.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.079+0000 7f78e57fa640 1 -- 192.168.123.107:0/3086612851 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f78dc031d50 con 0x7f78e8072370 2026-03-09T20:47:29.081 INFO:tasks.workunit.client.0.vm07.stdout:3/397: mkdir d1/d5/d9/d11/d6d/d80 0 2026-03-09T20:47:29.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.080+0000 7f78e57fa640 1 -- 192.168.123.107:0/3086612851 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f78dc038470 con 0x7f78e8072370 2026-03-09T20:47:29.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.080+0000 7f78ed047640 1 -- 192.168.123.107:0/3086612851 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f78e81b58a0 con 0x7f78e8072370 2026-03-09T20:47:29.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.080+0000 7f78ed047640 1 -- 
192.168.123.107:0/3086612851 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f78e81b5d60 con 0x7f78e8072370 2026-03-09T20:47:29.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.081+0000 7f78e57fa640 1 -- 192.168.123.107:0/3086612851 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 21) v1 ==== 49912+0+0 (secure 0 0 0) 0x7f78dc031080 con 0x7f78e8072370 2026-03-09T20:47:29.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.081+0000 7f78ed047640 1 -- 192.168.123.107:0/3086612851 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f78ac005350 con 0x7f78e8072370 2026-03-09T20:47:29.085 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.082+0000 7f78e57fa640 1 -- 192.168.123.107:0/3086612851 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 5706+0+0 (secure 0 0 0) 0x7f78dc077290 con 0x7f78e8072370 2026-03-09T20:47:29.087 INFO:tasks.workunit.client.1.vm10.stdout:5/291: unlink d2/f3c 0 2026-03-09T20:47:29.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.085+0000 7f78e57fa640 1 -- 192.168.123.107:0/3086612851 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+186382 (secure 0 0 0) 0x7f78dc049c30 con 0x7f78e8072370 2026-03-09T20:47:29.090 INFO:tasks.workunit.client.0.vm07.stdout:4/303: getdents d2/d1f/d2d/d3f/d41 0 2026-03-09T20:47:29.092 INFO:tasks.workunit.client.0.vm07.stdout:3/398: write d1/d5/d9/d2f/d3d/d64/d59/f69 [3078278,2723] 0 2026-03-09T20:47:29.093 INFO:tasks.workunit.client.0.vm07.stdout:4/304: write d2/df/f23 [5555817,35030] 0 2026-03-09T20:47:29.097 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.094+0000 7f78e57fa640 1 -- 192.168.123.107:0/3086612851 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mgrmap(e 22) v1 ==== 50039+0+0 (secure 0 0 0) 0x7f78dc02faa0 con 
0x7f78e8072370 2026-03-09T20:47:29.097 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.094+0000 7f78e57fa640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f78c4042560 0x7f78c4044950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:29.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.097+0000 7f78e77fe640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f78c4042560 0x7f78c4044950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:29.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.100+0000 7f78e77fe640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f78c4042560 0x7f78c4044950 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f78d0005fd0 tx=0x7f78d0005950 comp rx=0 tx=0).ready entity=mgr.24439 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:47:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:47:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch 2026-03-09T20:47:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local 
ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:47:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:47:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:47:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mgr metadata", "who": "vm10.byqahe", "id": "vm10.byqahe"}]: dispatch 2026-03-09T20:47:29.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 
192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: Manager daemon vm10.byqahe is now available 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: Migrating agent root cert to cert store 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: Migrating agent root key to cert store 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: Checking for cert/key for grafana.vm07 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:47:29 vm07.local ceph-mon[49120]: Migrating grafana.vm07 cert to cert store 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: Migrating grafana.vm07 key to cert store 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm10.byqahe/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm10.byqahe/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/vm10.byqahe/trash_purge_schedule"}]: dispatch 2026-03-09T20:47:29.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:29 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm10.byqahe/trash_purge_schedule"}]: dispatch 2026-03-09T20:47:29.172 INFO:tasks.workunit.client.0.vm07.stdout:6/397: rename d8/f3b to d8/f7e 0 2026-03-09T20:47:29.174 INFO:tasks.workunit.client.0.vm07.stdout:6/398: dread d8/d16/d22/d24/d2b/f6a [0,4194304] 0 2026-03-09T20:47:29.178 INFO:tasks.workunit.client.1.vm10.stdout:7/300: dread db/d28/d2b/d36/d3b/f42 [0,4194304] 0 2026-03-09T20:47:29.209 INFO:tasks.workunit.client.0.vm07.stdout:8/349: dwrite d1/d5d/d6f/d2f/d4d/f67 [0,4194304] 0 2026-03-09T20:47:29.216 INFO:tasks.workunit.client.0.vm07.stdout:2/413: dread d2/f17 [4194304,4194304] 0 2026-03-09T20:47:29.217 INFO:tasks.workunit.client.1.vm10.stdout:0/288: truncate d2/d9/da/d11/f42 3928147 0 2026-03-09T20:47:29.222 INFO:tasks.workunit.client.1.vm10.stdout:0/289: dwrite d2/d9/da/d35/d30/f56 [0,4194304] 0 2026-03-09T20:47:29.245 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.243+0000 7f78ed047640 1 -- 192.168.123.107:0/3086612851 --> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f78ac002bf0 con 0x7f78c4042560 2026-03-09T20:47:29.246 INFO:tasks.workunit.client.0.vm07.stdout:1/416: mkdir d3/d14/d88 0 2026-03-09T20:47:29.251 INFO:tasks.workunit.client.1.vm10.stdout:1/319: symlink d2/da/d25/d46/d51/d5d/l6a 0 2026-03-09T20:47:29.258 INFO:tasks.workunit.client.0.vm07.stdout:1/417: dwrite d3/f6f [0,4194304] 0 2026-03-09T20:47:29.260 INFO:tasks.workunit.client.0.vm07.stdout:1/418: chown d3/d14/c1b 23 1 2026-03-09T20:47:29.266 INFO:tasks.workunit.client.0.vm07.stdout:0/454: creat d1/d1f/d30/f8f x:0 0 0 2026-03-09T20:47:29.269 
INFO:tasks.workunit.client.1.vm10.stdout:4/255: link d1/d2/f2d d1/d8/d1c/f52 0 2026-03-09T20:47:29.271 INFO:tasks.workunit.client.1.vm10.stdout:8/356: link d0/d22/d2f/c60 d0/d22/d2f/c70 0 2026-03-09T20:47:29.314 INFO:tasks.workunit.client.0.vm07.stdout:9/356: truncate d4/d11/d23/f2f 4130185 0 2026-03-09T20:47:29.314 INFO:tasks.workunit.client.0.vm07.stdout:5/473: rename d5/d33/f5a to d5/df/faa 0 2026-03-09T20:47:29.314 INFO:tasks.workunit.client.0.vm07.stdout:5/474: write d5/df/d13/d4f/f9b [494907,5176] 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.0.vm07.stdout:5/475: write d5/d69/f82 [2023143,10915] 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.0.vm07.stdout:5/476: dread - d5/df/d13/d55/f60 zero size 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.0.vm07.stdout:6/399: creat d8/d26/d2a/d40/d67/f7f x:0 0 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.0.vm07.stdout:6/400: fsync d8/d16/d22/d33/f73 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.1.vm10.stdout:8/357: fdatasync d0/d22/d2f/d3d/f49 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.1.vm10.stdout:8/358: chown d0/f17 1358296 1 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.1.vm10.stdout:5/292: rename d2/d27/d3a to d2/d27/d37/d46/d5d/d77 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.1.vm10.stdout:4/256: dwrite d1/d8/d1b/d30/f48 [0,4194304] 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.1.vm10.stdout:9/364: creat d2/d3/d6d/d88/f8a x:0 0 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.1.vm10.stdout:5/293: readlink d2/l1d 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.1.vm10.stdout:8/359: fsync d0/d22/d25/d40/f5f 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.1.vm10.stdout:7/301: creat db/d28/f5d x:0 0 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.1.vm10.stdout:3/307: truncate dc/d14/d20/d2e/f32 1225330 0 2026-03-09T20:47:29.315 INFO:tasks.workunit.client.1.vm10.stdout:7/302: fdatasync db/d28/d2b/d36/f3c 0 2026-03-09T20:47:29.315 
INFO:tasks.workunit.client.1.vm10.stdout:5/294: rename d2/d43 to d2/d1b/d54/d78 0 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mgr metadata", "who": "vm10.byqahe", "id": "vm10.byqahe"}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' 
entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:47:29.315 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 
vm10.local ceph-mon[57011]: Manager daemon vm10.byqahe is now available 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: Migrating agent root cert to cert store 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: Migrating agent root key to cert store 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: Checking for cert/key for grafana.vm07 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: Migrating grafana.vm07 cert to cert store 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: Migrating grafana.vm07 key to cert store 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' 
entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm10.byqahe/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm10.byqahe/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm10.byqahe/trash_purge_schedule"}]: dispatch 2026-03-09T20:47:29.316 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:29 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm10.byqahe/trash_purge_schedule"}]: dispatch 2026-03-09T20:47:29.316 INFO:tasks.workunit.client.1.vm10.stdout:4/257: creat d1/d8/d1b/d30/f53 x:0 0 0 2026-03-09T20:47:29.316 INFO:tasks.workunit.client.1.vm10.stdout:2/342: creat d5/d18/d1b/f70 x:0 0 0 2026-03-09T20:47:29.316 INFO:tasks.workunit.client.1.vm10.stdout:2/343: write d5/d18/d1b/f26 [1403069,105453] 0 2026-03-09T20:47:29.317 INFO:tasks.workunit.client.1.vm10.stdout:9/365: dwrite d2/fc [4194304,4194304] 0 2026-03-09T20:47:29.317 INFO:tasks.workunit.client.1.vm10.stdout:9/366: chown d2/d12/f20 730795 1 2026-03-09T20:47:29.317 INFO:tasks.workunit.client.1.vm10.stdout:0/290: creat d2/d9/f61 x:0 0 0 2026-03-09T20:47:29.317 INFO:tasks.workunit.client.1.vm10.stdout:9/367: read - d2/d33/f77 zero size 2026-03-09T20:47:29.317 INFO:tasks.workunit.client.1.vm10.stdout:9/368: chown 
d2/d33/c5e 2 1 2026-03-09T20:47:29.317 INFO:tasks.workunit.client.1.vm10.stdout:9/369: dread d2/d33/f76 [0,4194304] 0 2026-03-09T20:47:29.317 INFO:tasks.workunit.client.1.vm10.stdout:0/291: dread d2/d9/da/de/d1a/d25/d34/f46 [0,4194304] 0 2026-03-09T20:47:29.318 INFO:tasks.workunit.client.0.vm07.stdout:4/305: dread d2/f33 [0,4194304] 0 2026-03-09T20:47:29.321 INFO:tasks.workunit.client.0.vm07.stdout:4/306: dread d2/df/d17/f1b [0,4194304] 0 2026-03-09T20:47:29.322 INFO:tasks.workunit.client.0.vm07.stdout:4/307: write d2/d1f/f3c [296180,29494] 0 2026-03-09T20:47:29.324 INFO:tasks.workunit.client.0.vm07.stdout:8/350: chown d1/dc/d16/d26/f48 16203000 1 2026-03-09T20:47:29.327 INFO:tasks.workunit.client.1.vm10.stdout:2/344: creat d5/d18/d2d/f71 x:0 0 0 2026-03-09T20:47:29.327 INFO:tasks.workunit.client.1.vm10.stdout:2/345: write d5/d18/f63 [478741,102245] 0 2026-03-09T20:47:29.335 INFO:tasks.workunit.client.0.vm07.stdout:0/455: fdatasync f0 0 2026-03-09T20:47:29.341 INFO:tasks.workunit.client.0.vm07.stdout:0/456: chown d1/d1f/c74 218870611 1 2026-03-09T20:47:29.341 INFO:tasks.workunit.client.0.vm07.stdout:9/357: dread - d4/d11/d23/f52 zero size 2026-03-09T20:47:29.341 INFO:tasks.workunit.client.0.vm07.stdout:7/438: creat d3/da/db/d79/f98 x:0 0 0 2026-03-09T20:47:29.350 INFO:tasks.workunit.client.0.vm07.stdout:3/399: creat d1/d5/d9/d11/d6d/d80/f81 x:0 0 0 2026-03-09T20:47:29.350 INFO:tasks.workunit.client.0.vm07.stdout:3/400: fdatasync d1/d5/d9/d2f/d34/f68 0 2026-03-09T20:47:29.352 INFO:tasks.workunit.client.1.vm10.stdout:1/320: link d2/da/d25/d46/d51/c5b d2/da/d25/d46/d51/d5d/c6b 0 2026-03-09T20:47:29.361 INFO:tasks.workunit.client.1.vm10.stdout:7/303: link db/d28/d2b/d36/d40/f44 db/d1f/f5e 0 2026-03-09T20:47:29.374 INFO:tasks.workunit.client.1.vm10.stdout:9/370: creat d2/d3/d85/f8b x:0 0 0 2026-03-09T20:47:29.375 INFO:tasks.workunit.client.1.vm10.stdout:9/371: chown d2/d28/d47/d50/f64 250462 1 2026-03-09T20:47:29.376 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.373+0000 7f78e57fa640 1 -- 192.168.123.107:0/3086612851 <== mgr.24439 v2:192.168.123.110:6828/2207204228 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+318 (secure 0 0 0) 0x7f78ac002bf0 con 0x7f78c4042560 2026-03-09T20:47:29.376 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.375+0000 7f78ed047640 1 -- 192.168.123.107:0/3086612851 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f78c4042560 msgr2=0x7f78c4044950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:29.376 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.375+0000 7f78ed047640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f78c4042560 0x7f78c4044950 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f78d0005fd0 tx=0x7f78d0005950 comp rx=0 tx=0).stop 2026-03-09T20:47:29.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.375+0000 7f78ed047640 1 -- 192.168.123.107:0/3086612851 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78e8072370 msgr2=0x7f78e8115c20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:29.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.375+0000 7f78ed047640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78e8072370 0x7f78e8115c20 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f78dc02f750 tx=0x7f78dc004520 comp rx=0 tx=0).stop 2026-03-09T20:47:29.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.376+0000 7f78ed047640 1 -- 192.168.123.107:0/3086612851 shutdown_connections 2026-03-09T20:47:29.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.376+0000 7f78ed047640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f78c4042560 0x7f78c4044950 
unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.376+0000 7f78ed047640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f78e8116160 0x7f78e81165e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.376+0000 7f78ed047640 1 --2- 192.168.123.107:0/3086612851 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f78e8072370 0x7f78e8115c20 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.376+0000 7f78ed047640 1 -- 192.168.123.107:0/3086612851 >> 192.168.123.107:0/3086612851 conn(0x7f78e806d4f0 msgr2=0x7f78e810a900 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:29.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.376+0000 7f78ed047640 1 -- 192.168.123.107:0/3086612851 shutdown_connections 2026-03-09T20:47:29.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.376+0000 7f78ed047640 1 -- 192.168.123.107:0/3086612851 wait complete. 
2026-03-09T20:47:29.391 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:47:29.392 INFO:tasks.workunit.client.1.vm10.stdout:3/308: getdents dc/d14/d20/d21 0 2026-03-09T20:47:29.407 INFO:tasks.workunit.client.1.vm10.stdout:1/321: creat d2/da/d25/f6c x:0 0 0 2026-03-09T20:47:29.411 INFO:tasks.workunit.client.1.vm10.stdout:1/322: dwrite d2/da/d25/d3e/f69 [0,4194304] 0 2026-03-09T20:47:29.426 INFO:tasks.workunit.client.1.vm10.stdout:6/347: truncate d3/d12/d24/f27 2233901 0 2026-03-09T20:47:29.433 INFO:tasks.workunit.client.1.vm10.stdout:8/360: dwrite d0/d22/d2c/f32 [0,4194304] 0 2026-03-09T20:47:29.437 INFO:tasks.workunit.client.0.vm07.stdout:4/308: rename f1 to d2/d1f/d2d/d3f/f51 0 2026-03-09T20:47:29.438 INFO:tasks.workunit.client.0.vm07.stdout:4/309: readlink d2/df/l39 0 2026-03-09T20:47:29.442 INFO:tasks.workunit.client.0.vm07.stdout:4/310: dwrite d2/f43 [4194304,4194304] 0 2026-03-09T20:47:29.443 INFO:tasks.workunit.client.0.vm07.stdout:4/311: chown d2/l22 759 1 2026-03-09T20:47:29.446 INFO:tasks.workunit.client.0.vm07.stdout:4/312: dread d2/df/d17/f2a [0,4194304] 0 2026-03-09T20:47:29.459 INFO:tasks.workunit.client.0.vm07.stdout:8/351: mkdir d1/dc/d16/d26/d71 0 2026-03-09T20:47:29.463 INFO:tasks.workunit.client.0.vm07.stdout:2/414: creat d2/d46/f7e x:0 0 0 2026-03-09T20:47:29.464 INFO:tasks.workunit.client.1.vm10.stdout:4/258: getdents d1/d8/d1c 0 2026-03-09T20:47:29.467 INFO:tasks.workunit.client.1.vm10.stdout:3/309: mknod dc/d14/d26/c63 0 2026-03-09T20:47:29.468 INFO:tasks.workunit.client.0.vm07.stdout:9/358: unlink d4/d8/dc/f21 0 2026-03-09T20:47:29.470 INFO:tasks.workunit.client.1.vm10.stdout:5/295: truncate d2/fb 3085604 0 2026-03-09T20:47:29.471 INFO:tasks.workunit.client.1.vm10.stdout:5/296: truncate d2/f71 150277 0 2026-03-09T20:47:29.471 INFO:tasks.workunit.client.0.vm07.stdout:1/419: write d3/d23/f49 [4830345,25016] 0 2026-03-09T20:47:29.471 INFO:tasks.workunit.client.0.vm07.stdout:1/420: dread - d3/d66/f7e zero size 2026-03-09T20:47:29.475 
INFO:tasks.workunit.client.0.vm07.stdout:1/421: dread d3/d14/f30 [0,4194304] 0 2026-03-09T20:47:29.478 INFO:tasks.workunit.client.0.vm07.stdout:5/477: write d5/df/d13/d55/f60 [999469,104024] 0 2026-03-09T20:47:29.487 INFO:tasks.workunit.client.1.vm10.stdout:1/323: symlink d2/da/d25/d3e/l6d 0 2026-03-09T20:47:29.493 INFO:tasks.workunit.client.1.vm10.stdout:6/348: symlink d3/da/d11/d31/d4c/l71 0 2026-03-09T20:47:29.493 INFO:tasks.workunit.client.0.vm07.stdout:6/401: symlink d8/l80 0 2026-03-09T20:47:29.493 INFO:tasks.workunit.client.0.vm07.stdout:6/402: write d8/d26/f4d [434025,117726] 0 2026-03-09T20:47:29.495 INFO:tasks.workunit.client.1.vm10.stdout:8/361: dwrite d0/d22/d25/f3b [0,4194304] 0 2026-03-09T20:47:29.500 INFO:tasks.workunit.client.0.vm07.stdout:4/313: rename d2/d1f/d2d/d3f/d41 to d2/d1f/d2d/d3f/d4a/d4b/d52 0 2026-03-09T20:47:29.503 INFO:tasks.workunit.client.0.vm07.stdout:4/314: write d2/f9 [664812,65747] 0 2026-03-09T20:47:29.505 INFO:tasks.workunit.client.1.vm10.stdout:4/259: sync 2026-03-09T20:47:29.507 INFO:tasks.workunit.client.1.vm10.stdout:3/310: creat dc/d14/d26/f64 x:0 0 0 2026-03-09T20:47:29.508 INFO:tasks.workunit.client.1.vm10.stdout:3/311: write dc/d14/d27/f3f [2985701,9830] 0 2026-03-09T20:47:29.511 INFO:tasks.workunit.client.0.vm07.stdout:8/352: mknod d1/d5d/d6f/c72 0 2026-03-09T20:47:29.515 INFO:tasks.workunit.client.1.vm10.stdout:5/297: rmdir d2/d27/d37/d46/d5d 39 2026-03-09T20:47:29.524 INFO:tasks.workunit.client.0.vm07.stdout:2/415: creat d2/db/d1c/d4a/d6c/f7f x:0 0 0 2026-03-09T20:47:29.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.528+0000 7fad17f0e640 1 -- 192.168.123.107:0/3507113187 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fad10071b90 msgr2=0x7fad10071f90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:29.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.528+0000 7fad17f0e640 1 --2- 192.168.123.107:0/3507113187 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fad10071b90 0x7fad10071f90 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7fad0c0098e0 tx=0x7fad0c02f1b0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.529+0000 7fad17f0e640 1 -- 192.168.123.107:0/3507113187 shutdown_connections 2026-03-09T20:47:29.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.529+0000 7fad17f0e640 1 --2- 192.168.123.107:0/3507113187 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fad10072560 0x7fad100772d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.529+0000 7fad17f0e640 1 --2- 192.168.123.107:0/3507113187 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fad10071b90 0x7fad10071f90 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.529+0000 7fad17f0e640 1 -- 192.168.123.107:0/3507113187 >> 192.168.123.107:0/3507113187 conn(0x7fad1006d5d0 msgr2=0x7fad1006fa10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:29.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.529+0000 7fad17f0e640 1 -- 192.168.123.107:0/3507113187 shutdown_connections 2026-03-09T20:47:29.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.529+0000 7fad17f0e640 1 -- 192.168.123.107:0/3507113187 wait complete. 
2026-03-09T20:47:29.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.530+0000 7fad17f0e640 1 Processor -- start 2026-03-09T20:47:29.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.530+0000 7fad17f0e640 1 -- start start 2026-03-09T20:47:29.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.530+0000 7fad17f0e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fad10072560 0x7fad100827b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:29.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.530+0000 7fad17f0e640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fad10084160 0x7fad10082cf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:29.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.530+0000 7fad17f0e640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fad10083230 con 0x7fad10072560 2026-03-09T20:47:29.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.530+0000 7fad17f0e640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fad100833a0 con 0x7fad10084160 2026-03-09T20:47:29.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.530+0000 7fad15482640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fad10084160 0x7fad10082cf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:29.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.530+0000 7fad15482640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fad10084160 0x7fad10082cf0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:45520/0 (socket says 192.168.123.107:45520) 2026-03-09T20:47:29.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.530+0000 7fad15482640 1 -- 192.168.123.107:0/727230130 learned_addr learned my addr 192.168.123.107:0/727230130 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:47:29.532 INFO:tasks.workunit.client.1.vm10.stdout:1/324: mkdir d2/da/d25/d46/d51/d5d/d6e 0 2026-03-09T20:47:29.533 INFO:tasks.workunit.client.1.vm10.stdout:1/325: write d2/da/d25/d3e/d42/f63 [1653872,115515] 0 2026-03-09T20:47:29.533 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.532+0000 7fad15c83640 1 --2- 192.168.123.107:0/727230130 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fad10072560 0x7fad100827b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:29.533 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.532+0000 7fad15482640 1 -- 192.168.123.107:0/727230130 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fad10072560 msgr2=0x7fad100827b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:29.533 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.532+0000 7fad15482640 1 --2- 192.168.123.107:0/727230130 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fad10072560 0x7fad100827b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.534 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.532+0000 7fad15482640 1 -- 192.168.123.107:0/727230130 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fad0c009590 con 0x7fad10084160 2026-03-09T20:47:29.537 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.535+0000 7fad15482640 1 --2- 192.168.123.107:0/727230130 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fad10084160 0x7fad10082cf0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7fad0800ea40 tx=0x7fad0800ef10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:29.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.536+0000 7fad06ffd640 1 -- 192.168.123.107:0/727230130 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fad0800ce60 con 0x7fad10084160 2026-03-09T20:47:29.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.536+0000 7fad17f0e640 1 -- 192.168.123.107:0/727230130 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fad10083570 con 0x7fad10084160 2026-03-09T20:47:29.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.536+0000 7fad17f0e640 1 -- 192.168.123.107:0/727230130 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fad101c7110 con 0x7fad10084160 2026-03-09T20:47:29.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.536+0000 7fad06ffd640 1 -- 192.168.123.107:0/727230130 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fad080040d0 con 0x7fad10084160 2026-03-09T20:47:29.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.537+0000 7fad06ffd640 1 -- 192.168.123.107:0/727230130 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fad080175e0 con 0x7fad10084160 2026-03-09T20:47:29.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.537+0000 7fad06ffd640 1 -- 192.168.123.107:0/727230130 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 22) v1 ==== 50039+0+0 (secure 0 0 0) 0x7fad08017760 con 0x7fad10084160 2026-03-09T20:47:29.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.537+0000 7fad06ffd640 1 --2- 
192.168.123.107:0/727230130 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7facec03db80 0x7facec040040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:29.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.537+0000 7fad06ffd640 1 -- 192.168.123.107:0/727230130 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 5706+0+0 (secure 0 0 0) 0x7fad08054600 con 0x7fad10084160 2026-03-09T20:47:29.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.537+0000 7fad04ff9640 1 -- 192.168.123.107:0/727230130 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7face0005350 con 0x7fad10084160 2026-03-09T20:47:29.541 INFO:tasks.workunit.client.1.vm10.stdout:7/304: write db/d21/d23/ff [1196323,36658] 0 2026-03-09T20:47:29.542 INFO:tasks.workunit.client.1.vm10.stdout:9/372: write d2/d33/d37/f4c [4482530,98693] 0 2026-03-09T20:47:29.542 INFO:tasks.workunit.client.1.vm10.stdout:7/305: chown db/d21/c25 0 1 2026-03-09T20:47:29.545 INFO:tasks.workunit.client.1.vm10.stdout:2/346: write d5/d18/d2d/f60 [1505916,113954] 0 2026-03-09T20:47:29.550 INFO:tasks.workunit.client.0.vm07.stdout:9/359: symlink d4/d11/d23/l80 0 2026-03-09T20:47:29.550 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.549+0000 7fad15c83640 1 --2- 192.168.123.107:0/727230130 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7facec03db80 0x7facec040040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:29.553 INFO:tasks.workunit.client.0.vm07.stdout:1/422: mknod d3/d14/d54/d3e/c89 0 2026-03-09T20:47:29.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.549+0000 7fad06ffd640 1 -- 192.168.123.107:0/727230130 <== mon.1 v2:192.168.123.110:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7fad08015340 con 0x7fad10084160 2026-03-09T20:47:29.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.554+0000 7fad15c83640 1 --2- 192.168.123.107:0/727230130 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7facec03db80 0x7facec040040 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fad0c009560 tx=0x7fad0c0094d0 comp rx=0 tx=0).ready entity=mgr.24439 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:29.557 INFO:tasks.workunit.client.0.vm07.stdout:1/423: dwrite d3/d14/d54/d3e/f4a [0,4194304] 0 2026-03-09T20:47:29.562 INFO:tasks.workunit.client.0.vm07.stdout:1/424: dwrite d3/f5 [0,4194304] 0 2026-03-09T20:47:29.563 INFO:tasks.workunit.client.0.vm07.stdout:1/425: readlink d3/d23/d55/l5f 0 2026-03-09T20:47:29.576 INFO:tasks.workunit.client.1.vm10.stdout:0/292: dread d2/d9/da/d11/f42 [0,4194304] 0 2026-03-09T20:47:29.578 INFO:tasks.workunit.client.1.vm10.stdout:8/362: creat d0/d22/f71 x:0 0 0 2026-03-09T20:47:29.579 INFO:tasks.workunit.client.1.vm10.stdout:8/363: truncate d0/d22/d25/d40/f5f 856917 0 2026-03-09T20:47:29.583 INFO:tasks.workunit.client.0.vm07.stdout:2/416: sync 2026-03-09T20:47:29.603 INFO:tasks.workunit.client.0.vm07.stdout:2/417: truncate d2/db/d1c/d4a/d6c/f7f 986059 0 2026-03-09T20:47:29.603 INFO:tasks.workunit.client.0.vm07.stdout:2/418: fdatasync d2/d46/d72/f78 0 2026-03-09T20:47:29.603 INFO:tasks.workunit.client.0.vm07.stdout:3/401: symlink d1/d5/d9/d2f/d3d/d64/d43/d54/l82 0 2026-03-09T20:47:29.603 INFO:tasks.workunit.client.1.vm10.stdout:4/260: mkdir d1/d2/d3/d54 0 2026-03-09T20:47:29.603 INFO:tasks.workunit.client.1.vm10.stdout:3/312: fsync dc/d14/d27/f3c 0 2026-03-09T20:47:29.603 INFO:tasks.workunit.client.1.vm10.stdout:3/313: write f6 [2017504,21897] 0 2026-03-09T20:47:29.603 INFO:tasks.workunit.client.1.vm10.stdout:5/298: dread - d2/d27/d37/d46/d5d/d6d/f6e zero size 
2026-03-09T20:47:29.603 INFO:tasks.workunit.client.1.vm10.stdout:1/326: rmdir d2/da/d25/d3e/d55 39 2026-03-09T20:47:29.604 INFO:tasks.workunit.client.0.vm07.stdout:4/315: creat d2/d1f/f53 x:0 0 0 2026-03-09T20:47:29.610 INFO:tasks.workunit.client.0.vm07.stdout:8/353: dread - d1/dc/d16/d26/f59 zero size 2026-03-09T20:47:29.610 INFO:tasks.workunit.client.0.vm07.stdout:8/354: chown d1/dc/d16/d26/f2a 51562580 1 2026-03-09T20:47:29.610 INFO:tasks.workunit.client.1.vm10.stdout:9/373: mknod d2/d3/de/c8c 0 2026-03-09T20:47:29.613 INFO:tasks.workunit.client.0.vm07.stdout:0/457: creat d1/f90 x:0 0 0 2026-03-09T20:47:29.614 INFO:tasks.workunit.client.0.vm07.stdout:0/458: write d1/d2/f1b [2589644,26622] 0 2026-03-09T20:47:29.626 INFO:tasks.workunit.client.1.vm10.stdout:8/364: mknod d0/d22/d2f/d38/c72 0 2026-03-09T20:47:29.627 INFO:tasks.workunit.client.0.vm07.stdout:1/426: mkdir d3/d23/d67/d8a 0 2026-03-09T20:47:29.634 INFO:tasks.workunit.client.0.vm07.stdout:7/439: link d3/da/db/d14/d1f/c39 d3/da/db/d14/d1f/d2b/c99 0 2026-03-09T20:47:29.636 INFO:tasks.workunit.client.1.vm10.stdout:3/314: creat dc/d14/d26/f65 x:0 0 0 2026-03-09T20:47:29.640 INFO:tasks.workunit.client.0.vm07.stdout:2/419: dread d2/db/d1c/f2e [0,4194304] 0 2026-03-09T20:47:29.641 INFO:tasks.workunit.client.1.vm10.stdout:5/299: mknod d2/d27/d37/d46/d5d/d5f/d69/c79 0 2026-03-09T20:47:29.642 INFO:tasks.workunit.client.1.vm10.stdout:5/300: stat d2/d58 0 2026-03-09T20:47:29.644 INFO:tasks.workunit.client.1.vm10.stdout:1/327: truncate d2/da/d25/f2e 451909 0 2026-03-09T20:47:29.650 INFO:tasks.workunit.client.0.vm07.stdout:4/316: creat d2/d1f/d2d/d3f/f54 x:0 0 0 2026-03-09T20:47:29.658 INFO:tasks.workunit.client.0.vm07.stdout:9/360: symlink d4/d11/d23/d32/l81 0 2026-03-09T20:47:29.659 INFO:tasks.workunit.client.0.vm07.stdout:9/361: readlink d4/d11/d31/l7f 0 2026-03-09T20:47:29.662 INFO:tasks.workunit.client.0.vm07.stdout:9/362: dwrite d4/d16/d29/d24/f36 [0,4194304] 0 2026-03-09T20:47:29.667 
INFO:tasks.workunit.client.1.vm10.stdout:0/293: link d2/d9/da/d11/f42 d2/d4a/d58/f62 0 2026-03-09T20:47:29.669 INFO:tasks.workunit.client.1.vm10.stdout:0/294: write d2/d9/da/d35/f3a [181253,6779] 0 2026-03-09T20:47:29.672 INFO:tasks.workunit.client.0.vm07.stdout:1/427: symlink d3/d23/d67/l8b 0 2026-03-09T20:47:29.673 INFO:tasks.workunit.client.1.vm10.stdout:0/295: dwrite d2/d9/da/de/d1a/d25/d3e/f59 [0,4194304] 0 2026-03-09T20:47:29.678 INFO:tasks.workunit.client.0.vm07.stdout:1/428: dwrite d3/d14/d54/d3e/f59 [0,4194304] 0 2026-03-09T20:47:29.679 INFO:tasks.workunit.client.1.vm10.stdout:0/296: dread d2/d9/da/d35/d30/f56 [0,4194304] 0 2026-03-09T20:47:29.702 INFO:tasks.workunit.client.1.vm10.stdout:8/365: rmdir d0/d54 39 2026-03-09T20:47:29.716 INFO:tasks.workunit.client.0.vm07.stdout:7/440: rename d3/da/db/d32/d3e/f40 to d3/da/db/f9a 0 2026-03-09T20:47:29.726 INFO:tasks.workunit.client.1.vm10.stdout:6/349: dwrite d3/fe [0,4194304] 0 2026-03-09T20:47:29.729 INFO:tasks.workunit.client.0.vm07.stdout:2/420: chown d2/f2c 121 1 2026-03-09T20:47:29.744 INFO:tasks.workunit.client.1.vm10.stdout:3/315: dread dc/d14/d20/d2e/d56/f13 [0,4194304] 0 2026-03-09T20:47:29.763 INFO:tasks.workunit.client.0.vm07.stdout:5/478: truncate d5/df/d13/d30/d56/f84 3224871 0 2026-03-09T20:47:29.763 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.762+0000 7fad04ff9640 1 -- 192.168.123.107:0/727230130 --> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7face0002bf0 con 0x7facec03db80 2026-03-09T20:47:29.765 INFO:tasks.workunit.client.1.vm10.stdout:2/347: dwrite d5/d18/f2c [0,4194304] 0 2026-03-09T20:47:29.774 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.770+0000 7fad06ffd640 1 -- 192.168.123.107:0/727230130 <== mgr.24439 v2:192.168.123.110:6828/2207204228 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+318 (secure 0 0 0) 0x7face0002bf0 con 
0x7facec03db80 2026-03-09T20:47:29.774 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.773+0000 7fad17f0e640 1 -- 192.168.123.107:0/727230130 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7facec03db80 msgr2=0x7facec040040 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:29.774 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.773+0000 7fad17f0e640 1 --2- 192.168.123.107:0/727230130 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7facec03db80 0x7facec040040 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fad0c009560 tx=0x7fad0c0094d0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.775 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.773+0000 7fad17f0e640 1 -- 192.168.123.107:0/727230130 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fad10084160 msgr2=0x7fad10082cf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:29.775 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.773+0000 7fad17f0e640 1 --2- 192.168.123.107:0/727230130 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fad10084160 0x7fad10082cf0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7fad0800ea40 tx=0x7fad0800ef10 comp rx=0 tx=0).stop 2026-03-09T20:47:29.775 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.773+0000 7fad17f0e640 1 -- 192.168.123.107:0/727230130 shutdown_connections 2026-03-09T20:47:29.775 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.773+0000 7fad17f0e640 1 --2- 192.168.123.107:0/727230130 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7facec03db80 0x7facec040040 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.775 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.773+0000 7fad17f0e640 1 --2- 192.168.123.107:0/727230130 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fad10084160 0x7fad10082cf0 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.775 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.773+0000 7fad17f0e640 1 --2- 192.168.123.107:0/727230130 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fad10072560 0x7fad100827b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.775 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.773+0000 7fad17f0e640 1 -- 192.168.123.107:0/727230130 >> 192.168.123.107:0/727230130 conn(0x7fad1006d5d0 msgr2=0x7fad10070430 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:29.775 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.774+0000 7fad17f0e640 1 -- 192.168.123.107:0/727230130 shutdown_connections 2026-03-09T20:47:29.775 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.774+0000 7fad17f0e640 1 -- 192.168.123.107:0/727230130 wait complete. 
2026-03-09T20:47:29.780 INFO:tasks.workunit.client.1.vm10.stdout:7/306: dwrite db/d28/d2b/d36/d3b/f42 [0,4194304] 0 2026-03-09T20:47:29.794 INFO:tasks.workunit.client.1.vm10.stdout:9/374: write d2/d12/f69 [1152976,30666] 0 2026-03-09T20:47:29.795 INFO:tasks.workunit.client.1.vm10.stdout:5/301: mkdir d2/d39/d4b/d7a 0 2026-03-09T20:47:29.796 INFO:tasks.workunit.client.1.vm10.stdout:1/328: mknod d2/c6f 0 2026-03-09T20:47:29.804 INFO:tasks.workunit.client.0.vm07.stdout:4/317: dwrite d2/d1f/d2d/d3f/f51 [0,4194304] 0 2026-03-09T20:47:29.805 INFO:tasks.workunit.client.0.vm07.stdout:4/318: readlink d2/df/l4e 0 2026-03-09T20:47:29.827 INFO:tasks.workunit.client.0.vm07.stdout:0/459: link d1/d1f/d53/f84 d1/d2/dc/d17/f91 0 2026-03-09T20:47:29.827 INFO:tasks.workunit.client.0.vm07.stdout:0/460: chown d1/d2/dc/f10 26 1 2026-03-09T20:47:29.828 INFO:tasks.workunit.client.0.vm07.stdout:0/461: write d1/d2/f1b [245231,6114] 0 2026-03-09T20:47:29.837 INFO:tasks.workunit.client.0.vm07.stdout:4/319: sync 2026-03-09T20:47:29.852 INFO:tasks.workunit.client.0.vm07.stdout:9/363: creat d4/d8/dc/d4e/f82 x:0 0 0 2026-03-09T20:47:29.856 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.854+0000 7f06f58c9640 1 -- 192.168.123.107:0/2067519916 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f06f0072420 msgr2=0x7f06f0077190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:29.856 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.854+0000 7f06f58c9640 1 --2- 192.168.123.107:0/2067519916 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f06f0072420 0x7f06f0077190 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f06e8010d50 tx=0x7f06e8033cc0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.856 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.855+0000 7f06f58c9640 1 -- 192.168.123.107:0/2067519916 shutdown_connections 2026-03-09T20:47:29.856 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.855+0000 
7f06f58c9640 1 --2- 192.168.123.107:0/2067519916 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f06f0072420 0x7f06f0077190 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.856 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.855+0000 7f06f58c9640 1 --2- 192.168.123.107:0/2067519916 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f06f0071a50 0x7f06f0071e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.856 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.855+0000 7f06f58c9640 1 -- 192.168.123.107:0/2067519916 >> 192.168.123.107:0/2067519916 conn(0x7f06f006d4f0 msgr2=0x7f06f006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:29.856 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.855+0000 7f06f58c9640 1 -- 192.168.123.107:0/2067519916 shutdown_connections 2026-03-09T20:47:29.857 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.855+0000 7f06f58c9640 1 -- 192.168.123.107:0/2067519916 wait complete. 
2026-03-09T20:47:29.857 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.856+0000 7f06f58c9640 1 Processor -- start 2026-03-09T20:47:29.857 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.856+0000 7f06f58c9640 1 -- start start 2026-03-09T20:47:29.857 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.856+0000 7f06f58c9640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f06f0071a50 0x7f06f00840b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:29.857 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.856+0000 7f06f58c9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f06f0072420 0x7f06f0082700 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:29.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.856+0000 7f06f58c9640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f06f0082c40 con 0x7f06f0072420 2026-03-09T20:47:29.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.856+0000 7f06f58c9640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f06f0082db0 con 0x7f06f0071a50 2026-03-09T20:47:29.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.856+0000 7f06f48c7640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f06f0071a50 0x7f06f00840b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:29.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.856+0000 7f06f48c7640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f06f0071a50 0x7f06f00840b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:45542/0 (socket says 192.168.123.107:45542) 2026-03-09T20:47:29.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.856+0000 7f06f48c7640 1 -- 192.168.123.107:0/4213726899 learned_addr learned my addr 192.168.123.107:0/4213726899 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:47:29.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.857+0000 7f06f48c7640 1 -- 192.168.123.107:0/4213726899 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f06f0072420 msgr2=0x7f06f0082700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:29.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.857+0000 7f06f48c7640 1 --2- 192.168.123.107:0/4213726899 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f06f0072420 0x7f06f0082700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:29.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.857+0000 7f06f48c7640 1 -- 192.168.123.107:0/4213726899 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f06e8010a00 con 0x7f06f0071a50 2026-03-09T20:47:29.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.857+0000 7f06f48c7640 1 --2- 192.168.123.107:0/4213726899 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f06f0071a50 0x7f06f00840b0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f06e00079d0 tx=0x7f06e0007ea0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:29.860 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.858+0000 7f06edffb640 1 -- 192.168.123.107:0/4213726899 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f06e0010070 con 0x7f06f0071a50 2026-03-09T20:47:29.860 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.858+0000 7f06edffb640 1 -- 
192.168.123.107:0/4213726899 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f06e000ad90 con 0x7f06f0071a50 2026-03-09T20:47:29.860 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.858+0000 7f06edffb640 1 -- 192.168.123.107:0/4213726899 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f06e00153d0 con 0x7f06f0071a50 2026-03-09T20:47:29.861 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.859+0000 7f06f58c9640 1 -- 192.168.123.107:0/4213726899 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f06f0082fb0 con 0x7f06f0071a50 2026-03-09T20:47:29.861 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.859+0000 7f06f58c9640 1 -- 192.168.123.107:0/4213726899 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f06f0083480 con 0x7f06f0071a50 2026-03-09T20:47:29.861 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.860+0000 7f06edffb640 1 -- 192.168.123.107:0/4213726899 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 22) v1 ==== 50039+0+0 (secure 0 0 0) 0x7f06e001d050 con 0x7f06f0071a50 2026-03-09T20:47:29.861 INFO:tasks.workunit.client.0.vm07.stdout:1/429: truncate d3/d23/f2c 20788 0 2026-03-09T20:47:29.862 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.860+0000 7f06bf7fe640 1 -- 192.168.123.107:0/4213726899 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f06b8005350 con 0x7f06f0071a50 2026-03-09T20:47:29.862 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.860+0000 7f06edffb640 1 --2- 192.168.123.107:0/4213726899 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f06cc03dae0 0x7f06cc03ffa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:29.862 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.860+0000 7f06edffb640 1 -- 192.168.123.107:0/4213726899 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 5706+0+0 (secure 0 0 0) 0x7f06e0052b80 con 0x7f06f0071a50 2026-03-09T20:47:29.862 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.861+0000 7f06effff640 1 --2- 192.168.123.107:0/4213726899 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f06cc03dae0 0x7f06cc03ffa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:29.866 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.865+0000 7f06edffb640 1 -- 192.168.123.107:0/4213726899 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7f06e0004490 con 0x7f06f0071a50 2026-03-09T20:47:29.866 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:29.865+0000 7f06effff640 1 --2- 192.168.123.107:0/4213726899 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f06cc03dae0 0x7f06cc03ffa0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f06e8034920 tx=0x7f06e80074a0 comp rx=0 tx=0).ready entity=mgr.24439 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:29.870 INFO:tasks.workunit.client.0.vm07.stdout:2/421: dwrite d2/db/f48 [0,4194304] 0 2026-03-09T20:47:29.872 INFO:tasks.workunit.client.0.vm07.stdout:2/422: chown d2/db/d28 295 1 2026-03-09T20:47:29.879 INFO:tasks.workunit.client.0.vm07.stdout:5/479: rename d5/df/d13/d55 to d5/d33/d39/d8d/dab 0 2026-03-09T20:47:29.880 INFO:tasks.workunit.client.0.vm07.stdout:5/480: chown d5/df/f4a 239372 1 2026-03-09T20:47:29.883 INFO:tasks.workunit.client.0.vm07.stdout:3/402: truncate d1/d5/d9/fe 960926 0 2026-03-09T20:47:29.888 INFO:tasks.workunit.client.0.vm07.stdout:3/403: dwrite 
d1/d5/d9/d11/d1f/f7f [0,4194304] 0 2026-03-09T20:47:29.898 INFO:tasks.workunit.client.0.vm07.stdout:6/403: getdents d8/d16/d22 0 2026-03-09T20:47:29.900 INFO:tasks.workunit.client.0.vm07.stdout:8/355: creat d1/d5d/d6f/d2f/d4d/f73 x:0 0 0 2026-03-09T20:47:29.900 INFO:tasks.workunit.client.0.vm07.stdout:8/356: fsync d1/dc/d16/d31/f47 0 2026-03-09T20:47:29.902 INFO:tasks.workunit.client.0.vm07.stdout:8/357: truncate d1/d5d/d6f/d2f/d4d/f73 225486 0 2026-03-09T20:47:29.904 INFO:tasks.workunit.client.0.vm07.stdout:4/320: fsync d2/f33 0 2026-03-09T20:47:29.916 INFO:tasks.workunit.client.0.vm07.stdout:7/441: mkdir d3/d9b 0 2026-03-09T20:47:29.925 INFO:tasks.workunit.client.0.vm07.stdout:0/462: dwrite d1/d1f/f63 [0,4194304] 0 2026-03-09T20:47:29.930 INFO:tasks.workunit.client.0.vm07.stdout:0/463: dwrite d1/f90 [0,4194304] 0 2026-03-09T20:47:29.945 INFO:tasks.workunit.client.1.vm10.stdout:4/261: link d1/d8/d1b/c22 d1/d8/d1c/d41/c55 0 2026-03-09T20:47:29.946 INFO:tasks.workunit.client.1.vm10.stdout:4/262: write d1/d8/d1b/d30/f53 [546364,12044] 0 2026-03-09T20:47:29.947 INFO:tasks.workunit.client.0.vm07.stdout:5/481: creat d5/df/d13/d30/fac x:0 0 0 2026-03-09T20:47:29.947 INFO:tasks.workunit.client.1.vm10.stdout:4/263: write d1/d8/d1b/d30/f48 [2074553,124328] 0 2026-03-09T20:47:29.947 INFO:tasks.workunit.client.0.vm07.stdout:5/482: stat d5/d33/d39/c65 0 2026-03-09T20:47:29.948 INFO:tasks.workunit.client.1.vm10.stdout:4/264: truncate d1/d8/d1c/d38/f44 1008222 0 2026-03-09T20:47:29.957 INFO:tasks.workunit.client.1.vm10.stdout:3/316: dread dc/d14/d20/d2e/f32 [0,4194304] 0 2026-03-09T20:47:29.966 INFO:tasks.workunit.client.0.vm07.stdout:6/404: unlink d8/c74 0 2026-03-09T20:47:29.966 INFO:tasks.workunit.client.1.vm10.stdout:2/348: dwrite d5/d18/d27/d28/d41/f4b [0,4194304] 0 2026-03-09T20:47:29.966 INFO:tasks.workunit.client.0.vm07.stdout:6/405: dread - d8/d16/d22/d33/f6d zero size 2026-03-09T20:47:29.970 INFO:tasks.workunit.client.1.vm10.stdout:7/307: creat db/d1f/f5f x:0 0 0 
2026-03-09T20:47:29.970 INFO:tasks.workunit.client.1.vm10.stdout:2/349: dwrite d5/d18/d1b/f70 [0,4194304] 0 2026-03-09T20:47:29.971 INFO:tasks.workunit.client.0.vm07.stdout:8/358: mknod d1/dc/d16/d26/c74 0 2026-03-09T20:47:29.971 INFO:tasks.workunit.client.1.vm10.stdout:7/308: readlink db/d1f/l49 0 2026-03-09T20:47:29.971 INFO:tasks.workunit.client.0.vm07.stdout:8/359: dread - d1/dc/d16/f6e zero size 2026-03-09T20:47:29.974 INFO:tasks.workunit.client.1.vm10.stdout:9/375: mkdir d2/d28/d47/d50/d8d 0 2026-03-09T20:47:29.979 INFO:tasks.workunit.client.1.vm10.stdout:2/350: dread d5/d18/d27/d28/d41/f4b [0,4194304] 0 2026-03-09T20:47:30.009 INFO:tasks.workunit.client.0.vm07.stdout:2/423: dwrite d2/db/d28/f32 [0,4194304] 0 2026-03-09T20:47:30.035 INFO:tasks.workunit.client.1.vm10.stdout:8/366: mknod d0/d54/c73 0 2026-03-09T20:47:30.039 INFO:tasks.workunit.client.1.vm10.stdout:8/367: dread d0/d22/d2c/f57 [0,4194304] 0 2026-03-09T20:47:30.047 INFO:tasks.workunit.client.1.vm10.stdout:4/265: fdatasync d1/d8/f25 0 2026-03-09T20:47:30.050 INFO:tasks.workunit.client.1.vm10.stdout:6/350: mknod d3/da/d11/d31/d47/c72 0 2026-03-09T20:47:30.055 INFO:tasks.workunit.client.0.vm07.stdout:0/464: fsync d1/d2/dc/d80/f87 0 2026-03-09T20:47:30.061 INFO:tasks.workunit.client.0.vm07.stdout:3/404: dwrite d1/d5/d9/d2f/d34/f40 [0,4194304] 0 2026-03-09T20:47:30.067 INFO:tasks.workunit.client.1.vm10.stdout:0/297: truncate d2/d9/da/de/d1a/d25/d3e/f59 3885108 0 2026-03-09T20:47:30.067 INFO:tasks.workunit.client.1.vm10.stdout:5/302: dwrite d2/f23 [0,4194304] 0 2026-03-09T20:47:30.070 INFO:tasks.workunit.client.1.vm10.stdout:0/298: truncate d2/d4a/f5a 251199 0 2026-03-09T20:47:30.070 INFO:tasks.workunit.client.0.vm07.stdout:5/483: symlink d5/d50/lad 0 2026-03-09T20:47:30.071 INFO:tasks.workunit.client.1.vm10.stdout:0/299: write d2/db/f38 [5116169,129433] 0 2026-03-09T20:47:30.074 INFO:tasks.workunit.client.0.vm07.stdout:5/484: write d5/df/d13/f17 [4347599,5739] 0 2026-03-09T20:47:30.080 
INFO:tasks.workunit.client.0.vm07.stdout:9/364: dwrite d4/d11/d23/f2f [4194304,4194304] 0 2026-03-09T20:47:30.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.077+0000 7f06bf7fe640 1 -- 192.168.123.107:0/4213726899 --> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f06b8002bf0 con 0x7f06cc03dae0 2026-03-09T20:47:30.095 INFO:tasks.workunit.client.1.vm10.stdout:9/376: mkdir d2/d28/d47/d6a/d8e 0 2026-03-09T20:47:30.095 INFO:tasks.workunit.client.0.vm07.stdout:7/442: write d3/da/db/d14/d1f/d2b/d52/f74 [3154556,71644] 0 2026-03-09T20:47:30.100 INFO:tasks.workunit.client.0.vm07.stdout:8/360: fdatasync d1/d5d/d6f/d2f/f34 0 2026-03-09T20:47:30.105 INFO:tasks.workunit.client.1.vm10.stdout:5/303: dread d2/fd [0,4194304] 0 2026-03-09T20:47:30.114 INFO:tasks.workunit.client.1.vm10.stdout:1/329: mkdir d2/da/d25/d46/d51/d5d/d6e/d70 0 2026-03-09T20:47:30.114 INFO:tasks.workunit.client.1.vm10.stdout:1/330: stat d2/l24 0 2026-03-09T20:47:30.115 INFO:tasks.workunit.client.1.vm10.stdout:1/331: chown d2/da/d25/d3e/d42/l47 6 1 2026-03-09T20:47:30.116 INFO:tasks.workunit.client.0.vm07.stdout:1/430: creat d3/d66/f8c x:0 0 0 2026-03-09T20:47:30.119 INFO:tasks.workunit.client.0.vm07.stdout:2/424: truncate d2/db/d1c/f2e 2091952 0 2026-03-09T20:47:30.120 INFO:tasks.workunit.client.1.vm10.stdout:4/266: creat d1/d8/d39/f56 x:0 0 0 2026-03-09T20:47:30.121 INFO:tasks.workunit.client.1.vm10.stdout:4/267: fdatasync d1/d8/d1b/d30/f53 0 2026-03-09T20:47:30.122 INFO:tasks.workunit.client.1.vm10.stdout:6/351: symlink d3/da/d11/d31/d4c/l73 0 2026-03-09T20:47:30.127 INFO:tasks.workunit.client.1.vm10.stdout:6/352: dread d3/fe [0,4194304] 0 2026-03-09T20:47:30.129 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 
*:9093,9094 running (3m) 2m ago 4m 23.6M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (4m) 2m ago 4m 8514k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (4m) 9s ago 4m 9135k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (4m) 2m ago 4m 7620k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 8dda9981b08b 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (4m) 9s ago 4m 7616k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 eba80e79586f 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (3m) 2m ago 4m 78.3M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (2m) 2m ago 2m 18.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (2m) 2m ago 2m 19.8M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (2m) 9s ago 2m 117M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (2m) 9s ago 2m 22.1M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:9283,8765,8443 running (5m) 2m ago 5m 542M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 7a35a71cbc43 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (12s) 9s ago 4m 47.0M - 19.2.3-678-ge911bdeb 654f31e6858e 72000f76daa6 2026-03-09T20:47:30.130 
INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (5m) 2m ago 5m 53.1M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 f3e88bdaa0dd 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (4m) 9s ago 4m 49.1M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 4e5d7d18c660 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (4m) 2m ago 4m 14.3M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (4m) 9s ago 4m 15.2M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (3m) 2m ago 3m 66.7M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 482878bd7721 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (3m) 2m ago 3m 68.5M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 15564e5032c9 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (3m) 2m ago 3m 47.2M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (3m) 9s ago 3m 299M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (3m) 9s ago 3m 280M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (2m) 9s ago 2m 250M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (3m) 2m ago 4m 37.0M - 2.43.0 a07b618ecd1d 08a586cd1392 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.126+0000 7f06edffb640 1 -- 192.168.123.107:0/4213726899 <== mgr.24439 v2:192.168.123.110:6828/2207204228 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 
0x7f06b8002bf0 con 0x7f06cc03dae0 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.128+0000 7f06bf7fe640 1 -- 192.168.123.107:0/4213726899 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f06cc03dae0 msgr2=0x7f06cc03ffa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.128+0000 7f06bf7fe640 1 --2- 192.168.123.107:0/4213726899 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f06cc03dae0 0x7f06cc03ffa0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f06e8034920 tx=0x7f06e80074a0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.128+0000 7f06bf7fe640 1 -- 192.168.123.107:0/4213726899 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f06f0071a50 msgr2=0x7f06f00840b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.129+0000 7f06bf7fe640 1 --2- 192.168.123.107:0/4213726899 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f06f0071a50 0x7f06f00840b0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f06e00079d0 tx=0x7f06e0007ea0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.129+0000 7f06bf7fe640 1 -- 192.168.123.107:0/4213726899 shutdown_connections 2026-03-09T20:47:30.130 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.129+0000 7f06bf7fe640 1 --2- 192.168.123.107:0/4213726899 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f06cc03dae0 0x7f06cc03ffa0 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.131 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.129+0000 7f06bf7fe640 1 --2- 192.168.123.107:0/4213726899 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f06f0072420 0x7f06f0082700 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.131 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.129+0000 7f06bf7fe640 1 --2- 192.168.123.107:0/4213726899 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f06f0071a50 0x7f06f00840b0 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.131 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.129+0000 7f06bf7fe640 1 -- 192.168.123.107:0/4213726899 >> 192.168.123.107:0/4213726899 conn(0x7f06f006d4f0 msgr2=0x7f06f0075420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:30.131 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.129+0000 7f06bf7fe640 1 -- 192.168.123.107:0/4213726899 shutdown_connections 2026-03-09T20:47:30.131 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.130+0000 7f06bf7fe640 1 -- 192.168.123.107:0/4213726899 wait complete. 
2026-03-09T20:47:30.132 INFO:tasks.workunit.client.1.vm10.stdout:3/317: link dc/d14/d20/d21/f50 dc/d14/d26/d29/d2a/f66 0 2026-03-09T20:47:30.133 INFO:tasks.workunit.client.1.vm10.stdout:0/300: mkdir d2/d9/d4b/d63 0 2026-03-09T20:47:30.133 INFO:tasks.workunit.client.1.vm10.stdout:0/301: write d2/d9/f61 [170862,122520] 0 2026-03-09T20:47:30.134 INFO:tasks.workunit.client.0.vm07.stdout:0/465: dread d1/d2/d4b/f61 [0,4194304] 0 2026-03-09T20:47:30.134 INFO:tasks.workunit.client.1.vm10.stdout:0/302: chown d2/d9/d4b/l50 2402367 1 2026-03-09T20:47:30.155 INFO:tasks.workunit.client.1.vm10.stdout:9/377: mkdir d2/d3/de/d8f 0 2026-03-09T20:47:30.155 INFO:tasks.workunit.client.1.vm10.stdout:5/304: mkdir d2/d1b/d54/d7b 0 2026-03-09T20:47:30.167 INFO:tasks.workunit.client.0.vm07.stdout:4/321: write d2/f28 [1023013,86096] 0 2026-03-09T20:47:30.170 INFO:tasks.workunit.client.1.vm10.stdout:5/305: sync 2026-03-09T20:47:30.185 INFO:tasks.workunit.client.1.vm10.stdout:7/309: truncate db/f16 3098720 0 2026-03-09T20:47:30.185 INFO:tasks.workunit.client.1.vm10.stdout:7/310: fdatasync db/d46/f5a 0 2026-03-09T20:47:30.203 INFO:tasks.workunit.client.1.vm10.stdout:8/368: truncate d0/d22/d2c/f32 1727301 0 2026-03-09T20:47:30.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.205+0000 7f1306954640 1 -- 192.168.123.107:0/778491773 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1300072440 msgr2=0x7f13000771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.205+0000 7f1306954640 1 --2- 192.168.123.107:0/778491773 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1300072440 0x7f13000771b0 secure :-1 s=READY pgs=313 cs=0 l=1 rev1=1 crypto rx=0x7f12f8008030 tx=0x7f12f8030dc0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.205+0000 7f1306954640 1 -- 192.168.123.107:0/778491773 shutdown_connections 
2026-03-09T20:47:30.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.205+0000 7f1306954640 1 --2- 192.168.123.107:0/778491773 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1300072440 0x7f13000771b0 unknown :-1 s=CLOSED pgs=313 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.205+0000 7f1306954640 1 --2- 192.168.123.107:0/778491773 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1300071a70 0x7f1300071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.205+0000 7f1306954640 1 -- 192.168.123.107:0/778491773 >> 192.168.123.107:0/778491773 conn(0x7f130006d4f0 msgr2=0x7f130006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.206+0000 7f1306954640 1 -- 192.168.123.107:0/778491773 shutdown_connections 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.206+0000 7f1306954640 1 -- 192.168.123.107:0/778491773 wait complete. 
2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.206+0000 7f1306954640 1 Processor -- start 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.206+0000 7f1306954640 1 -- start start 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.206+0000 7f1306954640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1300071a70 0x7f1300131960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.206+0000 7f1306954640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1300133310 0x7f1300131ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.206+0000 7f1306954640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13001323e0 con 0x7f1300071a70 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.206+0000 7f1306954640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1300132550 con 0x7f1300133310 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.207+0000 7f12fffff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1300071a70 0x7f1300131960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.207+0000 7f12fffff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1300071a70 0x7f1300131960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:34578/0 (socket says 192.168.123.107:34578) 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.207+0000 7f12fffff640 1 -- 192.168.123.107:0/2982579775 learned_addr learned my addr 192.168.123.107:0/2982579775 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:47:30.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.207+0000 7f12fffff640 1 -- 192.168.123.107:0/2982579775 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1300133310 msgr2=0x7f1300131ea0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.207+0000 7f12fffff640 1 --2- 192.168.123.107:0/2982579775 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1300133310 0x7f1300131ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.207+0000 7f12fffff640 1 -- 192.168.123.107:0/2982579775 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f12f8007ce0 con 0x7f1300071a70 2026-03-09T20:47:30.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.207+0000 7f12fffff640 1 --2- 192.168.123.107:0/2982579775 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1300071a70 0x7f1300131960 secure :-1 s=READY pgs=314 cs=0 l=1 rev1=1 crypto rx=0x7f12f000d8d0 tx=0x7f12f000dda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:30.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.208+0000 7f12fd7fa640 1 -- 192.168.123.107:0/2982579775 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f12f0004490 con 0x7f1300071a70 2026-03-09T20:47:30.210 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.208+0000 7f12fd7fa640 1 -- 
192.168.123.107:0/2982579775 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f12f000bd00 con 0x7f1300071a70 2026-03-09T20:47:30.210 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.208+0000 7f1306954640 1 -- 192.168.123.107:0/2982579775 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f130007fad0 con 0x7f1300071a70 2026-03-09T20:47:30.210 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.208+0000 7f1306954640 1 -- 192.168.123.107:0/2982579775 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1300080020 con 0x7f1300071a70 2026-03-09T20:47:30.210 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.209+0000 7f12fd7fa640 1 -- 192.168.123.107:0/2982579775 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f12f0010460 con 0x7f1300071a70 2026-03-09T20:47:30.210 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.210+0000 7f12fd7fa640 1 -- 192.168.123.107:0/2982579775 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 23) v1 ==== 50339+0+0 (secure 0 0 0) 0x7f12f00105c0 con 0x7f1300071a70 2026-03-09T20:47:30.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.210+0000 7f12fd7fa640 1 --2- 192.168.123.107:0/2982579775 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f12e403df50 0x7f12e4040410 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:30.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.210+0000 7f12fd7fa640 1 -- 192.168.123.107:0/2982579775 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 5706+0+0 (secure 0 0 0) 0x7f12f0054330 con 0x7f1300071a70 2026-03-09T20:47:30.212 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.211+0000 7f12ff7fe640 1 --2- 192.168.123.107:0/2982579775 >> 
[v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f12e403df50 0x7f12e4040410 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:30.213 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.211+0000 7f1306954640 1 -- 192.168.123.107:0/2982579775 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f12cc005350 con 0x7f1300071a70 2026-03-09T20:47:30.213 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.212+0000 7f12ff7fe640 1 --2- 192.168.123.107:0/2982579775 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f12e403df50 0x7f12e4040410 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f12f8007990 tx=0x7f12f8007920 comp rx=0 tx=0).ready entity=mgr.24439 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:30.218 INFO:tasks.workunit.client.1.vm10.stdout:4/268: rename d1/d8/d1c/d38 to d1/d8/d1b/d57 0 2026-03-09T20:47:30.218 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.215+0000 7f12fd7fa640 1 -- 192.168.123.107:0/2982579775 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7f12f0018de0 con 0x7f1300071a70 2026-03-09T20:47:30.234 INFO:tasks.workunit.client.1.vm10.stdout:6/353: dread d3/d30/d33/f3a [0,4194304] 0 2026-03-09T20:47:30.237 INFO:tasks.workunit.client.1.vm10.stdout:9/378: symlink d2/d3/d6d/l90 0 2026-03-09T20:47:30.237 INFO:tasks.workunit.client.0.vm07.stdout:3/405: stat d1/d5/d9/d2f/d3d/d64/d43/c7a 0 2026-03-09T20:47:30.238 INFO:tasks.workunit.client.1.vm10.stdout:6/354: dwrite d3/da/d11/d31/d4c/d60/f63 [0,4194304] 0 2026-03-09T20:47:30.270 INFO:tasks.workunit.client.0.vm07.stdout:9/365: dread d4/d16/f33 [0,4194304] 0 2026-03-09T20:47:30.282 
INFO:tasks.workunit.client.0.vm07.stdout:6/406: write d8/d16/d22/d24/d2b/f5a [4793052,43008] 0 2026-03-09T20:47:30.283 INFO:tasks.workunit.client.0.vm07.stdout:6/407: write d8/d16/d22/d33/f73 [1155076,90198] 0 2026-03-09T20:47:30.284 INFO:tasks.workunit.client.0.vm07.stdout:6/408: dread - d8/d26/d2a/f6e zero size 2026-03-09T20:47:30.288 INFO:tasks.workunit.client.0.vm07.stdout:6/409: dwrite d8/d50/f55 [0,4194304] 0 2026-03-09T20:47:30.290 INFO:tasks.workunit.client.0.vm07.stdout:6/410: chown d8/d16/d22/d33/l48 6212 1 2026-03-09T20:47:30.304 INFO:tasks.workunit.client.1.vm10.stdout:7/311: mkdir db/d21/d60 0 2026-03-09T20:47:30.305 INFO:tasks.workunit.client.1.vm10.stdout:7/312: write db/d28/d2b/d36/d3b/f42 [5453687,34032] 0 2026-03-09T20:47:30.309 INFO:tasks.workunit.client.0.vm07.stdout:7/443: creat d3/da/db/d32/d3e/d5c/f9c x:0 0 0 2026-03-09T20:47:30.321 INFO:tasks.workunit.client.1.vm10.stdout:1/332: truncate d2/da/f34 880084 0 2026-03-09T20:47:30.343 INFO:tasks.workunit.client.0.vm07.stdout:1/431: dread d3/d14/f33 [0,4194304] 0 2026-03-09T20:47:30.344 INFO:tasks.workunit.client.0.vm07.stdout:1/432: write d3/d23/d55/f7b [713961,31628] 0 2026-03-09T20:47:30.349 INFO:tasks.workunit.client.1.vm10.stdout:0/303: rename d2/d9/l3b to d2/d9/d2a/l64 0 2026-03-09T20:47:30.350 INFO:tasks.workunit.client.1.vm10.stdout:0/304: read - d2/d4e/f5b zero size 2026-03-09T20:47:30.353 INFO:tasks.workunit.client.0.vm07.stdout:2/425: write d2/db/f7c [821291,126462] 0 2026-03-09T20:47:30.359 INFO:tasks.workunit.client.1.vm10.stdout:4/269: creat d1/d8/d1b/d57/f58 x:0 0 0 2026-03-09T20:47:30.367 INFO:tasks.workunit.client.1.vm10.stdout:4/270: sync 2026-03-09T20:47:30.368 INFO:tasks.workunit.client.1.vm10.stdout:4/271: truncate d1/d8/d1b/d57/f44 1055019 0 2026-03-09T20:47:30.370 INFO:tasks.workunit.client.0.vm07.stdout:0/466: mknod d1/d1f/c92 0 2026-03-09T20:47:30.379 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:30 vm07.local ceph-mon[49120]: Deploying cephadm binary to vm10 
2026-03-09T20:47:30.379 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:30 vm07.local ceph-mon[49120]: mgrmap e22: vm10.byqahe(active, since 1.16988s) 2026-03-09T20:47:30.379 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:30 vm07.local ceph-mon[49120]: pgmap v3: 65 pgs: 65 active+clean; 1.7 GiB data, 6.5 GiB used, 113 GiB / 120 GiB avail 2026-03-09T20:47:30.379 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:30 vm07.local ceph-mon[49120]: Deploying cephadm binary to vm07 2026-03-09T20:47:30.379 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:30 vm07.local ceph-mon[49120]: from='client.24465 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:47:30.379 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.378+0000 7f1306954640 1 -- 192.168.123.107:0/2982579775 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f12cc0058d0 con 0x7f1300071a70 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 2 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 6 2026-03-09T20:47:30.383 
INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 12, 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:47:30.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.382+0000 7f12fd7fa640 1 -- 192.168.123.107:0/2982579775 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+739 (secure 0 0 0) 0x7f12f001c120 con 0x7f1300071a70 2026-03-09T20:47:30.385 INFO:tasks.workunit.client.1.vm10.stdout:3/318: symlink dc/d14/d20/l67 0 2026-03-09T20:47:30.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 -- 192.168.123.107:0/2982579775 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f12e403df50 msgr2=0x7f12e4040410 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 --2- 192.168.123.107:0/2982579775 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f12e403df50 0x7f12e4040410 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f12f8007990 tx=0x7f12f8007920 comp rx=0 tx=0).stop 2026-03-09T20:47:30.389 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 -- 192.168.123.107:0/2982579775 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1300071a70 msgr2=0x7f1300131960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 --2- 192.168.123.107:0/2982579775 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1300071a70 0x7f1300131960 secure :-1 s=READY pgs=314 cs=0 l=1 rev1=1 crypto rx=0x7f12f000d8d0 tx=0x7f12f000dda0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 -- 192.168.123.107:0/2982579775 shutdown_connections 2026-03-09T20:47:30.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 --2- 192.168.123.107:0/2982579775 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f12e403df50 0x7f12e4040410 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 --2- 192.168.123.107:0/2982579775 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1300133310 0x7f1300131ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 --2- 192.168.123.107:0/2982579775 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1300071a70 0x7f1300131960 unknown :-1 s=CLOSED pgs=314 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 -- 192.168.123.107:0/2982579775 >> 192.168.123.107:0/2982579775 conn(0x7f130006d4f0 msgr2=0x7f1300070410 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T20:47:30.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 -- 192.168.123.107:0/2982579775 shutdown_connections 2026-03-09T20:47:30.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.388+0000 7f12deffd640 1 -- 192.168.123.107:0/2982579775 wait complete. 2026-03-09T20:47:30.390 INFO:tasks.workunit.client.1.vm10.stdout:9/379: dwrite d2/d3/f5 [4194304,4194304] 0 2026-03-09T20:47:30.398 INFO:tasks.workunit.client.1.vm10.stdout:9/380: dwrite d2/d3/de/d35/f78 [0,4194304] 0 2026-03-09T20:47:30.401 INFO:tasks.workunit.client.1.vm10.stdout:9/381: dread - d2/d33/f7d zero size 2026-03-09T20:47:30.414 INFO:tasks.workunit.client.1.vm10.stdout:6/355: creat d3/da/d11/d26/d5b/f74 x:0 0 0 2026-03-09T20:47:30.417 INFO:tasks.workunit.client.0.vm07.stdout:5/485: truncate d5/df/d13/d6c/f99 1001274 0 2026-03-09T20:47:30.419 INFO:tasks.workunit.client.0.vm07.stdout:9/366: creat d4/d16/d29/d24/f83 x:0 0 0 2026-03-09T20:47:30.441 INFO:tasks.workunit.client.0.vm07.stdout:1/433: unlink d3/d23/d55/d56/d60/f7a 0 2026-03-09T20:47:30.441 INFO:tasks.workunit.client.0.vm07.stdout:1/434: chown d3/f10 2152040 1 2026-03-09T20:47:30.441 INFO:tasks.workunit.client.1.vm10.stdout:7/313: mknod db/d28/d2b/d36/d3f/c61 0 2026-03-09T20:47:30.441 INFO:tasks.workunit.client.1.vm10.stdout:1/333: rmdir d2/da/d25/d46/d51/d5d/d6e 39 2026-03-09T20:47:30.441 INFO:tasks.workunit.client.1.vm10.stdout:8/369: rmdir d0/d22/d25/d6c 39 2026-03-09T20:47:30.443 INFO:tasks.workunit.client.0.vm07.stdout:2/426: dwrite d2/f33 [0,4194304] 0 2026-03-09T20:47:30.445 INFO:tasks.workunit.client.0.vm07.stdout:2/427: rename d2/db/d1c to d2/db/d1c/d80 22 2026-03-09T20:47:30.446 INFO:tasks.workunit.client.0.vm07.stdout:2/428: write d2/db/f7c [1012184,111515] 0 2026-03-09T20:47:30.448 INFO:tasks.workunit.client.1.vm10.stdout:1/334: dread d2/da/d25/d3e/d42/f63 [0,4194304] 0 2026-03-09T20:47:30.449 INFO:tasks.workunit.client.1.vm10.stdout:1/335: chown 
d2/da/d25/d46 0 1 2026-03-09T20:47:30.452 INFO:tasks.workunit.client.0.vm07.stdout:7/444: write d3/da/db/d32/f3d [671008,57937] 0 2026-03-09T20:47:30.469 INFO:tasks.workunit.client.1.vm10.stdout:7/314: dread f3 [0,4194304] 0 2026-03-09T20:47:30.469 INFO:tasks.workunit.client.1.vm10.stdout:7/315: write db/d21/d26/f52 [3878293,19833] 0 2026-03-09T20:47:30.480 INFO:tasks.workunit.client.0.vm07.stdout:0/467: dwrite d1/d2/dc/f10 [0,4194304] 0 2026-03-09T20:47:30.481 INFO:tasks.workunit.client.0.vm07.stdout:0/468: chown d1/d1f/d53/c5b 2288178 1 2026-03-09T20:47:30.485 INFO:tasks.workunit.client.0.vm07.stdout:0/469: dwrite d1/d82/f86 [0,4194304] 0 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.501+0000 7f60ddb3d640 1 -- 192.168.123.107:0/1776648644 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f60d8072370 msgr2=0x7f60d810c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.501+0000 7f60ddb3d640 1 --2- 192.168.123.107:0/1776648644 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f60d8072370 0x7f60d810c590 secure :-1 s=READY pgs=315 cs=0 l=1 rev1=1 crypto rx=0x7f60c00099b0 tx=0x7f60c002f220 comp rx=0 tx=0).stop 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.501+0000 7f60ddb3d640 1 -- 192.168.123.107:0/1776648644 shutdown_connections 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.501+0000 7f60ddb3d640 1 --2- 192.168.123.107:0/1776648644 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f60d8072370 0x7f60d810c590 unknown :-1 s=CLOSED pgs=315 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.501+0000 7f60ddb3d640 1 --2- 192.168.123.107:0/1776648644 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f60d80719a0 
0x7f60d8071da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.501+0000 7f60ddb3d640 1 -- 192.168.123.107:0/1776648644 >> 192.168.123.107:0/1776648644 conn(0x7f60d806d4f0 msgr2=0x7f60d806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.501+0000 7f60ddb3d640 1 -- 192.168.123.107:0/1776648644 shutdown_connections 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.501+0000 7f60ddb3d640 1 -- 192.168.123.107:0/1776648644 wait complete. 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.501+0000 7f60ddb3d640 1 Processor -- start 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.502+0000 7f60ddb3d640 1 -- start start 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.502+0000 7f60ddb3d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f60d80719a0 0x7f60d81158b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.502+0000 7f60ddb3d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f60d8117260 0x7f60d8115df0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.502+0000 7f60ddb3d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f60d8116330 con 0x7f60d8117260 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.502+0000 7f60ddb3d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f60d81164a0 con 0x7f60d80719a0 2026-03-09T20:47:30.503 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.502+0000 7f60d7fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f60d8117260 0x7f60d8115df0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.502+0000 7f60d7fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f60d8117260 0x7f60d8115df0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:34596/0 (socket says 192.168.123.107:34596) 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.502+0000 7f60d7fff640 1 -- 192.168.123.107:0/1166850174 learned_addr learned my addr 192.168.123.107:0/1166850174 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:47:30.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.502+0000 7f60dcb3b640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f60d80719a0 0x7f60d81158b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:30.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.503+0000 7f60d7fff640 1 -- 192.168.123.107:0/1166850174 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f60d80719a0 msgr2=0x7f60d81158b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.503+0000 7f60d7fff640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f60d80719a0 0x7f60d81158b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.505 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.503+0000 7f60d7fff640 1 -- 192.168.123.107:0/1166850174 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f60c0009660 con 0x7f60d8117260 2026-03-09T20:47:30.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.504+0000 7f60d7fff640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f60d8117260 0x7f60d8115df0 secure :-1 s=READY pgs=316 cs=0 l=1 rev1=1 crypto rx=0x7f60c0005ec0 tx=0x7f60c0002ed0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:30.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.504+0000 7f60d5ffb640 1 -- 192.168.123.107:0/1166850174 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f60c003d070 con 0x7f60d8117260 2026-03-09T20:47:30.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.504+0000 7f60ddb3d640 1 -- 192.168.123.107:0/1166850174 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f60d8116720 con 0x7f60d8117260 2026-03-09T20:47:30.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.504+0000 7f60ddb3d640 1 -- 192.168.123.107:0/1166850174 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f60d81b5a30 con 0x7f60d8117260 2026-03-09T20:47:30.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.505+0000 7f60d5ffb640 1 -- 192.168.123.107:0/1166850174 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f60c0002d20 con 0x7f60d8117260 2026-03-09T20:47:30.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.505+0000 7f60d5ffb640 1 -- 192.168.123.107:0/1166850174 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f60c0038930 con 
0x7f60d8117260 2026-03-09T20:47:30.509 INFO:tasks.workunit.client.0.vm07.stdout:4/322: mkdir d2/d55 0 2026-03-09T20:47:30.509 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.508+0000 7f60ddb3d640 1 -- 192.168.123.107:0/1166850174 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f60d81183e0 con 0x7f60d8117260 2026-03-09T20:47:30.509 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.508+0000 7f60d5ffb640 1 -- 192.168.123.107:0/1166850174 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 23) v1 ==== 50339+0+0 (secure 0 0 0) 0x7f60c0049050 con 0x7f60d8117260 2026-03-09T20:47:30.509 INFO:tasks.workunit.client.0.vm07.stdout:4/323: write d2/df/f23 [9148090,117605] 0 2026-03-09T20:47:30.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.509+0000 7f60d5ffb640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f60b003de60 0x7f60b0040320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:30.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.509+0000 7f60d5ffb640 1 -- 192.168.123.107:0/1166850174 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 5706+0+0 (secure 0 0 0) 0x7f60c0077600 con 0x7f60d8117260 2026-03-09T20:47:30.514 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.513+0000 7f60d5ffb640 1 -- 192.168.123.107:0/1166850174 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7f60c0040550 con 0x7f60d8117260 2026-03-09T20:47:30.516 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.513+0000 7f60dcb3b640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f60b003de60 0x7f60b0040320 unknown :-1 s=BANNER_CONNECTING pgs=0 
cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:30.517 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.516+0000 7f60dcb3b640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f60b003de60 0x7f60b0040320 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f60d8071800 tx=0x7f60cc009290 comp rx=0 tx=0).ready entity=mgr.24439 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:30.532 INFO:tasks.workunit.client.1.vm10.stdout:3/319: chown dc/d14/d26/d29/f30 22 1 2026-03-09T20:47:30.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:30 vm10.local ceph-mon[57011]: Deploying cephadm binary to vm10 2026-03-09T20:47:30.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:30 vm10.local ceph-mon[57011]: mgrmap e22: vm10.byqahe(active, since 1.16988s) 2026-03-09T20:47:30.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:30 vm10.local ceph-mon[57011]: pgmap v3: 65 pgs: 65 active+clean; 1.7 GiB data, 6.5 GiB used, 113 GiB / 120 GiB avail 2026-03-09T20:47:30.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:30 vm10.local ceph-mon[57011]: Deploying cephadm binary to vm07 2026-03-09T20:47:30.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:30 vm10.local ceph-mon[57011]: from='client.24465 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:47:30.560 INFO:tasks.workunit.client.0.vm07.stdout:5/486: write d5/df/d13/f5b [976656,88440] 0 2026-03-09T20:47:30.561 INFO:tasks.workunit.client.0.vm07.stdout:5/487: write d5/df/d13/f2a [583613,8471] 0 2026-03-09T20:47:30.568 INFO:tasks.workunit.client.0.vm07.stdout:9/367: mkdir d4/d11/d2a/d84 0 2026-03-09T20:47:30.570 INFO:tasks.workunit.client.0.vm07.stdout:9/368: dread d4/d8/dc/d15/f57 [0,4194304] 0 2026-03-09T20:47:30.571 INFO:tasks.workunit.client.0.vm07.stdout:9/369: 
dread d4/d8/dc/d15/f57 [0,4194304] 0 2026-03-09T20:47:30.576 INFO:tasks.workunit.client.1.vm10.stdout:9/382: unlink d2/d3/de/f5d 0 2026-03-09T20:47:30.576 INFO:tasks.workunit.client.1.vm10.stdout:9/383: readlink d2/d3/d6d/l90 0 2026-03-09T20:47:30.579 INFO:tasks.workunit.client.0.vm07.stdout:6/411: dread d8/d16/d22/d24/f25 [0,4194304] 0 2026-03-09T20:47:30.586 INFO:tasks.workunit.client.0.vm07.stdout:6/412: dread d8/f52 [0,4194304] 0 2026-03-09T20:47:30.593 INFO:tasks.workunit.client.0.vm07.stdout:7/445: dread d3/da/db/d14/d43/f68 [0,4194304] 0 2026-03-09T20:47:30.599 INFO:tasks.workunit.client.1.vm10.stdout:2/351: getdents d5/d18/d27 0 2026-03-09T20:47:30.606 INFO:tasks.workunit.client.1.vm10.stdout:5/306: creat d2/d27/d37/d46/f7c x:0 0 0 2026-03-09T20:47:30.615 INFO:tasks.workunit.client.0.vm07.stdout:1/435: creat d3/d23/d55/d56/f8d x:0 0 0 2026-03-09T20:47:30.616 INFO:tasks.workunit.client.1.vm10.stdout:5/307: dread d2/d1b/f2f [0,4194304] 0 2026-03-09T20:47:30.618 INFO:tasks.workunit.client.0.vm07.stdout:1/436: dread d3/d14/d54/d3e/f4a [0,4194304] 0 2026-03-09T20:47:30.619 INFO:tasks.workunit.client.0.vm07.stdout:1/437: read - d3/d23/d55/f77 zero size 2026-03-09T20:47:30.619 INFO:tasks.workunit.client.0.vm07.stdout:1/438: chown d3/d14/d54/d3e/f4a 3155134 1 2026-03-09T20:47:30.621 INFO:tasks.workunit.client.0.vm07.stdout:1/439: truncate d3/d14/d54/d3e/f80 1021040 0 2026-03-09T20:47:30.637 INFO:tasks.workunit.client.1.vm10.stdout:8/370: unlink d0/d22/d2c/c5b 0 2026-03-09T20:47:30.646 INFO:tasks.workunit.client.0.vm07.stdout:2/429: creat d2/db/d49/f81 x:0 0 0 2026-03-09T20:47:30.646 INFO:tasks.workunit.client.0.vm07.stdout:2/430: chown d2/db/l19 2 1 2026-03-09T20:47:30.658 INFO:tasks.workunit.client.1.vm10.stdout:0/305: truncate d2/d9/da/d11/f1f 3492499 0 2026-03-09T20:47:30.659 INFO:tasks.workunit.client.1.vm10.stdout:0/306: read - d2/d4e/f5b zero size 2026-03-09T20:47:30.660 INFO:tasks.workunit.client.1.vm10.stdout:0/307: chown d2/d9/da/de/d1a/d25/d3e/c43 8065 1 
2026-03-09T20:47:30.665 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.664+0000 7f60ddb3d640 1 -- 192.168.123.107:0/1166850174 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f60d8118620 con 0x7f60d8117260 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.668+0000 7f60d5ffb640 1 -- 192.168.123.107:0/1166850174 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1867 (secure 0 0 0) 0x7f60c0040370 con 0x7f60d8117260 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:47:30.669 
INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:47:30.669 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:47:30.670 
INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:47:30.670 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 -- 192.168.123.107:0/1166850174 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f60b003de60 msgr2=0x7f60b0040320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f60b003de60 0x7f60b0040320 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f60d8071800 tx=0x7f60cc009290 comp rx=0 tx=0).stop 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 -- 192.168.123.107:0/1166850174 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f60d8117260 msgr2=0x7f60d8115df0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f60d8117260 0x7f60d8115df0 secure :-1 s=READY pgs=316 cs=0 l=1 rev1=1 crypto rx=0x7f60c0005ec0 tx=0x7f60c0002ed0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 -- 192.168.123.107:0/1166850174 shutdown_connections 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f60b003de60 0x7f60b0040320 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f60d8117260 0x7f60d8115df0 unknown :-1 s=CLOSED pgs=316 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 --2- 192.168.123.107:0/1166850174 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f60d80719a0 0x7f60d81158b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 -- 192.168.123.107:0/1166850174 >> 192.168.123.107:0/1166850174 conn(0x7f60d806d4f0 msgr2=0x7f60d8070300 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 -- 
192.168.123.107:0/1166850174 shutdown_connections 2026-03-09T20:47:30.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.672+0000 7f60ab7fe640 1 -- 192.168.123.107:0/1166850174 wait complete. 2026-03-09T20:47:30.686 INFO:tasks.workunit.client.1.vm10.stdout:7/316: read db/d21/d23/f29 [699163,29202] 0 2026-03-09T20:47:30.693 INFO:tasks.workunit.client.1.vm10.stdout:4/272: unlink d1/d8/d1b/d57/f58 0 2026-03-09T20:47:30.693 INFO:tasks.workunit.client.1.vm10.stdout:4/273: chown d1/d8/f25 6 1 2026-03-09T20:47:30.704 INFO:tasks.workunit.client.1.vm10.stdout:3/320: fsync dc/d14/d20/d21/d3b/f4f 0 2026-03-09T20:47:30.712 INFO:tasks.workunit.client.1.vm10.stdout:9/384: symlink d2/d28/d47/d50/l91 0 2026-03-09T20:47:30.713 INFO:tasks.workunit.client.0.vm07.stdout:3/406: link d1/d5/d9/d2f/d3d/d71/c7c d1/d5/d9/d2f/d34/d46/c83 0 2026-03-09T20:47:30.719 INFO:tasks.workunit.client.1.vm10.stdout:6/356: link d3/da/d11/d26/d5b/f55 d3/d30/f75 0 2026-03-09T20:47:30.721 INFO:tasks.workunit.client.1.vm10.stdout:9/385: dread d2/d3/de/d35/f38 [0,4194304] 0 2026-03-09T20:47:30.721 INFO:tasks.workunit.client.1.vm10.stdout:9/386: dread - d2/d28/d47/d67/f81 zero size 2026-03-09T20:47:30.739 INFO:tasks.workunit.client.1.vm10.stdout:6/357: dread d3/da/fd [4194304,4194304] 0 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.753+0000 7f031cbfe640 1 -- 192.168.123.107:0/2945038835 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0318072390 msgr2=0x7f0318077100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.753+0000 7f031cbfe640 1 --2- 192.168.123.107:0/2945038835 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0318072390 0x7f0318077100 secure :-1 s=READY pgs=317 cs=0 l=1 rev1=1 crypto rx=0x7f0310009040 tx=0x7f0310031ab0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.756 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.753+0000 7f031cbfe640 1 -- 192.168.123.107:0/2945038835 shutdown_connections 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.753+0000 7f031cbfe640 1 --2- 192.168.123.107:0/2945038835 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0318072390 0x7f0318077100 unknown :-1 s=CLOSED pgs=317 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.753+0000 7f031cbfe640 1 --2- 192.168.123.107:0/2945038835 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f03180719c0 0x7f0318071dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.753+0000 7f031cbfe640 1 -- 192.168.123.107:0/2945038835 >> 192.168.123.107:0/2945038835 conn(0x7f031806d400 msgr2=0x7f031806f840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.753+0000 7f031cbfe640 1 -- 192.168.123.107:0/2945038835 shutdown_connections 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.753+0000 7f031cbfe640 1 -- 192.168.123.107:0/2945038835 wait complete. 
2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.753+0000 7f031cbfe640 1 Processor -- start 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.753+0000 7f031cbfe640 1 -- start start 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f031cbfe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03180719c0 0x7f0318084010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f031cbfe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0318082660 0x7f0318082ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f031cbfe640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0318084550 con 0x7f03180719c0 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f031cbfe640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0318083020 con 0x7f0318082660 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f03177fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03180719c0 0x7f0318084010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f03177fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03180719c0 0x7f0318084010 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:34620/0 (socket says 192.168.123.107:34620) 2026-03-09T20:47:30.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f03177fe640 1 -- 192.168.123.107:0/565708111 learned_addr learned my addr 192.168.123.107:0/565708111 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:47:30.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f0316ffd640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0318082660 0x7f0318082ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:30.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f03177fe640 1 -- 192.168.123.107:0/565708111 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0318082660 msgr2=0x7f0318082ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f03177fe640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0318082660 0x7f0318082ae0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f03177fe640 1 -- 192.168.123.107:0/565708111 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0310008cf0 con 0x7f03180719c0 2026-03-09T20:47:30.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f03177fe640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03180719c0 0x7f0318084010 secure :-1 s=READY pgs=318 cs=0 l=1 rev1=1 crypto rx=0x7f030800d8d0 tx=0x7f030800dda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:47:30.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f0314ff9640 1 -- 192.168.123.107:0/565708111 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0308004490 con 0x7f03180719c0 2026-03-09T20:47:30.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f031cbfe640 1 -- 192.168.123.107:0/565708111 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0318083300 con 0x7f03180719c0 2026-03-09T20:47:30.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.754+0000 7f0314ff9640 1 -- 192.168.123.107:0/565708111 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f030800bd00 con 0x7f03180719c0 2026-03-09T20:47:30.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.755+0000 7f0314ff9640 1 -- 192.168.123.107:0/565708111 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0308010460 con 0x7f03180719c0 2026-03-09T20:47:30.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.755+0000 7f031cbfe640 1 -- 192.168.123.107:0/565708111 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f031812ef70 con 0x7f03180719c0 2026-03-09T20:47:30.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.756+0000 7f031cbfe640 1 -- 192.168.123.107:0/565708111 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0318072390 con 0x7f03180719c0 2026-03-09T20:47:30.760 INFO:tasks.workunit.client.0.vm07.stdout:5/488: symlink d5/d33/d39/d8d/dab/lae 0 2026-03-09T20:47:30.760 INFO:tasks.workunit.client.1.vm10.stdout:2/352: dwrite d5/d2b/f3f [0,4194304] 0 2026-03-09T20:47:30.765 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.761+0000 7f0314ff9640 1 -- 192.168.123.107:0/565708111 <== mon.0 
v2:192.168.123.107:3300/0 4 ==== mgrmap(e 23) v1 ==== 50339+0+0 (secure 0 0 0) 0x7f03080027e0 con 0x7f03180719c0 2026-03-09T20:47:30.765 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.761+0000 7f0314ff9640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f02e003de10 0x7f02e00402d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:30.765 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.762+0000 7f0316ffd640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f02e003de10 0x7f02e00402d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:30.765 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.762+0000 7f0314ff9640 1 -- 192.168.123.107:0/565708111 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 5706+0+0 (secure 0 0 0) 0x7f0308021d50 con 0x7f03180719c0 2026-03-09T20:47:30.771 INFO:tasks.workunit.client.1.vm10.stdout:5/308: symlink d2/d58/l7d 0 2026-03-09T20:47:30.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.771+0000 7f0314ff9640 1 -- 192.168.123.107:0/565708111 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7f0308021540 con 0x7f03180719c0 2026-03-09T20:47:30.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.774+0000 7f0316ffd640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f02e003de10 0x7f02e00402d0 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f03100076d0 tx=0x7f0310007660 comp rx=0 tx=0).ready entity=mgr.24439 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:30.797 
INFO:tasks.workunit.client.0.vm07.stdout:9/370: rename d4/d16/f33 to d4/d16/d29/d24/f85 0 2026-03-09T20:47:30.811 INFO:tasks.workunit.client.0.vm07.stdout:7/446: creat d3/d58/f9d x:0 0 0 2026-03-09T20:47:30.821 INFO:tasks.workunit.client.0.vm07.stdout:8/361: getdents d1/d5d/d6f/d2f 0 2026-03-09T20:47:30.825 INFO:tasks.workunit.client.0.vm07.stdout:8/362: read d1/dc/d16/d31/f54 [410136,112041] 0 2026-03-09T20:47:30.844 INFO:tasks.workunit.client.0.vm07.stdout:1/440: creat d3/d23/d55/d56/d60/f8e x:0 0 0 2026-03-09T20:47:30.845 INFO:tasks.workunit.client.0.vm07.stdout:1/441: write d3/d23/f49 [1866380,126811] 0 2026-03-09T20:47:30.857 INFO:tasks.workunit.client.0.vm07.stdout:2/431: mkdir d2/d46/d72/d82 0 2026-03-09T20:47:30.871 INFO:tasks.workunit.client.0.vm07.stdout:0/470: symlink d1/d2/d33/d35/l93 0 2026-03-09T20:47:30.892 INFO:tasks.workunit.client.0.vm07.stdout:4/324: symlink d2/d55/l56 0 2026-03-09T20:47:30.893 INFO:tasks.workunit.client.1.vm10.stdout:1/336: write d2/da/f11 [314594,10167] 0 2026-03-09T20:47:30.893 INFO:tasks.workunit.client.0.vm07.stdout:6/413: write d8/d16/f17 [1556954,27915] 0 2026-03-09T20:47:30.905 INFO:tasks.workunit.client.1.vm10.stdout:1/337: dread d2/da/d25/d3e/d42/f57 [0,4194304] 0 2026-03-09T20:47:30.911 INFO:tasks.workunit.client.1.vm10.stdout:1/338: chown d2/da/d25/d3e/f44 188 1 2026-03-09T20:47:30.914 INFO:tasks.workunit.client.1.vm10.stdout:1/339: chown d2/l6 23 1 2026-03-09T20:47:30.936 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.933+0000 7f031cbfe640 1 -- 192.168.123.107:0/565708111 --> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f0318075bb0 con 0x7f02e003de10 2026-03-09T20:47:30.936 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.935+0000 7f0314ff9640 1 -- 192.168.123.107:0/565708111 <== mgr.24439 v2:192.168.123.110:6828/2207204228 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 
8+0+318 (secure 0 0 0) 0x7f0318075bb0 con 0x7f02e003de10 2026-03-09T20:47:30.937 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:47:30.937 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:47:30.937 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:47:30.937 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:47:30.937 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [], 2026-03-09T20:47:30.937 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "1/23 daemons upgraded", 2026-03-09T20:47:30.937 INFO:teuthology.orchestra.run.vm07.stdout: "message": "", 2026-03-09T20:47:30.938 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:47:30.938 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:47:30.940 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.939+0000 7f031cbfe640 1 -- 192.168.123.107:0/565708111 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f02e003de10 msgr2=0x7f02e00402d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.940 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.939+0000 7f031cbfe640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f02e003de10 0x7f02e00402d0 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f03100076d0 tx=0x7f0310007660 comp rx=0 tx=0).stop 2026-03-09T20:47:30.940 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.939+0000 7f031cbfe640 1 -- 192.168.123.107:0/565708111 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03180719c0 msgr2=0x7f0318084010 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:30.940 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.939+0000 7f031cbfe640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03180719c0 0x7f0318084010 secure :-1 s=READY pgs=318 cs=0 l=1 rev1=1 crypto rx=0x7f030800d8d0 tx=0x7f030800dda0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.940 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.939+0000 7f031cbfe640 1 -- 192.168.123.107:0/565708111 shutdown_connections 2026-03-09T20:47:30.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.939+0000 7f031cbfe640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f02e003de10 0x7f02e00402d0 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.940+0000 7f031cbfe640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0318082660 0x7f0318082ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.940+0000 7f031cbfe640 1 --2- 192.168.123.107:0/565708111 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03180719c0 0x7f0318084010 unknown :-1 s=CLOSED pgs=318 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:30.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.940+0000 7f031cbfe640 1 -- 192.168.123.107:0/565708111 >> 192.168.123.107:0/565708111 conn(0x7f031806d400 msgr2=0x7f03180730a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:30.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.940+0000 7f031cbfe640 1 -- 192.168.123.107:0/565708111 shutdown_connections 2026-03-09T20:47:30.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:30.940+0000 7f031cbfe640 1 -- 
192.168.123.107:0/565708111 wait complete. 2026-03-09T20:47:30.943 INFO:tasks.workunit.client.0.vm07.stdout:2/432: stat d2/d11/l15 0 2026-03-09T20:47:30.953 INFO:tasks.workunit.client.0.vm07.stdout:5/489: mknod d5/df/d13/caf 0 2026-03-09T20:47:30.953 INFO:tasks.workunit.client.1.vm10.stdout:0/308: dread d2/d9/d47/f5c [0,4194304] 0 2026-03-09T20:47:30.954 INFO:tasks.workunit.client.0.vm07.stdout:0/471: creat d1/d1f/d53/d72/f94 x:0 0 0 2026-03-09T20:47:30.961 INFO:tasks.workunit.client.0.vm07.stdout:4/325: mknod d2/d1f/d2d/d3f/c57 0 2026-03-09T20:47:30.983 INFO:tasks.workunit.client.0.vm07.stdout:4/326: dwrite d2/d1f/f53 [0,4194304] 0 2026-03-09T20:47:31.044 INFO:tasks.workunit.client.0.vm07.stdout:1/442: symlink d3/l8f 0 2026-03-09T20:47:31.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.043+0000 7f37f4ba3640 1 -- 192.168.123.107:0/3855316741 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37f0072440 msgr2=0x7f37f00771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.043+0000 7f37f4ba3640 1 --2- 192.168.123.107:0/3855316741 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37f0072440 0x7f37f00771b0 secure :-1 s=READY pgs=319 cs=0 l=1 rev1=1 crypto rx=0x7f37e8009040 tx=0x7f37e802fc10 comp rx=0 tx=0).stop 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.044+0000 7f37f4ba3640 1 -- 192.168.123.107:0/3855316741 shutdown_connections 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.044+0000 7f37f4ba3640 1 --2- 192.168.123.107:0/3855316741 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37f0072440 0x7f37f00771b0 unknown :-1 s=CLOSED pgs=319 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.044+0000 7f37f4ba3640 1 --2- 192.168.123.107:0/3855316741 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f37f0071a70 0x7f37f0071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.044+0000 7f37f4ba3640 1 -- 192.168.123.107:0/3855316741 >> 192.168.123.107:0/3855316741 conn(0x7f37f006d4f0 msgr2=0x7f37f006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.044+0000 7f37f4ba3640 1 -- 192.168.123.107:0/3855316741 shutdown_connections 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.044+0000 7f37f4ba3640 1 -- 192.168.123.107:0/3855316741 wait complete. 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.044+0000 7f37f4ba3640 1 Processor -- start 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.045+0000 7f37f4ba3640 1 -- start start 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.045+0000 7f37f4ba3640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f37f0071a70 0x7f37f00840d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.045+0000 7f37f4ba3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37f0082720 0x7f37f0082ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.045+0000 7f37f4ba3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f37f0084610 con 0x7f37f0082720 2026-03-09T20:47:31.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.045+0000 7f37f4ba3640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f37f00830e0 con 0x7f37f0071a70 2026-03-09T20:47:31.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.046+0000 7f37ee575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f37f0071a70 0x7f37f00840d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:31.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.046+0000 7f37ee575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f37f0071a70 0x7f37f00840d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:45650/0 (socket says 192.168.123.107:45650) 2026-03-09T20:47:31.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.046+0000 7f37ee575640 1 -- 192.168.123.107:0/338283047 learned_addr learned my addr 192.168.123.107:0/338283047 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:47:31.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.046+0000 7f37edd74640 1 --2- 192.168.123.107:0/338283047 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37f0082720 0x7f37f0082ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:31.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.047+0000 7f37ee575640 1 -- 192.168.123.107:0/338283047 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37f0082720 msgr2=0x7f37f0082ba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:31.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.047+0000 7f37ee575640 1 --2- 192.168.123.107:0/338283047 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37f0082720 0x7f37f0082ba0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:31.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.047+0000 7f37ee575640 1 -- 192.168.123.107:0/338283047 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f37e8008cf0 con 0x7f37f0071a70 2026-03-09T20:47:31.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.051+0000 7f37ee575640 1 --2- 192.168.123.107:0/338283047 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f37f0071a70 0x7f37f00840d0 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f37e00079d0 tx=0x7f37e0007ea0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:31.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.051+0000 7f37df7fe640 1 -- 192.168.123.107:0/338283047 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f37e0010070 con 0x7f37f0071a70 2026-03-09T20:47:31.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.051+0000 7f37f4ba3640 1 -- 192.168.123.107:0/338283047 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f37f0083390 con 0x7f37f0071a70 2026-03-09T20:47:31.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.051+0000 7f37f4ba3640 1 -- 192.168.123.107:0/338283047 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f37f012ef70 con 0x7f37f0071a70 2026-03-09T20:47:31.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.051+0000 7f37df7fe640 1 -- 192.168.123.107:0/338283047 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f37e000ad90 con 0x7f37f0071a70 2026-03-09T20:47:31.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.052+0000 7f37df7fe640 1 -- 192.168.123.107:0/338283047 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 
0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f37e00153d0 con 0x7f37f0071a70 2026-03-09T20:47:31.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.052+0000 7f37f4ba3640 1 -- 192.168.123.107:0/338283047 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f37f0079e60 con 0x7f37f0071a70 2026-03-09T20:47:31.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.054+0000 7f37df7fe640 1 -- 192.168.123.107:0/338283047 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 23) v1 ==== 50339+0+0 (secure 0 0 0) 0x7f37e001d050 con 0x7f37f0071a70 2026-03-09T20:47:31.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.054+0000 7f37df7fe640 1 --2- 192.168.123.107:0/338283047 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f37c003de60 0x7f37c0040320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:47:31.056 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.055+0000 7f37df7fe640 1 -- 192.168.123.107:0/338283047 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 5706+0+0 (secure 0 0 0) 0x7f37e0025080 con 0x7f37f0071a70 2026-03-09T20:47:31.061 INFO:tasks.workunit.client.0.vm07.stdout:2/433: dread d2/f3e [4194304,4194304] 0 2026-03-09T20:47:31.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.063+0000 7f37df7fe640 1 -- 192.168.123.107:0/338283047 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7f37e00137d0 con 0x7f37f0071a70 2026-03-09T20:47:31.069 INFO:tasks.workunit.client.0.vm07.stdout:3/407: creat d1/d5/d9/d11/f84 x:0 0 0 2026-03-09T20:47:31.070 INFO:tasks.workunit.client.0.vm07.stdout:3/408: readlink d1/d5/l70 0 2026-03-09T20:47:31.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.071+0000 7f37edd74640 1 --2- 
192.168.123.107:0/338283047 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f37c003de60 0x7f37c0040320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:47:31.074 INFO:tasks.workunit.client.1.vm10.stdout:2/353: rmdir d5/d18/d27/d28 39 2026-03-09T20:47:31.079 INFO:tasks.workunit.client.0.vm07.stdout:5/490: symlink d5/d50/lb0 0 2026-03-09T20:47:31.082 INFO:tasks.workunit.client.1.vm10.stdout:6/358: dread d3/d12/f25 [0,4194304] 0 2026-03-09T20:47:31.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.079+0000 7f37edd74640 1 --2- 192.168.123.107:0/338283047 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f37c003de60 0x7f37c0040320 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f37e80062a0 tx=0x7f37e8006230 comp rx=0 tx=0).ready entity=mgr.24439 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:47:31.100 INFO:tasks.workunit.client.1.vm10.stdout:6/359: dwrite d3/f21 [0,4194304] 0 2026-03-09T20:47:31.103 INFO:tasks.workunit.client.1.vm10.stdout:6/360: truncate d3/d12/d36/f64 106179 0 2026-03-09T20:47:31.104 INFO:tasks.workunit.client.1.vm10.stdout:4/274: write d1/d8/d1c/f23 [605784,22512] 0 2026-03-09T20:47:31.104 INFO:tasks.workunit.client.0.vm07.stdout:0/472: unlink d1/d2/d4b/l7b 0 2026-03-09T20:47:31.104 INFO:tasks.workunit.client.1.vm10.stdout:6/361: readlink d3/d12/l20 0 2026-03-09T20:47:31.120 INFO:tasks.workunit.client.0.vm07.stdout:6/414: write f5 [4266162,17503] 0 2026-03-09T20:47:31.123 INFO:tasks.workunit.client.0.vm07.stdout:3/409: dread d1/d5/d9/d2f/d34/f5c [0,4194304] 0 2026-03-09T20:47:31.128 INFO:tasks.workunit.client.1.vm10.stdout:9/387: write d2/d28/f32 [150628,26524] 0 2026-03-09T20:47:31.128 INFO:tasks.workunit.client.1.vm10.stdout:9/388: chown d2/d28/d47/d67 28 1 2026-03-09T20:47:31.130 INFO:tasks.workunit.client.1.vm10.stdout:3/321: dwrite 
dc/d14/d20/d2e/f32 [0,4194304] 0 2026-03-09T20:47:31.132 INFO:tasks.workunit.client.0.vm07.stdout:9/371: creat d4/d8/d19/f86 x:0 0 0 2026-03-09T20:47:31.152 INFO:tasks.workunit.client.1.vm10.stdout:5/309: write d2/f8 [749553,96939] 0 2026-03-09T20:47:31.155 INFO:tasks.workunit.client.0.vm07.stdout:7/447: rmdir d3/d9b 0 2026-03-09T20:47:31.172 INFO:tasks.workunit.client.0.vm07.stdout:7/448: dread d3/da/db/d14/d43/f68 [0,4194304] 0 2026-03-09T20:47:31.186 INFO:tasks.workunit.client.0.vm07.stdout:8/363: creat d1/dc/f75 x:0 0 0 2026-03-09T20:47:31.207 INFO:tasks.workunit.client.0.vm07.stdout:1/443: mkdir d3/d23/d55/d56/d90 0 2026-03-09T20:47:31.208 INFO:tasks.workunit.client.0.vm07.stdout:1/444: truncate d3/d23/d55/d56/d60/f8e 194878 0 2026-03-09T20:47:31.215 INFO:tasks.workunit.client.1.vm10.stdout:0/309: dwrite d2/d9/da/de/d1a/d25/d34/f46 [0,4194304] 0 2026-03-09T20:47:31.219 INFO:tasks.workunit.client.1.vm10.stdout:0/310: chown d2/d9/da/de/d1a/d25/d3e 198 1 2026-03-09T20:47:31.271 INFO:tasks.workunit.client.1.vm10.stdout:2/354: dwrite f1 [0,4194304] 0 2026-03-09T20:47:31.274 INFO:tasks.workunit.client.1.vm10.stdout:6/362: write d3/d12/d36/f6e [372750,58835] 0 2026-03-09T20:47:31.274 INFO:tasks.workunit.client.1.vm10.stdout:4/275: dread d1/d2/f2a [0,4194304] 0 2026-03-09T20:47:31.275 INFO:tasks.workunit.client.1.vm10.stdout:4/276: readlink d1/l15 0 2026-03-09T20:47:31.279 INFO:tasks.workunit.client.1.vm10.stdout:6/363: dwrite f0 [0,4194304] 0 2026-03-09T20:47:31.281 INFO:tasks.workunit.client.1.vm10.stdout:4/277: dread d1/d8/d1b/d30/f48 [0,4194304] 0 2026-03-09T20:47:31.282 INFO:tasks.workunit.client.1.vm10.stdout:4/278: readlink d1/d2/d3/l19 0 2026-03-09T20:47:31.283 INFO:tasks.workunit.client.1.vm10.stdout:4/279: dread - d1/d8/d1b/f42 zero size 2026-03-09T20:47:31.287 INFO:tasks.workunit.client.1.vm10.stdout:6/364: dwrite d3/d12/d36/f64 [0,4194304] 0 2026-03-09T20:47:31.290 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:31 vm07.local ceph-mon[49120]: 
from='client.24469 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:47:31.290 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:31 vm07.local ceph-mon[49120]: from='client.24473 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:47:31.290 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:31 vm07.local ceph-mon[49120]: pgmap v4: 65 pgs: 65 active+clean; 1.7 GiB data, 6.5 GiB used, 113 GiB / 120 GiB avail 2026-03-09T20:47:31.290 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:31 vm07.local ceph-mon[49120]: mgrmap e23: vm10.byqahe(active, since 2s) 2026-03-09T20:47:31.290 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:31 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/2982579775' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:47:31.290 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:31 vm07.local ceph-mon[49120]: from='client.? 
192.168.123.107:0/1166850174' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:47:31.294 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:47:31.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.288+0000 7f37f4ba3640 1 -- 192.168.123.107:0/338283047 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f37f007a0b0 con 0x7f37f0071a70 2026-03-09T20:47:31.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.289+0000 7f37df7fe640 1 -- 192.168.123.107:0/338283047 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f37e001ec10 con 0x7f37f0071a70 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 -- 192.168.123.107:0/338283047 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f37c003de60 msgr2=0x7f37c0040320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 --2- 192.168.123.107:0/338283047 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f37c003de60 0x7f37c0040320 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f37e80062a0 tx=0x7f37e8006230 comp rx=0 tx=0).stop 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 -- 192.168.123.107:0/338283047 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f37f0071a70 msgr2=0x7f37f00840d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 --2- 192.168.123.107:0/338283047 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f37f0071a70 0x7f37f00840d0 secure :-1 
s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f37e00079d0 tx=0x7f37e0007ea0 comp rx=0 tx=0).stop 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 -- 192.168.123.107:0/338283047 shutdown_connections 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 --2- 192.168.123.107:0/338283047 >> [v2:192.168.123.110:6828/2207204228,v1:192.168.123.110:6829/2207204228] conn(0x7f37c003de60 0x7f37c0040320 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 --2- 192.168.123.107:0/338283047 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37f0082720 0x7f37f0082ba0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 --2- 192.168.123.107:0/338283047 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f37f0071a70 0x7f37f00840d0 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 -- 192.168.123.107:0/338283047 >> 192.168.123.107:0/338283047 conn(0x7f37f006d4f0 msgr2=0x7f37f0073150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 -- 192.168.123.107:0/338283047 shutdown_connections 2026-03-09T20:47:31.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:47:31.299+0000 7f37dd7fa640 1 -- 192.168.123.107:0/338283047 wait complete. 
2026-03-09T20:47:31.316 INFO:tasks.workunit.client.1.vm10.stdout:4/280: dwrite d1/d2/f43 [0,4194304] 0 2026-03-09T20:47:31.318 INFO:tasks.workunit.client.1.vm10.stdout:4/281: stat d1/d2/f2a 0 2026-03-09T20:47:31.352 INFO:tasks.workunit.client.1.vm10.stdout:1/340: truncate d2/da/f34 1365988 0 2026-03-09T20:47:31.353 INFO:tasks.workunit.client.1.vm10.stdout:9/389: rmdir d2/d3 39 2026-03-09T20:47:31.356 INFO:tasks.workunit.client.1.vm10.stdout:3/322: creat dc/d14/d20/d2e/d56/f68 x:0 0 0 2026-03-09T20:47:31.357 INFO:tasks.workunit.client.1.vm10.stdout:3/323: write dc/d14/d26/d37/f3e [391709,39239] 0 2026-03-09T20:47:31.364 INFO:tasks.workunit.client.1.vm10.stdout:5/310: rename d2/l48 to d2/d58/d6c/l7e 0 2026-03-09T20:47:31.367 INFO:tasks.workunit.client.1.vm10.stdout:5/311: chown d2/d1b/d54 6742498 1 2026-03-09T20:47:31.368 INFO:tasks.workunit.client.1.vm10.stdout:5/312: fdatasync d2/f8 0 2026-03-09T20:47:31.374 INFO:tasks.workunit.client.1.vm10.stdout:0/311: read d2/db/f13 [915397,104094] 0 2026-03-09T20:47:31.377 INFO:tasks.workunit.client.1.vm10.stdout:2/355: mknod d5/d18/d27/d38/c72 0 2026-03-09T20:47:31.430 INFO:tasks.workunit.client.1.vm10.stdout:6/365: truncate d3/f4d 639971 0 2026-03-09T20:47:31.474 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:31 vm10.local ceph-mon[57011]: from='client.24469 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:47:31.474 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:31 vm10.local ceph-mon[57011]: from='client.24473 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:47:31.474 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:31 vm10.local ceph-mon[57011]: pgmap v4: 65 pgs: 65 active+clean; 1.7 GiB data, 6.5 GiB used, 113 GiB / 120 GiB avail 2026-03-09T20:47:31.474 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:31 vm10.local ceph-mon[57011]: mgrmap e23: vm10.byqahe(active, since 2s) 
2026-03-09T20:47:31.474 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:31 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/2982579775' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:47:31.474 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:31 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/1166850174' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:47:31.558 INFO:tasks.workunit.client.1.vm10.stdout:4/282: symlink d1/d8/d1c/d41/l59 0 2026-03-09T20:47:31.568 INFO:tasks.workunit.client.1.vm10.stdout:3/324: fsync dc/d14/d26/d29/d2a/f5e 0 2026-03-09T20:47:31.573 INFO:tasks.workunit.client.1.vm10.stdout:3/325: dwrite dc/d14/d27/f3f [0,4194304] 0 2026-03-09T20:47:31.576 INFO:tasks.workunit.client.1.vm10.stdout:7/317: getdents db/d21 0 2026-03-09T20:47:31.586 INFO:tasks.workunit.client.0.vm07.stdout:5/491: mkdir d5/df/d13/d6c/db1 0 2026-03-09T20:47:31.587 INFO:tasks.workunit.client.0.vm07.stdout:5/492: dread - d5/df/d13/d30/fac zero size 2026-03-09T20:47:31.588 INFO:tasks.workunit.client.1.vm10.stdout:5/313: rmdir d2/d1b 39 2026-03-09T20:47:31.588 INFO:tasks.workunit.client.1.vm10.stdout:8/371: getdents d0/d22/d25/d6c 0 2026-03-09T20:47:31.591 INFO:tasks.workunit.client.0.vm07.stdout:6/415: creat d8/d26/d2a/f81 x:0 0 0 2026-03-09T20:47:31.591 INFO:tasks.workunit.client.0.vm07.stdout:0/473: creat d1/d1f/d53/d72/f95 x:0 0 0 2026-03-09T20:47:31.594 INFO:tasks.workunit.client.1.vm10.stdout:6/366: creat d3/da/f76 x:0 0 0 2026-03-09T20:47:31.599 INFO:tasks.workunit.client.0.vm07.stdout:7/449: readlink d3/d58/l72 0 2026-03-09T20:47:31.599 INFO:tasks.workunit.client.0.vm07.stdout:1/445: symlink d3/d23/d67/l91 0 2026-03-09T20:47:31.612 INFO:tasks.workunit.client.1.vm10.stdout:3/326: mkdir dc/d14/d26/d29/d40/d48/d69 0 2026-03-09T20:47:31.619 INFO:tasks.workunit.client.1.vm10.stdout:7/318: creat db/d1f/f62 x:0 0 0 2026-03-09T20:47:31.619 INFO:tasks.workunit.client.1.vm10.stdout:6/367: creat 
d3/da/d11/d31/d4c/d60/f77 x:0 0 0 2026-03-09T20:47:31.620 INFO:tasks.workunit.client.0.vm07.stdout:9/372: link d4/d11/d23/l7b d4/d16/d29/d24/d37/d44/d62/l87 0 2026-03-09T20:47:31.620 INFO:tasks.workunit.client.0.vm07.stdout:4/327: getdents d2/d55 0 2026-03-09T20:47:31.620 INFO:tasks.workunit.client.0.vm07.stdout:1/446: dread d3/d23/d55/d56/d60/f7c [0,4194304] 0 2026-03-09T20:47:31.621 INFO:tasks.workunit.client.1.vm10.stdout:3/327: fsync dc/d14/d20/d2e/d56/f13 0 2026-03-09T20:47:31.624 INFO:tasks.workunit.client.1.vm10.stdout:6/368: stat d3/d30/d6a 0 2026-03-09T20:47:31.624 INFO:tasks.workunit.client.1.vm10.stdout:5/314: symlink d2/l7f 0 2026-03-09T20:47:31.626 INFO:tasks.workunit.client.1.vm10.stdout:1/341: symlink d2/da/l71 0 2026-03-09T20:47:31.629 INFO:tasks.workunit.client.0.vm07.stdout:4/328: chown d2/df/d17/c3d 23 1 2026-03-09T20:47:31.629 INFO:tasks.workunit.client.1.vm10.stdout:6/369: write d3/d12/d36/f64 [81226,72009] 0 2026-03-09T20:47:31.632 INFO:tasks.workunit.client.1.vm10.stdout:7/319: dwrite db/d28/d2b/d36/d40/f44 [4194304,4194304] 0 2026-03-09T20:47:31.643 INFO:tasks.workunit.client.0.vm07.stdout:6/416: dwrite d8/d26/d2a/d40/d67/f70 [0,4194304] 0 2026-03-09T20:47:31.644 INFO:tasks.workunit.client.1.vm10.stdout:6/370: dwrite d3/d12/d24/d39/f6c [0,4194304] 0 2026-03-09T20:47:31.650 INFO:tasks.workunit.client.0.vm07.stdout:1/447: link d3/fc d3/d23/d67/f92 0 2026-03-09T20:47:31.653 INFO:tasks.workunit.client.0.vm07.stdout:1/448: chown d3/f28 23 1 2026-03-09T20:47:31.657 INFO:tasks.workunit.client.0.vm07.stdout:2/434: rename d2/d11/c27 to d2/db/d49/d7d/c83 0 2026-03-09T20:47:31.658 INFO:tasks.workunit.client.1.vm10.stdout:1/342: symlink d2/da/d25/d3e/d55/l72 0 2026-03-09T20:47:31.659 INFO:tasks.workunit.client.1.vm10.stdout:1/343: chown d2/da/d25/f27 0 1 2026-03-09T20:47:31.663 INFO:tasks.workunit.client.0.vm07.stdout:6/417: mknod d8/d16/d22/c82 0 2026-03-09T20:47:31.663 INFO:tasks.workunit.client.1.vm10.stdout:7/320: mkdir db/d28/d2b/d36/d63 0 
2026-03-09T20:47:31.665 INFO:tasks.workunit.client.1.vm10.stdout:1/344: read d2/f21 [3220688,128973] 0 2026-03-09T20:47:31.667 INFO:tasks.workunit.client.0.vm07.stdout:2/435: rmdir d2/db/d28/d5c 39 2026-03-09T20:47:31.669 INFO:tasks.workunit.client.0.vm07.stdout:2/436: rmdir d2/db/d49/d7d 39 2026-03-09T20:47:31.670 INFO:tasks.workunit.client.0.vm07.stdout:2/437: dread - d2/d11/f51 zero size 2026-03-09T20:47:31.691 INFO:tasks.workunit.client.1.vm10.stdout:1/345: dwrite d2/da/f10 [0,4194304] 0 2026-03-09T20:47:31.691 INFO:tasks.workunit.client.1.vm10.stdout:6/371: rmdir d3/d12/d24 39 2026-03-09T20:47:31.691 INFO:tasks.workunit.client.1.vm10.stdout:6/372: chown d3/d30/f43 217 1 2026-03-09T20:47:31.692 INFO:tasks.workunit.client.0.vm07.stdout:2/438: mknod d2/db/d28/d5c/c84 0 2026-03-09T20:47:31.692 INFO:tasks.workunit.client.0.vm07.stdout:2/439: chown d2/db/d1c 0 1 2026-03-09T20:47:31.692 INFO:tasks.workunit.client.0.vm07.stdout:2/440: stat d2/db/d49/f64 0 2026-03-09T20:47:31.692 INFO:tasks.workunit.client.0.vm07.stdout:2/441: mkdir d2/db/d49/d7d/d85 0 2026-03-09T20:47:31.693 INFO:tasks.workunit.client.1.vm10.stdout:6/373: readlink d3/da/d11/d31/d4c/l71 0 2026-03-09T20:47:31.695 INFO:tasks.workunit.client.1.vm10.stdout:7/321: dread db/d54/f57 [0,4194304] 0 2026-03-09T20:47:31.711 INFO:tasks.workunit.client.1.vm10.stdout:6/374: getdents d3/da/d11/d26 0 2026-03-09T20:47:31.714 INFO:tasks.workunit.client.0.vm07.stdout:0/474: sync 2026-03-09T20:47:31.717 INFO:tasks.workunit.client.0.vm07.stdout:0/475: dread - d1/d1f/d53/d72/f94 zero size 2026-03-09T20:47:31.719 INFO:tasks.workunit.client.1.vm10.stdout:6/375: chown d3/da/d11/d26/c3c 5765 1 2026-03-09T20:47:31.741 INFO:tasks.workunit.client.1.vm10.stdout:2/356: dwrite d5/d18/d27/f29 [0,4194304] 0 2026-03-09T20:47:31.742 INFO:tasks.workunit.client.1.vm10.stdout:2/357: readlink d5/l40 0 2026-03-09T20:47:31.743 INFO:tasks.workunit.client.0.vm07.stdout:0/476: creat d1/d2/d4b/f96 x:0 0 0 2026-03-09T20:47:31.745 
INFO:tasks.workunit.client.0.vm07.stdout:0/477: read d1/d1f/d20/f21 [628753,66546] 0 2026-03-09T20:47:31.745 INFO:tasks.workunit.client.0.vm07.stdout:0/478: readlink d1/d2/d4b/l52 0 2026-03-09T20:47:31.758 INFO:tasks.workunit.client.1.vm10.stdout:6/376: dread d3/da/d11/d26/d5b/f55 [0,4194304] 0 2026-03-09T20:47:31.760 INFO:tasks.workunit.client.1.vm10.stdout:2/358: unlink d5/l1e 0 2026-03-09T20:47:31.768 INFO:tasks.workunit.client.1.vm10.stdout:2/359: unlink d5/d18/d27/d38/d61/l68 0 2026-03-09T20:47:31.769 INFO:tasks.workunit.client.1.vm10.stdout:2/360: read d5/d18/f2c [2225549,119800] 0 2026-03-09T20:47:31.770 INFO:tasks.workunit.client.0.vm07.stdout:5/493: write d5/df/f4a [749758,9864] 0 2026-03-09T20:47:31.770 INFO:tasks.workunit.client.1.vm10.stdout:0/312: creat d2/f65 x:0 0 0 2026-03-09T20:47:31.771 INFO:tasks.workunit.client.1.vm10.stdout:2/361: write d5/d18/d1b/f70 [1311119,77060] 0 2026-03-09T20:47:31.772 INFO:tasks.workunit.client.1.vm10.stdout:0/313: fdatasync d2/db/d5d/f5f 0 2026-03-09T20:47:31.773 INFO:tasks.workunit.client.0.vm07.stdout:5/494: write d5/df/d13/f1f [3763425,48376] 0 2026-03-09T20:47:31.773 INFO:tasks.workunit.client.1.vm10.stdout:9/390: dwrite d2/d3/de/f42 [0,4194304] 0 2026-03-09T20:47:31.773 INFO:tasks.workunit.client.0.vm07.stdout:8/364: dwrite d1/dc/d16/d26/f36 [4194304,4194304] 0 2026-03-09T20:47:31.773 INFO:tasks.workunit.client.1.vm10.stdout:9/391: readlink d2/l27 0 2026-03-09T20:47:31.775 INFO:tasks.workunit.client.1.vm10.stdout:9/392: write d2/d3/fa [4732466,486] 0 2026-03-09T20:47:31.777 INFO:tasks.workunit.client.0.vm07.stdout:0/479: sync 2026-03-09T20:47:31.804 INFO:tasks.workunit.client.1.vm10.stdout:2/362: symlink d5/d18/d27/d38/d61/l73 0 2026-03-09T20:47:31.805 INFO:tasks.workunit.client.0.vm07.stdout:3/410: symlink d1/d5/d9/d11/l85 0 2026-03-09T20:47:31.805 INFO:tasks.workunit.client.0.vm07.stdout:0/480: fdatasync d1/d2/d4b/f61 0 2026-03-09T20:47:31.805 INFO:tasks.workunit.client.1.vm10.stdout:0/314: fsync d2/db/f13 0 
2026-03-09T20:47:31.807 INFO:tasks.workunit.client.1.vm10.stdout:0/315: stat d2/d9/da/d11/c2e 0 2026-03-09T20:47:31.816 INFO:tasks.workunit.client.0.vm07.stdout:7/450: dwrite d3/da/db/f1e [0,4194304] 0 2026-03-09T20:47:31.819 INFO:tasks.workunit.client.0.vm07.stdout:4/329: write d2/fa [2110903,21029] 0 2026-03-09T20:47:31.820 INFO:tasks.workunit.client.0.vm07.stdout:0/481: rmdir d1/d2 39 2026-03-09T20:47:31.822 INFO:tasks.workunit.client.1.vm10.stdout:8/372: dwrite d0/d22/f29 [0,4194304] 0 2026-03-09T20:47:31.823 INFO:tasks.workunit.client.0.vm07.stdout:0/482: readlink d1/d1f/d53/l8c 0 2026-03-09T20:47:31.823 INFO:tasks.workunit.client.1.vm10.stdout:3/328: link dc/d14/d26/d37/l4d dc/d14/d26/d29/l6a 0 2026-03-09T20:47:31.823 INFO:tasks.workunit.client.0.vm07.stdout:4/330: readlink d2/df/d17/l24 0 2026-03-09T20:47:31.824 INFO:tasks.workunit.client.1.vm10.stdout:8/373: chown d0/d22/d2c/f36 1772 1 2026-03-09T20:47:31.824 INFO:tasks.workunit.client.1.vm10.stdout:3/329: stat dc/d14/d20/d21/d3b/f4f 0 2026-03-09T20:47:31.828 INFO:tasks.workunit.client.0.vm07.stdout:9/373: dwrite d4/d11/f2c [0,4194304] 0 2026-03-09T20:47:31.829 INFO:tasks.workunit.client.0.vm07.stdout:7/451: creat d3/da/db/d14/d1f/f9e x:0 0 0 2026-03-09T20:47:31.833 INFO:tasks.workunit.client.1.vm10.stdout:0/316: dwrite d2/d9/da/d35/f3a [0,4194304] 0 2026-03-09T20:47:31.833 INFO:tasks.workunit.client.0.vm07.stdout:3/411: dwrite d1/d5/d9/d2f/d34/f68 [0,4194304] 0 2026-03-09T20:47:31.836 INFO:tasks.workunit.client.0.vm07.stdout:3/412: read - d1/f78 zero size 2026-03-09T20:47:31.844 INFO:tasks.workunit.client.1.vm10.stdout:8/374: dread d0/d22/d25/d6c/f68 [0,4194304] 0 2026-03-09T20:47:31.849 INFO:tasks.workunit.client.0.vm07.stdout:5/495: dread d5/d19/f2c [0,4194304] 0 2026-03-09T20:47:31.852 INFO:tasks.workunit.client.1.vm10.stdout:2/363: creat d5/d18/d27/f74 x:0 0 0 2026-03-09T20:47:31.853 INFO:tasks.workunit.client.0.vm07.stdout:1/449: write d3/d14/d54/f13 [3502851,77925] 0 2026-03-09T20:47:31.876 
INFO:tasks.workunit.client.1.vm10.stdout:5/315: dwrite d2/d27/d37/d46/d5d/d5f/d69/f76 [0,4194304] 0 2026-03-09T20:47:31.883 INFO:tasks.workunit.client.1.vm10.stdout:3/330: rmdir dc/d14/d26/d29/d40 39 2026-03-09T20:47:31.885 INFO:tasks.workunit.client.1.vm10.stdout:3/331: write dc/d14/d26/f64 [736699,126534] 0 2026-03-09T20:47:31.885 INFO:tasks.workunit.client.1.vm10.stdout:7/322: write db/d28/d2b/d36/d40/f48 [989408,131000] 0 2026-03-09T20:47:31.886 INFO:tasks.workunit.client.0.vm07.stdout:2/442: write d2/f17 [4820032,10598] 0 2026-03-09T20:47:31.889 INFO:tasks.workunit.client.1.vm10.stdout:1/346: dwrite d2/da/f26 [4194304,4194304] 0 2026-03-09T20:47:31.907 INFO:tasks.workunit.client.1.vm10.stdout:4/283: rename d1/f33 to d1/d8/d1c/d2b/f5a 0 2026-03-09T20:47:31.909 INFO:tasks.workunit.client.0.vm07.stdout:4/331: mknod d2/d1f/d2d/d3f/d4a/c58 0 2026-03-09T20:47:31.920 INFO:tasks.workunit.client.0.vm07.stdout:7/452: dwrite d3/da/db/d14/d1f/d2b/d52/f6e [0,4194304] 0 2026-03-09T20:47:31.928 INFO:tasks.workunit.client.0.vm07.stdout:7/453: dwrite d3/f88 [0,4194304] 0 2026-03-09T20:47:31.933 INFO:tasks.workunit.client.0.vm07.stdout:3/413: mkdir d1/d5/d9/d2f/d86 0 2026-03-09T20:47:31.933 INFO:tasks.workunit.client.0.vm07.stdout:0/483: creat d1/d2/dc/f97 x:0 0 0 2026-03-09T20:47:31.941 INFO:tasks.workunit.client.1.vm10.stdout:7/323: truncate f3 1828612 0 2026-03-09T20:47:31.950 INFO:tasks.workunit.client.1.vm10.stdout:1/347: creat d2/da/d25/f73 x:0 0 0 2026-03-09T20:47:31.957 INFO:tasks.workunit.client.0.vm07.stdout:2/443: rmdir d2/db/d28 39 2026-03-09T20:47:31.963 INFO:tasks.workunit.client.0.vm07.stdout:8/365: write d1/f20 [154924,48461] 0 2026-03-09T20:47:31.965 INFO:tasks.workunit.client.1.vm10.stdout:9/393: dwrite d2/d33/d37/f66 [0,4194304] 0 2026-03-09T20:47:31.966 INFO:tasks.workunit.client.1.vm10.stdout:6/377: rename d3/f7 to d3/d12/d36/d5c/f78 0 2026-03-09T20:47:31.974 INFO:tasks.workunit.client.0.vm07.stdout:4/332: truncate d2/d1f/f45 4691407 0 
2026-03-09T20:47:31.983 INFO:tasks.workunit.client.1.vm10.stdout:0/317: mknod d2/d9/da/c66 0 2026-03-09T20:47:31.988 INFO:tasks.workunit.client.0.vm07.stdout:7/454: symlink d3/da/db/l9f 0 2026-03-09T20:47:31.988 INFO:tasks.workunit.client.1.vm10.stdout:2/364: mknod d5/d18/d27/c75 0 2026-03-09T20:47:31.990 INFO:tasks.workunit.client.0.vm07.stdout:0/484: unlink d1/d2/dc/d17/l4c 0 2026-03-09T20:47:31.990 INFO:tasks.workunit.client.1.vm10.stdout:5/316: mkdir d2/d80 0 2026-03-09T20:47:31.992 INFO:tasks.workunit.client.1.vm10.stdout:7/324: creat db/d46/f64 x:0 0 0 2026-03-09T20:47:31.992 INFO:tasks.workunit.client.1.vm10.stdout:7/325: dread - db/d46/f64 zero size 2026-03-09T20:47:31.998 INFO:tasks.workunit.client.1.vm10.stdout:0/318: creat d2/db/d5d/f67 x:0 0 0 2026-03-09T20:47:32.002 INFO:tasks.workunit.client.1.vm10.stdout:5/317: dwrite d2/f35 [0,4194304] 0 2026-03-09T20:47:32.002 INFO:tasks.workunit.client.1.vm10.stdout:5/318: write d2/f35 [2257217,86622] 0 2026-03-09T20:47:32.013 INFO:tasks.workunit.client.0.vm07.stdout:2/444: fdatasync d2/db/d1c/f3a 0 2026-03-09T20:47:32.013 INFO:tasks.workunit.client.0.vm07.stdout:8/366: truncate d1/dc/d16/d26/f4f 497299 0 2026-03-09T20:47:32.014 INFO:tasks.workunit.client.0.vm07.stdout:9/374: creat d4/d11/f88 x:0 0 0 2026-03-09T20:47:32.016 INFO:tasks.workunit.client.1.vm10.stdout:8/375: creat d0/d22/d25/f74 x:0 0 0 2026-03-09T20:47:32.017 INFO:tasks.workunit.client.1.vm10.stdout:8/376: write d0/d22/d25/d2e/d41/d47/f5a [307701,109276] 0 2026-03-09T20:47:32.018 INFO:tasks.workunit.client.0.vm07.stdout:6/418: getdents d8/d16/d22 0 2026-03-09T20:47:32.018 INFO:tasks.workunit.client.0.vm07.stdout:7/455: unlink d3/da/db/d14/d1f/f46 0 2026-03-09T20:47:32.019 INFO:tasks.workunit.client.1.vm10.stdout:8/377: dread d0/d22/d25/d40/f5f [0,4194304] 0 2026-03-09T20:47:32.020 INFO:tasks.workunit.client.1.vm10.stdout:9/394: symlink d2/d3/de/d35/l92 0 2026-03-09T20:47:32.028 INFO:tasks.workunit.client.0.vm07.stdout:9/375: unlink 
d4/d16/d29/d24/d37/f71 0 2026-03-09T20:47:32.035 INFO:tasks.workunit.client.1.vm10.stdout:0/319: creat d2/d9/da/d35/f68 x:0 0 0 2026-03-09T20:47:32.035 INFO:tasks.workunit.client.1.vm10.stdout:0/320: readlink d2/d9/da/d11/l24 0 2026-03-09T20:47:32.035 INFO:tasks.workunit.client.1.vm10.stdout:5/319: read d2/d27/f34 [5009558,51263] 0 2026-03-09T20:47:32.035 INFO:tasks.workunit.client.0.vm07.stdout:0/485: mkdir d1/d2/d98 0 2026-03-09T20:47:32.035 INFO:tasks.workunit.client.0.vm07.stdout:9/376: write d4/d11/f4f [1380464,67617] 0 2026-03-09T20:47:32.035 INFO:tasks.workunit.client.0.vm07.stdout:2/445: mknod d2/d46/d6e/c86 0 2026-03-09T20:47:32.035 INFO:tasks.workunit.client.0.vm07.stdout:8/367: mkdir d1/d5d/d6f/d2f/d53/d76 0 2026-03-09T20:47:32.036 INFO:tasks.workunit.client.0.vm07.stdout:8/368: chown d1/d5d/d6f/d2f/f51 94 1 2026-03-09T20:47:32.037 INFO:tasks.workunit.client.0.vm07.stdout:8/369: stat d1/dc/d16/d26/d71 0 2026-03-09T20:47:32.043 INFO:tasks.workunit.client.1.vm10.stdout:8/378: dread d0/d22/f35 [0,4194304] 0 2026-03-09T20:47:32.050 INFO:tasks.workunit.client.0.vm07.stdout:6/419: dread d8/f12 [0,4194304] 0 2026-03-09T20:47:32.052 INFO:tasks.workunit.client.0.vm07.stdout:7/456: rename d3/da/db/d14/d1f/d2b/l94 to d3/d58/d82/d90/la0 0 2026-03-09T20:47:32.052 INFO:tasks.workunit.client.1.vm10.stdout:2/365: sync 2026-03-09T20:47:32.055 INFO:tasks.workunit.client.0.vm07.stdout:0/486: readlink d1/d1f/d30/l4f 0 2026-03-09T20:47:32.058 INFO:tasks.workunit.client.1.vm10.stdout:3/332: rename dc/d14/d20/d2e/d56/f13 to dc/d14/d26/d29/f6b 0 2026-03-09T20:47:32.060 INFO:tasks.workunit.client.0.vm07.stdout:3/414: getdents d1/d5/d9/d11 0 2026-03-09T20:47:32.066 INFO:tasks.workunit.client.1.vm10.stdout:5/320: read d2/f3e [1415432,25481] 0 2026-03-09T20:47:32.066 INFO:tasks.workunit.client.0.vm07.stdout:5/496: write d5/df/d13/f38 [987295,103898] 0 2026-03-09T20:47:32.073 INFO:tasks.workunit.client.0.vm07.stdout:3/415: dwrite d1/d5/f25 [0,4194304] 0 2026-03-09T20:47:32.080 
INFO:tasks.workunit.client.1.vm10.stdout:7/326: link db/d21/d23/f29 db/d28/d4c/f65 0 2026-03-09T20:47:32.086 INFO:tasks.workunit.client.1.vm10.stdout:7/327: chown db/d28/f4f 24158 1 2026-03-09T20:47:32.087 INFO:tasks.workunit.client.1.vm10.stdout:7/328: chown db/d28/d2b/d36/l17 163129101 1 2026-03-09T20:47:32.095 INFO:tasks.workunit.client.1.vm10.stdout:7/329: dread db/d28/d2b/d36/d40/f44 [4194304,4194304] 0 2026-03-09T20:47:32.101 INFO:tasks.workunit.client.1.vm10.stdout:8/379: dread d0/d22/d25/d2e/d41/d47/f5a [0,4194304] 0 2026-03-09T20:47:32.102 INFO:tasks.workunit.client.1.vm10.stdout:4/284: write d1/d2/d3/f18 [3782570,87476] 0 2026-03-09T20:47:32.102 INFO:tasks.workunit.client.1.vm10.stdout:4/285: stat d1/d8/d1c/f1f 0 2026-03-09T20:47:32.103 INFO:tasks.workunit.client.1.vm10.stdout:4/286: write d1/d8/d1c/f3e [473626,65623] 0 2026-03-09T20:47:32.106 INFO:tasks.workunit.client.0.vm07.stdout:1/450: dwrite d3/d23/d52/f73 [0,4194304] 0 2026-03-09T20:47:32.109 INFO:tasks.workunit.client.1.vm10.stdout:1/348: write d2/da/d25/f48 [52390,23676] 0 2026-03-09T20:47:32.115 INFO:tasks.workunit.client.1.vm10.stdout:1/349: dwrite d2/f3c [0,4194304] 0 2026-03-09T20:47:32.140 INFO:tasks.workunit.client.1.vm10.stdout:5/321: dread d2/d1b/f41 [0,4194304] 0 2026-03-09T20:47:32.154 INFO:tasks.workunit.client.0.vm07.stdout:9/377: mkdir d4/d8/d19/d89 0 2026-03-09T20:47:32.163 INFO:tasks.workunit.client.1.vm10.stdout:9/395: rmdir d2/d33/d86 0 2026-03-09T20:47:32.164 INFO:tasks.workunit.client.1.vm10.stdout:9/396: chown d2/d3/d6d 663643693 1 2026-03-09T20:47:32.172 INFO:tasks.workunit.client.0.vm07.stdout:4/333: dwrite d2/d1f/f45 [0,4194304] 0 2026-03-09T20:47:32.181 INFO:tasks.workunit.client.1.vm10.stdout:7/330: unlink db/d46/f64 0 2026-03-09T20:47:32.188 INFO:tasks.workunit.client.1.vm10.stdout:6/378: mkdir d3/d79 0 2026-03-09T20:47:32.188 INFO:tasks.workunit.client.1.vm10.stdout:9/397: creat d2/d28/d47/d67/f93 x:0 0 0 2026-03-09T20:47:32.189 
INFO:tasks.workunit.client.1.vm10.stdout:0/321: rename d2/d9/da/de/d1a/d25/d3e to d2/d9/d69 0 2026-03-09T20:47:32.190 INFO:tasks.workunit.client.1.vm10.stdout:5/322: mkdir d2/d27/d75/d81 0 2026-03-09T20:47:32.192 INFO:tasks.workunit.client.1.vm10.stdout:5/323: readlink d2/d27/l44 0 2026-03-09T20:47:32.192 INFO:tasks.workunit.client.1.vm10.stdout:7/331: creat db/d46/f66 x:0 0 0 2026-03-09T20:47:32.194 INFO:tasks.workunit.client.1.vm10.stdout:9/398: creat d2/d3/de/d35/d44/f94 x:0 0 0 2026-03-09T20:47:32.197 INFO:tasks.workunit.client.1.vm10.stdout:8/380: rename d0/d22/l51 to d0/d22/d2f/l75 0 2026-03-09T20:47:32.198 INFO:tasks.workunit.client.0.vm07.stdout:9/378: sync 2026-03-09T20:47:32.198 INFO:tasks.workunit.client.0.vm07.stdout:3/416: dread - d1/d5/d9/d11/d1f/f72 zero size 2026-03-09T20:47:32.202 INFO:tasks.workunit.client.0.vm07.stdout:7/457: dwrite d3/da/db/d14/d1f/d2b/d52/f5e [0,4194304] 0 2026-03-09T20:47:32.206 INFO:tasks.workunit.client.1.vm10.stdout:8/381: sync 2026-03-09T20:47:32.206 INFO:tasks.workunit.client.1.vm10.stdout:4/287: getdents d1/d8/d1c 0 2026-03-09T20:47:32.208 INFO:tasks.workunit.client.1.vm10.stdout:4/288: chown d1/d8/d1c/l21 0 1 2026-03-09T20:47:32.210 INFO:tasks.workunit.client.0.vm07.stdout:8/370: rename d1/dc/d16/d26/f6b to d1/d5d/d6f/d2f/d4d/d63/f77 0 2026-03-09T20:47:32.210 INFO:tasks.workunit.client.0.vm07.stdout:0/487: write d1/d2/ff [187132,101152] 0 2026-03-09T20:47:32.214 INFO:tasks.workunit.client.1.vm10.stdout:5/324: dwrite d2/d1b/f2f [0,4194304] 0 2026-03-09T20:47:32.226 INFO:tasks.workunit.client.0.vm07.stdout:8/371: dread d1/dc/d16/f4a [0,4194304] 0 2026-03-09T20:47:32.226 INFO:tasks.workunit.client.1.vm10.stdout:4/289: dwrite d1/d2/f43 [0,4194304] 0 2026-03-09T20:47:32.226 INFO:tasks.workunit.client.0.vm07.stdout:8/372: chown d1/d5d/d6f/d2f 2714 1 2026-03-09T20:47:32.234 INFO:tasks.workunit.client.0.vm07.stdout:8/373: write d1/dc/d16/d26/f2a [4029518,476] 0 2026-03-09T20:47:32.240 
INFO:tasks.workunit.client.1.vm10.stdout:7/332: symlink db/d28/d2b/l67 0 2026-03-09T20:47:32.243 INFO:tasks.workunit.client.1.vm10.stdout:5/325: dread f1 [0,4194304] 0 2026-03-09T20:47:32.244 INFO:tasks.workunit.client.1.vm10.stdout:6/379: truncate d3/da/d11/f17 812134 0 2026-03-09T20:47:32.250 INFO:tasks.workunit.client.1.vm10.stdout:5/326: sync 2026-03-09T20:47:32.255 INFO:tasks.workunit.client.1.vm10.stdout:7/333: dwrite db/d46/f5a [0,4194304] 0 2026-03-09T20:47:32.257 INFO:tasks.workunit.client.1.vm10.stdout:2/366: write d5/f15 [3510485,97273] 0 2026-03-09T20:47:32.258 INFO:tasks.workunit.client.1.vm10.stdout:9/399: chown d2/c17 26404462 1 2026-03-09T20:47:32.261 INFO:tasks.workunit.client.1.vm10.stdout:3/333: dwrite dc/d14/d20/d2e/d56/f15 [0,4194304] 0 2026-03-09T20:47:32.262 INFO:tasks.workunit.client.1.vm10.stdout:9/400: write d2/d3/f2e [1759511,73540] 0 2026-03-09T20:47:32.269 INFO:tasks.workunit.client.1.vm10.stdout:2/367: sync 2026-03-09T20:47:32.270 INFO:tasks.workunit.client.1.vm10.stdout:3/334: dwrite dc/ff [4194304,4194304] 0 2026-03-09T20:47:32.273 INFO:tasks.workunit.client.1.vm10.stdout:3/335: fdatasync dc/d14/d26/d29/f60 0 2026-03-09T20:47:32.279 INFO:tasks.workunit.client.1.vm10.stdout:3/336: dwrite dc/d14/d26/d29/f60 [0,4194304] 0 2026-03-09T20:47:32.283 INFO:tasks.workunit.client.0.vm07.stdout:5/497: mkdir d5/d33/db2 0 2026-03-09T20:47:32.284 INFO:tasks.workunit.client.0.vm07.stdout:5/498: chown d5/d69/l7f 950 1 2026-03-09T20:47:32.285 INFO:tasks.workunit.client.1.vm10.stdout:8/382: creat d0/d22/f76 x:0 0 0 2026-03-09T20:47:32.299 INFO:tasks.workunit.client.1.vm10.stdout:8/383: chown d0/c16 146569808 1 2026-03-09T20:47:32.303 INFO:tasks.workunit.client.1.vm10.stdout:6/380: symlink d3/da/d11/d31/d4c/d60/l7a 0 2026-03-09T20:47:32.311 INFO:tasks.workunit.client.1.vm10.stdout:8/384: chown d0/d22/d2c/f3f 91525 1 2026-03-09T20:47:32.314 INFO:tasks.workunit.client.1.vm10.stdout:5/327: mknod d2/d39/d4b/c82 0 2026-03-09T20:47:32.319 
INFO:tasks.workunit.client.0.vm07.stdout:3/417: mknod d1/d5/d9/d2f/d3d/d64/d43/d54/c87 0 2026-03-09T20:47:32.319 INFO:tasks.workunit.client.1.vm10.stdout:5/328: fdatasync d2/d39/d4b/f60 0 2026-03-09T20:47:32.319 INFO:tasks.workunit.client.1.vm10.stdout:5/329: dread d2/f5 [0,4194304] 0 2026-03-09T20:47:32.330 INFO:tasks.workunit.client.1.vm10.stdout:9/401: dread - d2/d33/f3c zero size 2026-03-09T20:47:32.331 INFO:tasks.workunit.client.1.vm10.stdout:9/402: write d2/d3/f2e [2789908,7136] 0 2026-03-09T20:47:32.333 INFO:tasks.workunit.client.1.vm10.stdout:7/334: write db/d1f/f5e [1782518,128088] 0 2026-03-09T20:47:32.341 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:32 vm07.local ceph-mon[49120]: from='client.14678 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:47:32.341 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:32 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/338283047' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:47:32.346 INFO:tasks.workunit.client.0.vm07.stdout:0/488: mknod d1/d1f/d53/c99 0 2026-03-09T20:47:32.351 INFO:tasks.workunit.client.1.vm10.stdout:2/368: unlink d5/d18/d1b/c35 0 2026-03-09T20:47:32.356 INFO:tasks.workunit.client.1.vm10.stdout:2/369: sync 2026-03-09T20:47:32.356 INFO:tasks.workunit.client.0.vm07.stdout:6/420: truncate d8/d26/d2a/d40/d69/f62 1743515 0 2026-03-09T20:47:32.363 INFO:tasks.workunit.client.1.vm10.stdout:1/350: rename d2/f1a to d2/da/d25/d46/f74 0 2026-03-09T20:47:32.364 INFO:tasks.workunit.client.1.vm10.stdout:1/351: fsync d2/da/d25/f6c 0 2026-03-09T20:47:32.366 INFO:tasks.workunit.client.1.vm10.stdout:1/352: write d2/da/d25/f48 [7266939,35993] 0 2026-03-09T20:47:32.368 INFO:tasks.workunit.client.0.vm07.stdout:4/334: mkdir d2/df/d59 0 2026-03-09T20:47:32.369 INFO:tasks.workunit.client.0.vm07.stdout:4/335: chown d2/f2b 387321241 1 2026-03-09T20:47:32.376 
INFO:tasks.workunit.client.1.vm10.stdout:4/290: truncate d1/fe 673712 0 2026-03-09T20:47:32.376 INFO:tasks.workunit.client.0.vm07.stdout:4/336: dread d2/d1f/d2d/d3f/f51 [4194304,4194304] 0 2026-03-09T20:47:32.376 INFO:tasks.workunit.client.1.vm10.stdout:4/291: dread - d1/d8/f25 zero size 2026-03-09T20:47:32.378 INFO:tasks.workunit.client.0.vm07.stdout:2/446: rename d2/db/d28/d79 to d2/db/d28/d87 0 2026-03-09T20:47:32.379 INFO:tasks.workunit.client.0.vm07.stdout:2/447: chown d2/db/f7c 37511983 1 2026-03-09T20:47:32.383 INFO:tasks.workunit.client.1.vm10.stdout:6/381: creat d3/da/d11/d31/d4c/f7b x:0 0 0 2026-03-09T20:47:32.388 INFO:tasks.workunit.client.0.vm07.stdout:3/418: mknod d1/d5/d9/d2f/d34/d46/c88 0 2026-03-09T20:47:32.397 INFO:tasks.workunit.client.1.vm10.stdout:5/330: readlink d2/d58/d6c/l7e 0 2026-03-09T20:47:32.407 INFO:tasks.workunit.client.1.vm10.stdout:9/403: read - d2/d12/f62 zero size 2026-03-09T20:47:32.409 INFO:tasks.workunit.client.0.vm07.stdout:6/421: rmdir d8/d26/d2a/d40/d67 39 2026-03-09T20:47:32.410 INFO:tasks.workunit.client.0.vm07.stdout:6/422: chown d8/d26/d2a/d40/f65 791 1 2026-03-09T20:47:32.417 INFO:tasks.workunit.client.0.vm07.stdout:6/423: dwrite d8/d16/d22/d24/d2b/f5a [0,4194304] 0 2026-03-09T20:47:32.421 INFO:tasks.workunit.client.1.vm10.stdout:0/322: dwrite d2/d9/da/d35/d30/f51 [0,4194304] 0 2026-03-09T20:47:32.423 INFO:tasks.workunit.client.1.vm10.stdout:0/323: fsync d2/d9/da/de/d1a/d25/d34/f46 0 2026-03-09T20:47:32.426 INFO:tasks.workunit.client.0.vm07.stdout:8/374: creat d1/d5d/d6f/d2f/d4d/d55/f78 x:0 0 0 2026-03-09T20:47:32.427 INFO:tasks.workunit.client.0.vm07.stdout:8/375: write d1/dc/d16/f4b [3508783,50412] 0 2026-03-09T20:47:32.428 INFO:tasks.workunit.client.0.vm07.stdout:8/376: fdatasync d1/dc/f42 0 2026-03-09T20:47:32.431 INFO:tasks.workunit.client.0.vm07.stdout:1/451: write d3/f3f [269999,94270] 0 2026-03-09T20:47:32.434 INFO:tasks.workunit.client.1.vm10.stdout:2/370: dread - d5/d18/d27/d28/d41/f6e zero size 
2026-03-09T20:47:32.442 INFO:tasks.workunit.client.1.vm10.stdout:3/337: creat dc/d14/d26/d29/d40/f6c x:0 0 0 2026-03-09T20:47:32.445 INFO:tasks.workunit.client.0.vm07.stdout:5/499: fsync d5/d19/f20 0 2026-03-09T20:47:32.445 INFO:tasks.workunit.client.0.vm07.stdout:5/500: stat d5/d33/l37 0 2026-03-09T20:47:32.454 INFO:tasks.workunit.client.1.vm10.stdout:1/353: symlink d2/da/d25/d3e/d55/l75 0 2026-03-09T20:47:32.454 INFO:tasks.workunit.client.1.vm10.stdout:4/292: symlink d1/d8/d1b/l5b 0 2026-03-09T20:47:32.455 INFO:tasks.workunit.client.0.vm07.stdout:4/337: creat d2/d1f/d2d/d3f/d4a/d4b/d52/f5a x:0 0 0 2026-03-09T20:47:32.455 INFO:tasks.workunit.client.1.vm10.stdout:1/354: read - d2/da/d25/f65 zero size 2026-03-09T20:47:32.459 INFO:tasks.workunit.client.0.vm07.stdout:9/379: creat d4/d11/f8a x:0 0 0 2026-03-09T20:47:32.460 INFO:tasks.workunit.client.1.vm10.stdout:4/293: dwrite d1/d8/d1c/f3e [0,4194304] 0 2026-03-09T20:47:32.463 INFO:tasks.workunit.client.0.vm07.stdout:0/489: dwrite d1/d2/dc/d17/f91 [0,4194304] 0 2026-03-09T20:47:32.482 INFO:tasks.workunit.client.1.vm10.stdout:5/331: rename d2/d58/l7d to d2/d27/d37/d46/d5d/d5f/d66/l83 0 2026-03-09T20:47:32.484 INFO:tasks.workunit.client.1.vm10.stdout:5/332: write d2/d27/d37/d46/d5d/d6d/f6e [691115,24904] 0 2026-03-09T20:47:32.490 INFO:tasks.workunit.client.0.vm07.stdout:7/458: rename d3/da/db/d14/d1f/l56 to d3/da/db/d14/la1 0 2026-03-09T20:47:32.494 INFO:tasks.workunit.client.1.vm10.stdout:9/404: mknod d2/d28/d47/c95 0 2026-03-09T20:47:32.496 INFO:tasks.workunit.client.1.vm10.stdout:7/335: creat db/d54/f68 x:0 0 0 2026-03-09T20:47:32.501 INFO:tasks.workunit.client.1.vm10.stdout:0/324: write d2/d9/da/f2f [9435709,33287] 0 2026-03-09T20:47:32.507 INFO:tasks.workunit.client.0.vm07.stdout:1/452: fsync d3/d14/f19 0 2026-03-09T20:47:32.508 INFO:tasks.workunit.client.1.vm10.stdout:2/371: fsync d5/d18/d27/d28/d41/f4b 0 2026-03-09T20:47:32.510 INFO:tasks.workunit.client.0.vm07.stdout:1/453: read d3/d14/d54/f13 [231498,11631] 0 
2026-03-09T20:47:32.515 INFO:tasks.workunit.client.1.vm10.stdout:2/372: sync 2026-03-09T20:47:32.515 INFO:tasks.workunit.client.1.vm10.stdout:3/338: creat dc/d14/d20/d21/d3b/f6d x:0 0 0 2026-03-09T20:47:32.517 INFO:tasks.workunit.client.0.vm07.stdout:5/501: truncate d5/df/d13/d30/f64 849853 0 2026-03-09T20:47:32.528 INFO:tasks.workunit.client.0.vm07.stdout:9/380: creat d4/d16/d29/d24/d37/f8b x:0 0 0 2026-03-09T20:47:32.534 INFO:tasks.workunit.client.0.vm07.stdout:9/381: dwrite d4/d11/d23/f2f [0,4194304] 0 2026-03-09T20:47:32.534 INFO:tasks.workunit.client.1.vm10.stdout:4/294: read d1/d8/d1b/f24 [25173,47891] 0 2026-03-09T20:47:32.535 INFO:tasks.workunit.client.0.vm07.stdout:9/382: truncate d4/d11/f88 117727 0 2026-03-09T20:47:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:32 vm10.local ceph-mon[57011]: from='client.14678 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:47:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:32 vm10.local ceph-mon[57011]: from='client.? 
192.168.123.107:0/338283047' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:47:32.552 INFO:tasks.workunit.client.0.vm07.stdout:3/419: dwrite d1/d5/d9/d2f/d3d/d64/f63 [0,4194304] 0 2026-03-09T20:47:32.553 INFO:tasks.workunit.client.1.vm10.stdout:6/382: dwrite d3/da/d11/d26/d5b/f48 [0,4194304] 0 2026-03-09T20:47:32.584 INFO:tasks.workunit.client.1.vm10.stdout:5/333: mkdir d2/d27/d37/d46/d5d/d5f/d84 0 2026-03-09T20:47:32.585 INFO:tasks.workunit.client.0.vm07.stdout:2/448: truncate d2/db/f48 1087316 0 2026-03-09T20:47:32.585 INFO:tasks.workunit.client.1.vm10.stdout:7/336: unlink db/d1f/f37 0 2026-03-09T20:47:32.585 INFO:tasks.workunit.client.1.vm10.stdout:7/337: stat db/d21/d23/f1a 0 2026-03-09T20:47:32.591 INFO:tasks.workunit.client.0.vm07.stdout:4/338: dwrite d2/f4c [0,4194304] 0 2026-03-09T20:47:32.609 INFO:tasks.workunit.client.1.vm10.stdout:0/325: truncate d2/db/f13 3808713 0 2026-03-09T20:47:32.609 INFO:tasks.workunit.client.0.vm07.stdout:6/424: rename d8/f7e to d8/d16/d22/d24/d2b/f83 0 2026-03-09T20:47:32.609 INFO:tasks.workunit.client.0.vm07.stdout:6/425: write d8/d16/f17 [2432382,81736] 0 2026-03-09T20:47:32.632 INFO:tasks.workunit.client.1.vm10.stdout:3/339: rename dc/d14/d26/d29/d2a/l2d to dc/d14/d22/l6e 0 2026-03-09T20:47:32.646 INFO:tasks.workunit.client.1.vm10.stdout:2/373: creat d5/d18/d27/d28/d41/f76 x:0 0 0 2026-03-09T20:47:32.655 INFO:tasks.workunit.client.1.vm10.stdout:1/355: creat d2/da/d25/d46/d51/d5d/d6e/f76 x:0 0 0 2026-03-09T20:47:32.656 INFO:tasks.workunit.client.0.vm07.stdout:5/502: creat d5/df/d13/d6c/fb3 x:0 0 0 2026-03-09T20:47:32.657 INFO:tasks.workunit.client.1.vm10.stdout:8/385: link d0/d22/d25/d2e/d41/d47/l50 d0/d22/d2f/d3d/l77 0 2026-03-09T20:47:32.658 INFO:tasks.workunit.client.1.vm10.stdout:9/405: write d2/d28/d47/f58 [332631,52414] 0 2026-03-09T20:47:32.664 INFO:tasks.workunit.client.1.vm10.stdout:8/386: dwrite d0/d22/d2c/f6b [0,4194304] 0 2026-03-09T20:47:32.694 
INFO:tasks.workunit.client.0.vm07.stdout:9/383: rename d4/d16/d29/d24/f36 to d4/d16/d29/d24/f8c 0 2026-03-09T20:47:32.694 INFO:tasks.workunit.client.0.vm07.stdout:9/384: dread - d4/d8/dc/d4e/f82 zero size 2026-03-09T20:47:32.727 INFO:tasks.workunit.client.1.vm10.stdout:4/295: rename d1/d8/d1b/d30 to d1/d2/d5c 0 2026-03-09T20:47:32.730 INFO:tasks.workunit.client.1.vm10.stdout:4/296: stat d1/d8/cd 0 2026-03-09T20:47:32.732 INFO:tasks.workunit.client.0.vm07.stdout:2/449: dwrite d2/f40 [0,4194304] 0 2026-03-09T20:47:32.744 INFO:tasks.workunit.client.1.vm10.stdout:2/374: mkdir d5/d18/d27/d28/d41/d77 0 2026-03-09T20:47:32.748 INFO:tasks.workunit.client.0.vm07.stdout:0/490: truncate d1/d1f/f63 399160 0 2026-03-09T20:47:32.758 INFO:tasks.workunit.client.0.vm07.stdout:7/459: mknod d3/da/db/d14/d43/d62/ca2 0 2026-03-09T20:47:32.759 INFO:tasks.workunit.client.0.vm07.stdout:6/426: truncate d8/d16/d22/d24/d2b/f2f 2809237 0 2026-03-09T20:47:32.759 INFO:tasks.workunit.client.0.vm07.stdout:1/454: creat d3/d23/d55/d56/d90/f93 x:0 0 0 2026-03-09T20:47:32.763 INFO:tasks.workunit.client.0.vm07.stdout:4/339: write d2/df/d17/f1b [994496,89943] 0 2026-03-09T20:47:32.768 INFO:tasks.workunit.client.0.vm07.stdout:5/503: creat d5/d33/d3b/fb4 x:0 0 0 2026-03-09T20:47:32.771 INFO:tasks.workunit.client.0.vm07.stdout:3/420: creat d1/d5/d9/d11/d60/f89 x:0 0 0 2026-03-09T20:47:32.772 INFO:tasks.workunit.client.0.vm07.stdout:3/421: chown d1/d5/d9/d2f/d3d/d71/c7c 86 1 2026-03-09T20:47:32.775 INFO:tasks.workunit.client.0.vm07.stdout:2/450: rename d2/db/d1c/d4a/d6c to d2/db/d1c/d4a/d88 0 2026-03-09T20:47:32.778 INFO:tasks.workunit.client.0.vm07.stdout:0/491: truncate d1/d2/dc/f56 1241535 0 2026-03-09T20:47:32.786 INFO:tasks.workunit.client.0.vm07.stdout:0/492: dread d1/d2/dc/d17/f3c [0,4194304] 0 2026-03-09T20:47:32.805 INFO:tasks.workunit.client.0.vm07.stdout:3/422: sync 2026-03-09T20:47:32.813 INFO:tasks.workunit.client.0.vm07.stdout:7/460: truncate d3/da/db/f27 1120650 0 2026-03-09T20:47:32.813 
INFO:tasks.workunit.client.0.vm07.stdout:8/377: link d1/dc/d16/d31/l41 d1/d3b/l79 0 2026-03-09T20:47:32.815 INFO:tasks.workunit.client.0.vm07.stdout:6/427: symlink d8/d16/d61/l84 0 2026-03-09T20:47:32.819 INFO:tasks.workunit.client.0.vm07.stdout:5/504: mkdir d5/d33/d39/d8d/db5 0 2026-03-09T20:47:32.830 INFO:tasks.workunit.client.0.vm07.stdout:2/451: creat d2/db/d28/d5c/f89 x:0 0 0 2026-03-09T20:47:32.839 INFO:tasks.workunit.client.0.vm07.stdout:3/423: creat d1/d5/d9/d2f/d34/d46/f8a x:0 0 0 2026-03-09T20:47:32.842 INFO:tasks.workunit.client.0.vm07.stdout:7/461: fsync d3/f18 0 2026-03-09T20:47:32.843 INFO:tasks.workunit.client.0.vm07.stdout:7/462: write d3/da/db/d14/d1f/d2b/f2c [2003900,61239] 0 2026-03-09T20:47:32.848 INFO:tasks.workunit.client.0.vm07.stdout:1/455: mkdir d3/d14/d94 0 2026-03-09T20:47:32.849 INFO:tasks.workunit.client.0.vm07.stdout:1/456: chown d3/d14/d54/d3e/f80 0 1 2026-03-09T20:47:32.852 INFO:tasks.workunit.client.0.vm07.stdout:4/340: dwrite d2/d1f/f2c [0,4194304] 0 2026-03-09T20:47:32.856 INFO:tasks.workunit.client.1.vm10.stdout:9/406: creat d2/d3/d6d/f96 x:0 0 0 2026-03-09T20:47:32.858 INFO:tasks.workunit.client.0.vm07.stdout:8/378: dread d1/dc/d16/d31/f52 [0,4194304] 0 2026-03-09T20:47:32.863 INFO:tasks.workunit.client.0.vm07.stdout:6/428: rename d8/d16/d22/d24/d2b to d8/d16/d22/d33/d85 0 2026-03-09T20:47:32.872 INFO:tasks.workunit.client.1.vm10.stdout:6/383: creat d3/d12/d51/f7c x:0 0 0 2026-03-09T20:47:32.875 INFO:tasks.workunit.client.0.vm07.stdout:2/452: creat d2/d11/d56/f8a x:0 0 0 2026-03-09T20:47:32.876 INFO:tasks.workunit.client.0.vm07.stdout:2/453: chown d2/db/d1c/l25 0 1 2026-03-09T20:47:32.891 INFO:tasks.workunit.client.1.vm10.stdout:0/326: truncate d2/d9/da/d11/f1f 4235853 0 2026-03-09T20:47:32.894 INFO:tasks.workunit.client.1.vm10.stdout:1/356: write d2/f2a [835021,58544] 0 2026-03-09T20:47:32.894 INFO:tasks.workunit.client.1.vm10.stdout:4/297: mknod d1/d47/c5d 0 2026-03-09T20:47:32.896 
INFO:tasks.workunit.client.1.vm10.stdout:3/340: getdents dc/d14/d26/d29/d40/d48/d69 0 2026-03-09T20:47:32.896 INFO:tasks.workunit.client.1.vm10.stdout:3/341: chown l8 13 1 2026-03-09T20:47:32.897 INFO:tasks.workunit.client.0.vm07.stdout:7/463: creat d3/d58/d82/fa3 x:0 0 0 2026-03-09T20:47:32.898 INFO:tasks.workunit.client.0.vm07.stdout:7/464: dread - d3/da/db/d14/d1f/d2b/d52/f97 zero size 2026-03-09T20:47:32.903 INFO:tasks.workunit.client.1.vm10.stdout:1/357: dread d2/f8 [0,4194304] 0 2026-03-09T20:47:32.907 INFO:tasks.workunit.client.0.vm07.stdout:4/341: rename d2/df/c1c to d2/d1f/d2d/d3f/c5b 0 2026-03-09T20:47:32.908 INFO:tasks.workunit.client.1.vm10.stdout:7/338: dwrite db/d28/d4c/f65 [4194304,4194304] 0 2026-03-09T20:47:32.916 INFO:tasks.workunit.client.1.vm10.stdout:8/387: fdatasync d0/d22/d2c/f32 0 2026-03-09T20:47:32.917 INFO:tasks.workunit.client.1.vm10.stdout:8/388: chown d0/d22/d2f/c56 4433 1 2026-03-09T20:47:32.917 INFO:tasks.workunit.client.1.vm10.stdout:8/389: write d0/d22/f29 [1956303,106898] 0 2026-03-09T20:47:32.921 INFO:tasks.workunit.client.0.vm07.stdout:6/429: mknod d8/d16/d22/c86 0 2026-03-09T20:47:32.921 INFO:tasks.workunit.client.0.vm07.stdout:6/430: chown f5 904 1 2026-03-09T20:47:32.922 INFO:tasks.workunit.client.0.vm07.stdout:6/431: chown d8/d16/d61/f68 265899634 1 2026-03-09T20:47:32.922 INFO:tasks.workunit.client.0.vm07.stdout:6/432: chown d8 0 1 2026-03-09T20:47:32.922 INFO:tasks.workunit.client.0.vm07.stdout:6/433: read - d8/f79 zero size 2026-03-09T20:47:32.923 INFO:tasks.workunit.client.1.vm10.stdout:5/334: truncate d2/f64 156746 0 2026-03-09T20:47:32.924 INFO:tasks.workunit.client.0.vm07.stdout:0/493: write d1/d82/f8d [1534998,109464] 0 2026-03-09T20:47:32.927 INFO:tasks.workunit.client.0.vm07.stdout:3/424: dwrite d1/d5/d9/f33 [4194304,4194304] 0 2026-03-09T20:47:32.952 INFO:tasks.workunit.client.0.vm07.stdout:9/385: getdents d4 0 2026-03-09T20:47:32.956 INFO:tasks.workunit.client.0.vm07.stdout:9/386: dwrite d4/d16/d29/f7d 
[0,4194304] 0 2026-03-09T20:47:32.957 INFO:tasks.workunit.client.1.vm10.stdout:0/327: symlink d2/d9/da/d35/d30/l6a 0 2026-03-09T20:47:32.957 INFO:tasks.workunit.client.0.vm07.stdout:8/379: dwrite d1/d5d/d6f/d2f/d53/f5f [0,4194304] 0 2026-03-09T20:47:32.969 INFO:tasks.workunit.client.0.vm07.stdout:2/454: rmdir d2/db/d28/d57 39 2026-03-09T20:47:32.973 INFO:tasks.workunit.client.1.vm10.stdout:4/298: fdatasync d1/d8/d1c/f3f 0 2026-03-09T20:47:32.973 INFO:tasks.workunit.client.0.vm07.stdout:5/505: write d5/df/d13/d30/f64 [1182747,37509] 0 2026-03-09T20:47:32.974 INFO:tasks.workunit.client.1.vm10.stdout:4/299: readlink d1/d8/l14 0 2026-03-09T20:47:32.979 INFO:tasks.workunit.client.1.vm10.stdout:2/375: mknod d5/c78 0 2026-03-09T20:47:32.980 INFO:tasks.workunit.client.1.vm10.stdout:1/358: dread - d2/da/d25/d3e/f58 zero size 2026-03-09T20:47:32.981 INFO:tasks.workunit.client.1.vm10.stdout:1/359: chown d2/f2a 3 1 2026-03-09T20:47:32.984 INFO:tasks.workunit.client.0.vm07.stdout:4/342: truncate d2/df/f49 4193504 0 2026-03-09T20:47:32.984 INFO:tasks.workunit.client.0.vm07.stdout:1/457: creat d3/d14/d94/f95 x:0 0 0 2026-03-09T20:47:32.988 INFO:tasks.workunit.client.0.vm07.stdout:4/343: dread d2/fa [0,4194304] 0 2026-03-09T20:47:32.992 INFO:tasks.workunit.client.1.vm10.stdout:1/360: dwrite d2/da/f50 [4194304,4194304] 0 2026-03-09T20:47:33.021 INFO:tasks.workunit.client.0.vm07.stdout:0/494: mkdir d1/d1f/d53/d72/d9a 0 2026-03-09T20:47:33.028 INFO:tasks.workunit.client.1.vm10.stdout:5/335: creat d2/d39/d4b/f85 x:0 0 0 2026-03-09T20:47:33.029 INFO:tasks.workunit.client.1.vm10.stdout:3/342: write dc/d14/d20/d21/f36 [4592903,101816] 0 2026-03-09T20:47:33.029 INFO:tasks.workunit.client.1.vm10.stdout:3/343: fsync dc/d14/d26/d29/f60 0 2026-03-09T20:47:33.031 INFO:tasks.workunit.client.1.vm10.stdout:5/336: write d2/d27/d37/d46/d5d/d5f/d69/f76 [3539039,66101] 0 2026-03-09T20:47:33.052 INFO:tasks.workunit.client.1.vm10.stdout:8/390: write d0/d22/f35 [2697527,58052] 0 2026-03-09T20:47:33.058 
INFO:tasks.workunit.client.1.vm10.stdout:4/300: mknod d1/d47/c5e 0 2026-03-09T20:47:33.059 INFO:tasks.workunit.client.1.vm10.stdout:4/301: readlink d1/d8/lb 0 2026-03-09T20:47:33.065 INFO:tasks.workunit.client.0.vm07.stdout:9/387: rename d4/d11/d31 to d4/d16/d29/d24/d37/d8d 0 2026-03-09T20:47:33.072 INFO:tasks.workunit.client.0.vm07.stdout:8/380: dwrite d1/dc/f38 [0,4194304] 0 2026-03-09T20:47:33.073 INFO:tasks.workunit.client.0.vm07.stdout:8/381: read - d1/dc/d16/d26/f59 zero size 2026-03-09T20:47:33.078 INFO:tasks.workunit.client.0.vm07.stdout:2/455: symlink d2/db/d1c/d4a/l8b 0 2026-03-09T20:47:33.082 INFO:tasks.workunit.client.0.vm07.stdout:5/506: dread d5/df/d13/d6c/f77 [0,4194304] 0 2026-03-09T20:47:33.084 INFO:tasks.workunit.client.0.vm07.stdout:7/465: mkdir d3/da4 0 2026-03-09T20:47:33.084 INFO:tasks.workunit.client.0.vm07.stdout:7/466: chown d3/f88 467 1 2026-03-09T20:47:33.085 INFO:tasks.workunit.client.0.vm07.stdout:7/467: write d3/da/db/d14/d43/f68 [3155556,4168] 0 2026-03-09T20:47:33.088 INFO:tasks.workunit.client.0.vm07.stdout:7/468: dwrite d3/f67 [0,4194304] 0 2026-03-09T20:47:33.098 INFO:tasks.workunit.client.1.vm10.stdout:1/361: symlink d2/da/d25/d3e/d42/l77 0 2026-03-09T20:47:33.099 INFO:tasks.workunit.client.0.vm07.stdout:0/495: creat d1/d1f/d53/d72/f9b x:0 0 0 2026-03-09T20:47:33.099 INFO:tasks.workunit.client.0.vm07.stdout:9/388: sync 2026-03-09T20:47:33.106 INFO:tasks.workunit.client.0.vm07.stdout:6/434: dwrite d8/d16/d22/d33/f6d [0,4194304] 0 2026-03-09T20:47:33.107 INFO:tasks.workunit.client.0.vm07.stdout:9/389: dwrite d4/d11/f8a [0,4194304] 0 2026-03-09T20:47:33.113 INFO:tasks.workunit.client.0.vm07.stdout:9/390: sync 2026-03-09T20:47:33.120 INFO:tasks.workunit.client.1.vm10.stdout:2/376: dread d5/d18/f24 [0,4194304] 0 2026-03-09T20:47:33.127 INFO:tasks.workunit.client.0.vm07.stdout:3/425: symlink d1/l8b 0 2026-03-09T20:47:33.132 INFO:tasks.workunit.client.0.vm07.stdout:4/344: dwrite d2/d1f/d2d/d3f/f51 [4194304,4194304] 0 
2026-03-09T20:47:33.136 INFO:tasks.workunit.client.0.vm07.stdout:4/345: chown d2/df/c10 436 1 2026-03-09T20:47:33.142 INFO:tasks.workunit.client.0.vm07.stdout:6/435: dread d8/d16/f17 [0,4194304] 0 2026-03-09T20:47:33.146 INFO:tasks.workunit.client.1.vm10.stdout:5/337: unlink d2/f5 0 2026-03-09T20:47:33.146 INFO:tasks.workunit.client.1.vm10.stdout:5/338: chown d2/l7f 20214748 1 2026-03-09T20:47:33.154 INFO:tasks.workunit.client.1.vm10.stdout:0/328: fsync d2/db/f13 0 2026-03-09T20:47:33.160 INFO:tasks.workunit.client.1.vm10.stdout:4/302: fsync d1/d2/f2a 0 2026-03-09T20:47:33.160 INFO:tasks.workunit.client.1.vm10.stdout:7/339: creat db/f69 x:0 0 0 2026-03-09T20:47:33.160 INFO:tasks.workunit.client.1.vm10.stdout:9/407: getdents d2/d3/de/d35 0 2026-03-09T20:47:33.161 INFO:tasks.workunit.client.1.vm10.stdout:9/408: readlink d2/d33/l4e 0 2026-03-09T20:47:33.182 INFO:tasks.workunit.client.1.vm10.stdout:1/362: dread d2/da/d25/f48 [0,4194304] 0 2026-03-09T20:47:33.190 INFO:tasks.workunit.client.1.vm10.stdout:0/329: symlink d2/db/d5d/l6b 0 2026-03-09T20:47:33.193 INFO:tasks.workunit.client.1.vm10.stdout:0/330: fsync d2/d9/da/d35/f68 0 2026-03-09T20:47:33.194 INFO:tasks.workunit.client.1.vm10.stdout:0/331: chown d2/d9/da 337 1 2026-03-09T20:47:33.194 INFO:tasks.workunit.client.1.vm10.stdout:6/384: truncate d3/d12/d24/d39/f6c 2257694 0 2026-03-09T20:47:33.195 INFO:tasks.workunit.client.1.vm10.stdout:0/332: chown d2/d4e/f5b 1062393 1 2026-03-09T20:47:33.196 INFO:tasks.workunit.client.1.vm10.stdout:1/363: dread d2/da/d25/d3e/f69 [0,4194304] 0 2026-03-09T20:47:33.203 INFO:tasks.workunit.client.1.vm10.stdout:3/344: write dc/d14/d26/d29/f6b [1655705,52350] 0 2026-03-09T20:47:33.207 INFO:tasks.workunit.client.0.vm07.stdout:4/346: fsync d2/d1f/f3c 0 2026-03-09T20:47:33.213 INFO:tasks.workunit.client.0.vm07.stdout:4/347: truncate d2/d1f/d2d/d3f/d4a/d4b/d52/f5a 53096 0 2026-03-09T20:47:33.222 INFO:tasks.workunit.client.1.vm10.stdout:7/340: fdatasync db/d21/f27 0 2026-03-09T20:47:33.225 
INFO:tasks.workunit.client.1.vm10.stdout:4/303: rmdir d1 39 2026-03-09T20:47:33.226 INFO:tasks.workunit.client.1.vm10.stdout:2/377: symlink d5/d18/d27/d28/d41/d77/l79 0 2026-03-09T20:47:33.227 INFO:tasks.workunit.client.1.vm10.stdout:5/339: mknod d2/d27/d75/d81/c86 0 2026-03-09T20:47:33.233 INFO:tasks.workunit.client.1.vm10.stdout:0/333: symlink d2/d9/da/d11/l6c 0 2026-03-09T20:47:33.235 INFO:tasks.workunit.client.0.vm07.stdout:2/456: rmdir d2/db/d28/d5c 39 2026-03-09T20:47:33.236 INFO:tasks.workunit.client.1.vm10.stdout:1/364: creat d2/da/d25/f78 x:0 0 0 2026-03-09T20:47:33.237 INFO:tasks.workunit.client.1.vm10.stdout:8/391: rename d0/d22/d2f/d3d to d0/d22/d25/d2e/d41/d47/d78 0 2026-03-09T20:47:33.238 INFO:tasks.workunit.client.1.vm10.stdout:3/345: dread dc/d14/d20/d2e/f38 [0,4194304] 0 2026-03-09T20:47:33.239 INFO:tasks.workunit.client.1.vm10.stdout:9/409: dread d2/d12/f26 [0,4194304] 0 2026-03-09T20:47:33.239 INFO:tasks.workunit.client.1.vm10.stdout:9/410: stat d2/d28/f32 0 2026-03-09T20:47:33.245 INFO:tasks.workunit.client.1.vm10.stdout:7/341: mknod db/d28/d2b/d36/c6a 0 2026-03-09T20:47:33.249 INFO:tasks.workunit.client.1.vm10.stdout:7/342: read db/f45 [151440,130176] 0 2026-03-09T20:47:33.251 INFO:tasks.workunit.client.1.vm10.stdout:0/334: sync 2026-03-09T20:47:33.252 INFO:tasks.workunit.client.1.vm10.stdout:7/343: read db/d54/f57 [2905991,121424] 0 2026-03-09T20:47:33.253 INFO:tasks.workunit.client.1.vm10.stdout:7/344: readlink db/d28/d2b/d36/l17 0 2026-03-09T20:47:33.256 INFO:tasks.workunit.client.0.vm07.stdout:7/469: write d3/da/db/d14/d1f/d2b/f33 [1473213,29213] 0 2026-03-09T20:47:33.256 INFO:tasks.workunit.client.0.vm07.stdout:5/507: dread d5/df/d13/d3e/d5e/f7c [0,4194304] 0 2026-03-09T20:47:33.260 INFO:tasks.workunit.client.1.vm10.stdout:4/304: stat d1/d2/d5c/c3b 0 2026-03-09T20:47:33.261 INFO:tasks.workunit.client.1.vm10.stdout:4/305: dread - d1/d8/d39/f56 zero size 2026-03-09T20:47:33.261 INFO:tasks.workunit.client.1.vm10.stdout:0/335: dwrite 
d2/d4a/f5a [0,4194304] 0 2026-03-09T20:47:33.270 INFO:tasks.workunit.client.0.vm07.stdout:5/508: dwrite d5/d33/d75/fa8 [0,4194304] 0 2026-03-09T20:47:33.275 INFO:tasks.workunit.client.1.vm10.stdout:0/336: write d2/d4e/f5b [126550,110021] 0 2026-03-09T20:47:33.298 INFO:tasks.workunit.client.0.vm07.stdout:9/391: mkdir d4/d16/d29/d24/d37/d44/d62/d8e 0 2026-03-09T20:47:33.302 INFO:tasks.workunit.client.1.vm10.stdout:2/378: rename d5/l17 to d5/d18/d2d/d47/l7a 0 2026-03-09T20:47:33.303 INFO:tasks.workunit.client.0.vm07.stdout:3/426: mkdir d1/d5/d9/d11/d6d/d80/d8c 0 2026-03-09T20:47:33.305 INFO:tasks.workunit.client.1.vm10.stdout:1/365: fsync d2/da/d25/d3e/f44 0 2026-03-09T20:47:33.319 INFO:tasks.workunit.client.1.vm10.stdout:6/385: truncate d3/d12/d36/f64 4053402 0 2026-03-09T20:47:33.322 INFO:tasks.workunit.client.1.vm10.stdout:6/386: fsync d3/da/d11/d31/d4c/d60/f63 0 2026-03-09T20:47:33.325 INFO:tasks.workunit.client.1.vm10.stdout:3/346: read dc/d14/d20/d21/f41 [617663,128362] 0 2026-03-09T20:47:33.327 INFO:tasks.workunit.client.0.vm07.stdout:4/348: dread d2/f7 [0,4194304] 0 2026-03-09T20:47:33.341 INFO:tasks.workunit.client.1.vm10.stdout:7/345: symlink db/d1f/l6b 0 2026-03-09T20:47:33.347 INFO:tasks.workunit.client.0.vm07.stdout:8/382: truncate d1/f1d 784658 0 2026-03-09T20:47:33.351 INFO:tasks.workunit.client.0.vm07.stdout:2/457: dwrite d2/d11/f50 [4194304,4194304] 0 2026-03-09T20:47:33.376 INFO:tasks.workunit.client.0.vm07.stdout:1/458: link d3/d14/d54/f62 d3/d23/d55/d56/d90/f96 0 2026-03-09T20:47:33.377 INFO:tasks.workunit.client.0.vm07.stdout:7/470: readlink d3/l4 0 2026-03-09T20:47:33.380 INFO:tasks.workunit.client.1.vm10.stdout:0/337: rmdir d2/d9/d2a 39 2026-03-09T20:47:33.385 INFO:tasks.workunit.client.1.vm10.stdout:9/411: rename d2/d28/l56 to d2/d3/d85/l97 0 2026-03-09T20:47:33.388 INFO:tasks.workunit.client.1.vm10.stdout:2/379: mkdir d5/d18/d2d/d47/d7b 0 2026-03-09T20:47:33.389 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:33 vm07.local 
ceph-mon[49120]: pgmap v5: 65 pgs: 65 active+clean; 1.7 GiB data, 6.5 GiB used, 113 GiB / 120 GiB avail 2026-03-09T20:47:33.389 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:33 vm07.local ceph-mon[49120]: mgrmap e24: vm10.byqahe(active, since 4s) 2026-03-09T20:47:33.389 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:33 vm07.local ceph-mon[49120]: [09/Mar/2026:20:47:32] ENGINE Bus STARTING 2026-03-09T20:47:33.389 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:33 vm07.local ceph-mon[49120]: [09/Mar/2026:20:47:32] ENGINE Serving on http://192.168.123.110:8765 2026-03-09T20:47:33.389 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:33 vm07.local ceph-mon[49120]: [09/Mar/2026:20:47:32] ENGINE Serving on https://192.168.123.110:7150 2026-03-09T20:47:33.389 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:33 vm07.local ceph-mon[49120]: [09/Mar/2026:20:47:32] ENGINE Bus STARTED 2026-03-09T20:47:33.389 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:33 vm07.local ceph-mon[49120]: [09/Mar/2026:20:47:32] ENGINE Client ('192.168.123.110', 54556) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:47:33.389 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:33 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:33.389 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:33 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:33.399 INFO:tasks.workunit.client.0.vm07.stdout:9/392: chown d4/d11/d23/l7b 135415149 1 2026-03-09T20:47:33.400 INFO:tasks.workunit.client.1.vm10.stdout:1/366: write d2/da/d25/d3e/f69 [253253,76091] 0 2026-03-09T20:47:33.400 INFO:tasks.workunit.client.1.vm10.stdout:4/306: dwrite d1/d8/d1c/d2b/f5a [0,4194304] 0 2026-03-09T20:47:33.408 INFO:tasks.workunit.client.0.vm07.stdout:3/427: symlink d1/d5/d9/d11/l8d 0 2026-03-09T20:47:33.411 
INFO:tasks.workunit.client.1.vm10.stdout:4/307: dread d1/d8/d1c/d2b/f5a [0,4194304] 0 2026-03-09T20:47:33.411 INFO:tasks.workunit.client.0.vm07.stdout:6/436: fsync d8/f12 0 2026-03-09T20:47:33.412 INFO:tasks.workunit.client.0.vm07.stdout:6/437: stat d8/d26/d2a/d40/f65 0 2026-03-09T20:47:33.417 INFO:tasks.workunit.client.1.vm10.stdout:4/308: dread d1/d8/d1c/f1d [0,4194304] 0 2026-03-09T20:47:33.418 INFO:tasks.workunit.client.0.vm07.stdout:6/438: dwrite d8/d26/d2a/f6e [0,4194304] 0 2026-03-09T20:47:33.418 INFO:tasks.workunit.client.1.vm10.stdout:4/309: chown d1/d8/d1c/d2b/f5a 13588222 1 2026-03-09T20:47:33.425 INFO:tasks.workunit.client.0.vm07.stdout:4/349: mkdir d2/d1f/d2d/d3f/d4a/d4b/d52/d5c 0 2026-03-09T20:47:33.426 INFO:tasks.workunit.client.0.vm07.stdout:8/383: rmdir d1/d5d/d6f 39 2026-03-09T20:47:33.427 INFO:tasks.workunit.client.0.vm07.stdout:8/384: write d1/dc/d16/d26/f2a [4643004,19601] 0 2026-03-09T20:47:33.431 INFO:tasks.workunit.client.1.vm10.stdout:5/340: fdatasync d2/d27/f34 0 2026-03-09T20:47:33.439 INFO:tasks.workunit.client.1.vm10.stdout:0/338: symlink d2/d9/da/d11/l6d 0 2026-03-09T20:47:33.442 INFO:tasks.workunit.client.0.vm07.stdout:7/471: chown d3/da/l93 106593 1 2026-03-09T20:47:33.445 INFO:tasks.workunit.client.1.vm10.stdout:9/412: truncate d2/d3/de/f24 779352 0 2026-03-09T20:47:33.449 INFO:tasks.workunit.client.0.vm07.stdout:0/496: creat d1/f9c x:0 0 0 2026-03-09T20:47:33.450 INFO:tasks.workunit.client.0.vm07.stdout:0/497: chown d1/d1f/d30/f8e 0 1 2026-03-09T20:47:33.450 INFO:tasks.workunit.client.0.vm07.stdout:9/393: creat d4/d8/dc/d4e/f8f x:0 0 0 2026-03-09T20:47:33.450 INFO:tasks.workunit.client.0.vm07.stdout:9/394: stat d4/d8/d19/d5f 0 2026-03-09T20:47:33.450 INFO:tasks.workunit.client.0.vm07.stdout:9/395: readlink d4/d8/dc/l6b 0 2026-03-09T20:47:33.461 INFO:tasks.workunit.client.0.vm07.stdout:6/439: unlink f5 0 2026-03-09T20:47:33.467 INFO:tasks.workunit.client.1.vm10.stdout:6/387: symlink d3/d30/d33/l7d 0 2026-03-09T20:47:33.472 
INFO:tasks.workunit.client.1.vm10.stdout:4/310: rename d1/d8/f25 to d1/d8/d1b/f5f 0 2026-03-09T20:47:33.476 INFO:tasks.workunit.client.0.vm07.stdout:4/350: rename d2/d1f/d2d to d2/d55/d5d 0 2026-03-09T20:47:33.477 INFO:tasks.workunit.client.1.vm10.stdout:1/367: dread d2/da/f1e [0,4194304] 0 2026-03-09T20:47:33.481 INFO:tasks.workunit.client.0.vm07.stdout:3/428: write d1/d5/d9/d2f/d34/f4b [2961113,30859] 0 2026-03-09T20:47:33.487 INFO:tasks.workunit.client.1.vm10.stdout:3/347: dwrite dc/d14/d26/d29/d2a/f66 [0,4194304] 0 2026-03-09T20:47:33.488 INFO:tasks.workunit.client.1.vm10.stdout:4/311: dwrite d1/d2/f43 [0,4194304] 0 2026-03-09T20:47:33.492 INFO:tasks.workunit.client.1.vm10.stdout:1/368: dread d2/f19 [4194304,4194304] 0 2026-03-09T20:47:33.494 INFO:tasks.workunit.client.1.vm10.stdout:3/348: dwrite dc/d14/d26/d29/d2a/f66 [0,4194304] 0 2026-03-09T20:47:33.496 INFO:tasks.workunit.client.1.vm10.stdout:3/349: chown l8 14711178 1 2026-03-09T20:47:33.507 INFO:tasks.workunit.client.1.vm10.stdout:3/350: dwrite dc/d14/d26/d29/f6b [8388608,4194304] 0 2026-03-09T20:47:33.517 INFO:tasks.workunit.client.0.vm07.stdout:2/458: creat d2/db/d28/d5c/f8c x:0 0 0 2026-03-09T20:47:33.517 INFO:tasks.workunit.client.0.vm07.stdout:2/459: stat d2/db/d28/f58 0 2026-03-09T20:47:33.522 INFO:tasks.workunit.client.0.vm07.stdout:1/459: mkdir d3/d97 0 2026-03-09T20:47:33.522 INFO:tasks.workunit.client.0.vm07.stdout:1/460: chown d3/d23/d67/l8b 41374 1 2026-03-09T20:47:33.525 INFO:tasks.workunit.client.1.vm10.stdout:5/341: readlink d2/l26 0 2026-03-09T20:47:33.535 INFO:tasks.workunit.client.1.vm10.stdout:2/380: mknod d5/d18/d2d/d47/d7b/c7c 0 2026-03-09T20:47:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:33 vm10.local ceph-mon[57011]: pgmap v5: 65 pgs: 65 active+clean; 1.7 GiB data, 6.5 GiB used, 113 GiB / 120 GiB avail 2026-03-09T20:47:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:33 vm10.local ceph-mon[57011]: mgrmap e24: vm10.byqahe(active, since 4s) 
2026-03-09T20:47:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:33 vm10.local ceph-mon[57011]: [09/Mar/2026:20:47:32] ENGINE Bus STARTING 2026-03-09T20:47:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:33 vm10.local ceph-mon[57011]: [09/Mar/2026:20:47:32] ENGINE Serving on http://192.168.123.110:8765 2026-03-09T20:47:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:33 vm10.local ceph-mon[57011]: [09/Mar/2026:20:47:32] ENGINE Serving on https://192.168.123.110:7150 2026-03-09T20:47:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:33 vm10.local ceph-mon[57011]: [09/Mar/2026:20:47:32] ENGINE Bus STARTED 2026-03-09T20:47:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:33 vm10.local ceph-mon[57011]: [09/Mar/2026:20:47:32] ENGINE Client ('192.168.123.110', 54556) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:47:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:33 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:33 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:33.541 INFO:tasks.workunit.client.0.vm07.stdout:5/509: creat d5/d33/fb6 x:0 0 0 2026-03-09T20:47:33.546 INFO:tasks.workunit.client.1.vm10.stdout:9/413: truncate d2/d28/d47/d50/f75 810474 0 2026-03-09T20:47:33.547 INFO:tasks.workunit.client.0.vm07.stdout:9/396: truncate d4/d8/dc/d15/f30 532501 0 2026-03-09T20:47:33.548 INFO:tasks.workunit.client.0.vm07.stdout:9/397: readlink d4/d16/d29/l4d 0 2026-03-09T20:47:33.556 INFO:tasks.workunit.client.1.vm10.stdout:8/392: getdents d0 0 2026-03-09T20:47:33.557 INFO:tasks.workunit.client.1.vm10.stdout:8/393: chown d0/c1f 15275 1 2026-03-09T20:47:33.557 INFO:tasks.workunit.client.1.vm10.stdout:6/388: dread - d3/da/f42 zero size 2026-03-09T20:47:33.558 
INFO:tasks.workunit.client.1.vm10.stdout:8/394: read - d0/d22/f71 zero size 2026-03-09T20:47:33.561 INFO:tasks.workunit.client.0.vm07.stdout:6/440: creat d8/d26/f87 x:0 0 0 2026-03-09T20:47:33.563 INFO:tasks.workunit.client.1.vm10.stdout:6/389: dwrite d3/da/d11/d31/d4c/d60/f77 [0,4194304] 0 2026-03-09T20:47:33.564 INFO:tasks.workunit.client.1.vm10.stdout:8/395: dwrite d0/d22/d25/d40/f5e [0,4194304] 0 2026-03-09T20:47:33.566 INFO:tasks.workunit.client.0.vm07.stdout:6/441: dwrite d8/f79 [0,4194304] 0 2026-03-09T20:47:33.567 INFO:tasks.workunit.client.0.vm07.stdout:6/442: readlink d8/le 0 2026-03-09T20:47:33.591 INFO:tasks.workunit.client.1.vm10.stdout:0/339: dwrite d2/d4a/d58/f62 [0,4194304] 0 2026-03-09T20:47:33.600 INFO:tasks.workunit.client.0.vm07.stdout:4/351: creat d2/d55/d5d/d3f/d4a/f5e x:0 0 0 2026-03-09T20:47:33.613 INFO:tasks.workunit.client.1.vm10.stdout:4/312: dread d1/fe [0,4194304] 0 2026-03-09T20:47:33.613 INFO:tasks.workunit.client.1.vm10.stdout:4/313: chown d1/d2/d5c/c3b 495099 1 2026-03-09T20:47:33.617 INFO:tasks.workunit.client.0.vm07.stdout:8/385: fdatasync d1/d5d/d6f/d2f/f34 0 2026-03-09T20:47:33.633 INFO:tasks.workunit.client.0.vm07.stdout:1/461: rename d3/d14/l18 to d3/d23/d67/d8a/l98 0 2026-03-09T20:47:33.647 INFO:tasks.workunit.client.0.vm07.stdout:2/460: write d2/d11/d56/f5a [4830678,64367] 0 2026-03-09T20:47:33.671 INFO:tasks.workunit.client.0.vm07.stdout:5/510: creat d5/df/d13/d4f/fb7 x:0 0 0 2026-03-09T20:47:33.691 INFO:tasks.workunit.client.0.vm07.stdout:0/498: write d1/d1f/d30/f50 [609269,103552] 0 2026-03-09T20:47:33.696 INFO:tasks.workunit.client.1.vm10.stdout:3/351: dwrite dc/f5a [0,4194304] 0 2026-03-09T20:47:33.698 INFO:tasks.workunit.client.0.vm07.stdout:5/511: dread d5/f51 [0,4194304] 0 2026-03-09T20:47:33.702 INFO:tasks.workunit.client.0.vm07.stdout:4/352: rmdir d2/d55/d5d/d3f 39 2026-03-09T20:47:33.710 INFO:tasks.workunit.client.1.vm10.stdout:3/352: dwrite dc/d14/d20/d21/d3b/f6d [0,4194304] 0 2026-03-09T20:47:33.710 
INFO:tasks.workunit.client.0.vm07.stdout:4/353: readlink d2/df/l18 0 2026-03-09T20:47:33.711 INFO:tasks.workunit.client.0.vm07.stdout:3/429: mkdir d1/d5/d8e 0 2026-03-09T20:47:33.716 INFO:tasks.workunit.client.1.vm10.stdout:3/353: chown lb 0 1 2026-03-09T20:47:33.733 INFO:tasks.workunit.client.1.vm10.stdout:6/390: mknod d3/da/d11/d31/d4c/c7e 0 2026-03-09T20:47:33.747 INFO:tasks.workunit.client.1.vm10.stdout:0/340: mknod d2/d9/da/de/d1a/d25/c6e 0 2026-03-09T20:47:33.748 INFO:tasks.workunit.client.0.vm07.stdout:5/512: dread d5/df/d13/f1f [0,4194304] 0 2026-03-09T20:47:33.751 INFO:tasks.workunit.client.1.vm10.stdout:4/314: creat d1/d2/f60 x:0 0 0 2026-03-09T20:47:33.751 INFO:tasks.workunit.client.1.vm10.stdout:8/396: dwrite d0/f17 [8388608,4194304] 0 2026-03-09T20:47:33.751 INFO:tasks.workunit.client.0.vm07.stdout:1/462: creat d3/d23/d55/d56/f99 x:0 0 0 2026-03-09T20:47:33.751 INFO:tasks.workunit.client.0.vm07.stdout:1/463: truncate d3/f34 4334495 0 2026-03-09T20:47:33.752 INFO:tasks.workunit.client.0.vm07.stdout:1/464: chown d3/fc 269 1 2026-03-09T20:47:33.752 INFO:tasks.workunit.client.0.vm07.stdout:7/472: link l1 d3/da/db/d14/d1f/d50/la5 0 2026-03-09T20:47:33.753 INFO:tasks.workunit.client.0.vm07.stdout:1/465: chown d3/d14/c1b 48939373 1 2026-03-09T20:47:33.757 INFO:tasks.workunit.client.0.vm07.stdout:2/461: mkdir d2/db/d1c/d8d 0 2026-03-09T20:47:33.765 INFO:tasks.workunit.client.0.vm07.stdout:6/443: dread d8/d16/d22/d24/f25 [0,4194304] 0 2026-03-09T20:47:33.768 INFO:tasks.workunit.client.0.vm07.stdout:6/444: stat d8/d16/d22/d33/d85/f2f 0 2026-03-09T20:47:33.774 INFO:tasks.workunit.client.0.vm07.stdout:4/354: mknod d2/df/d17/c5f 0 2026-03-09T20:47:33.784 INFO:tasks.workunit.client.0.vm07.stdout:4/355: dread d2/d1f/f45 [0,4194304] 0 2026-03-09T20:47:33.804 INFO:tasks.workunit.client.0.vm07.stdout:9/398: rename d4/d16/c75 to d4/d16/d29/d24/d37/d44/d62/d8e/c90 0 2026-03-09T20:47:33.804 INFO:tasks.workunit.client.0.vm07.stdout:8/386: creat d1/d5d/f7a x:0 0 0 
2026-03-09T20:47:33.806 INFO:tasks.workunit.client.0.vm07.stdout:5/513: sync 2026-03-09T20:47:33.818 INFO:tasks.workunit.client.0.vm07.stdout:0/499: write d1/d1f/d20/f2c [2482815,126930] 0 2026-03-09T20:47:33.820 INFO:tasks.workunit.client.0.vm07.stdout:0/500: write d1/d1f/d53/d72/f94 [924610,65648] 0 2026-03-09T20:47:33.821 INFO:tasks.workunit.client.0.vm07.stdout:7/473: dwrite d3/f4f [4194304,4194304] 0 2026-03-09T20:47:33.836 INFO:tasks.workunit.client.0.vm07.stdout:9/399: fdatasync d4/d11/d2a/f39 0 2026-03-09T20:47:33.837 INFO:tasks.workunit.client.0.vm07.stdout:4/356: unlink d2/d1f/f3c 0 2026-03-09T20:47:33.839 INFO:tasks.workunit.client.0.vm07.stdout:1/466: dread d3/fa [0,4194304] 0 2026-03-09T20:47:33.842 INFO:tasks.workunit.client.0.vm07.stdout:7/474: dwrite d3/da/f45 [0,4194304] 0 2026-03-09T20:47:33.845 INFO:tasks.workunit.client.0.vm07.stdout:7/475: stat d3/da/db/d14/d1f/d2b/d52/l80 0 2026-03-09T20:47:33.846 INFO:tasks.workunit.client.0.vm07.stdout:3/430: creat d1/d5/d9/d2f/d34/f8f x:0 0 0 2026-03-09T20:47:33.851 INFO:tasks.workunit.client.0.vm07.stdout:8/387: symlink d1/d3b/l7b 0 2026-03-09T20:47:33.852 INFO:tasks.workunit.client.0.vm07.stdout:8/388: readlink d1/dc/l2c 0 2026-03-09T20:47:33.861 INFO:tasks.workunit.client.0.vm07.stdout:5/514: fdatasync d5/f25 0 2026-03-09T20:47:33.863 INFO:tasks.workunit.client.1.vm10.stdout:7/346: getdents db/d28/d4d 0 2026-03-09T20:47:33.865 INFO:tasks.workunit.client.0.vm07.stdout:6/445: rename d8/d26/d2a/d40/d67 to d8/d16/d4b/d88 0 2026-03-09T20:47:33.869 INFO:tasks.workunit.client.0.vm07.stdout:0/501: symlink d1/d2/d33/l9d 0 2026-03-09T20:47:33.871 INFO:tasks.workunit.client.1.vm10.stdout:5/342: mkdir d2/d27/d37/d46/d5d/d5f/d84/d87 0 2026-03-09T20:47:33.883 INFO:tasks.workunit.client.1.vm10.stdout:2/381: write d5/f16 [926283,124716] 0 2026-03-09T20:47:33.885 INFO:tasks.workunit.client.0.vm07.stdout:4/357: chown d2/d55/d5d/d3f 13316694 1 2026-03-09T20:47:33.885 INFO:tasks.workunit.client.0.vm07.stdout:9/400: creat 
d4/d8/d19/d26/f91 x:0 0 0 2026-03-09T20:47:33.888 INFO:tasks.workunit.client.1.vm10.stdout:3/354: rename dc/d14/d26/d29/f6b to dc/d14/d26/f6f 0 2026-03-09T20:47:33.889 INFO:tasks.workunit.client.1.vm10.stdout:3/355: write dc/d14/d26/f34 [502877,66348] 0 2026-03-09T20:47:33.899 INFO:tasks.workunit.client.0.vm07.stdout:7/476: symlink d3/da/db/d14/d43/la6 0 2026-03-09T20:47:33.900 INFO:tasks.workunit.client.0.vm07.stdout:8/389: symlink d1/d5d/d6f/d2f/d4d/d55/l7c 0 2026-03-09T20:47:33.902 INFO:tasks.workunit.client.0.vm07.stdout:8/390: dread d1/dc/d16/f4a [0,4194304] 0 2026-03-09T20:47:33.903 INFO:tasks.workunit.client.1.vm10.stdout:4/315: mkdir d1/d8/d1c/d41/d61 0 2026-03-09T20:47:33.908 INFO:tasks.workunit.client.1.vm10.stdout:5/343: creat d2/d27/d75/f88 x:0 0 0 2026-03-09T20:47:33.909 INFO:tasks.workunit.client.0.vm07.stdout:6/446: chown d8/c1e 2363539 1 2026-03-09T20:47:33.915 INFO:tasks.workunit.client.0.vm07.stdout:1/467: mknod d3/d97/c9a 0 2026-03-09T20:47:33.916 INFO:tasks.workunit.client.1.vm10.stdout:2/382: read - d5/d5b/f6c zero size 2026-03-09T20:47:33.917 INFO:tasks.workunit.client.0.vm07.stdout:2/462: getdents d2/d11/d56 0 2026-03-09T20:47:33.922 INFO:tasks.workunit.client.0.vm07.stdout:1/468: dwrite d3/d23/f49 [0,4194304] 0 2026-03-09T20:47:33.923 INFO:tasks.workunit.client.0.vm07.stdout:1/469: fdatasync d3/f34 0 2026-03-09T20:47:33.926 INFO:tasks.workunit.client.1.vm10.stdout:3/356: unlink dc/d14/d26/d29/d2a/l4e 0 2026-03-09T20:47:33.927 INFO:tasks.workunit.client.1.vm10.stdout:6/391: write d3/f5e [66116,57738] 0 2026-03-09T20:47:33.928 INFO:tasks.workunit.client.1.vm10.stdout:0/341: mknod d2/d9/da/d48/c6f 0 2026-03-09T20:47:33.931 INFO:tasks.workunit.client.0.vm07.stdout:7/477: dwrite d3/da/f38 [0,4194304] 0 2026-03-09T20:47:33.946 INFO:tasks.workunit.client.1.vm10.stdout:1/369: getdents d2/da/d25/d3e 0 2026-03-09T20:47:33.951 INFO:tasks.workunit.client.0.vm07.stdout:8/391: fdatasync d1/d5d/d6f/f18 0 2026-03-09T20:47:33.951 
INFO:tasks.workunit.client.1.vm10.stdout:1/370: write d2/da/f50 [7832550,14231] 0 2026-03-09T20:47:33.954 INFO:tasks.workunit.client.1.vm10.stdout:4/316: rename d1/d8/d1c/f1d to d1/d8/d1c/d41/d61/f62 0 2026-03-09T20:47:33.955 INFO:tasks.workunit.client.0.vm07.stdout:7/478: dread d3/da/db/d14/f92 [0,4194304] 0 2026-03-09T20:47:33.957 INFO:tasks.workunit.client.1.vm10.stdout:8/397: getdents d0/d22/d25/d6c 0 2026-03-09T20:47:33.957 INFO:tasks.workunit.client.0.vm07.stdout:7/479: readlink d3/da/db/l9f 0 2026-03-09T20:47:33.959 INFO:tasks.workunit.client.0.vm07.stdout:5/515: symlink d5/d33/d39/d8d/db5/lb8 0 2026-03-09T20:47:33.962 INFO:tasks.workunit.client.1.vm10.stdout:4/317: dwrite d1/d2/f43 [0,4194304] 0 2026-03-09T20:47:33.962 INFO:tasks.workunit.client.0.vm07.stdout:5/516: chown d5/d19/d73/d97 1074396 1 2026-03-09T20:47:33.979 INFO:tasks.workunit.client.0.vm07.stdout:0/502: mknod d1/d2/d98/c9e 0 2026-03-09T20:47:33.980 INFO:tasks.workunit.client.0.vm07.stdout:0/503: chown d1/d1f/d30/f8f 25046859 1 2026-03-09T20:47:33.983 INFO:tasks.workunit.client.1.vm10.stdout:9/414: link d2/d3/c15 d2/d3/de/c98 0 2026-03-09T20:47:33.984 INFO:tasks.workunit.client.0.vm07.stdout:4/358: creat d2/df/d59/f60 x:0 0 0 2026-03-09T20:47:33.991 INFO:tasks.workunit.client.1.vm10.stdout:3/357: fdatasync dc/d14/d20/d21/f41 0 2026-03-09T20:47:33.992 INFO:tasks.workunit.client.0.vm07.stdout:2/463: stat d2/ff 0 2026-03-09T20:47:33.994 INFO:tasks.workunit.client.0.vm07.stdout:2/464: dread - d2/db/d49/f81 zero size 2026-03-09T20:47:34.001 INFO:tasks.workunit.client.0.vm07.stdout:1/470: fdatasync d3/d23/f49 0 2026-03-09T20:47:34.002 INFO:tasks.workunit.client.0.vm07.stdout:1/471: chown d3/d14/d54/f32 3564737 1 2026-03-09T20:47:34.005 INFO:tasks.workunit.client.0.vm07.stdout:3/431: creat d1/d5/d9/d2f/d3d/d64/d43/f90 x:0 0 0 2026-03-09T20:47:34.011 INFO:tasks.workunit.client.1.vm10.stdout:1/371: dread - d2/da/f4d zero size 2026-03-09T20:47:34.015 INFO:tasks.workunit.client.1.vm10.stdout:2/383: dwrite 
d5/fb [0,4194304] 0 2026-03-09T20:47:34.017 INFO:tasks.workunit.client.1.vm10.stdout:0/342: rename d2/d9/da/f53 to d2/db/d5d/f70 0 2026-03-09T20:47:34.017 INFO:tasks.workunit.client.1.vm10.stdout:2/384: read - d5/d5b/f6c zero size 2026-03-09T20:47:34.018 INFO:tasks.workunit.client.0.vm07.stdout:7/480: symlink d3/d58/d82/la7 0 2026-03-09T20:47:34.020 INFO:tasks.workunit.client.1.vm10.stdout:6/392: dwrite d3/da/f1b [4194304,4194304] 0 2026-03-09T20:47:34.025 INFO:tasks.workunit.client.1.vm10.stdout:8/398: creat d0/d22/d25/d2e/f79 x:0 0 0 2026-03-09T20:47:34.026 INFO:tasks.workunit.client.0.vm07.stdout:5/517: symlink d5/d33/d39/d8d/dab/lb9 0 2026-03-09T20:47:34.027 INFO:tasks.workunit.client.0.vm07.stdout:0/504: mkdir d1/d1f/d9f 0 2026-03-09T20:47:34.033 INFO:tasks.workunit.client.0.vm07.stdout:6/447: mknod d8/d5d/c89 0 2026-03-09T20:47:34.034 INFO:tasks.workunit.client.0.vm07.stdout:4/359: mknod d2/df/d17/c61 0 2026-03-09T20:47:34.035 INFO:tasks.workunit.client.1.vm10.stdout:5/344: mkdir d2/d39/d89 0 2026-03-09T20:47:34.035 INFO:tasks.workunit.client.0.vm07.stdout:4/360: chown d2/df/d17/c36 5 1 2026-03-09T20:47:34.044 INFO:tasks.workunit.client.0.vm07.stdout:9/401: creat d4/d16/d78/f92 x:0 0 0 2026-03-09T20:47:34.044 INFO:tasks.workunit.client.0.vm07.stdout:1/472: fsync d3/f82 0 2026-03-09T20:47:34.046 INFO:tasks.workunit.client.1.vm10.stdout:4/318: dwrite d1/d8/d1c/f3f [0,4194304] 0 2026-03-09T20:47:34.050 INFO:tasks.workunit.client.1.vm10.stdout:1/372: chown d2/da/d25/d46/d51/c5b 729451 1 2026-03-09T20:47:34.054 INFO:tasks.workunit.client.1.vm10.stdout:4/319: write d1/d8/d1b/f42 [233678,126052] 0 2026-03-09T20:47:34.062 INFO:tasks.workunit.client.1.vm10.stdout:0/343: rename d2/db to d2/d9/d47/d71 0 2026-03-09T20:47:34.070 INFO:tasks.workunit.client.1.vm10.stdout:4/320: dread d1/d2/f43 [0,4194304] 0 2026-03-09T20:47:34.070 INFO:tasks.workunit.client.1.vm10.stdout:0/344: dwrite d2/f65 [0,4194304] 0 2026-03-09T20:47:34.070 
INFO:tasks.workunit.client.1.vm10.stdout:6/393: unlink d3/d30/d33/l3f 0 2026-03-09T20:47:34.072 INFO:tasks.workunit.client.1.vm10.stdout:2/385: dread d5/d2b/d32/f5c [0,4194304] 0 2026-03-09T20:47:34.082 INFO:tasks.workunit.client.1.vm10.stdout:7/347: getdents db/d28/d2b/d36/d3b 0 2026-03-09T20:47:34.083 INFO:tasks.workunit.client.1.vm10.stdout:7/348: read db/d28/d2b/d36/f35 [2984734,87743] 0 2026-03-09T20:47:34.085 INFO:tasks.workunit.client.0.vm07.stdout:6/448: symlink d8/d50/l8a 0 2026-03-09T20:47:34.086 INFO:tasks.workunit.client.0.vm07.stdout:4/361: write d2/f33 [325587,69629] 0 2026-03-09T20:47:34.088 INFO:tasks.workunit.client.1.vm10.stdout:9/415: link d2/d33/d37/f66 d2/d28/d47/d67/f99 0 2026-03-09T20:47:34.090 INFO:tasks.workunit.client.0.vm07.stdout:2/465: mknod d2/c8e 0 2026-03-09T20:47:34.109 INFO:tasks.workunit.client.0.vm07.stdout:1/473: rename d3/d14/d88 to d3/d14/d54/d9b 0 2026-03-09T20:47:34.109 INFO:tasks.workunit.client.0.vm07.stdout:9/402: truncate d4/d16/d29/f6e 548536 0 2026-03-09T20:47:34.118 INFO:tasks.workunit.client.1.vm10.stdout:1/373: dread - d2/da/d25/d3e/d55/f5c zero size 2026-03-09T20:47:34.122 INFO:tasks.workunit.client.0.vm07.stdout:8/392: creat d1/d5d/f7d x:0 0 0 2026-03-09T20:47:34.127 INFO:tasks.workunit.client.0.vm07.stdout:4/362: sync 2026-03-09T20:47:34.131 INFO:tasks.workunit.client.1.vm10.stdout:4/321: mknod d1/d8/d39/c63 0 2026-03-09T20:47:34.132 INFO:tasks.workunit.client.1.vm10.stdout:0/345: creat d2/d9/da/d35/d30/f72 x:0 0 0 2026-03-09T20:47:34.133 INFO:tasks.workunit.client.1.vm10.stdout:0/346: readlink d2/d9/da/de/l1d 0 2026-03-09T20:47:34.139 INFO:tasks.workunit.client.1.vm10.stdout:6/394: rename d3/d12 to d3/d30/d7f 0 2026-03-09T20:47:34.143 INFO:tasks.workunit.client.0.vm07.stdout:7/481: write d3/f8f [579757,102576] 0 2026-03-09T20:47:34.143 INFO:tasks.workunit.client.1.vm10.stdout:5/345: write d2/d27/d37/f38 [2581590,12432] 0 2026-03-09T20:47:34.157 INFO:tasks.workunit.client.1.vm10.stdout:9/416: symlink d2/d28/l9a 0 
2026-03-09T20:47:34.160 INFO:tasks.workunit.client.1.vm10.stdout:3/358: creat dc/d14/d26/d29/f70 x:0 0 0 2026-03-09T20:47:34.173 INFO:tasks.workunit.client.1.vm10.stdout:1/374: write d2/da/d25/f65 [1015607,7972] 0 2026-03-09T20:47:34.174 INFO:tasks.workunit.client.1.vm10.stdout:1/375: write d2/da/f10 [2520844,24763] 0 2026-03-09T20:47:34.182 INFO:tasks.workunit.client.1.vm10.stdout:6/395: symlink d3/da/d11/d26/d5b/l80 0 2026-03-09T20:47:34.183 INFO:tasks.workunit.client.1.vm10.stdout:6/396: chown d3/d30/d7f/d36/d5c/c57 134244 1 2026-03-09T20:47:34.192 INFO:tasks.workunit.client.1.vm10.stdout:4/322: rename d1/d8/d1c/d41 to d1/d2/d5c/d64 0 2026-03-09T20:47:34.193 INFO:tasks.workunit.client.1.vm10.stdout:4/323: stat d1/d2/d5c/d64/l59 0 2026-03-09T20:47:34.193 INFO:tasks.workunit.client.1.vm10.stdout:4/324: chown d1/d8/d1c 2669 1 2026-03-09T20:47:34.196 INFO:tasks.workunit.client.1.vm10.stdout:5/346: mknod d2/d27/d37/d46/d5d/d5f/d84/c8a 0 2026-03-09T20:47:34.202 INFO:tasks.workunit.client.1.vm10.stdout:8/399: link d0/c16 d0/d22/d25/d2e/d41/c7a 0 2026-03-09T20:47:34.204 INFO:tasks.workunit.client.1.vm10.stdout:7/349: creat db/d28/d2b/d36/d63/f6c x:0 0 0 2026-03-09T20:47:34.209 INFO:tasks.workunit.client.1.vm10.stdout:9/417: dread d2/d3/de/f24 [0,4194304] 0 2026-03-09T20:47:34.209 INFO:tasks.workunit.client.1.vm10.stdout:3/359: unlink dc/d14/d26/d29/d40/d48/c61 0 2026-03-09T20:47:34.213 INFO:tasks.workunit.client.1.vm10.stdout:0/347: mknod d2/d9/c73 0 2026-03-09T20:47:34.218 INFO:tasks.workunit.client.1.vm10.stdout:1/376: rename d2/da/d25/f73 to d2/da/d25/d46/d51/d5d/d6e/d70/f79 0 2026-03-09T20:47:34.218 INFO:tasks.workunit.client.1.vm10.stdout:3/360: dwrite dc/d14/d26/d29/d40/f6c [0,4194304] 0 2026-03-09T20:47:34.225 INFO:tasks.workunit.client.1.vm10.stdout:1/377: readlink d2/l4b 0 2026-03-09T20:47:34.226 INFO:tasks.workunit.client.1.vm10.stdout:4/325: dread - d1/d47/f4f zero size 2026-03-09T20:47:34.227 INFO:tasks.workunit.client.1.vm10.stdout:3/361: chown 
dc/d14/d26/f6f 6 1 2026-03-09T20:47:34.227 INFO:tasks.workunit.client.1.vm10.stdout:5/347: creat d2/d27/d37/d46/d5d/d5f/d84/f8b x:0 0 0 2026-03-09T20:47:34.233 INFO:tasks.workunit.client.1.vm10.stdout:2/386: link d5/d18/d27/d28/d41/l50 d5/d18/d27/d28/d41/l7d 0 2026-03-09T20:47:34.235 INFO:tasks.workunit.client.1.vm10.stdout:8/400: chown d0/d22/d2f/d38/l69 139808 1 2026-03-09T20:47:34.236 INFO:tasks.workunit.client.1.vm10.stdout:2/387: chown d5/c78 0 1 2026-03-09T20:47:34.240 INFO:tasks.workunit.client.1.vm10.stdout:7/350: fsync db/d1f/f2a 0 2026-03-09T20:47:34.242 INFO:tasks.workunit.client.1.vm10.stdout:6/397: dread d3/d30/d33/f35 [4194304,4194304] 0 2026-03-09T20:47:34.259 INFO:tasks.workunit.client.1.vm10.stdout:1/378: mknod d2/da/d25/d46/c7a 0 2026-03-09T20:47:34.259 INFO:tasks.workunit.client.1.vm10.stdout:3/362: rmdir dc/d14/d20/d2e 39 2026-03-09T20:47:34.259 INFO:tasks.workunit.client.0.vm07.stdout:2/466: stat d2/d11/f38 0 2026-03-09T20:47:34.259 INFO:tasks.workunit.client.0.vm07.stdout:3/432: symlink d1/d5/d9/d11/d6d/l91 0 2026-03-09T20:47:34.259 INFO:tasks.workunit.client.0.vm07.stdout:4/363: rename d2/d55/d5d/d3f/f54 to d2/d55/f62 0 2026-03-09T20:47:34.259 INFO:tasks.workunit.client.0.vm07.stdout:4/364: write d2/fa [3164610,70406] 0 2026-03-09T20:47:34.259 INFO:tasks.workunit.client.0.vm07.stdout:7/482: mkdir d3/d58/d82/da8 0 2026-03-09T20:47:34.259 INFO:tasks.workunit.client.0.vm07.stdout:7/483: chown d3/da/db/d79 5220345 1 2026-03-09T20:47:34.259 INFO:tasks.workunit.client.0.vm07.stdout:7/484: truncate d3/da/db/d79/f98 103564 0 2026-03-09T20:47:34.262 INFO:tasks.workunit.client.0.vm07.stdout:0/505: creat d1/d2/d33/d35/fa0 x:0 0 0 2026-03-09T20:47:34.263 INFO:tasks.workunit.client.1.vm10.stdout:7/351: dread db/d28/d4d/f59 [0,4194304] 0 2026-03-09T20:47:34.264 INFO:tasks.workunit.client.1.vm10.stdout:5/348: sync 2026-03-09T20:47:34.269 INFO:tasks.workunit.client.1.vm10.stdout:6/398: rmdir d3/da/d11/d31/d47 39 2026-03-09T20:47:34.272 
INFO:tasks.workunit.client.0.vm07.stdout:1/474: mkdir d3/d9c 0 2026-03-09T20:47:34.277 INFO:tasks.workunit.client.1.vm10.stdout:1/379: symlink d2/l7b 0 2026-03-09T20:47:34.279 INFO:tasks.workunit.client.0.vm07.stdout:3/433: dread - d1/d5/d9/d2f/d3d/f75 zero size 2026-03-09T20:47:34.294 INFO:tasks.workunit.client.1.vm10.stdout:4/326: write d1/d8/d39/f4b [248030,93190] 0 2026-03-09T20:47:34.300 INFO:tasks.workunit.client.1.vm10.stdout:8/401: write d0/d22/d2c/f6a [956056,89000] 0 2026-03-09T20:47:34.300 INFO:tasks.workunit.client.1.vm10.stdout:4/327: chown d1/d8/d39/f4b 130145612 1 2026-03-09T20:47:34.300 INFO:tasks.workunit.client.1.vm10.stdout:8/402: chown d0/d22/d2f 1017902903 1 2026-03-09T20:47:34.300 INFO:tasks.workunit.client.0.vm07.stdout:6/449: write d8/d26/d2a/d40/d69/f62 [208102,104700] 0 2026-03-09T20:47:34.312 INFO:tasks.workunit.client.0.vm07.stdout:4/365: unlink d2/df/d17/c36 0 2026-03-09T20:47:34.312 INFO:tasks.workunit.client.0.vm07.stdout:4/366: readlink d2/l50 0 2026-03-09T20:47:34.316 INFO:tasks.workunit.client.0.vm07.stdout:7/485: read d3/da/d83/f84 [1353412,91894] 0 2026-03-09T20:47:34.326 INFO:tasks.workunit.client.1.vm10.stdout:4/328: dwrite d1/d2/d5c/f53 [0,4194304] 0 2026-03-09T20:47:34.326 INFO:tasks.workunit.client.1.vm10.stdout:2/388: dwrite d5/d18/d27/d28/d41/f4b [0,4194304] 0 2026-03-09T20:47:34.326 INFO:tasks.workunit.client.0.vm07.stdout:5/518: link d5/df/d62/c85 d5/df/d13/d4f/cba 0 2026-03-09T20:47:34.327 INFO:tasks.workunit.client.1.vm10.stdout:7/352: mkdir db/d28/d2b/d36/d63/d6d 0 2026-03-09T20:47:34.329 INFO:tasks.workunit.client.1.vm10.stdout:9/418: dwrite d2/d28/d47/d6a/f7f [0,4194304] 0 2026-03-09T20:47:34.333 INFO:tasks.workunit.client.1.vm10.stdout:5/349: chown d2/d1b/d54/d78/c5e 100658678 1 2026-03-09T20:47:34.336 INFO:tasks.workunit.client.1.vm10.stdout:3/363: dwrite dc/f11 [0,4194304] 0 2026-03-09T20:47:34.348 INFO:tasks.workunit.client.0.vm07.stdout:5/519: dread d5/df/f4a [0,4194304] 0 2026-03-09T20:47:34.360 
INFO:tasks.workunit.client.0.vm07.stdout:0/506: truncate d1/f57 805338 0 2026-03-09T20:47:34.361 INFO:tasks.workunit.client.1.vm10.stdout:1/380: rmdir d2/da/d25/d3e/d55 39 2026-03-09T20:47:34.361 INFO:tasks.workunit.client.0.vm07.stdout:5/520: dwrite d5/d33/d39/d8d/dab/f60 [0,4194304] 0 2026-03-09T20:47:34.362 INFO:tasks.workunit.client.1.vm10.stdout:1/381: chown d2/da/c4f 761 1 2026-03-09T20:47:34.367 INFO:tasks.workunit.client.0.vm07.stdout:1/475: symlink d3/d23/d55/d56/l9d 0 2026-03-09T20:47:34.375 INFO:tasks.workunit.client.1.vm10.stdout:8/403: rename d0/d22/c53 to d0/d22/d25/d6c/c7b 0 2026-03-09T20:47:34.384 INFO:tasks.workunit.client.0.vm07.stdout:9/403: link d4/d16/d29/f4a d4/d8/d19/d89/f93 0 2026-03-09T20:47:34.384 INFO:tasks.workunit.client.0.vm07.stdout:3/434: rmdir d1/d5/d9/d11 39 2026-03-09T20:47:34.384 INFO:tasks.workunit.client.1.vm10.stdout:8/404: chown d0/d54/c73 119 1 2026-03-09T20:47:34.384 INFO:tasks.workunit.client.1.vm10.stdout:8/405: write d0/d22/f76 [283908,90340] 0 2026-03-09T20:47:34.390 INFO:tasks.workunit.client.0.vm07.stdout:3/435: dwrite d1/d5/d9/d2f/d34/f40 [4194304,4194304] 0 2026-03-09T20:47:34.394 INFO:tasks.workunit.client.1.vm10.stdout:4/329: symlink d1/d8/l65 0 2026-03-09T20:47:34.401 INFO:tasks.workunit.client.0.vm07.stdout:9/404: sync 2026-03-09T20:47:34.406 INFO:tasks.workunit.client.0.vm07.stdout:4/367: creat d2/df/d17/f63 x:0 0 0 2026-03-09T20:47:34.406 INFO:tasks.workunit.client.0.vm07.stdout:4/368: chown d2/d1f/f53 1317198 1 2026-03-09T20:47:34.407 INFO:tasks.workunit.client.0.vm07.stdout:4/369: chown d2/f33 188206126 1 2026-03-09T20:47:34.408 INFO:tasks.workunit.client.0.vm07.stdout:4/370: write d2/d1f/f2c [3487965,96945] 0 2026-03-09T20:47:34.410 INFO:tasks.workunit.client.0.vm07.stdout:4/371: dread - d2/d55/d5d/d3f/d4a/f5e zero size 2026-03-09T20:47:34.416 INFO:tasks.workunit.client.1.vm10.stdout:9/419: symlink d2/d12/l9b 0 2026-03-09T20:47:34.420 INFO:tasks.workunit.client.1.vm10.stdout:7/353: mkdir db/d28/d4c/d6e 0 
2026-03-09T20:47:34.424 INFO:tasks.workunit.client.1.vm10.stdout:0/348: getdents d2/d9/da/de/d1a/d25 0 2026-03-09T20:47:34.424 INFO:tasks.workunit.client.1.vm10.stdout:5/350: dread d2/f7 [0,4194304] 0 2026-03-09T20:47:34.427 INFO:tasks.workunit.client.0.vm07.stdout:0/507: unlink d1/d2/l36 0 2026-03-09T20:47:34.431 INFO:tasks.workunit.client.0.vm07.stdout:5/521: fdatasync d5/df/d13/d6c/f79 0 2026-03-09T20:47:34.439 INFO:tasks.workunit.client.1.vm10.stdout:3/364: rename dc/d14/d26/d37/f3a to dc/d14/d26/d29/d40/f71 0 2026-03-09T20:47:34.443 INFO:tasks.workunit.client.0.vm07.stdout:2/467: write d2/db/f41 [1022060,102705] 0 2026-03-09T20:47:34.443 INFO:tasks.workunit.client.1.vm10.stdout:6/399: write d3/d30/d7f/d36/d5c/f5f [4534423,102113] 0 2026-03-09T20:47:34.443 INFO:tasks.workunit.client.0.vm07.stdout:2/468: write d2/db/d28/d5c/f8c [622149,50908] 0 2026-03-09T20:47:34.447 INFO:tasks.workunit.client.0.vm07.stdout:1/476: mknod d3/d23/d55/d56/d60/c9e 0 2026-03-09T20:47:34.451 INFO:tasks.workunit.client.1.vm10.stdout:1/382: dread d2/da/d25/f40 [0,4194304] 0 2026-03-09T20:47:34.459 INFO:tasks.workunit.client.0.vm07.stdout:7/486: write d3/da/db/d14/d1f/f63 [446012,51186] 0 2026-03-09T20:47:34.461 INFO:tasks.workunit.client.0.vm07.stdout:7/487: chown d3/da/db/d14/f6d 3 1 2026-03-09T20:47:34.471 INFO:tasks.workunit.client.0.vm07.stdout:7/488: dwrite d3/da/f45 [4194304,4194304] 0 2026-03-09T20:47:34.478 INFO:tasks.workunit.client.1.vm10.stdout:4/330: write d1/d8/d1c/f1f [1039872,56979] 0 2026-03-09T20:47:34.478 INFO:tasks.workunit.client.0.vm07.stdout:8/393: getdents d1/d5d/d6f/d2f/d4d/d63 0 2026-03-09T20:47:34.479 INFO:tasks.workunit.client.0.vm07.stdout:8/394: write d1/f20 [124798,8081] 0 2026-03-09T20:47:34.493 INFO:tasks.workunit.client.1.vm10.stdout:9/420: fsync d2/d3/de/f24 0 2026-03-09T20:47:34.497 INFO:tasks.workunit.client.1.vm10.stdout:9/421: dwrite d2/d33/d37/f4c [0,4194304] 0 2026-03-09T20:47:34.503 INFO:tasks.workunit.client.1.vm10.stdout:8/406: dread 
d0/d22/d2c/f32 [0,4194304] 0 2026-03-09T20:47:34.508 INFO:tasks.workunit.client.0.vm07.stdout:6/450: creat d8/d26/d7d/f8b x:0 0 0 2026-03-09T20:47:34.521 INFO:tasks.workunit.client.1.vm10.stdout:7/354: creat db/d28/d2b/d36/d3f/f6f x:0 0 0 2026-03-09T20:47:34.531 INFO:tasks.workunit.client.0.vm07.stdout:9/405: rename d4/d11/f13 to d4/d8/d19/d5f/f94 0 2026-03-09T20:47:34.549 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:34 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:34.549 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:34 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:34.549 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:34 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:34.549 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:34 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:34.565 INFO:tasks.workunit.client.1.vm10.stdout:3/365: rename dc/d14 to dc/d14/d20/d72 22 2026-03-09T20:47:34.573 INFO:tasks.workunit.client.1.vm10.stdout:6/400: write d3/da/d11/d26/d5b/f55 [1466301,80916] 0 2026-03-09T20:47:34.586 INFO:tasks.workunit.client.0.vm07.stdout:5/522: mknod d5/d33/d75/cbb 0 2026-03-09T20:47:34.586 INFO:tasks.workunit.client.0.vm07.stdout:4/372: dwrite d2/f19 [4194304,4194304] 0 2026-03-09T20:47:34.597 INFO:tasks.workunit.client.0.vm07.stdout:1/477: mkdir d3/d23/d55/d56/d60/d9f 0 2026-03-09T20:47:34.597 INFO:tasks.workunit.client.0.vm07.stdout:7/489: creat d3/da/d83/fa9 x:0 0 0 2026-03-09T20:47:34.597 INFO:tasks.workunit.client.0.vm07.stdout:4/373: chown d2/df/f2e 11547 1 2026-03-09T20:47:34.599 INFO:tasks.workunit.client.0.vm07.stdout:1/478: write d3/d14/d94/f95 [152078,66741] 0 2026-03-09T20:47:34.619 INFO:tasks.workunit.client.1.vm10.stdout:9/422: creat d2/d3/de/d35/f9c x:0 0 0 2026-03-09T20:47:34.633 INFO:tasks.workunit.client.0.vm07.stdout:9/406: fsync 
d4/d16/d29/d24/d37/f51 0 2026-03-09T20:47:34.634 INFO:tasks.workunit.client.1.vm10.stdout:3/366: readlink dc/d14/d27/l4c 0 2026-03-09T20:47:34.639 INFO:tasks.workunit.client.1.vm10.stdout:2/389: getdents d5/d18/d27/d38/d61 0 2026-03-09T20:47:34.644 INFO:tasks.workunit.client.0.vm07.stdout:5/523: mkdir d5/d19/d73/dbc 0 2026-03-09T20:47:34.644 INFO:tasks.workunit.client.0.vm07.stdout:1/479: dread d3/d14/d54/d3e/f59 [0,4194304] 0 2026-03-09T20:47:34.651 INFO:tasks.workunit.client.1.vm10.stdout:5/351: write d2/f64 [191816,95796] 0 2026-03-09T20:47:34.651 INFO:tasks.workunit.client.1.vm10.stdout:8/407: dwrite d0/d22/d25/f3b [0,4194304] 0 2026-03-09T20:47:34.653 INFO:tasks.workunit.client.1.vm10.stdout:9/423: fdatasync d2/d12/f20 0 2026-03-09T20:47:34.655 INFO:tasks.workunit.client.1.vm10.stdout:9/424: chown d2/d28/d47/c95 8665 1 2026-03-09T20:47:34.667 INFO:tasks.workunit.client.0.vm07.stdout:9/407: symlink d4/d8/d19/d5f/l95 0 2026-03-09T20:47:34.667 INFO:tasks.workunit.client.1.vm10.stdout:1/383: dwrite d2/da/d25/f2e [0,4194304] 0 2026-03-09T20:47:34.680 INFO:tasks.workunit.client.0.vm07.stdout:1/480: symlink d3/d14/d94/la0 0 2026-03-09T20:47:34.685 INFO:tasks.workunit.client.0.vm07.stdout:8/395: dread d1/f25 [0,4194304] 0 2026-03-09T20:47:34.689 INFO:tasks.workunit.client.0.vm07.stdout:8/396: dread d1/f20 [0,4194304] 0 2026-03-09T20:47:34.697 INFO:tasks.workunit.client.0.vm07.stdout:6/451: link d8/d16/c6c d8/d26/d2a/c8c 0 2026-03-09T20:47:34.697 INFO:tasks.workunit.client.0.vm07.stdout:0/508: rename d1/d2/d4b/f96 to d1/fa1 0 2026-03-09T20:47:34.697 INFO:tasks.workunit.client.0.vm07.stdout:8/397: chown d1/dc/d16/d26/f4e 1 1 2026-03-09T20:47:34.697 INFO:tasks.workunit.client.0.vm07.stdout:9/408: fdatasync d4/d8/dc/d4e/f53 0 2026-03-09T20:47:34.697 INFO:tasks.workunit.client.0.vm07.stdout:9/409: chown d4 2098541 1 2026-03-09T20:47:34.698 INFO:tasks.workunit.client.0.vm07.stdout:1/481: mkdir d3/d97/da1 0 2026-03-09T20:47:34.698 
INFO:tasks.workunit.client.0.vm07.stdout:8/398: chown d1/d3b/c50 1 1 2026-03-09T20:47:34.706 INFO:tasks.workunit.client.1.vm10.stdout:8/408: dread d0/fe [0,4194304] 0 2026-03-09T20:47:34.717 INFO:tasks.workunit.client.0.vm07.stdout:2/469: rename d2/db/l4f to d2/db/d28/d87/l8f 0 2026-03-09T20:47:34.717 INFO:tasks.workunit.client.1.vm10.stdout:4/331: truncate d1/d2/d5c/f53 264569 0 2026-03-09T20:47:34.719 INFO:tasks.workunit.client.0.vm07.stdout:0/509: mknod d1/d1f/ca2 0 2026-03-09T20:47:34.720 INFO:tasks.workunit.client.0.vm07.stdout:3/436: dwrite d1/d5/d9/d11/f73 [0,4194304] 0 2026-03-09T20:47:34.721 INFO:tasks.workunit.client.1.vm10.stdout:8/409: sync 2026-03-09T20:47:34.723 INFO:tasks.workunit.client.1.vm10.stdout:4/332: dread d1/d8/d1c/f3e [0,4194304] 0 2026-03-09T20:47:34.724 INFO:tasks.workunit.client.1.vm10.stdout:8/410: dread d0/d22/d25/d40/f5e [0,4194304] 0 2026-03-09T20:47:34.725 INFO:tasks.workunit.client.1.vm10.stdout:8/411: chown d0/d22/d2f/f31 4646186 1 2026-03-09T20:47:34.737 INFO:tasks.workunit.client.0.vm07.stdout:4/374: write d2/f7 [1608762,91432] 0 2026-03-09T20:47:34.738 INFO:tasks.workunit.client.0.vm07.stdout:4/375: write d2/d1f/f2c [2654266,113632] 0 2026-03-09T20:47:34.749 INFO:tasks.workunit.client.0.vm07.stdout:7/490: dwrite d3/da/db/d32/d3e/d5c/f7f [0,4194304] 0 2026-03-09T20:47:34.758 INFO:tasks.workunit.client.1.vm10.stdout:2/390: write d5/d18/d2d/d47/f65 [274730,6596] 0 2026-03-09T20:47:34.775 INFO:tasks.workunit.client.1.vm10.stdout:9/425: dwrite d2/d28/f79 [0,4194304] 0 2026-03-09T20:47:34.776 INFO:tasks.workunit.client.0.vm07.stdout:5/524: rename d5/d33/d39/c3c to d5/d33/d3b/cbd 0 2026-03-09T20:47:34.778 INFO:tasks.workunit.client.1.vm10.stdout:9/426: sync 2026-03-09T20:47:34.794 INFO:tasks.workunit.client.0.vm07.stdout:9/410: write d4/d8/d19/f28 [1435662,38641] 0 2026-03-09T20:47:34.801 INFO:tasks.workunit.client.1.vm10.stdout:7/355: creat db/f70 x:0 0 0 2026-03-09T20:47:34.808 INFO:tasks.workunit.client.0.vm07.stdout:0/510: symlink 
d1/d1f/d30/la3 0 2026-03-09T20:47:34.808 INFO:tasks.workunit.client.1.vm10.stdout:0/349: getdents d2/d9/da/de/d1a/d25/d34 0 2026-03-09T20:47:34.810 INFO:tasks.workunit.client.0.vm07.stdout:7/491: creat d3/da/db/d14/d1f/faa x:0 0 0 2026-03-09T20:47:34.810 INFO:tasks.workunit.client.0.vm07.stdout:7/492: readlink d3/da/db/d14/d1f/d2b/l36 0 2026-03-09T20:47:34.813 INFO:tasks.workunit.client.0.vm07.stdout:7/493: write d3/da/db/d32/f3d [1211918,50873] 0 2026-03-09T20:47:34.815 INFO:tasks.workunit.client.0.vm07.stdout:6/452: creat d8/f8d x:0 0 0 2026-03-09T20:47:34.818 INFO:tasks.workunit.client.0.vm07.stdout:8/399: symlink d1/dc/d16/l7e 0 2026-03-09T20:47:34.821 INFO:tasks.workunit.client.1.vm10.stdout:4/333: truncate d1/d2/f2a 529436 0 2026-03-09T20:47:34.822 INFO:tasks.workunit.client.1.vm10.stdout:4/334: dread d1/fe [0,4194304] 0 2026-03-09T20:47:34.824 INFO:tasks.workunit.client.0.vm07.stdout:4/376: write d2/d1f/f45 [165710,53940] 0 2026-03-09T20:47:34.826 INFO:tasks.workunit.client.1.vm10.stdout:5/352: dwrite d2/d58/f65 [0,4194304] 0 2026-03-09T20:47:34.826 INFO:tasks.workunit.client.1.vm10.stdout:5/353: chown d2/d27 156489932 1 2026-03-09T20:47:34.830 INFO:tasks.workunit.client.0.vm07.stdout:3/437: dwrite d1/d5/d9/f1b [4194304,4194304] 0 2026-03-09T20:47:34.836 INFO:tasks.workunit.client.0.vm07.stdout:4/377: dwrite d2/f43 [4194304,4194304] 0 2026-03-09T20:47:34.849 INFO:tasks.workunit.client.1.vm10.stdout:2/391: creat d5/d18/d27/d28/f7e x:0 0 0 2026-03-09T20:47:34.857 INFO:tasks.workunit.client.0.vm07.stdout:5/525: dread d5/df/d13/d3e/d5e/f98 [0,4194304] 0 2026-03-09T20:47:34.861 INFO:tasks.workunit.client.0.vm07.stdout:2/470: mkdir d2/db/d28/d90 0 2026-03-09T20:47:34.879 INFO:tasks.workunit.client.1.vm10.stdout:7/356: dread db/d21/d23/f14 [0,4194304] 0 2026-03-09T20:47:34.880 INFO:tasks.workunit.client.1.vm10.stdout:7/357: stat db/f19 0 2026-03-09T20:47:34.880 INFO:tasks.workunit.client.1.vm10.stdout:0/350: mknod d2/d9/d47/d71/c74 0 2026-03-09T20:47:34.887 
INFO:tasks.workunit.client.1.vm10.stdout:6/401: rename d3/fc to d3/da/d11/d31/f81 0 2026-03-09T20:47:34.888 INFO:tasks.workunit.client.1.vm10.stdout:3/367: rename dc/d14/d26 to dc/d14/d26/d73 22 2026-03-09T20:47:34.895 INFO:tasks.workunit.client.1.vm10.stdout:3/368: sync 2026-03-09T20:47:34.901 INFO:tasks.workunit.client.0.vm07.stdout:3/438: dread d1/d5/d9/f33 [0,4194304] 0 2026-03-09T20:47:34.901 INFO:tasks.workunit.client.1.vm10.stdout:6/402: dread d3/da/d11/d31/d4c/d60/f63 [0,4194304] 0 2026-03-09T20:47:34.901 INFO:tasks.workunit.client.1.vm10.stdout:6/403: stat d3/d30/d7f/d4a/f4b 0 2026-03-09T20:47:34.906 INFO:tasks.workunit.client.1.vm10.stdout:8/412: write d0/d22/d2c/f57 [690969,109770] 0 2026-03-09T20:47:34.906 INFO:tasks.workunit.client.1.vm10.stdout:8/413: chown d0/d22/d2f/d38/c72 319908 1 2026-03-09T20:47:34.909 INFO:tasks.workunit.client.0.vm07.stdout:6/453: dread - d8/d16/d22/d24/f7b zero size 2026-03-09T20:47:34.923 INFO:tasks.workunit.client.1.vm10.stdout:4/335: write d1/fe [921353,66014] 0 2026-03-09T20:47:34.927 INFO:tasks.workunit.client.0.vm07.stdout:4/378: rmdir d2/d55/d5d/d3f/d4a/d4b/d52 39 2026-03-09T20:47:34.931 INFO:tasks.workunit.client.1.vm10.stdout:5/354: fsync d2/d27/f2d 0 2026-03-09T20:47:34.936 INFO:tasks.workunit.client.0.vm07.stdout:2/471: rename d2/db/f74 to d2/db/d28/d5c/f91 0 2026-03-09T20:47:34.939 INFO:tasks.workunit.client.1.vm10.stdout:9/427: truncate d2/d28/d47/d50/f75 1461330 0 2026-03-09T20:47:34.944 INFO:tasks.workunit.client.0.vm07.stdout:0/511: creat d1/d1f/d9f/fa4 x:0 0 0 2026-03-09T20:47:34.944 INFO:tasks.workunit.client.0.vm07.stdout:1/482: getdents d3/d23/d67/d8a 0 2026-03-09T20:47:34.949 INFO:tasks.workunit.client.1.vm10.stdout:0/351: rename d2/d9/da/d11/l6d to d2/d9/da/d11/l75 0 2026-03-09T20:47:34.952 INFO:tasks.workunit.client.0.vm07.stdout:0/512: dwrite d1/d2/dc/f10 [0,4194304] 0 2026-03-09T20:47:34.953 INFO:tasks.workunit.client.0.vm07.stdout:7/494: mknod d3/da/d83/d96/cab 0 2026-03-09T20:47:34.959 
INFO:tasks.workunit.client.0.vm07.stdout:6/454: stat d8/d16/d22/l2d 0 2026-03-09T20:47:34.960 INFO:tasks.workunit.client.0.vm07.stdout:6/455: truncate d8/d26/d2a/f81 589869 0 2026-03-09T20:47:34.973 INFO:tasks.workunit.client.0.vm07.stdout:8/400: link d1/d5d/f7d d1/d5d/d6f/d2f/d53/d76/f7f 0 2026-03-09T20:47:34.983 INFO:tasks.workunit.client.1.vm10.stdout:4/336: mkdir d1/d8/d66 0 2026-03-09T20:47:34.988 INFO:tasks.workunit.client.0.vm07.stdout:2/472: symlink d2/d11/d56/l92 0 2026-03-09T20:47:34.990 INFO:tasks.workunit.client.1.vm10.stdout:5/355: creat d2/d58/d6c/f8c x:0 0 0 2026-03-09T20:47:34.991 INFO:tasks.workunit.client.1.vm10.stdout:2/392: mknod d5/d18/d1b/c7f 0 2026-03-09T20:47:34.993 INFO:tasks.workunit.client.0.vm07.stdout:1/483: sync 2026-03-09T20:47:34.994 INFO:tasks.workunit.client.1.vm10.stdout:5/356: dwrite d2/d39/d4b/f60 [0,4194304] 0 2026-03-09T20:47:35.010 INFO:tasks.workunit.client.1.vm10.stdout:8/414: dwrite d0/f11 [0,4194304] 0 2026-03-09T20:47:35.016 INFO:tasks.workunit.client.1.vm10.stdout:0/352: creat d2/d9/d47/d71/d5d/f76 x:0 0 0 2026-03-09T20:47:35.021 INFO:tasks.workunit.client.0.vm07.stdout:6/456: symlink d8/d16/d22/d33/l8e 0 2026-03-09T20:47:35.027 INFO:tasks.workunit.client.0.vm07.stdout:5/526: dwrite d5/d33/d3b/f63 [0,4194304] 0 2026-03-09T20:47:35.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:34 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:34 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:34 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:34 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.040 INFO:tasks.workunit.client.0.vm07.stdout:4/379: symlink d2/l64 0 
2026-03-09T20:47:35.042 INFO:tasks.workunit.client.1.vm10.stdout:3/369: truncate dc/d14/d20/d2e/d56/f23 4180767 0 2026-03-09T20:47:35.042 INFO:tasks.workunit.client.1.vm10.stdout:3/370: readlink dc/d14/d27/l59 0 2026-03-09T20:47:35.043 INFO:tasks.workunit.client.0.vm07.stdout:9/411: link d4/d8/dc/ff d4/d8/d19/d26/f96 0 2026-03-09T20:47:35.044 INFO:tasks.workunit.client.0.vm07.stdout:9/412: chown d4/d8/dc/ff 262010108 1 2026-03-09T20:47:35.044 INFO:tasks.workunit.client.0.vm07.stdout:4/380: dread - d2/d55/d5d/d3f/d4a/f5e zero size 2026-03-09T20:47:35.048 INFO:tasks.workunit.client.0.vm07.stdout:4/381: dread d2/fa [0,4194304] 0 2026-03-09T20:47:35.049 INFO:tasks.workunit.client.0.vm07.stdout:4/382: truncate d2/f9 1729663 0 2026-03-09T20:47:35.058 INFO:tasks.workunit.client.1.vm10.stdout:2/393: rmdir d5/d18/d27/d38/d61 39 2026-03-09T20:47:35.060 INFO:tasks.workunit.client.0.vm07.stdout:7/495: write d3/da/db/d14/f2a [4177841,65070] 0 2026-03-09T20:47:35.064 INFO:tasks.workunit.client.0.vm07.stdout:2/473: dread d2/ff [0,4194304] 0 2026-03-09T20:47:35.064 INFO:tasks.workunit.client.0.vm07.stdout:0/513: dwrite d1/f3d [4194304,4194304] 0 2026-03-09T20:47:35.066 INFO:tasks.workunit.client.1.vm10.stdout:9/428: write d2/d3/de/f24 [790849,36199] 0 2026-03-09T20:47:35.068 INFO:tasks.workunit.client.1.vm10.stdout:9/429: read d2/d12/f26 [2111404,17098] 0 2026-03-09T20:47:35.069 INFO:tasks.workunit.client.0.vm07.stdout:3/439: truncate d1/d5/d9/f33 7611273 0 2026-03-09T20:47:35.069 INFO:tasks.workunit.client.1.vm10.stdout:1/384: truncate d2/da/d25/d3e/f69 1143968 0 2026-03-09T20:47:35.070 INFO:tasks.workunit.client.0.vm07.stdout:8/401: write d1/d5d/d6f/f61 [829277,90562] 0 2026-03-09T20:47:35.084 INFO:tasks.workunit.client.1.vm10.stdout:5/357: creat d2/d39/d4b/f8d x:0 0 0 2026-03-09T20:47:35.086 INFO:tasks.workunit.client.1.vm10.stdout:8/415: symlink d0/d22/d2c/l7c 0 2026-03-09T20:47:35.086 INFO:tasks.workunit.client.1.vm10.stdout:7/358: link db/d21/d23/f22 db/d54/f71 0 
2026-03-09T20:47:35.090 INFO:tasks.workunit.client.1.vm10.stdout:5/358: dread d2/f64 [0,4194304] 0 2026-03-09T20:47:35.090 INFO:tasks.workunit.client.1.vm10.stdout:0/353: creat d2/d9/da/de/d1a/d25/d34/f77 x:0 0 0 2026-03-09T20:47:35.095 INFO:tasks.workunit.client.1.vm10.stdout:6/404: creat d3/da/d11/d31/f82 x:0 0 0 2026-03-09T20:47:35.095 INFO:tasks.workunit.client.1.vm10.stdout:6/405: write d3/da/d11/d31/d4c/d60/f77 [3619699,47470] 0 2026-03-09T20:47:35.102 INFO:tasks.workunit.client.1.vm10.stdout:4/337: mkdir d1/d67 0 2026-03-09T20:47:35.111 INFO:tasks.workunit.client.0.vm07.stdout:2/474: creat d2/db/d1c/f93 x:0 0 0 2026-03-09T20:47:35.114 INFO:tasks.workunit.client.1.vm10.stdout:2/394: rename d5/d18/d2d to d5/d2b/d32/d80 0 2026-03-09T20:47:35.121 INFO:tasks.workunit.client.1.vm10.stdout:1/385: truncate d2/f14 2743934 0 2026-03-09T20:47:35.122 INFO:tasks.workunit.client.1.vm10.stdout:1/386: write d2/da/d25/f65 [1864311,107289] 0 2026-03-09T20:47:35.125 INFO:tasks.workunit.client.0.vm07.stdout:0/514: write d1/d1f/d20/f43 [5440194,4551] 0 2026-03-09T20:47:35.131 INFO:tasks.workunit.client.1.vm10.stdout:8/416: mknod d0/d22/d25/d2e/d41/c7d 0 2026-03-09T20:47:35.133 INFO:tasks.workunit.client.1.vm10.stdout:5/359: truncate d2/d1b/f41 778676 0 2026-03-09T20:47:35.145 INFO:tasks.workunit.client.1.vm10.stdout:6/406: dread d3/f2f [0,4194304] 0 2026-03-09T20:47:35.146 INFO:tasks.workunit.client.1.vm10.stdout:3/371: truncate dc/d14/d20/d2e/d56/f15 1155422 0 2026-03-09T20:47:35.151 INFO:tasks.workunit.client.1.vm10.stdout:7/359: rename db/d28/d4d to db/d21/d26/d72 0 2026-03-09T20:47:35.156 INFO:tasks.workunit.client.1.vm10.stdout:7/360: dwrite db/d28/d2b/d36/d3b/f42 [0,4194304] 0 2026-03-09T20:47:35.157 INFO:tasks.workunit.client.1.vm10.stdout:7/361: write db/d54/f68 [261657,20562] 0 2026-03-09T20:47:35.162 INFO:tasks.workunit.client.1.vm10.stdout:4/338: dwrite d1/d2/d5c/d64/f51 [0,4194304] 0 2026-03-09T20:47:35.205 INFO:tasks.workunit.client.1.vm10.stdout:5/360: mknod 
d2/d27/d37/d46/d5d/d5f/d63/c8e 0 2026-03-09T20:47:35.209 INFO:tasks.workunit.client.1.vm10.stdout:8/417: dread d0/d22/d25/d2e/d41/d47/f4b [0,4194304] 0 2026-03-09T20:47:35.211 INFO:tasks.workunit.client.1.vm10.stdout:0/354: symlink d2/d4a/d58/l78 0 2026-03-09T20:47:35.218 INFO:tasks.workunit.client.0.vm07.stdout:8/402: mkdir d1/d5d/d6f/d80 0 2026-03-09T20:47:35.248 INFO:tasks.workunit.client.0.vm07.stdout:9/413: dwrite d4/d8/dc/d15/f30 [0,4194304] 0 2026-03-09T20:47:35.264 INFO:tasks.workunit.client.1.vm10.stdout:2/395: creat d5/d18/d27/d38/d61/f81 x:0 0 0 2026-03-09T20:47:35.265 INFO:tasks.workunit.client.1.vm10.stdout:2/396: dread - d5/d18/f1f zero size 2026-03-09T20:47:35.269 INFO:tasks.workunit.client.1.vm10.stdout:4/339: unlink d1/d8/f16 0 2026-03-09T20:47:35.272 INFO:tasks.workunit.client.0.vm07.stdout:0/515: creat d1/d2/d98/fa5 x:0 0 0 2026-03-09T20:47:35.272 INFO:tasks.workunit.client.0.vm07.stdout:0/516: chown d1/d2/d33/d35/f45 45553 1 2026-03-09T20:47:35.273 INFO:tasks.workunit.client.1.vm10.stdout:9/430: rmdir d2/d28/d47/d6a/d8e 0 2026-03-09T20:47:35.273 INFO:tasks.workunit.client.1.vm10.stdout:1/387: mknod d2/da/c7c 0 2026-03-09T20:47:35.275 INFO:tasks.workunit.client.0.vm07.stdout:8/403: dread d1/d5d/d6f/d2f/d4d/f73 [0,4194304] 0 2026-03-09T20:47:35.281 INFO:tasks.workunit.client.1.vm10.stdout:5/361: write d2/f35 [3390160,77182] 0 2026-03-09T20:47:35.282 INFO:tasks.workunit.client.1.vm10.stdout:7/362: dread db/d21/d23/ff [0,4194304] 0 2026-03-09T20:47:35.282 INFO:tasks.workunit.client.1.vm10.stdout:7/363: fsync db/d46/f5a 0 2026-03-09T20:47:35.284 INFO:tasks.workunit.client.1.vm10.stdout:6/407: mknod d3/d79/c83 0 2026-03-09T20:47:35.289 INFO:tasks.workunit.client.0.vm07.stdout:0/517: dread d1/d82/f8d [0,4194304] 0 2026-03-09T20:47:35.291 INFO:tasks.workunit.client.0.vm07.stdout:4/383: rename d2/d55/d5d/d3f/d4a/c58 to d2/d55/d5d/c65 0 2026-03-09T20:47:35.291 INFO:tasks.workunit.client.0.vm07.stdout:4/384: stat d2/d55/d5d/d3f 0 2026-03-09T20:47:35.292 
INFO:tasks.workunit.client.0.vm07.stdout:4/385: stat d2/c40 0 2026-03-09T20:47:35.292 INFO:tasks.workunit.client.0.vm07.stdout:4/386: truncate d2/df/d17/f3e 2632796 0 2026-03-09T20:47:35.293 INFO:tasks.workunit.client.0.vm07.stdout:4/387: dread - d2/d55/d5d/d3f/d4a/f5e zero size 2026-03-09T20:47:35.293 INFO:tasks.workunit.client.0.vm07.stdout:4/388: chown d2/d55/f62 317 1 2026-03-09T20:47:35.294 INFO:tasks.workunit.client.0.vm07.stdout:4/389: stat d2 0 2026-03-09T20:47:35.300 INFO:tasks.workunit.client.0.vm07.stdout:1/484: getdents d3/d14 0 2026-03-09T20:47:35.301 INFO:tasks.workunit.client.0.vm07.stdout:1/485: chown d3/d23/f5d 10285 1 2026-03-09T20:47:35.305 INFO:tasks.workunit.client.0.vm07.stdout:3/440: dwrite d1/d5/d9/d11/d1f/f4a [0,4194304] 0 2026-03-09T20:47:35.307 INFO:tasks.workunit.client.0.vm07.stdout:9/414: write d4/d8/dc/ff [1659533,96438] 0 2026-03-09T20:47:35.311 INFO:tasks.workunit.client.1.vm10.stdout:2/397: dwrite d5/d18/f1a [0,4194304] 0 2026-03-09T20:47:35.328 INFO:tasks.workunit.client.1.vm10.stdout:8/418: link d0/d22/f35 d0/f7e 0 2026-03-09T20:47:35.329 INFO:tasks.workunit.client.0.vm07.stdout:6/457: link d8/le d8/l8f 0 2026-03-09T20:47:35.329 INFO:tasks.workunit.client.0.vm07.stdout:6/458: chown d8/d16/d61/f7c 114366 1 2026-03-09T20:47:35.330 INFO:tasks.workunit.client.1.vm10.stdout:5/362: write d2/f23 [767830,119372] 0 2026-03-09T20:47:35.340 INFO:tasks.workunit.client.1.vm10.stdout:7/364: creat db/d28/d30/f73 x:0 0 0 2026-03-09T20:47:35.340 INFO:tasks.workunit.client.0.vm07.stdout:5/527: link d5/d69/c78 d5/df/d13/d3e/cbe 0 2026-03-09T20:47:35.340 INFO:tasks.workunit.client.0.vm07.stdout:5/528: readlink d5/df/d13/l57 0 2026-03-09T20:47:35.340 INFO:tasks.workunit.client.0.vm07.stdout:0/518: truncate d1/d2/dc/f40 1592322 0 2026-03-09T20:47:35.341 INFO:tasks.workunit.client.0.vm07.stdout:7/496: rename d3/da/db/d14 to d3/da/db/d32/d3e/dac 0 2026-03-09T20:47:35.341 INFO:tasks.workunit.client.0.vm07.stdout:9/415: sync 2026-03-09T20:47:35.347 
INFO:tasks.workunit.client.1.vm10.stdout:3/372: rename dc/d14/c5b to dc/d14/d26/d29/c74 0 2026-03-09T20:47:35.350 INFO:tasks.workunit.client.0.vm07.stdout:4/390: mknod d2/d55/c66 0 2026-03-09T20:47:35.352 INFO:tasks.workunit.client.0.vm07.stdout:1/486: write d3/d14/d54/d3e/f4a [480745,104560] 0 2026-03-09T20:47:35.353 INFO:tasks.workunit.client.0.vm07.stdout:1/487: write d3/d14/d54/d3e/f4a [3418385,13850] 0 2026-03-09T20:47:35.366 INFO:tasks.workunit.client.0.vm07.stdout:8/404: mknod d1/dc/c81 0 2026-03-09T20:47:35.367 INFO:tasks.workunit.client.0.vm07.stdout:8/405: stat d1/d5d/d6f/d2f/f34 0 2026-03-09T20:47:35.374 INFO:tasks.workunit.client.1.vm10.stdout:7/365: symlink db/d46/l74 0 2026-03-09T20:47:35.374 INFO:tasks.workunit.client.0.vm07.stdout:5/529: unlink d5/df/d13/f17 0 2026-03-09T20:47:35.381 INFO:tasks.workunit.client.1.vm10.stdout:5/363: dread d2/f40 [0,4194304] 0 2026-03-09T20:47:35.382 INFO:tasks.workunit.client.0.vm07.stdout:0/519: unlink d1/d2/d4b/l52 0 2026-03-09T20:47:35.384 INFO:tasks.workunit.client.1.vm10.stdout:3/373: write dc/ff [1199145,100754] 0 2026-03-09T20:47:35.386 INFO:tasks.workunit.client.0.vm07.stdout:7/497: stat d3/c8 0 2026-03-09T20:47:35.389 INFO:tasks.workunit.client.0.vm07.stdout:7/498: dwrite d3/da/d83/fa9 [0,4194304] 0 2026-03-09T20:47:35.400 INFO:tasks.workunit.client.1.vm10.stdout:0/355: rename d2/d9/da/de to d2/d4a/d79 0 2026-03-09T20:47:35.404 INFO:tasks.workunit.client.1.vm10.stdout:1/388: write d2/f17 [2423134,114689] 0 2026-03-09T20:47:35.404 INFO:tasks.workunit.client.1.vm10.stdout:4/340: dwrite d1/d8/d1c/f52 [0,4194304] 0 2026-03-09T20:47:35.404 INFO:tasks.workunit.client.0.vm07.stdout:2/475: dwrite d2/f4 [0,4194304] 0 2026-03-09T20:47:35.415 INFO:tasks.workunit.client.0.vm07.stdout:3/441: write d1/d5/d9/f3c [292416,104934] 0 2026-03-09T20:47:35.426 INFO:tasks.workunit.client.1.vm10.stdout:7/366: symlink db/d28/d4c/l75 0 2026-03-09T20:47:35.431 INFO:tasks.workunit.client.0.vm07.stdout:1/488: rename d3/d23/f6b to 
d3/d14/d54/fa2 0 2026-03-09T20:47:35.439 INFO:tasks.workunit.client.1.vm10.stdout:9/431: rename d2/d33/f3d to d2/d3/de/d8f/f9d 0 2026-03-09T20:47:35.440 INFO:tasks.workunit.client.1.vm10.stdout:5/364: rename d2/d39 to d2/d39/d89/d8f 22 2026-03-09T20:47:35.443 INFO:tasks.workunit.client.0.vm07.stdout:5/530: fsync d5/fa6 0 2026-03-09T20:47:35.447 INFO:tasks.workunit.client.1.vm10.stdout:0/356: dwrite d2/d9/da/d35/d30/f56 [0,4194304] 0 2026-03-09T20:47:35.461 INFO:tasks.workunit.client.0.vm07.stdout:7/499: readlink d3/d58/d82/d90/la0 0 2026-03-09T20:47:35.461 INFO:tasks.workunit.client.0.vm07.stdout:7/500: write d3/d58/f9d [679906,66138] 0 2026-03-09T20:47:35.470 INFO:tasks.workunit.client.1.vm10.stdout:6/408: getdents d3/d30/d7f/d24/d39 0 2026-03-09T20:47:35.472 INFO:tasks.workunit.client.1.vm10.stdout:3/374: mkdir dc/d14/d26/d29/d40/d48/d69/d75 0 2026-03-09T20:47:35.478 INFO:tasks.workunit.client.1.vm10.stdout:7/367: dread db/d46/f47 [0,4194304] 0 2026-03-09T20:47:35.479 INFO:tasks.workunit.client.1.vm10.stdout:7/368: dread - db/f70 zero size 2026-03-09T20:47:35.480 INFO:tasks.workunit.client.1.vm10.stdout:4/341: rename d1/d8/d1c/d2b/f46 to d1/d2/d5c/d64/d61/f68 0 2026-03-09T20:47:35.482 INFO:tasks.workunit.client.0.vm07.stdout:1/489: mknod d3/d23/d55/d56/d60/ca3 0 2026-03-09T20:47:35.484 INFO:tasks.workunit.client.0.vm07.stdout:1/490: stat d3/d14/d54/f62 0 2026-03-09T20:47:35.485 INFO:tasks.workunit.client.0.vm07.stdout:7/501: sync 2026-03-09T20:47:35.486 INFO:tasks.workunit.client.1.vm10.stdout:9/432: mkdir d2/d33/d37/d9e 0 2026-03-09T20:47:35.488 INFO:tasks.workunit.client.0.vm07.stdout:8/406: mknod d1/d5d/c82 0 2026-03-09T20:47:35.493 INFO:tasks.workunit.client.0.vm07.stdout:3/442: write d1/d5/d9/d11/d6d/d80/f81 [314665,38938] 0 2026-03-09T20:47:35.494 INFO:tasks.workunit.client.0.vm07.stdout:0/520: dwrite d1/d2/d33/d35/f46 [0,4194304] 0 2026-03-09T20:47:35.504 INFO:tasks.workunit.client.1.vm10.stdout:0/357: creat d2/d4a/d58/f7a x:0 0 0 2026-03-09T20:47:35.508 
INFO:tasks.workunit.client.1.vm10.stdout:0/358: dwrite d2/d9/d47/d71/d5d/f76 [0,4194304] 0 2026-03-09T20:47:35.510 INFO:tasks.workunit.client.0.vm07.stdout:9/416: link d4/d11/d2a/f5d d4/d8/d19/d5f/d73/f97 0 2026-03-09T20:47:35.514 INFO:tasks.workunit.client.1.vm10.stdout:2/398: link d5/d18/d1b/d22/c46 d5/d18/d27/d28/d41/c82 0 2026-03-09T20:47:35.514 INFO:tasks.workunit.client.0.vm07.stdout:2/476: symlink d2/db/l94 0 2026-03-09T20:47:35.519 INFO:tasks.workunit.client.1.vm10.stdout:0/359: dwrite d2/d9/da/d35/f68 [0,4194304] 0 2026-03-09T20:47:35.520 INFO:tasks.workunit.client.1.vm10.stdout:0/360: fdatasync d2/d9/d47/d71/d5d/f5f 0 2026-03-09T20:47:35.525 INFO:tasks.workunit.client.1.vm10.stdout:7/369: symlink db/d21/d26/d72/l76 0 2026-03-09T20:47:35.532 INFO:tasks.workunit.client.0.vm07.stdout:1/491: mknod d3/d14/d94/ca4 0 2026-03-09T20:47:35.536 INFO:tasks.workunit.client.0.vm07.stdout:7/502: rename d3/c85 to d3/da/db/d32/d3e/cad 0 2026-03-09T20:47:35.546 INFO:tasks.workunit.client.1.vm10.stdout:5/365: symlink d2/d1b/d54/d7b/l90 0 2026-03-09T20:47:35.547 INFO:tasks.workunit.client.1.vm10.stdout:8/419: mkdir d0/d22/d2f/d38/d64/d7f 0 2026-03-09T20:47:35.548 INFO:tasks.workunit.client.0.vm07.stdout:6/459: getdents d8/d26/d2a 0 2026-03-09T20:47:35.556 INFO:tasks.workunit.client.1.vm10.stdout:6/409: symlink d3/d30/d7f/l84 0 2026-03-09T20:47:35.560 INFO:tasks.workunit.client.1.vm10.stdout:1/389: write d2/da/d25/d3e/f44 [1873503,125698] 0 2026-03-09T20:47:35.560 INFO:tasks.workunit.client.1.vm10.stdout:1/390: dread - d2/da/d25/f78 zero size 2026-03-09T20:47:35.575 INFO:tasks.workunit.client.1.vm10.stdout:5/366: sync 2026-03-09T20:47:35.575 INFO:tasks.workunit.client.1.vm10.stdout:6/410: sync 2026-03-09T20:47:35.576 INFO:tasks.workunit.client.1.vm10.stdout:6/411: stat d3/da/c23 0 2026-03-09T20:47:35.577 INFO:tasks.workunit.client.1.vm10.stdout:6/412: write d3/da/d11/d31/f82 [976820,47787] 0 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 
vm10.local ceph-mon[57011]: pgmap v6: 65 pgs: 65 active+clean; 1.9 GiB data, 7.0 GiB used, 113 GiB / 120 GiB avail; 26 MiB/s rd, 79 MiB/s wr, 202 op/s 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:35.587 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:35 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:47:35.589 INFO:tasks.workunit.client.0.vm07.stdout:4/391: getdents d2/df/d17 0 2026-03-09T20:47:35.593 INFO:tasks.workunit.client.1.vm10.stdout:2/399: write d5/d5b/f6c [1020720,95810] 0 2026-03-09T20:47:35.593 INFO:tasks.workunit.client.0.vm07.stdout:9/417: dread d4/d11/d2a/f3b [0,4194304] 0 2026-03-09T20:47:35.594 INFO:tasks.workunit.client.0.vm07.stdout:9/418: chown d4/f5 71297208 1 2026-03-09T20:47:35.596 INFO:tasks.workunit.client.0.vm07.stdout:1/492: mknod d3/d14/d54/ca5 0 2026-03-09T20:47:35.596 INFO:tasks.workunit.client.0.vm07.stdout:1/493: readlink d3/d23/l40 0 2026-03-09T20:47:35.598 INFO:tasks.workunit.client.0.vm07.stdout:2/477: dwrite d2/d11/f60 [0,4194304] 0 2026-03-09T20:47:35.599 INFO:tasks.workunit.client.1.vm10.stdout:3/375: dwrite dc/d14/d26/d29/f5c [0,4194304] 0 2026-03-09T20:47:35.613 INFO:tasks.workunit.client.1.vm10.stdout:3/376: dwrite dc/d14/d26/d29/f60 [4194304,4194304] 0 2026-03-09T20:47:35.629 INFO:tasks.workunit.client.0.vm07.stdout:3/443: rename d1/d5/d9/d2f/d3d/d71/c7c to d1/d5/d9/d11/d6d/c92 0 2026-03-09T20:47:35.632 INFO:tasks.workunit.client.0.vm07.stdout:0/521: unlink d1/d1f/d20/l44 0 2026-03-09T20:47:35.643 INFO:tasks.workunit.client.0.vm07.stdout:4/392: truncate d2/f33 529977 0 2026-03-09T20:47:35.643 INFO:tasks.workunit.client.0.vm07.stdout:9/419: readlink d4/d8/l56 0 2026-03-09T20:47:35.645 INFO:tasks.workunit.client.1.vm10.stdout:6/413: symlink d3/d30/d7f/d51/l85 0 2026-03-09T20:47:35.650 INFO:tasks.workunit.client.1.vm10.stdout:6/414: readlink d3/d30/d7f/d51/l85 0 2026-03-09T20:47:35.650 INFO:tasks.workunit.client.1.vm10.stdout:6/415: chown d3/d30/d7f/d24/d39/f6c 237 1 2026-03-09T20:47:35.651 
INFO:tasks.workunit.client.1.vm10.stdout:6/416: chown d3/d30/f75 17 1 2026-03-09T20:47:35.651 INFO:tasks.workunit.client.0.vm07.stdout:2/478: fsync d2/f17 0 2026-03-09T20:47:35.653 INFO:tasks.workunit.client.0.vm07.stdout:2/479: chown d2/db/c1f 4 1 2026-03-09T20:47:35.658 INFO:tasks.workunit.client.0.vm07.stdout:7/503: rename d3/da/db/d32/d3e/dac/d1f/d2b/d52/f6e to d3/da/db/d32/d3e/dac/d43/fae 0 2026-03-09T20:47:35.662 INFO:tasks.workunit.client.0.vm07.stdout:0/522: rmdir d1/d2/dc/d80 39 2026-03-09T20:47:35.663 INFO:tasks.workunit.client.1.vm10.stdout:3/377: dread dc/d14/d20/d21/f36 [4194304,4194304] 0 2026-03-09T20:47:35.663 INFO:tasks.workunit.client.1.vm10.stdout:9/433: rmdir d2/d33/d37/d9e 0 2026-03-09T20:47:35.664 INFO:tasks.workunit.client.0.vm07.stdout:5/531: link d5/d33/d39/d8d/la1 d5/df/lbf 0 2026-03-09T20:47:35.669 INFO:tasks.workunit.client.1.vm10.stdout:2/400: dread d5/d2b/f36 [0,4194304] 0 2026-03-09T20:47:35.671 INFO:tasks.workunit.client.1.vm10.stdout:8/420: truncate d0/f19 105448 0 2026-03-09T20:47:35.675 INFO:tasks.workunit.client.1.vm10.stdout:7/370: dwrite db/d21/d23/f34 [0,4194304] 0 2026-03-09T20:47:35.675 INFO:tasks.workunit.client.1.vm10.stdout:4/342: write d1/d2/d5c/d64/d61/f62 [1296551,110409] 0 2026-03-09T20:47:35.677 INFO:tasks.workunit.client.0.vm07.stdout:9/420: rename d4/d8 to d4/d8/d59/d98 22 2026-03-09T20:47:35.684 INFO:tasks.workunit.client.1.vm10.stdout:1/391: dread d2/da/d25/d3e/d42/f57 [0,4194304] 0 2026-03-09T20:47:35.688 INFO:tasks.workunit.client.0.vm07.stdout:8/407: dwrite d1/dc/d16/d26/f2d [4194304,4194304] 0 2026-03-09T20:47:35.689 INFO:tasks.workunit.client.0.vm07.stdout:3/444: write d1/d5/d9/d11/d1f/f5e [45501,1515] 0 2026-03-09T20:47:35.689 INFO:tasks.workunit.client.1.vm10.stdout:1/392: write d2/da/d25/f78 [688489,17034] 0 2026-03-09T20:47:35.690 INFO:tasks.workunit.client.0.vm07.stdout:3/445: fdatasync d1/d5/d9/d11/d1f/f4a 0 2026-03-09T20:47:35.692 INFO:tasks.workunit.client.1.vm10.stdout:1/393: write d2/da/d25/f78 
[1715212,100120] 0 2026-03-09T20:47:35.696 INFO:tasks.workunit.client.1.vm10.stdout:7/371: dread db/d21/d23/ff [0,4194304] 0 2026-03-09T20:47:35.697 INFO:tasks.workunit.client.1.vm10.stdout:5/367: rename d2/c25 to d2/d27/d37/d46/d5d/d5f/d84/d87/c91 0 2026-03-09T20:47:35.697 INFO:tasks.workunit.client.0.vm07.stdout:1/494: getdents d3/d97/da1 0 2026-03-09T20:47:35.716 INFO:tasks.workunit.client.1.vm10.stdout:5/368: read d2/d27/d37/d46/d5d/d5f/d69/f76 [1020292,125683] 0 2026-03-09T20:47:35.726 INFO:tasks.workunit.client.0.vm07.stdout:5/532: truncate d5/df/d13/d3e/d5e/f98 4133188 0 2026-03-09T20:47:35.729 INFO:tasks.workunit.client.0.vm07.stdout:6/460: creat d8/d16/f90 x:0 0 0 2026-03-09T20:47:35.731 INFO:tasks.workunit.client.0.vm07.stdout:1/495: sync 2026-03-09T20:47:35.756 INFO:tasks.workunit.client.1.vm10.stdout:3/378: read f6 [2993503,111519] 0 2026-03-09T20:47:35.759 INFO:tasks.workunit.client.0.vm07.stdout:3/446: creat d1/d5/d9/d11/d6d/d80/f93 x:0 0 0 2026-03-09T20:47:35.766 INFO:tasks.workunit.client.1.vm10.stdout:6/417: write f1 [3673774,1384] 0 2026-03-09T20:47:35.766 INFO:tasks.workunit.client.0.vm07.stdout:2/480: write d2/db/d1c/f45 [907843,24898] 0 2026-03-09T20:47:35.768 INFO:tasks.workunit.client.0.vm07.stdout:0/523: write d1/f3b [607930,80499] 0 2026-03-09T20:47:35.769 INFO:tasks.workunit.client.0.vm07.stdout:0/524: stat d1/ca 0 2026-03-09T20:47:35.770 INFO:tasks.workunit.client.0.vm07.stdout:7/504: dwrite d3/da/f11 [0,4194304] 0 2026-03-09T20:47:35.793 INFO:tasks.workunit.client.1.vm10.stdout:3/379: dread dc/d14/d20/d2e/f32 [0,4194304] 0 2026-03-09T20:47:35.797 INFO:tasks.workunit.client.1.vm10.stdout:8/421: creat d0/d22/d25/d2e/d41/f80 x:0 0 0 2026-03-09T20:47:35.799 INFO:tasks.workunit.client.0.vm07.stdout:5/533: mknod d5/d33/d39/d8d/dab/cc0 0 2026-03-09T20:47:35.799 INFO:tasks.workunit.client.1.vm10.stdout:0/361: getdents d2/d9/d4b 0 2026-03-09T20:47:35.804 INFO:tasks.workunit.client.0.vm07.stdout:6/461: unlink d8/d5d/c89 0 2026-03-09T20:47:35.811 
INFO:tasks.workunit.client.1.vm10.stdout:4/343: mkdir d1/d8/d1c/d69 0 2026-03-09T20:47:35.811 INFO:tasks.workunit.client.0.vm07.stdout:9/421: symlink d4/d8/d19/l99 0 2026-03-09T20:47:35.813 INFO:tasks.workunit.client.0.vm07.stdout:5/534: sync 2026-03-09T20:47:35.818 INFO:tasks.workunit.client.0.vm07.stdout:1/496: fsync d3/d23/f5d 0 2026-03-09T20:47:35.820 INFO:tasks.workunit.client.1.vm10.stdout:1/394: creat d2/da/d25/d3e/d42/f7d x:0 0 0 2026-03-09T20:47:35.827 INFO:tasks.workunit.client.1.vm10.stdout:7/372: mknod db/d28/d2b/d36/c77 0 2026-03-09T20:47:35.842 INFO:tasks.workunit.client.0.vm07.stdout:6/462: dread d8/d16/d22/d33/d85/f6a [0,4194304] 0 2026-03-09T20:47:35.846 INFO:tasks.workunit.client.0.vm07.stdout:7/505: write d3/da/f3b [393098,78674] 0 2026-03-09T20:47:35.846 INFO:tasks.workunit.client.0.vm07.stdout:9/422: creat d4/d16/d78/f9a x:0 0 0 2026-03-09T20:47:35.847 INFO:tasks.workunit.client.1.vm10.stdout:5/369: write d2/d27/d37/d46/f7c [569397,17953] 0 2026-03-09T20:47:35.848 INFO:tasks.workunit.client.0.vm07.stdout:7/506: chown d3/da/l4c 1644877 1 2026-03-09T20:47:35.849 INFO:tasks.workunit.client.0.vm07.stdout:0/525: dwrite d1/f31 [0,4194304] 0 2026-03-09T20:47:35.849 INFO:tasks.workunit.client.1.vm10.stdout:3/380: rename dc/d14/d20/d21/d3d to dc/d14/d26/d29/d2a/d76 0 2026-03-09T20:47:35.872 INFO:tasks.workunit.client.0.vm07.stdout:0/526: dwrite d1/d1f/d53/f84 [4194304,4194304] 0 2026-03-09T20:47:35.877 INFO:tasks.workunit.client.0.vm07.stdout:8/408: link d1/d5d/d6f/l40 d1/d3b/l83 0 2026-03-09T20:47:35.877 INFO:tasks.workunit.client.0.vm07.stdout:8/409: stat d1 0 2026-03-09T20:47:35.879 INFO:tasks.workunit.client.1.vm10.stdout:0/362: creat d2/d4a/f7b x:0 0 0 2026-03-09T20:47:35.880 INFO:tasks.workunit.client.0.vm07.stdout:4/393: getdents d2/d55/d5d/d3f/d4a 0 2026-03-09T20:47:35.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:35 vm07.local ceph-mon[49120]: pgmap v6: 65 pgs: 65 active+clean; 1.9 GiB data, 7.0 GiB used, 113 GiB / 120 GiB avail; 
26 MiB/s rd, 79 MiB/s wr, 202 op/s 2026-03-09T20:47:35.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:35 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:35 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:35 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:47:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:35 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:47:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:35 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:35 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:35 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:47:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:35 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:47:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:35 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:47:35 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:47:35.889 INFO:tasks.workunit.client.1.vm10.stdout:5/370: dread d2/d1b/d54/d78/f47 [0,4194304] 0 2026-03-09T20:47:35.890 INFO:tasks.workunit.client.0.vm07.stdout:3/447: unlink d1/l2d 0 2026-03-09T20:47:35.898 INFO:tasks.workunit.client.1.vm10.stdout:7/373: dread - db/d28/f5d zero size 2026-03-09T20:47:35.904 INFO:tasks.workunit.client.0.vm07.stdout:6/463: rmdir d8/d16/d22/d33/d85 39 2026-03-09T20:47:35.904 INFO:tasks.workunit.client.0.vm07.stdout:2/481: creat d2/d46/d6e/f95 x:0 0 0 2026-03-09T20:47:35.904 INFO:tasks.workunit.client.0.vm07.stdout:9/423: mknod d4/d16/d29/c9b 0 2026-03-09T20:47:35.905 INFO:tasks.workunit.client.1.vm10.stdout:2/401: creat d5/d18/f83 x:0 0 0 2026-03-09T20:47:35.905 INFO:tasks.workunit.client.1.vm10.stdout:9/434: rename d2/c17 to d2/d3/de/d35/d44/c9f 0 2026-03-09T20:47:35.906 INFO:tasks.workunit.client.0.vm07.stdout:7/507: chown d3/da/c22 4506 1 2026-03-09T20:47:35.907 INFO:tasks.workunit.client.0.vm07.stdout:2/482: truncate d2/d11/f60 4574593 0 2026-03-09T20:47:35.907 INFO:tasks.workunit.client.0.vm07.stdout:7/508: fsync d3/f67 0 2026-03-09T20:47:35.908 INFO:tasks.workunit.client.0.vm07.stdout:9/424: write d4/d8/dc/ff [3055115,46986] 0 2026-03-09T20:47:35.911 INFO:tasks.workunit.client.0.vm07.stdout:0/527: mkdir d1/d2/dc/d17/da6 0 2026-03-09T20:47:35.913 INFO:tasks.workunit.client.1.vm10.stdout:3/381: readlink dc/d14/d26/d29/l6a 0 2026-03-09T20:47:35.916 INFO:tasks.workunit.client.1.vm10.stdout:2/402: dread d5/d2b/d32/d80/d47/f65 [0,4194304] 0 2026-03-09T20:47:35.917 INFO:tasks.workunit.client.0.vm07.stdout:9/425: dwrite d4/d8/dc/d15/f30 [0,4194304] 0 2026-03-09T20:47:35.917 INFO:tasks.workunit.client.1.vm10.stdout:2/403: readlink d5/d2b/d32/d80/d47/l4a 0 2026-03-09T20:47:35.920 INFO:tasks.workunit.client.0.vm07.stdout:9/426: chown d4/d8/dc 1004246 1 
2026-03-09T20:47:35.926 INFO:tasks.workunit.client.1.vm10.stdout:8/422: mkdir d0/d54/d81 0 2026-03-09T20:47:35.934 INFO:tasks.workunit.client.0.vm07.stdout:5/535: dwrite d5/df/d13/d6c/f79 [0,4194304] 0 2026-03-09T20:47:35.938 INFO:tasks.workunit.client.0.vm07.stdout:5/536: fdatasync d5/df/d13/d6c/fb3 0 2026-03-09T20:47:35.946 INFO:tasks.workunit.client.1.vm10.stdout:4/344: creat d1/d8/d1c/d69/f6a x:0 0 0 2026-03-09T20:47:35.947 INFO:tasks.workunit.client.1.vm10.stdout:4/345: write d1/d2/d5c/d64/f51 [2336450,19794] 0 2026-03-09T20:47:35.947 INFO:tasks.workunit.client.0.vm07.stdout:1/497: write d3/f10 [3108219,26828] 0 2026-03-09T20:47:35.952 INFO:tasks.workunit.client.0.vm07.stdout:8/410: dread d1/f1d [0,4194304] 0 2026-03-09T20:47:35.954 INFO:tasks.workunit.client.1.vm10.stdout:0/363: creat d2/d9/d69/f7c x:0 0 0 2026-03-09T20:47:35.960 INFO:tasks.workunit.client.1.vm10.stdout:0/364: dread d2/d9/da/d35/d30/f56 [0,4194304] 0 2026-03-09T20:47:35.960 INFO:tasks.workunit.client.1.vm10.stdout:0/365: stat d2/d9/f61 0 2026-03-09T20:47:35.962 INFO:tasks.workunit.client.0.vm07.stdout:3/448: read d1/d5/d9/fe [306676,62835] 0 2026-03-09T20:47:35.971 INFO:tasks.workunit.client.1.vm10.stdout:5/371: rmdir d2/d27/d75/d81 39 2026-03-09T20:47:35.972 INFO:tasks.workunit.client.1.vm10.stdout:6/418: truncate d3/da/d11/d26/d5b/f55 5016732 0 2026-03-09T20:47:35.994 INFO:tasks.workunit.client.0.vm07.stdout:2/483: mkdir d2/db/d28/d87/d96 0 2026-03-09T20:47:35.996 INFO:tasks.workunit.client.0.vm07.stdout:6/464: write d8/d16/d22/d24/f7b [107454,100749] 0 2026-03-09T20:47:36.005 INFO:tasks.workunit.client.0.vm07.stdout:7/509: dread d3/da/f47 [0,4194304] 0 2026-03-09T20:47:36.011 INFO:tasks.workunit.client.0.vm07.stdout:7/510: dread d3/da/db/d32/f3d [0,4194304] 0 2026-03-09T20:47:36.011 INFO:tasks.workunit.client.1.vm10.stdout:9/435: write d2/f30 [206784,121417] 0 2026-03-09T20:47:36.015 INFO:tasks.workunit.client.1.vm10.stdout:9/436: dwrite d2/d3/fa [4194304,4194304] 0 2026-03-09T20:47:36.018 
INFO:tasks.workunit.client.0.vm07.stdout:0/528: creat d1/d1f/d9f/fa7 x:0 0 0 2026-03-09T20:47:36.024 INFO:tasks.workunit.client.1.vm10.stdout:2/404: truncate d5/fa 4040878 0 2026-03-09T20:47:36.024 INFO:tasks.workunit.client.1.vm10.stdout:8/423: creat d0/d22/d25/d6c/f82 x:0 0 0 2026-03-09T20:47:36.028 INFO:tasks.workunit.client.1.vm10.stdout:3/382: write dc/d14/d26/d29/d2a/f57 [705164,37096] 0 2026-03-09T20:47:36.031 INFO:tasks.workunit.client.1.vm10.stdout:4/346: mkdir d1/d2/d5c/d64/d6b 0 2026-03-09T20:47:36.031 INFO:tasks.workunit.client.0.vm07.stdout:4/394: symlink d2/d55/d5d/d3f/d4a/d4b/l67 0 2026-03-09T20:47:36.032 INFO:tasks.workunit.client.0.vm07.stdout:1/498: mknod d3/d23/d55/d56/d60/ca6 0 2026-03-09T20:47:36.034 INFO:tasks.workunit.client.0.vm07.stdout:8/411: creat d1/d5d/d6f/d2f/d4d/d63/f84 x:0 0 0 2026-03-09T20:47:36.036 INFO:tasks.workunit.client.0.vm07.stdout:2/484: mknod d2/db/d1c/c97 0 2026-03-09T20:47:36.040 INFO:tasks.workunit.client.1.vm10.stdout:7/374: mkdir db/d21/d60/d78 0 2026-03-09T20:47:36.041 INFO:tasks.workunit.client.1.vm10.stdout:7/375: dread - db/f69 zero size 2026-03-09T20:47:36.044 INFO:tasks.workunit.client.0.vm07.stdout:7/511: creat d3/da/db/d79/faf x:0 0 0 2026-03-09T20:47:36.049 INFO:tasks.workunit.client.1.vm10.stdout:0/366: write d2/d9/da/d11/f15 [858161,13960] 0 2026-03-09T20:47:36.050 INFO:tasks.workunit.client.1.vm10.stdout:1/395: truncate d2/da/f35 480326 0 2026-03-09T20:47:36.051 INFO:tasks.workunit.client.1.vm10.stdout:1/396: chown d2/f2a 13702214 1 2026-03-09T20:47:36.055 INFO:tasks.workunit.client.0.vm07.stdout:5/537: dwrite d5/df/d13/d6c/f77 [0,4194304] 0 2026-03-09T20:47:36.065 INFO:tasks.workunit.client.0.vm07.stdout:0/529: fdatasync d1/d2/f47 0 2026-03-09T20:47:36.065 INFO:tasks.workunit.client.0.vm07.stdout:0/530: stat d1/d2/c3e 0 2026-03-09T20:47:36.072 INFO:tasks.workunit.client.1.vm10.stdout:9/437: symlink d2/d3/d6d/la0 0 2026-03-09T20:47:36.073 INFO:tasks.workunit.client.1.vm10.stdout:2/405: creat d5/d2b/d32/f84 
x:0 0 0 2026-03-09T20:47:36.076 INFO:tasks.workunit.client.0.vm07.stdout:4/395: write d2/d55/f62 [378108,15888] 0 2026-03-09T20:47:36.076 INFO:tasks.workunit.client.0.vm07.stdout:1/499: write d3/d23/d55/d56/d60/f7c [252487,53670] 0 2026-03-09T20:47:36.077 INFO:tasks.workunit.client.0.vm07.stdout:7/512: dread d3/da/db/d32/d3e/dac/d1f/d2b/d52/f74 [0,4194304] 0 2026-03-09T20:47:36.077 INFO:tasks.workunit.client.0.vm07.stdout:8/412: dread - d1/dc/d16/f6e zero size 2026-03-09T20:47:36.077 INFO:tasks.workunit.client.1.vm10.stdout:8/424: chown d0/f21 50 1 2026-03-09T20:47:36.080 INFO:tasks.workunit.client.1.vm10.stdout:8/425: stat d0/d22/d25/d40 0 2026-03-09T20:47:36.082 INFO:tasks.workunit.client.0.vm07.stdout:7/513: dwrite d3/da/d83/fa9 [0,4194304] 0 2026-03-09T20:47:36.082 INFO:tasks.workunit.client.1.vm10.stdout:8/426: fsync d0/d22/d25/d6c/f82 0 2026-03-09T20:47:36.092 INFO:tasks.workunit.client.1.vm10.stdout:8/427: dread d0/f7e [0,4194304] 0 2026-03-09T20:47:36.097 INFO:tasks.workunit.client.1.vm10.stdout:3/383: fsync dc/d14/f1a 0 2026-03-09T20:47:36.099 INFO:tasks.workunit.client.0.vm07.stdout:2/485: creat d2/d11/d56/f98 x:0 0 0 2026-03-09T20:47:36.103 INFO:tasks.workunit.client.1.vm10.stdout:5/372: rename d2/f8 to d2/d39/d4b/d7a/f92 0 2026-03-09T20:47:36.107 INFO:tasks.workunit.client.1.vm10.stdout:5/373: dwrite d2/f23 [0,4194304] 0 2026-03-09T20:47:36.118 INFO:tasks.workunit.client.1.vm10.stdout:6/419: symlink d3/da/d11/d31/d47/l86 0 2026-03-09T20:47:36.118 INFO:tasks.workunit.client.1.vm10.stdout:7/376: symlink db/d21/d26/l79 0 2026-03-09T20:47:36.126 INFO:tasks.workunit.client.1.vm10.stdout:1/397: fsync d2/da/d25/f27 0 2026-03-09T20:47:36.126 INFO:tasks.workunit.client.0.vm07.stdout:0/531: unlink d1/d2/d98/c9e 0 2026-03-09T20:47:36.131 INFO:tasks.workunit.client.1.vm10.stdout:9/438: mknod d2/d28/d47/d50/ca1 0 2026-03-09T20:47:36.136 INFO:tasks.workunit.client.1.vm10.stdout:2/406: creat d5/d18/d27/d28/d41/d77/f85 x:0 0 0 2026-03-09T20:47:36.140 
INFO:tasks.workunit.client.0.vm07.stdout:9/427: truncate d4/d11/f8a 637088 0 2026-03-09T20:47:36.148 INFO:tasks.workunit.client.0.vm07.stdout:5/538: dread d5/df/f2b [4194304,4194304] 0 2026-03-09T20:47:36.154 INFO:tasks.workunit.client.1.vm10.stdout:4/347: write d1/d8/d1c/d2b/f36 [265211,33526] 0 2026-03-09T20:47:36.158 INFO:tasks.workunit.client.1.vm10.stdout:3/384: mkdir dc/d14/d26/d77 0 2026-03-09T20:47:36.159 INFO:tasks.workunit.client.1.vm10.stdout:8/428: write d0/d22/f35 [2744524,107049] 0 2026-03-09T20:47:36.165 INFO:tasks.workunit.client.1.vm10.stdout:6/420: mkdir d3/da/d11/d31/d47/d87 0 2026-03-09T20:47:36.165 INFO:tasks.workunit.client.1.vm10.stdout:6/421: chown d3/da/f42 31992 1 2026-03-09T20:47:36.170 INFO:tasks.workunit.client.1.vm10.stdout:1/398: mkdir d2/da/d25/d46/d51/d7e 0 2026-03-09T20:47:36.172 INFO:tasks.workunit.client.1.vm10.stdout:1/399: chown d2/f19 21499376 1 2026-03-09T20:47:36.173 INFO:tasks.workunit.client.1.vm10.stdout:2/407: read - d5/d18/f67 zero size 2026-03-09T20:47:36.175 INFO:tasks.workunit.client.1.vm10.stdout:2/408: dread - d5/d18/f83 zero size 2026-03-09T20:47:36.175 INFO:tasks.workunit.client.0.vm07.stdout:1/500: mkdir d3/d23/d52/da7 0 2026-03-09T20:47:36.176 INFO:tasks.workunit.client.0.vm07.stdout:1/501: chown d3/d14/c48 16 1 2026-03-09T20:47:36.180 INFO:tasks.workunit.client.1.vm10.stdout:4/348: symlink d1/d8/d1c/d2b/l6c 0 2026-03-09T20:47:36.182 INFO:tasks.workunit.client.1.vm10.stdout:3/385: creat dc/d14/d26/d37/f78 x:0 0 0 2026-03-09T20:47:36.188 INFO:tasks.workunit.client.1.vm10.stdout:8/429: symlink d0/d22/d2f/l83 0 2026-03-09T20:47:36.190 INFO:tasks.workunit.client.1.vm10.stdout:8/430: read d0/d22/d2c/f3f [612720,48059] 0 2026-03-09T20:47:36.191 INFO:tasks.workunit.client.0.vm07.stdout:6/465: creat d8/d16/d22/d33/f91 x:0 0 0 2026-03-09T20:47:36.192 INFO:tasks.workunit.client.0.vm07.stdout:2/486: dread - d2/db/d28/f58 zero size 2026-03-09T20:47:36.202 INFO:tasks.workunit.client.0.vm07.stdout:6/466: dwrite 
d8/d26/d7d/f8b [0,4194304] 0 2026-03-09T20:47:36.219 INFO:tasks.workunit.client.1.vm10.stdout:7/377: symlink db/d28/d4c/d6e/l7a 0 2026-03-09T20:47:36.223 INFO:tasks.workunit.client.0.vm07.stdout:0/532: rmdir d1/d82 39 2026-03-09T20:47:36.235 INFO:tasks.workunit.client.0.vm07.stdout:9/428: mkdir d4/d16/d29/d9c 0 2026-03-09T20:47:36.236 INFO:tasks.workunit.client.1.vm10.stdout:1/400: mkdir d2/da/d25/d46/d51/d5d/d6e/d7f 0 2026-03-09T20:47:36.244 INFO:tasks.workunit.client.0.vm07.stdout:1/502: creat d3/d23/fa8 x:0 0 0 2026-03-09T20:47:36.247 INFO:tasks.workunit.client.0.vm07.stdout:6/467: sync 2026-03-09T20:47:36.249 INFO:tasks.workunit.client.1.vm10.stdout:4/349: mknod d1/d8/d1b/c6d 0 2026-03-09T20:47:36.255 INFO:tasks.workunit.client.0.vm07.stdout:5/539: write d5/f25 [1765464,108400] 0 2026-03-09T20:47:36.263 INFO:tasks.workunit.client.0.vm07.stdout:3/449: getdents d1/d5/d9/d2f/d3d/d64/d59 0 2026-03-09T20:47:36.264 INFO:tasks.workunit.client.1.vm10.stdout:2/409: dwrite d5/d18/d1b/f26 [0,4194304] 0 2026-03-09T20:47:36.265 INFO:tasks.workunit.client.1.vm10.stdout:2/410: fdatasync f1 0 2026-03-09T20:47:36.268 INFO:tasks.workunit.client.0.vm07.stdout:2/487: truncate d2/db/d28/f32 3444125 0 2026-03-09T20:47:36.271 INFO:tasks.workunit.client.1.vm10.stdout:9/439: dwrite d2/d3/de/d8f/f9d [0,4194304] 0 2026-03-09T20:47:36.275 INFO:tasks.workunit.client.1.vm10.stdout:0/367: link d2/d9/da/d11/l75 d2/d4a/d79/d1a/d25/l7d 0 2026-03-09T20:47:36.278 INFO:tasks.workunit.client.0.vm07.stdout:0/533: symlink d1/d2/d4b/la8 0 2026-03-09T20:47:36.285 INFO:tasks.workunit.client.1.vm10.stdout:8/431: dwrite d0/d22/f27 [0,4194304] 0 2026-03-09T20:47:36.285 INFO:tasks.workunit.client.1.vm10.stdout:1/401: mkdir d2/da/d25/d46/d80 0 2026-03-09T20:47:36.286 INFO:tasks.workunit.client.1.vm10.stdout:1/402: fsync d2/da/d25/f6c 0 2026-03-09T20:47:36.291 INFO:tasks.workunit.client.1.vm10.stdout:4/350: creat d1/d8/d39/f6e x:0 0 0 2026-03-09T20:47:36.292 INFO:tasks.workunit.client.1.vm10.stdout:1/403: 
chown d2/da/d25/f27 13882752 1 2026-03-09T20:47:36.300 INFO:tasks.workunit.client.0.vm07.stdout:1/503: rename d3/fa to d3/d14/d54/d6e/fa9 0 2026-03-09T20:47:36.302 INFO:tasks.workunit.client.0.vm07.stdout:1/504: chown d3 26346366 1 2026-03-09T20:47:36.304 INFO:tasks.workunit.client.1.vm10.stdout:3/386: mknod dc/c79 0 2026-03-09T20:47:36.313 INFO:tasks.workunit.client.0.vm07.stdout:8/413: creat d1/f85 x:0 0 0 2026-03-09T20:47:36.315 INFO:tasks.workunit.client.0.vm07.stdout:7/514: creat d3/fb0 x:0 0 0 2026-03-09T20:47:36.319 INFO:tasks.workunit.client.0.vm07.stdout:5/540: symlink d5/df/d13/d30/d56/lc1 0 2026-03-09T20:47:36.323 INFO:tasks.workunit.client.0.vm07.stdout:3/450: fdatasync d1/d5/d9/f15 0 2026-03-09T20:47:36.324 INFO:tasks.workunit.client.1.vm10.stdout:5/374: link d2/d1b/f41 d2/d27/d37/d46/d5d/d77/f93 0 2026-03-09T20:47:36.336 INFO:tasks.workunit.client.1.vm10.stdout:6/422: dwrite d3/d30/d7f/f28 [0,4194304] 0 2026-03-09T20:47:36.343 INFO:tasks.workunit.client.0.vm07.stdout:4/396: getdents d2/d1f 0 2026-03-09T20:47:36.346 INFO:tasks.workunit.client.1.vm10.stdout:7/378: dwrite db/d21/d23/f1a [0,4194304] 0 2026-03-09T20:47:36.351 INFO:tasks.workunit.client.1.vm10.stdout:7/379: stat db/d28/d2b/d36/f3c 0 2026-03-09T20:47:36.351 INFO:tasks.workunit.client.1.vm10.stdout:7/380: dread - db/d28/f5d zero size 2026-03-09T20:47:36.354 INFO:tasks.workunit.client.0.vm07.stdout:9/429: write d4/d8/d19/d5f/f94 [3549722,47951] 0 2026-03-09T20:47:36.357 INFO:tasks.workunit.client.1.vm10.stdout:9/440: dwrite d2/d12/f62 [0,4194304] 0 2026-03-09T20:47:36.362 INFO:tasks.workunit.client.1.vm10.stdout:8/432: mknod d0/d54/c84 0 2026-03-09T20:47:36.381 INFO:tasks.workunit.client.1.vm10.stdout:1/404: creat d2/f81 x:0 0 0 2026-03-09T20:47:36.398 INFO:tasks.workunit.client.0.vm07.stdout:2/488: creat d2/db/d28/d90/f99 x:0 0 0 2026-03-09T20:47:36.399 INFO:tasks.workunit.client.1.vm10.stdout:0/368: dwrite d2/d9/d47/d71/f38 [0,4194304] 0 2026-03-09T20:47:36.405 
INFO:tasks.workunit.client.1.vm10.stdout:0/369: stat d2/d9/da/d11/l6c 0 2026-03-09T20:47:36.409 INFO:tasks.workunit.client.1.vm10.stdout:0/370: stat d2/d4a/d79/d1a/d25/d34/f77 0 2026-03-09T20:47:36.411 INFO:tasks.workunit.client.0.vm07.stdout:5/541: dread d5/d69/f82 [0,4194304] 0 2026-03-09T20:47:36.414 INFO:tasks.workunit.client.1.vm10.stdout:2/411: rename d5/c2f to d5/d2b/d32/d80/c86 0 2026-03-09T20:47:36.429 INFO:tasks.workunit.client.0.vm07.stdout:4/397: creat d2/d55/d5d/d3f/f68 x:0 0 0 2026-03-09T20:47:36.439 INFO:tasks.workunit.client.1.vm10.stdout:7/381: creat db/d28/d2b/d36/d3f/f7b x:0 0 0 2026-03-09T20:47:36.440 INFO:tasks.workunit.client.1.vm10.stdout:6/423: dwrite d3/d30/d7f/d4a/f4b [0,4194304] 0 2026-03-09T20:47:36.520 INFO:tasks.workunit.client.0.vm07.stdout:1/505: dwrite d3/d23/f2c [0,4194304] 0 2026-03-09T20:47:36.527 INFO:tasks.workunit.client.1.vm10.stdout:9/441: truncate d2/d12/d5a/f82 726270 0 2026-03-09T20:47:36.527 INFO:tasks.workunit.client.1.vm10.stdout:9/442: readlink d2/d28/d47/d50/l57 0 2026-03-09T20:47:36.528 INFO:tasks.workunit.client.0.vm07.stdout:8/414: mknod d1/d5d/d6f/d2f/d4d/c86 0 2026-03-09T20:47:36.529 INFO:tasks.workunit.client.0.vm07.stdout:8/415: chown d1/dc/c5e 1 1 2026-03-09T20:47:36.529 INFO:tasks.workunit.client.0.vm07.stdout:8/416: chown d1/dc/d16/f6e 596960 1 2026-03-09T20:47:36.530 INFO:tasks.workunit.client.1.vm10.stdout:4/351: symlink d1/l6f 0 2026-03-09T20:47:36.538 INFO:tasks.workunit.client.0.vm07.stdout:8/417: dread d1/dc/d16/d26/f2a [0,4194304] 0 2026-03-09T20:47:36.544 INFO:tasks.workunit.client.1.vm10.stdout:5/375: truncate d2/d27/d37/d46/d5d/d77/f93 1037129 0 2026-03-09T20:47:36.546 INFO:tasks.workunit.client.0.vm07.stdout:2/489: symlink d2/db/d28/d90/l9a 0 2026-03-09T20:47:36.546 INFO:tasks.workunit.client.0.vm07.stdout:2/490: fdatasync d2/d11/d56/f98 0 2026-03-09T20:47:36.549 INFO:tasks.workunit.client.1.vm10.stdout:0/371: creat d2/d9/da/d35/f7e x:0 0 0 2026-03-09T20:47:36.552 
INFO:tasks.workunit.client.1.vm10.stdout:5/376: dread d2/d27/d37/d46/d5d/d6d/f6e [0,4194304] 0 2026-03-09T20:47:36.554 INFO:tasks.workunit.client.0.vm07.stdout:5/542: dwrite d5/df/f34 [4194304,4194304] 0 2026-03-09T20:47:36.566 INFO:tasks.workunit.client.0.vm07.stdout:0/534: link d1/d2/dc/d17/l76 d1/d2/dc/d17/da6/la9 0 2026-03-09T20:47:36.567 INFO:tasks.workunit.client.1.vm10.stdout:2/412: symlink d5/d5b/l87 0 2026-03-09T20:47:36.581 INFO:tasks.workunit.client.1.vm10.stdout:6/424: creat d3/d30/d7f/d24/d39/f88 x:0 0 0 2026-03-09T20:47:36.583 INFO:tasks.workunit.client.1.vm10.stdout:9/443: fsync d2/d3/de/f34 0 2026-03-09T20:47:36.584 INFO:tasks.workunit.client.1.vm10.stdout:6/425: chown d3/d30/d7f/f28 901901 1 2026-03-09T20:47:36.584 INFO:tasks.workunit.client.1.vm10.stdout:4/352: fsync d1/d8/d1c/f3e 0 2026-03-09T20:47:36.589 INFO:tasks.workunit.client.0.vm07.stdout:2/491: stat d2/db/d28/d57/c6a 0 2026-03-09T20:47:36.590 INFO:tasks.workunit.client.1.vm10.stdout:5/377: dread d2/f71 [0,4194304] 0 2026-03-09T20:47:36.593 INFO:tasks.workunit.client.0.vm07.stdout:0/535: stat d1/d2/c2d 0 2026-03-09T20:47:36.596 INFO:tasks.workunit.client.0.vm07.stdout:1/506: symlink d3/d9c/laa 0 2026-03-09T20:47:36.598 INFO:tasks.workunit.client.0.vm07.stdout:9/430: creat d4/d11/f9d x:0 0 0 2026-03-09T20:47:36.604 INFO:tasks.workunit.client.1.vm10.stdout:7/382: link db/d28/d2b/d36/d40/f44 db/f7c 0 2026-03-09T20:47:36.609 INFO:tasks.workunit.client.1.vm10.stdout:4/353: dread d1/d2/f7 [4194304,4194304] 0 2026-03-09T20:47:36.610 INFO:tasks.workunit.client.0.vm07.stdout:5/543: symlink d5/d33/db2/lc2 0 2026-03-09T20:47:36.612 INFO:tasks.workunit.client.1.vm10.stdout:7/383: rmdir db/d28/d2b/d36/d3b 39 2026-03-09T20:47:36.614 INFO:tasks.workunit.client.1.vm10.stdout:4/354: mkdir d1/d2/d3/d70 0 2026-03-09T20:47:36.615 INFO:tasks.workunit.client.1.vm10.stdout:5/378: creat d2/d27/d37/d46/f94 x:0 0 0 2026-03-09T20:47:36.616 INFO:tasks.workunit.client.0.vm07.stdout:1/507: mkdir d3/d97/da1/dab 0 
2026-03-09T20:47:36.617 INFO:tasks.workunit.client.0.vm07.stdout:9/431: creat d4/d8/d19/d89/f9e x:0 0 0 2026-03-09T20:47:36.618 INFO:tasks.workunit.client.1.vm10.stdout:2/413: sync 2026-03-09T20:47:36.618 INFO:tasks.workunit.client.1.vm10.stdout:7/384: dwrite db/d1f/f62 [0,4194304] 0 2026-03-09T20:47:36.622 INFO:tasks.workunit.client.1.vm10.stdout:4/355: mkdir d1/d2/d5c/d64/d71 0 2026-03-09T20:47:36.622 INFO:tasks.workunit.client.1.vm10.stdout:5/379: mkdir d2/d27/d37/d46/d5d/d5f/d63/d95 0 2026-03-09T20:47:36.623 INFO:tasks.workunit.client.1.vm10.stdout:2/414: fsync d5/d18/d1b/d22/f6d 0 2026-03-09T20:47:36.623 INFO:tasks.workunit.client.1.vm10.stdout:7/385: write db/d28/d2b/d36/d3f/f6f [954558,63478] 0 2026-03-09T20:47:36.629 INFO:tasks.workunit.client.1.vm10.stdout:2/415: mknod d5/d5b/c88 0 2026-03-09T20:47:36.631 INFO:tasks.workunit.client.1.vm10.stdout:5/380: dwrite d2/d1b/f2f [0,4194304] 0 2026-03-09T20:47:36.670 INFO:tasks.workunit.client.1.vm10.stdout:5/381: mkdir d2/d27/d37/d46/d5d/d5f/d69/d96 0 2026-03-09T20:47:36.670 INFO:tasks.workunit.client.1.vm10.stdout:2/416: chown d5/d2b/d32/d80/c86 457 1 2026-03-09T20:47:36.683 INFO:tasks.workunit.client.1.vm10.stdout:2/417: mkdir d5/d18/d27/d89 0 2026-03-09T20:47:36.683 INFO:tasks.workunit.client.1.vm10.stdout:5/382: truncate d2/fb 2924838 0 2026-03-09T20:47:36.685 INFO:tasks.workunit.client.1.vm10.stdout:5/383: stat d2/l6 0 2026-03-09T20:47:36.686 INFO:tasks.workunit.client.0.vm07.stdout:7/515: write d3/da/db/f27 [1723764,54372] 0 2026-03-09T20:47:36.694 INFO:tasks.workunit.client.0.vm07.stdout:7/516: mkdir d3/da/db/d32/d3e/dac/d43/d62/db1 0 2026-03-09T20:47:36.694 INFO:tasks.workunit.client.0.vm07.stdout:7/517: dread - d3/d58/d82/fa3 zero size 2026-03-09T20:47:36.698 INFO:tasks.workunit.client.0.vm07.stdout:7/518: dwrite d3/d58/f9d [0,4194304] 0 2026-03-09T20:47:36.700 INFO:tasks.workunit.client.0.vm07.stdout:7/519: write d3/fb0 [545928,1730] 0 2026-03-09T20:47:36.700 
INFO:tasks.workunit.client.0.vm07.stdout:7/520: fsync d3/da/db/f27 0 2026-03-09T20:47:36.709 INFO:tasks.workunit.client.1.vm10.stdout:5/384: truncate d2/d1b/d54/d78/f47 4799151 0 2026-03-09T20:47:36.722 INFO:tasks.workunit.client.1.vm10.stdout:5/385: creat d2/d39/d4b/f97 x:0 0 0 2026-03-09T20:47:36.723 INFO:tasks.workunit.client.1.vm10.stdout:5/386: write d2/f35 [3866998,89631] 0 2026-03-09T20:47:36.731 INFO:tasks.workunit.client.1.vm10.stdout:9/444: mkdir d2/d28/da2 0 2026-03-09T20:47:36.737 INFO:tasks.workunit.client.0.vm07.stdout:7/521: dread d3/da/f3b [0,4194304] 0 2026-03-09T20:47:36.740 INFO:tasks.workunit.client.1.vm10.stdout:9/445: mkdir d2/d28/d47/d67/da3 0 2026-03-09T20:47:36.743 INFO:tasks.workunit.client.0.vm07.stdout:7/522: rename d3/da/c22 to d3/da/db/d79/cb2 0 2026-03-09T20:47:36.749 INFO:tasks.workunit.client.1.vm10.stdout:9/446: getdents d2/d3/de 0 2026-03-09T20:47:36.749 INFO:tasks.workunit.client.0.vm07.stdout:8/418: write d1/dc/d16/f4a [1477490,36644] 0 2026-03-09T20:47:36.750 INFO:tasks.workunit.client.0.vm07.stdout:8/419: fsync d1/dc/f42 0 2026-03-09T20:47:36.756 INFO:tasks.workunit.client.0.vm07.stdout:8/420: fsync d1/dc/d16/d26/f37 0 2026-03-09T20:47:36.759 INFO:tasks.workunit.client.0.vm07.stdout:8/421: fsync d1/d5d/d6f/d2f/d4d/d63/f77 0 2026-03-09T20:47:36.773 INFO:tasks.workunit.client.1.vm10.stdout:0/372: truncate d2/d9/da/d35/f68 3716107 0 2026-03-09T20:47:36.777 INFO:tasks.workunit.client.1.vm10.stdout:0/373: dwrite d2/d9/da/d35/d30/f51 [0,4194304] 0 2026-03-09T20:47:36.780 INFO:tasks.workunit.client.1.vm10.stdout:0/374: write d2/d9/da/f2f [9989653,70648] 0 2026-03-09T20:47:36.780 INFO:tasks.workunit.client.0.vm07.stdout:2/492: dwrite d2/db/d28/f58 [0,4194304] 0 2026-03-09T20:47:36.791 INFO:tasks.workunit.client.1.vm10.stdout:0/375: creat d2/d9/da/d35/d30/f7f x:0 0 0 2026-03-09T20:47:36.793 INFO:tasks.workunit.client.0.vm07.stdout:0/536: write d1/f57 [1098033,14102] 0 2026-03-09T20:47:36.794 
INFO:tasks.workunit.client.1.vm10.stdout:0/376: mkdir d2/d9/d69/d80 0 2026-03-09T20:47:36.795 INFO:tasks.workunit.client.0.vm07.stdout:0/537: symlink d1/d1f/d20/laa 0 2026-03-09T20:47:36.795 INFO:tasks.workunit.client.0.vm07.stdout:0/538: fdatasync d1/f3d 0 2026-03-09T20:47:36.795 INFO:tasks.workunit.client.0.vm07.stdout:0/539: readlink d1/d1f/d20/l89 0 2026-03-09T20:47:36.800 INFO:tasks.workunit.client.0.vm07.stdout:5/544: truncate d5/df/d13/d30/f64 842906 0 2026-03-09T20:47:36.802 INFO:tasks.workunit.client.1.vm10.stdout:4/356: write d1/f9 [2481236,13440] 0 2026-03-09T20:47:36.803 INFO:tasks.workunit.client.0.vm07.stdout:9/432: write d4/d11/d2a/f65 [30940,18630] 0 2026-03-09T20:47:36.803 INFO:tasks.workunit.client.0.vm07.stdout:1/508: write d3/d66/f8c [318935,73458] 0 2026-03-09T20:47:36.804 INFO:tasks.workunit.client.1.vm10.stdout:4/357: chown d1/d8/d39/f56 1978 1 2026-03-09T20:47:36.807 INFO:tasks.workunit.client.0.vm07.stdout:0/540: dread d1/d2/d33/d35/f64 [0,4194304] 0 2026-03-09T20:47:36.808 INFO:tasks.workunit.client.1.vm10.stdout:7/386: dwrite db/d1f/f2a [0,4194304] 0 2026-03-09T20:47:36.809 INFO:tasks.workunit.client.0.vm07.stdout:9/433: fsync d4/d8/dc/d15/f57 0 2026-03-09T20:47:36.811 INFO:tasks.workunit.client.1.vm10.stdout:1/405: mknod d2/da/d25/c82 0 2026-03-09T20:47:36.812 INFO:tasks.workunit.client.1.vm10.stdout:7/387: creat db/d28/d2b/d36/d3f/f7d x:0 0 0 2026-03-09T20:47:36.814 INFO:tasks.workunit.client.0.vm07.stdout:9/434: mknod d4/d11/d23/d32/c9f 0 2026-03-09T20:47:36.817 INFO:tasks.workunit.client.0.vm07.stdout:9/435: creat d4/fa0 x:0 0 0 2026-03-09T20:47:36.820 INFO:tasks.workunit.client.1.vm10.stdout:3/387: rename dc/d14/d26/l2b to dc/d14/d20/d21/d3b/l7a 0 2026-03-09T20:47:36.820 INFO:tasks.workunit.client.1.vm10.stdout:6/426: mkdir d3/da/d11/d89 0 2026-03-09T20:47:36.821 INFO:tasks.workunit.client.0.vm07.stdout:6/468: creat d8/d16/f92 x:0 0 0 2026-03-09T20:47:36.821 INFO:tasks.workunit.client.0.vm07.stdout:6/469: truncate d8/d26/d7d/f8b 
4265991 0 2026-03-09T20:47:36.823 INFO:tasks.workunit.client.1.vm10.stdout:8/433: rename d0/d54/d81 to d0/d22/d25/d2e/d41/d85 0 2026-03-09T20:47:36.824 INFO:tasks.workunit.client.1.vm10.stdout:8/434: chown d0/d22/d2f/d38/c39 85337178 1 2026-03-09T20:47:36.831 INFO:tasks.workunit.client.0.vm07.stdout:4/398: creat d2/f69 x:0 0 0 2026-03-09T20:47:36.831 INFO:tasks.workunit.client.1.vm10.stdout:8/435: dwrite d0/d22/d25/f3b [0,4194304] 0 2026-03-09T20:47:36.832 INFO:tasks.workunit.client.1.vm10.stdout:8/436: write d0/d22/d25/f3b [2442006,27173] 0 2026-03-09T20:47:36.832 INFO:tasks.workunit.client.0.vm07.stdout:0/541: getdents d1/d2/d98 0 2026-03-09T20:47:36.838 INFO:tasks.workunit.client.0.vm07.stdout:9/436: mknod d4/d11/ca1 0 2026-03-09T20:47:36.841 INFO:tasks.workunit.client.1.vm10.stdout:7/388: dread db/d21/d23/f1e [0,4194304] 0 2026-03-09T20:47:36.841 INFO:tasks.workunit.client.0.vm07.stdout:9/437: chown d4/d16/d29/d24/d37/d44/d62/d74 1 1 2026-03-09T20:47:36.841 INFO:tasks.workunit.client.0.vm07.stdout:3/451: symlink d1/l94 0 2026-03-09T20:47:36.841 INFO:tasks.workunit.client.0.vm07.stdout:3/452: fsync d1/d5/d9/d2f/d34/d46/f8a 0 2026-03-09T20:47:36.841 INFO:tasks.workunit.client.0.vm07.stdout:3/453: truncate d1/d5/d9/f3c 1492005 0 2026-03-09T20:47:36.843 INFO:tasks.workunit.client.1.vm10.stdout:9/447: rename d2/d3/d6d/l68 to d2/d3/de/d35/d44/la4 0 2026-03-09T20:47:36.847 INFO:tasks.workunit.client.0.vm07.stdout:4/399: rename d2/df/d17/f3e to d2/df/d17/f6a 0 2026-03-09T20:47:36.849 INFO:tasks.workunit.client.0.vm07.stdout:4/400: read d2/f43 [2522283,39413] 0 2026-03-09T20:47:36.851 INFO:tasks.workunit.client.0.vm07.stdout:0/542: mknod d1/d2/d33/cab 0 2026-03-09T20:47:36.851 INFO:tasks.workunit.client.1.vm10.stdout:2/418: dwrite d5/d18/d1b/d22/f6d [0,4194304] 0 2026-03-09T20:47:36.856 INFO:tasks.workunit.client.1.vm10.stdout:9/448: dwrite d2/f30 [4194304,4194304] 0 2026-03-09T20:47:36.865 INFO:tasks.workunit.client.0.vm07.stdout:8/422: write d1/dc/f4c [3125273,116251] 
0 2026-03-09T20:47:36.865 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:36 vm07.local ceph-mon[49120]: Updating vm07:/etc/ceph/ceph.conf 2026-03-09T20:47:36.865 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:36 vm07.local ceph-mon[49120]: Updating vm10:/etc/ceph/ceph.conf 2026-03-09T20:47:36.865 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:36 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:36.865 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:36 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:36.865 INFO:tasks.workunit.client.1.vm10.stdout:8/437: mkdir d0/d22/d25/d40/d86 0 2026-03-09T20:47:36.865 INFO:tasks.workunit.client.1.vm10.stdout:8/438: chown d0/d22/d25/d40/f5e 94 1 2026-03-09T20:47:36.866 INFO:tasks.workunit.client.1.vm10.stdout:4/358: sync 2026-03-09T20:47:36.866 INFO:tasks.workunit.client.1.vm10.stdout:4/359: write d1/d8/d39/f4b [836356,11120] 0 2026-03-09T20:47:36.866 INFO:tasks.workunit.client.0.vm07.stdout:5/545: sync 2026-03-09T20:47:36.876 INFO:tasks.workunit.client.0.vm07.stdout:3/454: rename d1/d5/d8e to d1/d5/d9/d2f/d3d/d64/d95 0 2026-03-09T20:47:36.876 INFO:tasks.workunit.client.0.vm07.stdout:3/455: chown d1/c6f 38133 1 2026-03-09T20:47:36.880 INFO:tasks.workunit.client.1.vm10.stdout:1/406: getdents d2/da/d25/d46/d51/d5d/d6e/d70 0 2026-03-09T20:47:36.886 INFO:tasks.workunit.client.1.vm10.stdout:9/449: sync 2026-03-09T20:47:36.891 INFO:tasks.workunit.client.1.vm10.stdout:6/427: rename d3/d30/f43 to d3/d79/f8a 0 2026-03-09T20:47:36.892 INFO:tasks.workunit.client.0.vm07.stdout:0/543: creat d1/d1f/d53/d72/fac x:0 0 0 2026-03-09T20:47:36.895 INFO:tasks.workunit.client.0.vm07.stdout:5/546: fsync d5/df/d13/f1f 0 2026-03-09T20:47:36.895 INFO:tasks.workunit.client.1.vm10.stdout:2/419: dread d5/d18/d1b/d22/f4f [0,4194304] 0 2026-03-09T20:47:36.898 INFO:tasks.workunit.client.0.vm07.stdout:5/547: dwrite d5/df/d13/d6c/f77 [0,4194304] 0 
2026-03-09T20:47:36.900 INFO:tasks.workunit.client.0.vm07.stdout:3/456: rmdir d1/d5 39 2026-03-09T20:47:36.904 INFO:tasks.workunit.client.1.vm10.stdout:1/407: rmdir d2/da 39 2026-03-09T20:47:36.914 INFO:tasks.workunit.client.0.vm07.stdout:7/523: symlink d3/da/db/d32/d3e/dac/d1f/d50/lb3 0 2026-03-09T20:47:36.917 INFO:tasks.workunit.client.0.vm07.stdout:8/423: mkdir d1/d5d/d6f/d2f/d53/d76/d87 0 2026-03-09T20:47:36.918 INFO:tasks.workunit.client.1.vm10.stdout:6/428: read - d3/da/f58 zero size 2026-03-09T20:47:36.920 INFO:tasks.workunit.client.1.vm10.stdout:6/429: read d3/d30/d7f/d36/d5c/f5f [4106896,34416] 0 2026-03-09T20:47:36.923 INFO:tasks.workunit.client.0.vm07.stdout:3/457: fdatasync d1/d5/d9/f1b 0 2026-03-09T20:47:36.925 INFO:tasks.workunit.client.0.vm07.stdout:4/401: creat d2/df/f6b x:0 0 0 2026-03-09T20:47:36.925 INFO:tasks.workunit.client.1.vm10.stdout:2/420: symlink d5/d2b/d32/d80/d47/l8a 0 2026-03-09T20:47:36.925 INFO:tasks.workunit.client.0.vm07.stdout:4/402: chown d2/d55/f62 37 1 2026-03-09T20:47:36.926 INFO:tasks.workunit.client.1.vm10.stdout:2/421: fdatasync d5/d18/d27/f29 0 2026-03-09T20:47:36.926 INFO:tasks.workunit.client.0.vm07.stdout:4/403: truncate d2/df/d59/f60 145511 0 2026-03-09T20:47:36.927 INFO:tasks.workunit.client.1.vm10.stdout:9/450: truncate d2/d3/de/d35/f78 619959 0 2026-03-09T20:47:36.927 INFO:tasks.workunit.client.1.vm10.stdout:2/422: readlink d5/d2b/l66 0 2026-03-09T20:47:36.933 INFO:tasks.workunit.client.1.vm10.stdout:8/439: creat d0/d22/d25/d2e/d41/d47/f87 x:0 0 0 2026-03-09T20:47:36.934 INFO:tasks.workunit.client.1.vm10.stdout:6/430: chown d3/da/d11/f17 126367 1 2026-03-09T20:47:36.934 INFO:tasks.workunit.client.1.vm10.stdout:6/431: readlink d3/da/d11/d26/d5b/l80 0 2026-03-09T20:47:36.935 INFO:tasks.workunit.client.1.vm10.stdout:6/432: chown d3/da/d11/d31/d4c/d60/c6f 238 1 2026-03-09T20:47:36.939 INFO:tasks.workunit.client.1.vm10.stdout:1/408: creat d2/da/d25/d46/d51/d5d/d6e/d70/f83 x:0 0 0 2026-03-09T20:47:36.941 
INFO:tasks.workunit.client.0.vm07.stdout:3/458: fdatasync d1/f36 0 2026-03-09T20:47:36.942 INFO:tasks.workunit.client.1.vm10.stdout:9/451: creat d2/d28/fa5 x:0 0 0 2026-03-09T20:47:36.942 INFO:tasks.workunit.client.0.vm07.stdout:3/459: dread - d1/d5/d9/d2f/d3d/d64/d43/f90 zero size 2026-03-09T20:47:36.943 INFO:tasks.workunit.client.0.vm07.stdout:3/460: write d1/d5/d9/d2f/d34/d46/f8a [51680,53032] 0 2026-03-09T20:47:36.945 INFO:tasks.workunit.client.1.vm10.stdout:9/452: dread d2/d3/de/d8f/f9d [0,4194304] 0 2026-03-09T20:47:36.948 INFO:tasks.workunit.client.0.vm07.stdout:5/548: read d5/df/d13/d30/d56/f84 [2484914,94101] 0 2026-03-09T20:47:36.952 INFO:tasks.workunit.client.0.vm07.stdout:5/549: dwrite d5/d33/d39/d8d/dab/f60 [0,4194304] 0 2026-03-09T20:47:36.956 INFO:tasks.workunit.client.0.vm07.stdout:2/493: write d2/db/d1c/f2e [62947,80302] 0 2026-03-09T20:47:36.962 INFO:tasks.workunit.client.0.vm07.stdout:1/509: write d3/d23/f39 [94914,107743] 0 2026-03-09T20:47:36.962 INFO:tasks.workunit.client.1.vm10.stdout:0/377: write d2/d9/da/d11/f1f [329327,98350] 0 2026-03-09T20:47:36.962 INFO:tasks.workunit.client.1.vm10.stdout:0/378: chown d2/d9/da/fd 11032 1 2026-03-09T20:47:36.962 INFO:tasks.workunit.client.1.vm10.stdout:0/379: write d2/d9/da/d35/d30/f7f [556323,54977] 0 2026-03-09T20:47:36.963 INFO:tasks.workunit.client.0.vm07.stdout:3/461: sync 2026-03-09T20:47:36.967 INFO:tasks.workunit.client.0.vm07.stdout:3/462: dwrite d1/d5/d9/d2f/d34/f4b [4194304,4194304] 0 2026-03-09T20:47:36.968 INFO:tasks.workunit.client.0.vm07.stdout:3/463: dread - d1/d5/d9/d2f/d3d/f75 zero size 2026-03-09T20:47:36.969 INFO:tasks.workunit.client.0.vm07.stdout:3/464: fdatasync d1/d5/f25 0 2026-03-09T20:47:36.969 INFO:tasks.workunit.client.1.vm10.stdout:3/388: write dc/d14/d26/d29/d40/f49 [3200406,7049] 0 2026-03-09T20:47:36.970 INFO:tasks.workunit.client.0.vm07.stdout:3/465: write d1/d5/d9/d11/d6d/d80/f93 [536240,23531] 0 2026-03-09T20:47:36.973 INFO:tasks.workunit.client.0.vm07.stdout:3/466: 
dread d1/d5/d9/d2f/d34/f40 [4194304,4194304] 0 2026-03-09T20:47:36.982 INFO:tasks.workunit.client.1.vm10.stdout:3/389: dread dc/d14/d26/d29/f30 [0,4194304] 0 2026-03-09T20:47:36.984 INFO:tasks.workunit.client.0.vm07.stdout:6/470: dwrite d8/f52 [0,4194304] 0 2026-03-09T20:47:36.986 INFO:tasks.workunit.client.0.vm07.stdout:8/424: truncate d1/d5d/d6f/d2f/d4d/f67 666580 0 2026-03-09T20:47:36.999 INFO:tasks.workunit.client.1.vm10.stdout:5/387: dwrite d2/f2c [0,4194304] 0 2026-03-09T20:47:37.010 INFO:tasks.workunit.client.0.vm07.stdout:9/438: dwrite d4/d8/d19/f42 [0,4194304] 0 2026-03-09T20:47:37.017 INFO:tasks.workunit.client.1.vm10.stdout:1/409: readlink d2/da/d25/d3e/d55/l72 0 2026-03-09T20:47:37.019 INFO:tasks.workunit.client.1.vm10.stdout:9/453: unlink d2/d3/d6d/d88/f8a 0 2026-03-09T20:47:37.027 INFO:tasks.workunit.client.1.vm10.stdout:4/360: dwrite d1/d8/d1b/f24 [0,4194304] 0 2026-03-09T20:47:37.028 INFO:tasks.workunit.client.0.vm07.stdout:0/544: write d1/d2/d4b/f61 [2074033,3719] 0 2026-03-09T20:47:37.029 INFO:tasks.workunit.client.1.vm10.stdout:9/454: dwrite d2/d3/f2e [0,4194304] 0 2026-03-09T20:47:37.029 INFO:tasks.workunit.client.0.vm07.stdout:0/545: write d1/f31 [1706221,49467] 0 2026-03-09T20:47:37.030 INFO:tasks.workunit.client.0.vm07.stdout:0/546: dread - d1/d1f/d9f/fa4 zero size 2026-03-09T20:47:37.035 INFO:tasks.workunit.client.1.vm10.stdout:7/389: truncate db/d21/d23/f29 7668943 0 2026-03-09T20:47:37.035 INFO:tasks.workunit.client.0.vm07.stdout:1/510: symlink d3/d23/d55/d56/d60/lac 0 2026-03-09T20:47:37.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:36 vm10.local ceph-mon[57011]: Updating vm07:/etc/ceph/ceph.conf 2026-03-09T20:47:37.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:36 vm10.local ceph-mon[57011]: Updating vm10:/etc/ceph/ceph.conf 2026-03-09T20:47:37.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:36 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:37.037 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:36 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:37.046 INFO:tasks.workunit.client.0.vm07.stdout:7/524: rename d3/d58/f9d to d3/da/fb4 0 2026-03-09T20:47:37.052 INFO:tasks.workunit.client.1.vm10.stdout:7/390: creat db/d21/d60/f7e x:0 0 0 2026-03-09T20:47:37.054 INFO:tasks.workunit.client.1.vm10.stdout:3/390: mknod dc/d14/d26/d29/c7b 0 2026-03-09T20:47:37.054 INFO:tasks.workunit.client.0.vm07.stdout:8/425: truncate d1/dc/d6a/f62 518616 0 2026-03-09T20:47:37.056 INFO:tasks.workunit.client.0.vm07.stdout:2/494: getdents d2/db/d49/d7d/d85 0 2026-03-09T20:47:37.057 INFO:tasks.workunit.client.1.vm10.stdout:5/388: fdatasync d2/d27/d37/d46/d5d/d5f/f61 0 2026-03-09T20:47:37.057 INFO:tasks.workunit.client.0.vm07.stdout:1/511: truncate d3/d14/d54/f62 143226 0 2026-03-09T20:47:37.058 INFO:tasks.workunit.client.1.vm10.stdout:5/389: dread - d2/d27/d37/d46/d5d/d5f/d84/f8b zero size 2026-03-09T20:47:37.059 INFO:tasks.workunit.client.1.vm10.stdout:9/455: fsync d2/d28/d47/d50/f59 0 2026-03-09T20:47:37.073 INFO:tasks.workunit.client.1.vm10.stdout:0/380: rename d2/d4a/d79/d1a/d25/d34/f46 to d2/d9/da/f81 0 2026-03-09T20:47:37.077 INFO:tasks.workunit.client.1.vm10.stdout:7/391: symlink db/d28/d30/l7f 0 2026-03-09T20:47:37.078 INFO:tasks.workunit.client.1.vm10.stdout:7/392: chown db/d46/f5a 996 1 2026-03-09T20:47:37.078 INFO:tasks.workunit.client.0.vm07.stdout:4/404: rename d2/d55/d5d/d3f/d4a/d4b/l67 to d2/d55/d5d/d3f/d4a/d4b/d52/d5c/l6c 0 2026-03-09T20:47:37.082 INFO:tasks.workunit.client.0.vm07.stdout:6/471: dread d8/d16/d22/d33/d85/f2f [0,4194304] 0 2026-03-09T20:47:37.086 INFO:tasks.workunit.client.1.vm10.stdout:6/433: getdents d3/d30/d7f/d36 0 2026-03-09T20:47:37.087 INFO:tasks.workunit.client.1.vm10.stdout:8/440: write d0/d22/d25/d2e/d41/f67 [490958,113781] 0 2026-03-09T20:47:37.087 INFO:tasks.workunit.client.1.vm10.stdout:2/423: write d5/d18/f67 [893171,104741] 0 2026-03-09T20:47:37.089 
INFO:tasks.workunit.client.1.vm10.stdout:5/390: rmdir d2/d27/d75 39 2026-03-09T20:47:37.091 INFO:tasks.workunit.client.1.vm10.stdout:9/456: rmdir d2/d3/de/d35 39 2026-03-09T20:47:37.092 INFO:tasks.workunit.client.0.vm07.stdout:1/512: dread - d3/d23/f58 zero size 2026-03-09T20:47:37.094 INFO:tasks.workunit.client.1.vm10.stdout:4/361: rename d1/d8/d1c/f1f to d1/d8/d1c/d2b/f72 0 2026-03-09T20:47:37.097 INFO:tasks.workunit.client.1.vm10.stdout:5/391: creat d2/d58/d6c/f98 x:0 0 0 2026-03-09T20:47:37.100 INFO:tasks.workunit.client.1.vm10.stdout:5/392: chown d2/f7 16088 1 2026-03-09T20:47:37.100 INFO:tasks.workunit.client.1.vm10.stdout:0/381: rename d2/d9/d47 to d2/d4a/d58/d82 0 2026-03-09T20:47:37.102 INFO:tasks.workunit.client.1.vm10.stdout:8/441: symlink d0/l88 0 2026-03-09T20:47:37.102 INFO:tasks.workunit.client.1.vm10.stdout:7/393: dread f5 [0,4194304] 0 2026-03-09T20:47:37.103 INFO:tasks.workunit.client.1.vm10.stdout:5/393: fsync d2/f7 0 2026-03-09T20:47:37.104 INFO:tasks.workunit.client.1.vm10.stdout:9/457: chown d2/d3/de/d35/d44/l87 2054774 1 2026-03-09T20:47:37.105 INFO:tasks.workunit.client.1.vm10.stdout:0/382: mknod d2/d4a/c83 0 2026-03-09T20:47:37.105 INFO:tasks.workunit.client.1.vm10.stdout:4/362: creat d1/d8/d1c/d2b/d4a/f73 x:0 0 0 2026-03-09T20:47:37.106 INFO:tasks.workunit.client.1.vm10.stdout:4/363: write d1/d8/d39/f56 [917003,126560] 0 2026-03-09T20:47:37.107 INFO:tasks.workunit.client.0.vm07.stdout:4/405: truncate d2/df/d17/f46 840638 0 2026-03-09T20:47:37.109 INFO:tasks.workunit.client.0.vm07.stdout:7/525: unlink d3/da/db/d32/d3e/d5c/l7c 0 2026-03-09T20:47:37.111 INFO:tasks.workunit.client.0.vm07.stdout:6/472: rename d8/f79 to d8/d26/d2a/d40/d69/d4f/f93 0 2026-03-09T20:47:37.116 INFO:tasks.workunit.client.1.vm10.stdout:9/458: read d2/d28/f32 [50720,1459] 0 2026-03-09T20:47:37.120 INFO:tasks.workunit.client.0.vm07.stdout:5/550: dwrite d5/df/f4a [0,4194304] 0 2026-03-09T20:47:37.138 INFO:tasks.workunit.client.1.vm10.stdout:7/394: dread db/d28/f4f 
[0,4194304] 0 2026-03-09T20:47:37.143 INFO:tasks.workunit.client.0.vm07.stdout:4/406: dread d2/d1f/f25 [0,4194304] 0 2026-03-09T20:47:37.155 INFO:tasks.workunit.client.0.vm07.stdout:9/439: truncate d4/d11/f2c 1615949 0 2026-03-09T20:47:37.162 INFO:tasks.workunit.client.0.vm07.stdout:0/547: write d1/d82/f8d [4001339,86626] 0 2026-03-09T20:47:37.163 INFO:tasks.workunit.client.1.vm10.stdout:1/410: dwrite d2/da/d25/d3e/d42/f57 [0,4194304] 0 2026-03-09T20:47:37.164 INFO:tasks.workunit.client.1.vm10.stdout:7/395: sync 2026-03-09T20:47:37.165 INFO:tasks.workunit.client.0.vm07.stdout:0/548: dwrite d1/d1f/d20/f2c [0,4194304] 0 2026-03-09T20:47:37.167 INFO:tasks.workunit.client.1.vm10.stdout:7/396: write db/d21/d26/f52 [1668715,25815] 0 2026-03-09T20:47:37.173 INFO:tasks.workunit.client.1.vm10.stdout:3/391: dwrite dc/d14/d20/d2e/d56/f68 [0,4194304] 0 2026-03-09T20:47:37.175 INFO:tasks.workunit.client.1.vm10.stdout:5/394: mkdir d2/d27/d37/d46/d99 0 2026-03-09T20:47:37.177 INFO:tasks.workunit.client.1.vm10.stdout:3/392: chown dc/d14/d26/d29/d40/f49 3 1 2026-03-09T20:47:37.182 INFO:tasks.workunit.client.1.vm10.stdout:2/424: write d5/d2b/d32/d80/d47/f65 [89940,110502] 0 2026-03-09T20:47:37.190 INFO:tasks.workunit.client.0.vm07.stdout:3/467: getdents d1/d5/d9/d2f/d3d/d64 0 2026-03-09T20:47:37.205 INFO:tasks.workunit.client.1.vm10.stdout:6/434: dwrite d3/da/fd [4194304,4194304] 0 2026-03-09T20:47:37.206 INFO:tasks.workunit.client.0.vm07.stdout:6/473: fdatasync d8/d16/d61/f68 0 2026-03-09T20:47:37.207 INFO:tasks.workunit.client.1.vm10.stdout:6/435: write d3/d30/d7f/d4a/f4b [1759832,95380] 0 2026-03-09T20:47:37.208 INFO:tasks.workunit.client.1.vm10.stdout:6/436: stat d3/d30/d7f/d36/f4f 0 2026-03-09T20:47:37.218 INFO:tasks.workunit.client.0.vm07.stdout:8/426: link d1/la d1/d5d/d6f/d80/l88 0 2026-03-09T20:47:37.223 INFO:tasks.workunit.client.1.vm10.stdout:8/442: mkdir d0/d22/d25/d89 0 2026-03-09T20:47:37.228 INFO:tasks.workunit.client.1.vm10.stdout:0/383: fsync d2/d9/da/d35/f68 0 
2026-03-09T20:47:37.230 INFO:tasks.workunit.client.1.vm10.stdout:3/393: dread dc/d14/d26/d29/d40/f49 [0,4194304] 0 2026-03-09T20:47:37.230 INFO:tasks.workunit.client.1.vm10.stdout:3/394: chown dc/d14/d26/c63 4490 1 2026-03-09T20:47:37.238 INFO:tasks.workunit.client.0.vm07.stdout:2/495: write d2/db/f48 [2088910,68114] 0 2026-03-09T20:47:37.243 INFO:tasks.workunit.client.0.vm07.stdout:0/549: rmdir d1/d1f/d30 39 2026-03-09T20:47:37.244 INFO:tasks.workunit.client.0.vm07.stdout:0/550: write d1/d1f/d53/d72/f9b [565752,62087] 0 2026-03-09T20:47:37.250 INFO:tasks.workunit.client.1.vm10.stdout:2/425: fdatasync d5/d18/f2c 0 2026-03-09T20:47:37.250 INFO:tasks.workunit.client.1.vm10.stdout:2/426: write d5/d18/d27/f29 [1201199,64275] 0 2026-03-09T20:47:37.253 INFO:tasks.workunit.client.0.vm07.stdout:6/474: rename d8/d26/d2a/d40/c57 to d8/d5d/c94 0 2026-03-09T20:47:37.267 INFO:tasks.workunit.client.0.vm07.stdout:8/427: creat d1/d5d/d6f/d2f/d53/f89 x:0 0 0 2026-03-09T20:47:37.269 INFO:tasks.workunit.client.0.vm07.stdout:8/428: fsync d1/d5d/d6f/f61 0 2026-03-09T20:47:37.270 INFO:tasks.workunit.client.0.vm07.stdout:8/429: truncate d1/f85 400394 0 2026-03-09T20:47:37.279 INFO:tasks.workunit.client.0.vm07.stdout:1/513: write d3/f68 [1555411,58743] 0 2026-03-09T20:47:37.280 INFO:tasks.workunit.client.0.vm07.stdout:1/514: stat d3/d23/d55/d56/l9d 0 2026-03-09T20:47:37.287 INFO:tasks.workunit.client.1.vm10.stdout:4/364: dwrite d1/d2/d5c/f53 [0,4194304] 0 2026-03-09T20:47:37.289 INFO:tasks.workunit.client.0.vm07.stdout:7/526: dwrite d3/da/db/d32/d3e/dac/d1f/d2b/d52/f66 [0,4194304] 0 2026-03-09T20:47:37.308 INFO:tasks.workunit.client.1.vm10.stdout:7/397: write db/f7c [3654462,27652] 0 2026-03-09T20:47:37.309 INFO:tasks.workunit.client.0.vm07.stdout:4/407: write d2/d55/d5d/f47 [878710,9427] 0 2026-03-09T20:47:37.314 INFO:tasks.workunit.client.0.vm07.stdout:9/440: write d4/d11/f4f [1346672,60081] 0 2026-03-09T20:47:37.316 INFO:tasks.workunit.client.0.vm07.stdout:5/551: dwrite d5/d33/d39/f6e 
[0,4194304] 0 2026-03-09T20:47:37.324 INFO:tasks.workunit.client.1.vm10.stdout:3/395: mknod dc/d14/d22/c7c 0 2026-03-09T20:47:37.327 INFO:tasks.workunit.client.0.vm07.stdout:3/468: dwrite d1/d5/d9/d2f/d34/f40 [0,4194304] 0 2026-03-09T20:47:37.329 INFO:tasks.workunit.client.1.vm10.stdout:7/398: sync 2026-03-09T20:47:37.339 INFO:tasks.workunit.client.0.vm07.stdout:8/430: symlink d1/d5d/d6f/d2f/d53/l8a 0 2026-03-09T20:47:37.341 INFO:tasks.workunit.client.1.vm10.stdout:2/427: mknod d5/d18/d1b/d22/c8b 0 2026-03-09T20:47:37.347 INFO:tasks.workunit.client.0.vm07.stdout:3/469: dread d1/d5/d9/d11/d1f/f5e [0,4194304] 0 2026-03-09T20:47:37.349 INFO:tasks.workunit.client.0.vm07.stdout:1/515: rename d3/d14/c1f to d3/d14/d54/cad 0 2026-03-09T20:47:37.356 INFO:tasks.workunit.client.0.vm07.stdout:7/527: rmdir d3/da/db/d32/d3e/dac 39 2026-03-09T20:47:37.363 INFO:tasks.workunit.client.0.vm07.stdout:5/552: creat d5/d33/d39/fc3 x:0 0 0 2026-03-09T20:47:37.363 INFO:tasks.workunit.client.0.vm07.stdout:5/553: chown d5/df/d13/c18 62 1 2026-03-09T20:47:37.367 INFO:tasks.workunit.client.0.vm07.stdout:4/408: dread d2/df/d17/f46 [0,4194304] 0 2026-03-09T20:47:37.373 INFO:tasks.workunit.client.1.vm10.stdout:7/399: unlink db/f45 0 2026-03-09T20:47:37.374 INFO:tasks.workunit.client.1.vm10.stdout:7/400: write db/d28/d30/f73 [300429,114407] 0 2026-03-09T20:47:37.378 INFO:tasks.workunit.client.0.vm07.stdout:0/551: write d1/f11 [2201705,81023] 0 2026-03-09T20:47:37.380 INFO:tasks.workunit.client.1.vm10.stdout:1/411: dwrite d2/da/d25/d3e/f58 [0,4194304] 0 2026-03-09T20:47:37.384 INFO:tasks.workunit.client.1.vm10.stdout:8/443: dwrite d0/d54/f65 [0,4194304] 0 2026-03-09T20:47:37.408 INFO:tasks.workunit.client.1.vm10.stdout:9/459: getdents d2/d28 0 2026-03-09T20:47:37.433 INFO:tasks.workunit.client.1.vm10.stdout:6/437: creat d3/da/d11/f8b x:0 0 0 2026-03-09T20:47:37.461 INFO:tasks.workunit.client.1.vm10.stdout:0/384: link d2/d9/f20 d2/d9/da/d35/f84 0 2026-03-09T20:47:37.477 
INFO:tasks.workunit.client.1.vm10.stdout:3/396: creat dc/d14/d26/d29/d40/d48/d69/d75/f7d x:0 0 0 2026-03-09T20:47:37.483 INFO:tasks.workunit.client.0.vm07.stdout:8/431: creat d1/d5d/d6f/d2f/f8b x:0 0 0 2026-03-09T20:47:37.493 INFO:tasks.workunit.client.0.vm07.stdout:3/470: rename d1/d5/d9/d2f/d3d/c77 to d1/d5/d9/d2f/d3d/d64/d43/d54/c96 0 2026-03-09T20:47:37.510 INFO:tasks.workunit.client.0.vm07.stdout:1/516: creat d3/d9c/fae x:0 0 0 2026-03-09T20:47:37.519 INFO:tasks.workunit.client.0.vm07.stdout:7/528: chown d3/da/db/d32/d3e/dac/d1f/f37 59531 1 2026-03-09T20:47:37.533 INFO:tasks.workunit.client.1.vm10.stdout:8/444: rmdir d0/d22/d25 39 2026-03-09T20:47:37.543 INFO:tasks.workunit.client.0.vm07.stdout:9/441: creat d4/d16/d29/d24/d37/d44/d62/d74/fa2 x:0 0 0 2026-03-09T20:47:37.548 INFO:tasks.workunit.client.1.vm10.stdout:5/395: rename d2/d39/d4b/f8d to d2/d27/d75/f9a 0 2026-03-09T20:47:37.559 INFO:tasks.workunit.client.0.vm07.stdout:2/496: creat d2/db/d49/f9b x:0 0 0 2026-03-09T20:47:37.559 INFO:tasks.workunit.client.0.vm07.stdout:4/409: dwrite d2/f2b [0,4194304] 0 2026-03-09T20:47:37.562 INFO:tasks.workunit.client.0.vm07.stdout:5/554: dwrite d5/d50/f61 [0,4194304] 0 2026-03-09T20:47:37.562 INFO:tasks.workunit.client.0.vm07.stdout:4/410: fdatasync d2/f69 0 2026-03-09T20:47:37.569 INFO:tasks.workunit.client.0.vm07.stdout:4/411: fsync d2/d1f/f26 0 2026-03-09T20:47:37.583 INFO:tasks.workunit.client.0.vm07.stdout:6/475: rmdir d8/d50/d5e 0 2026-03-09T20:47:37.584 INFO:tasks.workunit.client.0.vm07.stdout:7/529: mknod d3/da/db/d32/d3e/dac/d43/cb5 0 2026-03-09T20:47:37.589 INFO:tasks.workunit.client.0.vm07.stdout:3/471: dwrite d1/d5/d9/d11/d60/f89 [0,4194304] 0 2026-03-09T20:47:37.590 INFO:tasks.workunit.client.0.vm07.stdout:2/497: dread d2/d11/d56/f5a [4194304,4194304] 0 2026-03-09T20:47:37.590 INFO:tasks.workunit.client.0.vm07.stdout:7/530: read d3/da/db/d32/d3e/dac/d1f/d2b/d52/f66 [3161010,92209] 0 2026-03-09T20:47:37.595 INFO:tasks.workunit.client.0.vm07.stdout:5/555: 
dread d5/d33/d3b/f63 [0,4194304] 0 2026-03-09T20:47:37.598 INFO:tasks.workunit.client.0.vm07.stdout:8/432: dread d1/f13 [0,4194304] 0 2026-03-09T20:47:37.599 INFO:tasks.workunit.client.0.vm07.stdout:8/433: fsync d1/d5d/d6f/d2f/d53/f5f 0 2026-03-09T20:47:37.607 INFO:tasks.workunit.client.1.vm10.stdout:6/438: chown d3/d30/d7f/d36/d5c/f78 12935884 1 2026-03-09T20:47:37.607 INFO:tasks.workunit.client.1.vm10.stdout:3/397: unlink dc/d14/d26/d29/l6a 0 2026-03-09T20:47:37.608 INFO:tasks.workunit.client.1.vm10.stdout:7/401: fsync db/d28/d2b/d36/d3b/f3d 0 2026-03-09T20:47:37.609 INFO:tasks.workunit.client.1.vm10.stdout:1/412: symlink d2/da/d25/d3e/l84 0 2026-03-09T20:47:37.611 INFO:tasks.workunit.client.1.vm10.stdout:4/365: rename d1/d8/d1c/l27 to d1/d2/d5c/l74 0 2026-03-09T20:47:37.612 INFO:tasks.workunit.client.1.vm10.stdout:6/439: dwrite d3/f5e [0,4194304] 0 2026-03-09T20:47:37.612 INFO:tasks.workunit.client.1.vm10.stdout:1/413: write d2/da/f50 [1575615,111228] 0 2026-03-09T20:47:37.617 INFO:tasks.workunit.client.1.vm10.stdout:6/440: dwrite d3/da/d11/d31/d4c/d60/f63 [0,4194304] 0 2026-03-09T20:47:37.624 INFO:tasks.workunit.client.0.vm07.stdout:4/412: creat d2/df/d17/f6d x:0 0 0 2026-03-09T20:47:37.633 INFO:tasks.workunit.client.1.vm10.stdout:0/385: mknod d2/d9/da/c85 0 2026-03-09T20:47:37.633 INFO:tasks.workunit.client.1.vm10.stdout:3/398: symlink dc/d14/d26/d29/d40/d48/d69/l7e 0 2026-03-09T20:47:37.633 INFO:tasks.workunit.client.1.vm10.stdout:0/386: chown d2/d9/da/d35/d30/l6a 13 1 2026-03-09T20:47:37.634 INFO:tasks.workunit.client.0.vm07.stdout:7/531: chown d3/da/d83/d96/cab 1 1 2026-03-09T20:47:37.637 INFO:tasks.workunit.client.1.vm10.stdout:5/396: mknod d2/c9b 0 2026-03-09T20:47:37.644 INFO:tasks.workunit.client.0.vm07.stdout:5/556: truncate d5/df/d13/d4f/f9b 4342098 0 2026-03-09T20:47:37.650 INFO:tasks.workunit.client.0.vm07.stdout:3/472: dwrite d1/d5/d9/d11/d1f/f5e [0,4194304] 0 2026-03-09T20:47:37.650 INFO:tasks.workunit.client.0.vm07.stdout:6/476: dwrite 
d8/d16/d22/d33/f66 [0,4194304] 0 2026-03-09T20:47:37.666 INFO:tasks.workunit.client.0.vm07.stdout:8/434: sync 2026-03-09T20:47:37.667 INFO:tasks.workunit.client.0.vm07.stdout:8/435: fsync d1/d5d/d6f/f64 0 2026-03-09T20:47:37.689 INFO:tasks.workunit.client.1.vm10.stdout:6/441: mkdir d3/d30/d7f/d36/d6d/d8c 0 2026-03-09T20:47:37.690 INFO:tasks.workunit.client.0.vm07.stdout:4/413: unlink d2/df/d17/c61 0 2026-03-09T20:47:37.698 INFO:tasks.workunit.client.0.vm07.stdout:7/532: creat d3/da/d83/fb6 x:0 0 0 2026-03-09T20:47:37.809 INFO:tasks.workunit.client.0.vm07.stdout:2/498: dread d2/db/d1c/f2e [0,4194304] 0 2026-03-09T20:47:37.810 INFO:tasks.workunit.client.0.vm07.stdout:6/477: truncate d8/d26/d2a/d40/f65 3489995 0 2026-03-09T20:47:37.810 INFO:tasks.workunit.client.0.vm07.stdout:2/499: stat d2/d11/f60 0 2026-03-09T20:47:37.814 INFO:tasks.workunit.client.0.vm07.stdout:8/436: symlink d1/dc/d16/d26/l8c 0 2026-03-09T20:47:37.816 INFO:tasks.workunit.client.1.vm10.stdout:0/387: dread d2/d9/da/fd [0,4194304] 0 2026-03-09T20:47:37.821 INFO:tasks.workunit.client.0.vm07.stdout:8/437: sync 2026-03-09T20:47:37.831 INFO:tasks.workunit.client.0.vm07.stdout:5/557: read d5/d19/f4d [250954,35369] 0 2026-03-09T20:47:37.848 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:47:37.848 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:47:37.848 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local 
ceph-mon[49120]: pgmap v7: 65 pgs: 65 active+clean; 1.9 GiB data, 7.2 GiB used, 113 GiB / 120 GiB avail; 24 MiB/s rd, 69 MiB/s wr, 163 op/s 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: Standby manager daemon vm07.xjrvch started 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.107:0/2398079272' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.xjrvch/crt"}]: dispatch 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.107:0/2398079272' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: from='mgr.? 
192.168.123.107:0/2398079272' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.xjrvch/key"}]: dispatch 2026-03-09T20:47:37.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:37 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.107:0/2398079272' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:47:37.858 INFO:tasks.workunit.client.1.vm10.stdout:2/428: truncate d5/f1d 1147713 0 2026-03-09T20:47:37.860 INFO:tasks.workunit.client.0.vm07.stdout:0/552: link d1/d1f/d30/la3 d1/d2/d33/d35/lad 0 2026-03-09T20:47:37.866 INFO:tasks.workunit.client.0.vm07.stdout:1/517: write d3/fc [2291051,46794] 0 2026-03-09T20:47:37.867 INFO:tasks.workunit.client.0.vm07.stdout:1/518: fdatasync d3/f10 0 2026-03-09T20:47:37.870 INFO:tasks.workunit.client.1.vm10.stdout:7/402: write db/d21/d23/f22 [2606149,17282] 0 2026-03-09T20:47:37.871 INFO:tasks.workunit.client.0.vm07.stdout:9/442: truncate d4/d8/d19/d5f/f94 3552091 0 2026-03-09T20:47:37.896 INFO:tasks.workunit.client.1.vm10.stdout:4/366: write d1/d2/f43 [3509970,19221] 0 2026-03-09T20:47:37.901 INFO:tasks.workunit.client.1.vm10.stdout:5/397: write d2/d27/f2d [2741592,82553] 0 2026-03-09T20:47:37.903 INFO:tasks.workunit.client.0.vm07.stdout:6/478: creat d8/d16/d4b/f95 x:0 0 0 2026-03-09T20:47:38.002 INFO:tasks.workunit.client.0.vm07.stdout:7/533: mkdir d3/da/d53/db7 0 2026-03-09T20:47:38.022 INFO:tasks.workunit.client.0.vm07.stdout:1/519: rmdir d3/d23/d55/d56/d60 39 2026-03-09T20:47:38.024 INFO:tasks.workunit.client.0.vm07.stdout:2/500: write d2/db/d1c/f3a [4475834,51337] 0 2026-03-09T20:47:38.030 INFO:tasks.workunit.client.0.vm07.stdout:8/438: write d1/d5d/d6f/d2f/d53/d76/f7f [637272,28935] 0 2026-03-09T20:47:38.033 INFO:tasks.workunit.client.0.vm07.stdout:5/558: write d5/d19/f20 [4719600,45569] 0 2026-03-09T20:47:38.034 INFO:tasks.workunit.client.0.vm07.stdout:5/559: read - d5/d33/d3b/fb4 zero size 2026-03-09T20:47:38.037 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: pgmap v7: 65 pgs: 65 active+clean; 1.9 GiB data, 7.2 GiB used, 113 GiB / 120 GiB avail; 24 MiB/s rd, 69 MiB/s wr, 163 op/s 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: Standby manager daemon vm07.xjrvch started 2026-03-09T20:47:38.037 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.107:0/2398079272' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.xjrvch/crt"}]: dispatch 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.107:0/2398079272' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.107:0/2398079272' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.xjrvch/key"}]: dispatch 2026-03-09T20:47:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:37 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.107:0/2398079272' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:47:38.048 INFO:tasks.workunit.client.0.vm07.stdout:9/443: creat d4/d8/dc/d15/fa3 x:0 0 0 2026-03-09T20:47:38.051 INFO:tasks.workunit.client.1.vm10.stdout:9/460: rename d2/d28/d47/d50/d8d to d2/da6 0 2026-03-09T20:47:38.056 INFO:tasks.workunit.client.1.vm10.stdout:0/388: creat d2/d9/da/d11/f86 x:0 0 0 2026-03-09T20:47:38.057 INFO:tasks.workunit.client.0.vm07.stdout:6/479: write d8/d16/d22/d33/d85/f83 [2588049,107262] 0 2026-03-09T20:47:38.057 INFO:tasks.workunit.client.0.vm07.stdout:9/444: sync 2026-03-09T20:47:38.057 INFO:tasks.workunit.client.1.vm10.stdout:0/389: dread - d2/d9/d69/f7c zero size 2026-03-09T20:47:38.057 INFO:tasks.workunit.client.1.vm10.stdout:7/403: truncate db/d28/d2b/d36/f3c 2136167 0 2026-03-09T20:47:38.058 INFO:tasks.workunit.client.1.vm10.stdout:4/367: creat d1/d8/d1b/d57/f75 x:0 0 0 2026-03-09T20:47:38.061 INFO:tasks.workunit.client.0.vm07.stdout:6/480: dread d8/d16/d22/d33/d85/f2f [0,4194304] 0 2026-03-09T20:47:38.064 
INFO:tasks.workunit.client.0.vm07.stdout:0/553: creat d1/d2/dc/d17/da6/fae x:0 0 0 2026-03-09T20:47:38.068 INFO:tasks.workunit.client.1.vm10.stdout:9/461: dwrite d2/d28/f63 [0,4194304] 0 2026-03-09T20:47:38.085 INFO:tasks.workunit.client.0.vm07.stdout:5/560: stat d5/df/d13/d4f/f9b 0 2026-03-09T20:47:38.085 INFO:tasks.workunit.client.0.vm07.stdout:5/561: fdatasync d5/f25 0 2026-03-09T20:47:38.086 INFO:tasks.workunit.client.1.vm10.stdout:8/445: rename d0/lb to d0/d22/d2f/d38/d64/l8a 0 2026-03-09T20:47:38.086 INFO:tasks.workunit.client.1.vm10.stdout:5/398: creat d2/d27/d37/d46/d99/f9c x:0 0 0 2026-03-09T20:47:38.086 INFO:tasks.workunit.client.0.vm07.stdout:3/473: link d1/l2c d1/d5/d9/d2f/d34/l97 0 2026-03-09T20:47:38.086 INFO:tasks.workunit.client.1.vm10.stdout:2/429: creat d5/d18/d27/f8c x:0 0 0 2026-03-09T20:47:38.087 INFO:tasks.workunit.client.0.vm07.stdout:4/414: getdents d2/d55/d5d/d3f/d4a/d4b 0 2026-03-09T20:47:38.087 INFO:tasks.workunit.client.0.vm07.stdout:4/415: fsync d2/d55/f62 0 2026-03-09T20:47:38.100 INFO:tasks.workunit.client.0.vm07.stdout:9/445: rename d4/fa0 to d4/d8/d19/d5f/d73/fa4 0 2026-03-09T20:47:38.100 INFO:tasks.workunit.client.1.vm10.stdout:7/404: fsync db/d28/d4c/f65 0 2026-03-09T20:47:38.101 INFO:tasks.workunit.client.0.vm07.stdout:9/446: stat d4/d8/dc/f68 0 2026-03-09T20:47:38.101 INFO:tasks.workunit.client.0.vm07.stdout:6/481: write d8/d26/d2a/d40/d69/d4f/f93 [3639286,124291] 0 2026-03-09T20:47:38.104 INFO:tasks.workunit.client.1.vm10.stdout:7/405: dwrite db/d54/f71 [0,4194304] 0 2026-03-09T20:47:38.124 INFO:tasks.workunit.client.1.vm10.stdout:9/462: mkdir d2/d12/d5a/da7 0 2026-03-09T20:47:38.130 INFO:tasks.workunit.client.0.vm07.stdout:1/520: mknod d3/d66/d86/caf 0 2026-03-09T20:47:38.139 INFO:tasks.workunit.client.1.vm10.stdout:4/368: symlink d1/l76 0 2026-03-09T20:47:38.141 INFO:tasks.workunit.client.0.vm07.stdout:0/554: mkdir d1/d2/d98/daf 0 2026-03-09T20:47:38.141 INFO:tasks.workunit.client.0.vm07.stdout:0/555: dread - d1/d2/d98/fa5 
zero size 2026-03-09T20:47:38.146 INFO:tasks.workunit.client.1.vm10.stdout:1/414: rename d2/l3a to d2/da/d25/d46/d51/l85 0 2026-03-09T20:47:38.149 INFO:tasks.workunit.client.1.vm10.stdout:5/399: rmdir d2/d27/d37/d46/d5d/d6d 39 2026-03-09T20:47:38.150 INFO:tasks.workunit.client.1.vm10.stdout:2/430: truncate d5/d2b/f36 2368093 0 2026-03-09T20:47:38.152 INFO:tasks.workunit.client.1.vm10.stdout:0/390: link d2/d9/da/d48/c6f d2/c87 0 2026-03-09T20:47:38.171 INFO:tasks.workunit.client.1.vm10.stdout:4/369: creat d1/d8/f77 x:0 0 0 2026-03-09T20:47:38.172 INFO:tasks.workunit.client.1.vm10.stdout:4/370: read d1/d8/f29 [52486,37033] 0 2026-03-09T20:47:38.172 INFO:tasks.workunit.client.1.vm10.stdout:4/371: write d1/f9 [190425,17677] 0 2026-03-09T20:47:38.183 INFO:tasks.workunit.client.1.vm10.stdout:8/446: mkdir d0/d22/d25/d2e/d41/d85/d8b 0 2026-03-09T20:47:38.200 INFO:tasks.workunit.client.1.vm10.stdout:3/399: rename dc/d14/d26/d29/d40/d48 to dc/d14/d22/d7f 0 2026-03-09T20:47:38.200 INFO:tasks.workunit.client.1.vm10.stdout:3/400: chown dc/d14/d26/d29/d2a/d55 160 1 2026-03-09T20:47:38.205 INFO:tasks.workunit.client.1.vm10.stdout:1/415: write d2/da/d25/f48 [9113614,25441] 0 2026-03-09T20:47:38.211 INFO:tasks.workunit.client.1.vm10.stdout:2/431: mkdir d5/d2b/d32/d80/d8d 0 2026-03-09T20:47:38.212 INFO:tasks.workunit.client.1.vm10.stdout:2/432: fsync d5/f16 0 2026-03-09T20:47:38.222 INFO:tasks.workunit.client.1.vm10.stdout:5/400: dread f1 [4194304,4194304] 0 2026-03-09T20:47:38.225 INFO:tasks.workunit.client.1.vm10.stdout:5/401: dwrite d2/d27/f2d [0,4194304] 0 2026-03-09T20:47:38.228 INFO:tasks.workunit.client.1.vm10.stdout:8/447: chown d0/d22/d25/f2d 110 1 2026-03-09T20:47:38.229 INFO:tasks.workunit.client.1.vm10.stdout:6/442: rename d3/d30/d33/d67 to d3/d30/d7f/d36/d5c/d8d 0 2026-03-09T20:47:38.229 INFO:tasks.workunit.client.1.vm10.stdout:9/463: dwrite d2/d12/d5a/f82 [0,4194304] 0 2026-03-09T20:47:38.234 INFO:tasks.workunit.client.1.vm10.stdout:6/443: fdatasync d3/d30/d7f/f28 0 
2026-03-09T20:47:38.235 INFO:tasks.workunit.client.1.vm10.stdout:1/416: creat d2/da/d25/d3e/d42/f86 x:0 0 0 2026-03-09T20:47:38.236 INFO:tasks.workunit.client.1.vm10.stdout:1/417: stat d2/da/d25/d46/d51/d5d/f67 0 2026-03-09T20:47:38.255 INFO:tasks.workunit.client.1.vm10.stdout:7/406: link db/f39 db/d28/d2b/d36/d63/d6d/f80 0 2026-03-09T20:47:38.257 INFO:tasks.workunit.client.0.vm07.stdout:7/534: link d3/da/db/d32/d3e/dac/c78 d3/da/db/d32/d7a/cb8 0 2026-03-09T20:47:38.258 INFO:tasks.workunit.client.1.vm10.stdout:7/407: dwrite db/d28/d2b/d36/d3b/f42 [0,4194304] 0 2026-03-09T20:47:38.260 INFO:tasks.workunit.client.0.vm07.stdout:9/447: mkdir d4/d8/d19/d5f/da5 0 2026-03-09T20:47:38.273 INFO:tasks.workunit.client.1.vm10.stdout:4/372: mkdir d1/d2/d3/d70/d78 0 2026-03-09T20:47:38.276 INFO:tasks.workunit.client.0.vm07.stdout:1/521: unlink d3/f68 0 2026-03-09T20:47:38.280 INFO:tasks.workunit.client.0.vm07.stdout:0/556: fdatasync d1/f1a 0 2026-03-09T20:47:38.286 INFO:tasks.workunit.client.0.vm07.stdout:8/439: creat d1/dc/d16/f8d x:0 0 0 2026-03-09T20:47:38.291 INFO:tasks.workunit.client.0.vm07.stdout:9/448: sync 2026-03-09T20:47:38.306 INFO:tasks.workunit.client.1.vm10.stdout:0/391: truncate d2/d9/da/d11/f1f 300612 0 2026-03-09T20:47:38.307 INFO:tasks.workunit.client.1.vm10.stdout:0/392: fsync d2/d9/da/d11/f15 0 2026-03-09T20:47:38.309 INFO:tasks.workunit.client.0.vm07.stdout:3/474: dwrite d1/d5/d9/d11/d1f/f27 [0,4194304] 0 2026-03-09T20:47:38.313 INFO:tasks.workunit.client.1.vm10.stdout:8/448: write d0/d22/d2f/d38/f6d [705508,75930] 0 2026-03-09T20:47:38.325 INFO:tasks.workunit.client.0.vm07.stdout:4/416: truncate d2/f3 84861 0 2026-03-09T20:47:38.326 INFO:tasks.workunit.client.1.vm10.stdout:3/401: dwrite dc/d14/d20/d2e/d56/f23 [0,4194304] 0 2026-03-09T20:47:38.326 INFO:tasks.workunit.client.0.vm07.stdout:4/417: rename d2/df/d59 to d2/df/d59/d6e 22 2026-03-09T20:47:38.326 INFO:tasks.workunit.client.0.vm07.stdout:4/418: chown d2/f43 668001 1 2026-03-09T20:47:38.327 
INFO:tasks.workunit.client.0.vm07.stdout:4/419: chown d2/d1f/f25 1044893 1 2026-03-09T20:47:38.328 INFO:tasks.workunit.client.1.vm10.stdout:1/418: fsync d2/da/d25/d3e/d42/f62 0 2026-03-09T20:47:38.330 INFO:tasks.workunit.client.1.vm10.stdout:3/402: dread dc/d14/d20/d2e/d56/f68 [0,4194304] 0 2026-03-09T20:47:38.334 INFO:tasks.workunit.client.1.vm10.stdout:3/403: write dc/d14/d20/d2e/d56/f68 [951486,19548] 0 2026-03-09T20:47:38.345 INFO:tasks.workunit.client.1.vm10.stdout:2/433: truncate d5/fa 4882051 0 2026-03-09T20:47:38.355 INFO:tasks.workunit.client.1.vm10.stdout:4/373: chown d1/d8/d1b/f5f 428515816 1 2026-03-09T20:47:38.358 INFO:tasks.workunit.client.0.vm07.stdout:0/557: symlink d1/d1f/d53/lb0 0 2026-03-09T20:47:38.359 INFO:tasks.workunit.client.0.vm07.stdout:2/501: link d2/db/d1c/l5f d2/db/l9c 0 2026-03-09T20:47:38.361 INFO:tasks.workunit.client.1.vm10.stdout:7/408: dread db/d21/d26/f2f [0,4194304] 0 2026-03-09T20:47:38.365 INFO:tasks.workunit.client.0.vm07.stdout:5/562: creat d5/df/d13/d3e/fc4 x:0 0 0 2026-03-09T20:47:38.365 INFO:tasks.workunit.client.0.vm07.stdout:9/449: creat d4/d16/d29/d24/d37/d44/d62/d74/fa6 x:0 0 0 2026-03-09T20:47:38.366 INFO:tasks.workunit.client.0.vm07.stdout:9/450: write d4/d8/d19/f42 [4557831,1700] 0 2026-03-09T20:47:38.367 INFO:tasks.workunit.client.0.vm07.stdout:0/558: sync 2026-03-09T20:47:38.388 INFO:tasks.workunit.client.0.vm07.stdout:8/440: dread d1/dc/fd [0,4194304] 0 2026-03-09T20:47:38.388 INFO:tasks.workunit.client.0.vm07.stdout:8/441: chown d1/d5d/d6f/d2f/d4d/d63 274715 1 2026-03-09T20:47:38.390 INFO:tasks.workunit.client.0.vm07.stdout:3/475: symlink d1/d5/d9/d11/d1f/l98 0 2026-03-09T20:47:38.400 INFO:tasks.workunit.client.1.vm10.stdout:9/464: dwrite d2/d28/d47/d67/f81 [0,4194304] 0 2026-03-09T20:47:38.405 INFO:tasks.workunit.client.1.vm10.stdout:9/465: dwrite d2/d28/fa5 [0,4194304] 0 2026-03-09T20:47:38.407 INFO:tasks.workunit.client.0.vm07.stdout:6/482: truncate d8/d26/d2a/f6e 3677233 0 2026-03-09T20:47:38.422 
INFO:tasks.workunit.client.1.vm10.stdout:5/402: write d2/d27/d37/d46/d5d/d6d/f6e [1118371,5192] 0 2026-03-09T20:47:38.431 INFO:tasks.workunit.client.1.vm10.stdout:3/404: mknod dc/d14/d20/d2e/c80 0 2026-03-09T20:47:38.431 INFO:tasks.workunit.client.1.vm10.stdout:3/405: dwrite dc/d14/d26/d29/f60 [4194304,4194304] 0 2026-03-09T20:47:38.432 INFO:tasks.workunit.client.0.vm07.stdout:1/522: stat d3/d14/c1d 0 2026-03-09T20:47:38.433 INFO:tasks.workunit.client.1.vm10.stdout:3/406: dwrite dc/d14/d20/d21/d3b/f4f [4194304,4194304] 0 2026-03-09T20:47:38.438 INFO:tasks.workunit.client.0.vm07.stdout:2/502: unlink d2/db/lc 0 2026-03-09T20:47:38.460 INFO:tasks.workunit.client.1.vm10.stdout:7/409: creat db/d21/f81 x:0 0 0 2026-03-09T20:47:38.462 INFO:tasks.workunit.client.1.vm10.stdout:0/393: symlink d2/d9/da/l88 0 2026-03-09T20:47:38.465 INFO:tasks.workunit.client.0.vm07.stdout:8/442: creat d1/d5d/d6f/d2f/d4d/d55/f8e x:0 0 0 2026-03-09T20:47:38.468 INFO:tasks.workunit.client.1.vm10.stdout:8/449: creat d0/d22/d25/d2e/d41/d47/d63/f8c x:0 0 0 2026-03-09T20:47:38.477 INFO:tasks.workunit.client.0.vm07.stdout:3/476: rmdir d1/d5/d9/d2f 39 2026-03-09T20:47:38.477 INFO:tasks.workunit.client.0.vm07.stdout:3/477: dwrite d1/d5/d9/d11/d6d/d80/f93 [0,4194304] 0 2026-03-09T20:47:38.503 INFO:tasks.workunit.client.0.vm07.stdout:7/535: creat d3/da/db/d32/d3e/dac/fb9 x:0 0 0 2026-03-09T20:47:38.505 INFO:tasks.workunit.client.0.vm07.stdout:6/483: dread d8/f5f [0,4194304] 0 2026-03-09T20:47:38.511 INFO:tasks.workunit.client.1.vm10.stdout:2/434: dwrite d5/d18/d1b/d22/f4f [0,4194304] 0 2026-03-09T20:47:38.512 INFO:tasks.workunit.client.1.vm10.stdout:9/466: mknod d2/d12/ca8 0 2026-03-09T20:47:38.532 INFO:tasks.workunit.client.0.vm07.stdout:5/563: dwrite d5/df/d13/d30/d56/f84 [0,4194304] 0 2026-03-09T20:47:38.539 INFO:tasks.workunit.client.1.vm10.stdout:1/419: mknod d2/da/d25/d46/d51/d5d/d6e/d7f/c87 0 2026-03-09T20:47:38.558 INFO:tasks.workunit.client.0.vm07.stdout:4/420: dwrite d2/df/f49 [0,4194304] 0 
2026-03-09T20:47:38.562 INFO:tasks.workunit.client.0.vm07.stdout:4/421: dwrite d2/d55/d5d/d3f/f68 [0,4194304] 0 2026-03-09T20:47:38.575 INFO:tasks.workunit.client.0.vm07.stdout:0/559: mkdir d1/d2/dc/db1 0 2026-03-09T20:47:38.585 INFO:tasks.workunit.client.0.vm07.stdout:3/478: fsync d1/d5/d9/d11/d1f/f72 0 2026-03-09T20:47:38.585 INFO:tasks.workunit.client.1.vm10.stdout:4/374: mkdir d1/d2/d5c/d64/d6b/d79 0 2026-03-09T20:47:38.587 INFO:tasks.workunit.client.1.vm10.stdout:6/444: rename d3/d79/f8a to d3/d30/d7f/f8e 0 2026-03-09T20:47:38.588 INFO:tasks.workunit.client.0.vm07.stdout:6/484: mknod d8/d26/d2a/c96 0 2026-03-09T20:47:38.592 INFO:tasks.workunit.client.0.vm07.stdout:1/523: creat d3/d97/da1/dab/fb0 x:0 0 0 2026-03-09T20:47:38.594 INFO:tasks.workunit.client.1.vm10.stdout:8/450: mknod d0/d22/d25/d2e/d41/c8d 0 2026-03-09T20:47:38.595 INFO:tasks.workunit.client.0.vm07.stdout:5/564: creat d5/d69/fc5 x:0 0 0 2026-03-09T20:47:38.595 INFO:tasks.workunit.client.1.vm10.stdout:8/451: write d0/d22/d25/d6c/f82 [447311,77758] 0 2026-03-09T20:47:38.597 INFO:tasks.workunit.client.1.vm10.stdout:7/410: dread db/d28/d2b/d36/f1c [0,4194304] 0 2026-03-09T20:47:38.599 INFO:tasks.workunit.client.1.vm10.stdout:8/452: dwrite d0/d54/f65 [0,4194304] 0 2026-03-09T20:47:38.600 INFO:tasks.workunit.client.1.vm10.stdout:8/453: chown d0/d22/d2f/d38/c39 2779 1 2026-03-09T20:47:38.603 INFO:tasks.workunit.client.1.vm10.stdout:8/454: dwrite d0/d22/d25/f3b [0,4194304] 0 2026-03-09T20:47:38.608 INFO:tasks.workunit.client.1.vm10.stdout:8/455: dwrite d0/d22/d25/d2e/d41/d47/d63/f8c [0,4194304] 0 2026-03-09T20:47:38.620 INFO:tasks.workunit.client.0.vm07.stdout:2/503: fdatasync d2/f33 0 2026-03-09T20:47:38.625 INFO:tasks.workunit.client.1.vm10.stdout:3/407: write dc/d14/d26/d29/f51 [366975,373] 0 2026-03-09T20:47:38.626 INFO:tasks.workunit.client.0.vm07.stdout:9/451: write d4/d8/f1c [1456722,70696] 0 2026-03-09T20:47:38.626 INFO:tasks.workunit.client.0.vm07.stdout:8/443: write d1/d5d/d6f/d2f/d4d/d63/f77 
[504527,122317] 0 2026-03-09T20:47:38.631 INFO:tasks.workunit.client.1.vm10.stdout:2/435: dread - d5/f59 zero size 2026-03-09T20:47:38.631 INFO:tasks.workunit.client.1.vm10.stdout:9/467: read - d2/d28/d47/d67/f93 zero size 2026-03-09T20:47:38.632 INFO:tasks.workunit.client.1.vm10.stdout:1/420: rmdir d2/da/d25 39 2026-03-09T20:47:38.633 INFO:tasks.workunit.client.1.vm10.stdout:5/403: fdatasync d2/fd 0 2026-03-09T20:47:38.638 INFO:tasks.workunit.client.1.vm10.stdout:3/408: dread dc/d14/d20/d2e/f32 [0,4194304] 0 2026-03-09T20:47:38.640 INFO:tasks.workunit.client.1.vm10.stdout:5/404: dwrite d2/d27/d37/d46/d5d/d5f/d84/f8b [0,4194304] 0 2026-03-09T20:47:38.649 INFO:tasks.workunit.client.0.vm07.stdout:4/422: dread d2/f3 [0,4194304] 0 2026-03-09T20:47:38.673 INFO:tasks.workunit.client.1.vm10.stdout:0/394: rename d2/d4a/d79/c29 to d2/d4a/c89 0 2026-03-09T20:47:38.675 INFO:tasks.workunit.client.0.vm07.stdout:7/536: symlink d3/da4/lba 0 2026-03-09T20:47:38.712 INFO:tasks.workunit.client.1.vm10.stdout:7/411: rmdir db/d28/d4c 39 2026-03-09T20:47:38.712 INFO:tasks.workunit.client.1.vm10.stdout:7/412: dwrite db/d1f/f62 [0,4194304] 0 2026-03-09T20:47:38.712 INFO:tasks.workunit.client.0.vm07.stdout:6/485: truncate d8/d16/d22/d33/f73 2220869 0 2026-03-09T20:47:38.712 INFO:tasks.workunit.client.0.vm07.stdout:9/452: mkdir d4/d8/d19/d89/da7 0 2026-03-09T20:47:38.712 INFO:tasks.workunit.client.0.vm07.stdout:3/479: mkdir d1/d5/d9/d2f/d99 0 2026-03-09T20:47:38.712 INFO:tasks.workunit.client.0.vm07.stdout:3/480: chown d1/d5/d9/d2f/d3d/f74 116728 1 2026-03-09T20:47:38.712 INFO:tasks.workunit.client.0.vm07.stdout:6/486: mkdir d8/d5d/d97 0 2026-03-09T20:47:38.712 INFO:tasks.workunit.client.0.vm07.stdout:9/453: chown d4/d16/c58 68348 1 2026-03-09T20:47:38.712 INFO:tasks.workunit.client.0.vm07.stdout:3/481: symlink d1/d5/d9/d2f/d34/d46/l9a 0 2026-03-09T20:47:38.713 INFO:tasks.workunit.client.0.vm07.stdout:3/482: chown d1/d5/d9/d11/d1f/f4a 843497 1 2026-03-09T20:47:38.717 
INFO:tasks.workunit.client.0.vm07.stdout:3/483: dwrite d1/d5/d9/f3c [0,4194304] 0 2026-03-09T20:47:38.740 INFO:tasks.workunit.client.0.vm07.stdout:1/524: rename d3/d14/d54/cad to d3/d23/d67/d8a/cb1 0 2026-03-09T20:47:38.745 INFO:tasks.workunit.client.1.vm10.stdout:9/468: rename d2/d3/de/d35/d44/f94 to d2/d28/da2/fa9 0 2026-03-09T20:47:38.751 INFO:tasks.workunit.client.1.vm10.stdout:7/413: unlink db/d21/f27 0 2026-03-09T20:47:38.751 INFO:tasks.workunit.client.1.vm10.stdout:7/414: chown db/d21/d26/l79 729 1 2026-03-09T20:47:38.753 INFO:tasks.workunit.client.0.vm07.stdout:4/423: creat d2/d55/d5d/f6f x:0 0 0 2026-03-09T20:47:38.753 INFO:tasks.workunit.client.0.vm07.stdout:4/424: stat d2/f28 0 2026-03-09T20:47:38.753 INFO:tasks.workunit.client.0.vm07.stdout:4/425: rename d2/df to d2/df/d17/d70 22 2026-03-09T20:47:38.757 INFO:tasks.workunit.client.0.vm07.stdout:7/537: creat d3/da/fbb x:0 0 0 2026-03-09T20:47:38.758 INFO:tasks.workunit.client.0.vm07.stdout:7/538: chown d3/da/db/d32/d3e/dac/d1f/d2b/d52/f74 0 1 2026-03-09T20:47:38.758 INFO:tasks.workunit.client.0.vm07.stdout:7/539: readlink d3/da4/lba 0 2026-03-09T20:47:38.759 INFO:tasks.workunit.client.0.vm07.stdout:7/540: write d3/da/d83/fb6 [512068,82721] 0 2026-03-09T20:47:38.764 INFO:tasks.workunit.client.0.vm07.stdout:9/454: dread d4/d11/f1a [0,4194304] 0 2026-03-09T20:47:38.770 INFO:tasks.workunit.client.0.vm07.stdout:6/487: unlink d8/f29 0 2026-03-09T20:47:38.777 INFO:tasks.workunit.client.0.vm07.stdout:9/455: readlink d4/d16/l3a 0 2026-03-09T20:47:38.781 INFO:tasks.workunit.client.0.vm07.stdout:9/456: dwrite d4/d16/d29/d24/f77 [0,4194304] 0 2026-03-09T20:47:38.786 INFO:tasks.workunit.client.0.vm07.stdout:3/484: creat d1/d5/d9/d11/f9b x:0 0 0 2026-03-09T20:47:38.791 INFO:tasks.workunit.client.1.vm10.stdout:7/415: creat db/d28/d2b/d36/d63/d6d/f82 x:0 0 0 2026-03-09T20:47:38.804 INFO:tasks.workunit.client.0.vm07.stdout:6/488: creat d8/d16/d22/f98 x:0 0 0 2026-03-09T20:47:38.804 
INFO:tasks.workunit.client.0.vm07.stdout:6/489: mkdir d8/d16/d4b/d88/d99 0 2026-03-09T20:47:38.804 INFO:tasks.workunit.client.1.vm10.stdout:7/416: unlink c9 0 2026-03-09T20:47:38.804 INFO:tasks.workunit.client.1.vm10.stdout:7/417: dread - db/d28/d2b/d36/d63/d6d/f80 zero size 2026-03-09T20:47:38.804 INFO:tasks.workunit.client.1.vm10.stdout:7/418: dread db/d21/d26/f2f [0,4194304] 0 2026-03-09T20:47:38.804 INFO:tasks.workunit.client.1.vm10.stdout:7/419: truncate db/f70 72845 0 2026-03-09T20:47:38.804 INFO:tasks.workunit.client.1.vm10.stdout:7/420: write db/d28/d2b/d36/d63/f6c [366699,107576] 0 2026-03-09T20:47:38.805 INFO:tasks.workunit.client.0.vm07.stdout:6/490: mkdir d8/d26/d2a/d40/d69/d4f/d9a 0 2026-03-09T20:47:38.805 INFO:tasks.workunit.client.1.vm10.stdout:7/421: truncate db/d28/d2b/d36/d3b/f3d 2437806 0 2026-03-09T20:47:38.809 INFO:tasks.workunit.client.1.vm10.stdout:7/422: dread db/d1f/f62 [0,4194304] 0 2026-03-09T20:47:38.811 INFO:tasks.workunit.client.1.vm10.stdout:7/423: read db/d28/f31 [3052348,107837] 0 2026-03-09T20:47:38.816 INFO:tasks.workunit.client.1.vm10.stdout:4/375: write d1/d2/d3/f18 [3308542,118795] 0 2026-03-09T20:47:38.816 INFO:tasks.workunit.client.1.vm10.stdout:4/376: chown d1/d8/d1c/d2b/l6c 5014953 1 2026-03-09T20:47:38.818 INFO:tasks.workunit.client.1.vm10.stdout:4/377: truncate d1/d8/d1c/f23 1320078 0 2026-03-09T20:47:38.818 INFO:tasks.workunit.client.1.vm10.stdout:4/378: write d1/fe [761065,92904] 0 2026-03-09T20:47:38.820 INFO:tasks.workunit.client.1.vm10.stdout:4/379: creat d1/d8/d1c/d2b/f7a x:0 0 0 2026-03-09T20:47:38.820 INFO:tasks.workunit.client.1.vm10.stdout:4/380: chown d1/d8/d1c/l21 1 1 2026-03-09T20:47:38.854 INFO:tasks.workunit.client.0.vm07.stdout:7/541: sync 2026-03-09T20:47:38.854 INFO:tasks.workunit.client.0.vm07.stdout:0/560: dwrite d1/d2/d4b/f70 [0,4194304] 0 2026-03-09T20:47:38.861 INFO:tasks.workunit.client.0.vm07.stdout:7/542: creat d3/da/d83/fbc x:0 0 0 2026-03-09T20:47:38.862 
INFO:tasks.workunit.client.0.vm07.stdout:7/543: chown d3/d58/d82/c95 683 1 2026-03-09T20:47:38.862 INFO:tasks.workunit.client.0.vm07.stdout:7/544: dread - d3/da/fbb zero size 2026-03-09T20:47:38.872 INFO:tasks.workunit.client.1.vm10.stdout:4/381: dread d1/d2/d3/f18 [0,4194304] 0 2026-03-09T20:47:38.873 INFO:tasks.workunit.client.0.vm07.stdout:7/545: dwrite d3/da/db/d32/d3e/dac/d43/fae [0,4194304] 0 2026-03-09T20:47:38.877 INFO:tasks.workunit.client.0.vm07.stdout:0/561: dread d1/d2/d33/d35/f46 [0,4194304] 0 2026-03-09T20:47:38.887 INFO:tasks.workunit.client.1.vm10.stdout:2/436: write d5/d2b/f3f [1878774,110241] 0 2026-03-09T20:47:38.888 INFO:tasks.workunit.client.0.vm07.stdout:7/546: dread d3/da/f11 [0,4194304] 0 2026-03-09T20:47:38.891 INFO:tasks.workunit.client.0.vm07.stdout:8/444: write d1/dc/fe [8252664,54949] 0 2026-03-09T20:47:38.892 INFO:tasks.workunit.client.1.vm10.stdout:4/382: mknod d1/d2/d3/d54/c7b 0 2026-03-09T20:47:38.897 INFO:tasks.workunit.client.1.vm10.stdout:3/409: dwrite dc/f58 [4194304,4194304] 0 2026-03-09T20:47:38.899 INFO:tasks.workunit.client.0.vm07.stdout:7/547: sync 2026-03-09T20:47:38.901 INFO:tasks.workunit.client.0.vm07.stdout:7/548: dread d3/da/d83/fa9 [0,4194304] 0 2026-03-09T20:47:38.904 INFO:tasks.workunit.client.1.vm10.stdout:5/405: dwrite d2/d1b/f41 [0,4194304] 0 2026-03-09T20:47:38.906 INFO:tasks.workunit.client.1.vm10.stdout:5/406: chown d2/d1b/c1f 23163 1 2026-03-09T20:47:38.916 INFO:tasks.workunit.client.1.vm10.stdout:3/410: mknod dc/d14/d26/d29/d40/c81 0 2026-03-09T20:47:38.920 INFO:tasks.workunit.client.0.vm07.stdout:5/565: dwrite d5/df/d13/d6c/f99 [0,4194304] 0 2026-03-09T20:47:38.922 INFO:tasks.workunit.client.0.vm07.stdout:5/566: chown d5/d33/d39/c65 314 1 2026-03-09T20:47:38.932 INFO:tasks.workunit.client.0.vm07.stdout:2/504: dwrite d2/db/d28/f32 [0,4194304] 0 2026-03-09T20:47:38.933 INFO:tasks.workunit.client.0.vm07.stdout:8/445: mkdir d1/d8f 0 2026-03-09T20:47:38.937 INFO:tasks.workunit.client.1.vm10.stdout:4/383: dread 
d1/d2/f2e [0,4194304] 0 2026-03-09T20:47:38.937 INFO:tasks.workunit.client.1.vm10.stdout:4/384: chown d1/d2/l32 1094 1 2026-03-09T20:47:38.941 INFO:tasks.workunit.client.1.vm10.stdout:3/411: unlink dc/d14/d26/f64 0 2026-03-09T20:47:38.944 INFO:tasks.workunit.client.1.vm10.stdout:3/412: dread dc/f58 [4194304,4194304] 0 2026-03-09T20:47:38.945 INFO:tasks.workunit.client.0.vm07.stdout:0/562: getdents d1/d1f/d53 0 2026-03-09T20:47:38.946 INFO:tasks.workunit.client.1.vm10.stdout:3/413: read dc/d14/d26/f6f [11853570,73142] 0 2026-03-09T20:47:38.960 INFO:tasks.workunit.client.1.vm10.stdout:6/445: write d3/f40 [711858,4930] 0 2026-03-09T20:47:38.964 INFO:tasks.workunit.client.1.vm10.stdout:4/385: rename d1/d2/d5c/c3b to d1/d2/c7c 0 2026-03-09T20:47:38.965 INFO:tasks.workunit.client.1.vm10.stdout:8/456: truncate d0/d22/d2c/f6a 65126 0 2026-03-09T20:47:38.965 INFO:tasks.workunit.client.1.vm10.stdout:0/395: write d2/d9/f61 [997165,9283] 0 2026-03-09T20:47:38.966 INFO:tasks.workunit.client.0.vm07.stdout:4/426: write d2/df/d17/f46 [1319053,33074] 0 2026-03-09T20:47:38.968 INFO:tasks.workunit.client.0.vm07.stdout:4/427: truncate d2/df/d17/f6d 956632 0 2026-03-09T20:47:38.971 INFO:tasks.workunit.client.0.vm07.stdout:1/525: write d3/d23/d55/d56/d60/f8e [193494,4867] 0 2026-03-09T20:47:38.995 INFO:tasks.workunit.client.1.vm10.stdout:1/421: truncate d2/da/d25/d3e/d42/f57 3396177 0 2026-03-09T20:47:38.995 INFO:tasks.workunit.client.1.vm10.stdout:1/422: write d2/da/d25/d3e/f58 [882342,80697] 0 2026-03-09T20:47:38.995 INFO:tasks.workunit.client.1.vm10.stdout:9/469: write d2/d3/f6c [213950,83495] 0 2026-03-09T20:47:38.995 INFO:tasks.workunit.client.1.vm10.stdout:4/386: rmdir d1/d8/d1c/d69 39 2026-03-09T20:47:38.995 INFO:tasks.workunit.client.0.vm07.stdout:1/526: readlink d3/d23/d67/d8a/l98 0 2026-03-09T20:47:38.995 INFO:tasks.workunit.client.0.vm07.stdout:9/457: dwrite d4/d16/d29/d24/d37/f51 [0,4194304] 0 2026-03-09T20:47:38.995 INFO:tasks.workunit.client.0.vm07.stdout:9/458: chown 
d4/d8/dc/d15 63 1 2026-03-09T20:47:38.995 INFO:tasks.workunit.client.0.vm07.stdout:7/549: truncate d3/da/db/f27 1410119 0 2026-03-09T20:47:38.996 INFO:tasks.workunit.client.0.vm07.stdout:3/485: dwrite d1/d5/d9/fe [0,4194304] 0 2026-03-09T20:47:39.006 INFO:tasks.workunit.client.0.vm07.stdout:2/505: read - d2/d46/f7e zero size 2026-03-09T20:47:39.010 INFO:tasks.workunit.client.1.vm10.stdout:0/396: creat d2/f8a x:0 0 0 2026-03-09T20:47:39.012 INFO:tasks.workunit.client.1.vm10.stdout:8/457: mknod d0/d22/d25/d2e/c8e 0 2026-03-09T20:47:39.015 INFO:tasks.workunit.client.0.vm07.stdout:0/563: mknod d1/d2/dc/d17/da6/cb2 0 2026-03-09T20:47:39.018 INFO:tasks.workunit.client.0.vm07.stdout:6/491: truncate d8/d16/d22/d33/f66 336880 0 2026-03-09T20:47:39.019 INFO:tasks.workunit.client.1.vm10.stdout:7/424: truncate db/d28/d30/f73 252389 0 2026-03-09T20:47:39.025 INFO:tasks.workunit.client.1.vm10.stdout:5/407: link d2/c42 d2/d27/d37/d46/d99/c9d 0 2026-03-09T20:47:39.026 INFO:tasks.workunit.client.1.vm10.stdout:3/414: sync 2026-03-09T20:47:39.027 INFO:tasks.workunit.client.1.vm10.stdout:6/446: link d3/da/f1b d3/da/d11/d26/f8f 0 2026-03-09T20:47:39.028 INFO:tasks.workunit.client.1.vm10.stdout:6/447: chown d3/da/d11/d31/c3b 1 1 2026-03-09T20:47:39.029 INFO:tasks.workunit.client.0.vm07.stdout:7/550: truncate d3/da/db/d32/d3e/dac/f3a 830244 0 2026-03-09T20:47:39.030 INFO:tasks.workunit.client.0.vm07.stdout:1/527: sync 2026-03-09T20:47:39.032 INFO:tasks.workunit.client.1.vm10.stdout:2/437: dwrite d5/d18/d27/d28/d41/f6e [0,4194304] 0 2026-03-09T20:47:39.037 INFO:tasks.workunit.client.0.vm07.stdout:0/564: sync 2026-03-09T20:47:39.040 INFO:tasks.workunit.client.0.vm07.stdout:2/506: creat d2/db/d1c/f9d x:0 0 0 2026-03-09T20:47:39.041 INFO:tasks.workunit.client.1.vm10.stdout:8/458: fdatasync d0/d22/d25/d40/f5f 0 2026-03-09T20:47:39.041 INFO:tasks.workunit.client.1.vm10.stdout:7/425: fdatasync db/d28/d2b/d36/d40/f48 0 2026-03-09T20:47:39.041 INFO:tasks.workunit.client.0.vm07.stdout:4/428: 
rename d2/f33 to d2/d55/f71 0 2026-03-09T20:47:39.044 INFO:tasks.workunit.client.0.vm07.stdout:4/429: dread d2/d55/d5d/d3f/f68 [0,4194304] 0 2026-03-09T20:47:39.047 INFO:tasks.workunit.client.1.vm10.stdout:1/423: truncate d2/da/d25/d46/f74 1713871 0 2026-03-09T20:47:39.058 INFO:tasks.workunit.client.0.vm07.stdout:7/551: symlink d3/da/d83/lbd 0 2026-03-09T20:47:39.059 INFO:tasks.workunit.client.1.vm10.stdout:3/415: truncate dc/d14/d26/d29/d2a/f66 3824298 0 2026-03-09T20:47:39.063 INFO:tasks.workunit.client.0.vm07.stdout:0/565: rmdir d1/d2/d33 39 2026-03-09T20:47:39.063 INFO:tasks.workunit.client.0.vm07.stdout:6/492: mkdir d8/d16/d22/d9b 0 2026-03-09T20:47:39.064 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:38 vm07.local ceph-mon[49120]: Reconfiguring prometheus.vm07 (dependencies changed)... 2026-03-09T20:47:39.064 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:38 vm07.local ceph-mon[49120]: Reconfiguring daemon prometheus.vm07 on vm07 2026-03-09T20:47:39.064 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:38 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mgr metadata", "who": "vm07.xjrvch", "id": "vm07.xjrvch"}]: dispatch 2026-03-09T20:47:39.064 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:38 vm07.local ceph-mon[49120]: mgrmap e25: vm10.byqahe(active, since 9s), standbys: vm07.xjrvch 2026-03-09T20:47:39.075 INFO:tasks.workunit.client.0.vm07.stdout:8/446: write d1/dc/d16/d31/f52 [278220,52798] 0 2026-03-09T20:47:39.090 INFO:tasks.workunit.client.0.vm07.stdout:3/486: rename d1/c6f to d1/d5/d9/d2f/d3d/d64/c9c 0 2026-03-09T20:47:39.091 INFO:tasks.workunit.client.0.vm07.stdout:5/567: truncate d5/d33/d75/fa8 37434 0 2026-03-09T20:47:39.093 INFO:tasks.workunit.client.1.vm10.stdout:2/438: symlink d5/d18/d1b/d22/l8e 0 2026-03-09T20:47:39.096 INFO:tasks.workunit.client.1.vm10.stdout:0/397: symlink d2/d4a/d79/l8b 0 2026-03-09T20:47:39.096 
INFO:tasks.workunit.client.1.vm10.stdout:0/398: stat d2/d9/f12 0 2026-03-09T20:47:39.097 INFO:tasks.workunit.client.1.vm10.stdout:0/399: chown d2/d4a/c83 2 1 2026-03-09T20:47:39.097 INFO:tasks.workunit.client.0.vm07.stdout:4/430: symlink d2/df/d59/l72 0 2026-03-09T20:47:39.097 INFO:tasks.workunit.client.1.vm10.stdout:0/400: stat d2/d4a/f7b 0 2026-03-09T20:47:39.097 INFO:tasks.workunit.client.0.vm07.stdout:4/431: readlink d2/l64 0 2026-03-09T20:47:39.097 INFO:tasks.workunit.client.1.vm10.stdout:0/401: chown d2/f39 1899172806 1 2026-03-09T20:47:39.097 INFO:tasks.workunit.client.0.vm07.stdout:4/432: chown d2/d1f/f25 3499 1 2026-03-09T20:47:39.099 INFO:tasks.workunit.client.0.vm07.stdout:9/459: link d4/d16/c58 d4/d8/d19/d5f/d73/ca8 0 2026-03-09T20:47:39.100 INFO:tasks.workunit.client.0.vm07.stdout:9/460: stat d4/d11/f1a 0 2026-03-09T20:47:39.102 INFO:tasks.workunit.client.1.vm10.stdout:0/402: sync 2026-03-09T20:47:39.104 INFO:tasks.workunit.client.1.vm10.stdout:0/403: read d2/d9/f61 [920496,66212] 0 2026-03-09T20:47:39.105 INFO:tasks.workunit.client.1.vm10.stdout:0/404: truncate d2/d9/d69/f7c 813054 0 2026-03-09T20:47:39.106 INFO:tasks.workunit.client.0.vm07.stdout:0/566: rmdir d1/d2/dc/d80 39 2026-03-09T20:47:39.107 INFO:tasks.workunit.client.0.vm07.stdout:0/567: chown d1/d2/ff 10392114 1 2026-03-09T20:47:39.109 INFO:tasks.workunit.client.0.vm07.stdout:6/493: creat d8/d16/d4b/f9c x:0 0 0 2026-03-09T20:47:39.109 INFO:tasks.workunit.client.0.vm07.stdout:6/494: readlink d8/d26/d2a/d40/l59 0 2026-03-09T20:47:39.111 INFO:tasks.workunit.client.1.vm10.stdout:1/424: dwrite d2/da/d25/d46/d51/d5d/d6e/d70/f79 [0,4194304] 0 2026-03-09T20:47:39.126 INFO:tasks.workunit.client.0.vm07.stdout:1/528: dwrite d3/f82 [0,4194304] 0 2026-03-09T20:47:39.138 INFO:tasks.workunit.client.1.vm10.stdout:9/470: write d2/d28/d47/d50/f75 [911300,41365] 0 2026-03-09T20:47:39.153 INFO:tasks.workunit.client.1.vm10.stdout:2/439: rename d5/d18/d1b/d22/l64 to d5/d18/d1b/d22/l8f 0 2026-03-09T20:47:39.153 
INFO:tasks.workunit.client.1.vm10.stdout:2/440: dread - d5/d18/f1f zero size 2026-03-09T20:47:39.158 INFO:tasks.workunit.client.1.vm10.stdout:7/426: unlink db/d28/d2b/d36/f3c 0 2026-03-09T20:47:39.160 INFO:tasks.workunit.client.1.vm10.stdout:7/427: dread db/d28/f4f [0,4194304] 0 2026-03-09T20:47:39.162 INFO:tasks.workunit.client.0.vm07.stdout:7/552: dwrite d3/da/d83/fa9 [0,4194304] 0 2026-03-09T20:47:39.166 INFO:tasks.workunit.client.1.vm10.stdout:8/459: mkdir d0/d22/d25/d8f 0 2026-03-09T20:47:39.166 INFO:tasks.workunit.client.1.vm10.stdout:0/405: creat d2/d4a/d58/d82/d71/d5d/f8c x:0 0 0 2026-03-09T20:47:39.168 INFO:tasks.workunit.client.1.vm10.stdout:5/408: rename d2/d1b/c52 to d2/d27/d37/d46/d5d/c9e 0 2026-03-09T20:47:39.187 INFO:tasks.workunit.client.1.vm10.stdout:4/387: getdents d1/d2/d5c/d64 0 2026-03-09T20:47:39.190 INFO:tasks.workunit.client.1.vm10.stdout:4/388: dwrite d1/d8/d39/f4b [0,4194304] 0 2026-03-09T20:47:39.196 INFO:tasks.workunit.client.1.vm10.stdout:4/389: dwrite d1/d8/d1c/d2b/f7a [0,4194304] 0 2026-03-09T20:47:39.200 INFO:tasks.workunit.client.0.vm07.stdout:2/507: mknod d2/d46/d72/d82/c9e 0 2026-03-09T20:47:39.205 INFO:tasks.workunit.client.0.vm07.stdout:5/568: fsync d5/df/d13/f5b 0 2026-03-09T20:47:39.219 INFO:tasks.workunit.client.0.vm07.stdout:9/461: mknod d4/d8/d19/d26/ca9 0 2026-03-09T20:47:39.219 INFO:tasks.workunit.client.1.vm10.stdout:3/416: rename dc/d14/f1a to dc/d14/d20/d2e/d56/f82 0 2026-03-09T20:47:39.219 INFO:tasks.workunit.client.1.vm10.stdout:3/417: dread dc/d14/d26/d29/f30 [0,4194304] 0 2026-03-09T20:47:39.219 INFO:tasks.workunit.client.1.vm10.stdout:6/448: getdents d3 0 2026-03-09T20:47:39.219 INFO:tasks.workunit.client.1.vm10.stdout:9/471: mkdir d2/d3/daa 0 2026-03-09T20:47:39.219 INFO:tasks.workunit.client.1.vm10.stdout:6/449: chown d3/da/d11/d31/d47 0 1 2026-03-09T20:47:39.221 INFO:tasks.workunit.client.1.vm10.stdout:9/472: dwrite d2/d3/de/f24 [0,4194304] 0 2026-03-09T20:47:39.230 
INFO:tasks.workunit.client.0.vm07.stdout:0/568: truncate d1/d2/dc/d17/f3c 3780127 0 2026-03-09T20:47:39.232 INFO:tasks.workunit.client.1.vm10.stdout:4/390: mknod d1/d2/d5c/c7d 0 2026-03-09T20:47:39.244 INFO:tasks.workunit.client.1.vm10.stdout:0/406: creat d2/d9/d69/d80/f8d x:0 0 0 2026-03-09T20:47:39.246 INFO:tasks.workunit.client.1.vm10.stdout:8/460: write d0/d22/d25/d40/f5f [822195,15254] 0 2026-03-09T20:47:39.247 INFO:tasks.workunit.client.0.vm07.stdout:8/447: write d1/dc/d16/d26/f36 [6645652,79434] 0 2026-03-09T20:47:39.252 INFO:tasks.workunit.client.0.vm07.stdout:3/487: write d1/d5/d9/d2f/d3d/f75 [991351,103991] 0 2026-03-09T20:47:39.257 INFO:tasks.workunit.client.1.vm10.stdout:7/428: truncate db/d1f/f5e 1281209 0 2026-03-09T20:47:39.262 INFO:tasks.workunit.client.0.vm07.stdout:1/529: rename d3/d23/d55/d56/f8d to d3/d23/d52/fb2 0 2026-03-09T20:47:39.263 INFO:tasks.workunit.client.1.vm10.stdout:5/409: getdents d2/d27/d37/d46/d5d/d5f/d63/d95 0 2026-03-09T20:47:39.266 INFO:tasks.workunit.client.1.vm10.stdout:1/425: rename d2/da/f10 to d2/da/f88 0 2026-03-09T20:47:39.269 INFO:tasks.workunit.client.1.vm10.stdout:6/450: creat d3/d79/f90 x:0 0 0 2026-03-09T20:47:39.270 INFO:tasks.workunit.client.1.vm10.stdout:6/451: write d3/da/d11/f8b [385886,24684] 0 2026-03-09T20:47:39.276 INFO:tasks.workunit.client.1.vm10.stdout:1/426: sync 2026-03-09T20:47:39.277 INFO:tasks.workunit.client.0.vm07.stdout:5/569: dwrite d5/df/d13/f5b [0,4194304] 0 2026-03-09T20:47:39.283 INFO:tasks.workunit.client.0.vm07.stdout:7/553: dread d3/f61 [0,4194304] 0 2026-03-09T20:47:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:38 vm10.local ceph-mon[57011]: Reconfiguring prometheus.vm07 (dependencies changed)... 
2026-03-09T20:47:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:38 vm10.local ceph-mon[57011]: Reconfiguring daemon prometheus.vm07 on vm07 2026-03-09T20:47:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:38 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mgr metadata", "who": "vm07.xjrvch", "id": "vm07.xjrvch"}]: dispatch 2026-03-09T20:47:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:38 vm10.local ceph-mon[57011]: mgrmap e25: vm10.byqahe(active, since 9s), standbys: vm07.xjrvch 2026-03-09T20:47:39.297 INFO:tasks.workunit.client.1.vm10.stdout:4/391: rename d1/d8/d1b/c22 to d1/d2/d3/d70/d78/c7e 0 2026-03-09T20:47:39.308 INFO:tasks.workunit.client.0.vm07.stdout:0/569: truncate d1/d2/f14 2363633 0 2026-03-09T20:47:39.309 INFO:tasks.workunit.client.0.vm07.stdout:0/570: read - d1/d1f/d30/f7a zero size 2026-03-09T20:47:39.313 INFO:tasks.workunit.client.1.vm10.stdout:8/461: symlink d0/d22/d2f/d38/l90 0 2026-03-09T20:47:39.315 INFO:tasks.workunit.client.0.vm07.stdout:6/495: symlink d8/d16/d22/d24/l9d 0 2026-03-09T20:47:39.316 INFO:tasks.workunit.client.1.vm10.stdout:9/473: dwrite d2/d33/f77 [0,4194304] 0 2026-03-09T20:47:39.316 INFO:tasks.workunit.client.1.vm10.stdout:7/429: mknod db/d21/d26/d72/c83 0 2026-03-09T20:47:39.318 INFO:tasks.workunit.client.1.vm10.stdout:9/474: chown d2/f6 76 1 2026-03-09T20:47:39.320 INFO:tasks.workunit.client.0.vm07.stdout:4/433: dwrite d2/d55/f71 [0,4194304] 0 2026-03-09T20:47:39.346 INFO:tasks.workunit.client.1.vm10.stdout:0/407: write d2/d9/f20 [2684871,4771] 0 2026-03-09T20:47:39.350 INFO:tasks.workunit.client.1.vm10.stdout:0/408: dwrite d2/d9/f20 [0,4194304] 0 2026-03-09T20:47:39.350 INFO:tasks.workunit.client.1.vm10.stdout:0/409: chown d2/d9/f12 88 1 2026-03-09T20:47:39.351 INFO:tasks.workunit.client.1.vm10.stdout:0/410: chown d2/f39 0 1 2026-03-09T20:47:39.354 INFO:tasks.workunit.client.1.vm10.stdout:5/410: dread d2/d1b/f5c 
[0,4194304] 0 2026-03-09T20:47:39.356 INFO:tasks.workunit.client.1.vm10.stdout:3/418: truncate dc/d14/d20/d21/f50 1869008 0 2026-03-09T20:47:39.359 INFO:tasks.workunit.client.1.vm10.stdout:3/419: dwrite dc/d14/d20/d21/d3b/f4f [0,4194304] 0 2026-03-09T20:47:39.362 INFO:tasks.workunit.client.1.vm10.stdout:2/441: getdents d5 0 2026-03-09T20:47:39.363 INFO:tasks.workunit.client.1.vm10.stdout:2/442: chown d5/d2b/d32/d80/f5d 4227 1 2026-03-09T20:47:39.364 INFO:tasks.workunit.client.1.vm10.stdout:2/443: fdatasync d5/d18/d1b/f26 0 2026-03-09T20:47:39.366 INFO:tasks.workunit.client.0.vm07.stdout:1/530: dread - d3/d23/d55/d56/d60/f53 zero size 2026-03-09T20:47:39.376 INFO:tasks.workunit.client.0.vm07.stdout:5/570: creat d5/df/d13/d3e/d5e/fc6 x:0 0 0 2026-03-09T20:47:39.376 INFO:tasks.workunit.client.0.vm07.stdout:5/571: truncate d5/df/d13/d3e/d5e/fc6 822804 0 2026-03-09T20:47:39.384 INFO:tasks.workunit.client.0.vm07.stdout:7/554: creat d3/d58/d82/d90/fbe x:0 0 0 2026-03-09T20:47:39.422 INFO:tasks.workunit.client.0.vm07.stdout:4/434: creat d2/df/d17/f73 x:0 0 0 2026-03-09T20:47:39.422 INFO:tasks.workunit.client.0.vm07.stdout:4/435: readlink d2/df/l32 0 2026-03-09T20:47:39.424 INFO:tasks.workunit.client.1.vm10.stdout:7/430: mkdir db/d28/d2b/d36/d63/d84 0 2026-03-09T20:47:39.430 INFO:tasks.workunit.client.0.vm07.stdout:8/448: mknod d1/d5d/c90 0 2026-03-09T20:47:39.433 INFO:tasks.workunit.client.1.vm10.stdout:9/475: mkdir d2/d28/d47/d50/dab 0 2026-03-09T20:47:39.436 INFO:tasks.workunit.client.1.vm10.stdout:6/452: dwrite d3/da/f76 [0,4194304] 0 2026-03-09T20:47:39.439 INFO:tasks.workunit.client.1.vm10.stdout:0/411: rename d2/d4a/d79/d1a to d2/d4a/d58/d82/d71/d8e 0 2026-03-09T20:47:39.441 INFO:tasks.workunit.client.0.vm07.stdout:7/555: creat d3/da/db/d32/d3e/d5c/fbf x:0 0 0 2026-03-09T20:47:39.447 INFO:tasks.workunit.client.1.vm10.stdout:3/420: mknod dc/d14/d22/d7f/d69/d75/c83 0 2026-03-09T20:47:39.458 INFO:tasks.workunit.client.0.vm07.stdout:2/508: rename d2/d11/d56/l92 to d2/l9f 
0 2026-03-09T20:47:39.460 INFO:tasks.workunit.client.1.vm10.stdout:1/427: unlink d2/da/c2d 0 2026-03-09T20:47:39.465 INFO:tasks.workunit.client.0.vm07.stdout:5/572: mkdir d5/d19/d73/dbc/dc7 0 2026-03-09T20:47:39.466 INFO:tasks.workunit.client.1.vm10.stdout:8/462: getdents d0/d22/d25/d40/d86 0 2026-03-09T20:47:39.469 INFO:tasks.workunit.client.0.vm07.stdout:7/556: creat d3/da/db/d32/d3e/dac/d1f/d2b/d52/fc0 x:0 0 0 2026-03-09T20:47:39.470 INFO:tasks.workunit.client.0.vm07.stdout:7/557: truncate d3/da/db/d79/f98 1054473 0 2026-03-09T20:47:39.471 INFO:tasks.workunit.client.1.vm10.stdout:7/431: creat db/d46/f85 x:0 0 0 2026-03-09T20:47:39.474 INFO:tasks.workunit.client.0.vm07.stdout:9/462: getdents d4/d16/d29/d24/d37 0 2026-03-09T20:47:39.475 INFO:tasks.workunit.client.0.vm07.stdout:1/531: write d3/d14/d54/d3e/f75 [2671827,81733] 0 2026-03-09T20:47:39.475 INFO:tasks.workunit.client.0.vm07.stdout:0/571: creat d1/fb3 x:0 0 0 2026-03-09T20:47:39.480 INFO:tasks.workunit.client.1.vm10.stdout:2/444: write d5/fa [93199,108448] 0 2026-03-09T20:47:39.483 INFO:tasks.workunit.client.1.vm10.stdout:4/392: truncate d1/d8/d1c/f52 1378430 0 2026-03-09T20:47:39.485 INFO:tasks.workunit.client.0.vm07.stdout:8/449: dwrite d1/dc/d16/d31/f47 [0,4194304] 0 2026-03-09T20:47:39.486 INFO:tasks.workunit.client.1.vm10.stdout:4/393: dread d1/d2/d5c/f53 [0,4194304] 0 2026-03-09T20:47:39.487 INFO:tasks.workunit.client.1.vm10.stdout:6/453: creat d3/d30/f91 x:0 0 0 2026-03-09T20:47:39.489 INFO:tasks.workunit.client.1.vm10.stdout:9/476: rename d2/d28/d47/d50/f5b to d2/d12/d5a/fac 0 2026-03-09T20:47:39.490 INFO:tasks.workunit.client.1.vm10.stdout:9/477: read - d2/d33/f3c zero size 2026-03-09T20:47:39.490 INFO:tasks.workunit.client.1.vm10.stdout:9/478: chown d2/d3/d85 309749 1 2026-03-09T20:47:39.502 INFO:tasks.workunit.client.0.vm07.stdout:3/488: rename d1/d5/d9/d2f/d34/l44 to d1/d5/d9/d2f/d34/l9d 0 2026-03-09T20:47:39.502 INFO:tasks.workunit.client.0.vm07.stdout:6/496: rename d8/d26/d2a/f37 to 
d8/d26/d2a/d40/d69/f9e 0 2026-03-09T20:47:39.505 INFO:tasks.workunit.client.0.vm07.stdout:7/558: read d3/da/f3b [93392,10117] 0 2026-03-09T20:47:39.506 INFO:tasks.workunit.client.0.vm07.stdout:0/572: creat d1/d1f/d9f/fb4 x:0 0 0 2026-03-09T20:47:39.510 INFO:tasks.workunit.client.0.vm07.stdout:1/532: creat d3/d23/d55/d56/d60/fb3 x:0 0 0 2026-03-09T20:47:39.514 INFO:tasks.workunit.client.0.vm07.stdout:8/450: mkdir d1/d5d/d6f/d2f/d4d/d63/d91 0 2026-03-09T20:47:39.519 INFO:tasks.workunit.client.0.vm07.stdout:9/463: sync 2026-03-09T20:47:39.521 INFO:tasks.workunit.client.0.vm07.stdout:9/464: chown d4/d16/d29/d24/f8c 13033828 1 2026-03-09T20:47:39.522 INFO:tasks.workunit.client.0.vm07.stdout:9/465: write d4/d8/d19/f28 [5027137,43536] 0 2026-03-09T20:47:39.525 INFO:tasks.workunit.client.1.vm10.stdout:7/432: dwrite db/d28/f31 [0,4194304] 0 2026-03-09T20:47:39.535 INFO:tasks.workunit.client.1.vm10.stdout:8/463: dread d0/d22/d25/f2b [0,4194304] 0 2026-03-09T20:47:39.539 INFO:tasks.workunit.client.1.vm10.stdout:2/445: truncate d5/d18/f63 900469 0 2026-03-09T20:47:39.541 INFO:tasks.workunit.client.1.vm10.stdout:8/464: dwrite d0/d22/d25/d6c/f82 [0,4194304] 0 2026-03-09T20:47:39.542 INFO:tasks.workunit.client.1.vm10.stdout:0/412: write d2/d4a/d58/d82/d71/d5d/f67 [377003,43277] 0 2026-03-09T20:47:39.543 INFO:tasks.workunit.client.0.vm07.stdout:8/451: dread d1/d5d/d6f/f30 [0,4194304] 0 2026-03-09T20:47:39.545 INFO:tasks.workunit.client.1.vm10.stdout:0/413: chown d2/d4a/d58/d82/d71/c74 1207996 1 2026-03-09T20:47:39.549 INFO:tasks.workunit.client.0.vm07.stdout:2/509: rename d2/db/d1c/l25 to d2/db/d49/d7d/la0 0 2026-03-09T20:47:39.558 INFO:tasks.workunit.client.0.vm07.stdout:6/497: creat d8/d26/d2a/d40/d69/d4f/f9f x:0 0 0 2026-03-09T20:47:39.558 INFO:tasks.workunit.client.1.vm10.stdout:4/394: creat d1/d2/d3/d54/f7f x:0 0 0 2026-03-09T20:47:39.562 INFO:tasks.workunit.client.0.vm07.stdout:3/489: dread d1/d5/d9/d2f/d3d/d64/f63 [0,4194304] 0 2026-03-09T20:47:39.563 
INFO:tasks.workunit.client.0.vm07.stdout:3/490: read - d1/d5/d9/d2f/d3d/d64/d43/f90 zero size 2026-03-09T20:47:39.563 INFO:tasks.workunit.client.0.vm07.stdout:3/491: chown d1/d5/d9/d2f/d66 0 1 2026-03-09T20:47:39.564 INFO:tasks.workunit.client.0.vm07.stdout:3/492: fsync d1/d5/d9/fe 0 2026-03-09T20:47:39.565 INFO:tasks.workunit.client.0.vm07.stdout:1/533: dread - d3/d23/d55/d56/d90/f93 zero size 2026-03-09T20:47:39.567 INFO:tasks.workunit.client.0.vm07.stdout:4/436: getdents d2/d55/d5d/d3f/d4a/d4b/d52/d5c 0 2026-03-09T20:47:39.568 INFO:tasks.workunit.client.0.vm07.stdout:4/437: read d2/df/f49 [1441980,61172] 0 2026-03-09T20:47:39.573 INFO:tasks.workunit.client.1.vm10.stdout:3/421: creat dc/d14/d22/d4a/f84 x:0 0 0 2026-03-09T20:47:39.576 INFO:tasks.workunit.client.1.vm10.stdout:3/422: read dc/d14/d20/d2e/d56/f23 [1179809,32866] 0 2026-03-09T20:47:39.578 INFO:tasks.workunit.client.0.vm07.stdout:5/573: rename d5/d33/d39/l44 to d5/d33/d3b/lc8 0 2026-03-09T20:47:39.582 INFO:tasks.workunit.client.1.vm10.stdout:7/433: dread db/d28/d2b/d36/d40/f44 [0,4194304] 0 2026-03-09T20:47:39.583 INFO:tasks.workunit.client.1.vm10.stdout:7/434: fsync db/d46/f85 0 2026-03-09T20:47:39.589 INFO:tasks.workunit.client.0.vm07.stdout:2/510: creat d2/d46/d72/fa1 x:0 0 0 2026-03-09T20:47:39.600 INFO:tasks.workunit.client.0.vm07.stdout:3/493: truncate d1/f36 101841 0 2026-03-09T20:47:39.601 INFO:tasks.workunit.client.1.vm10.stdout:4/395: rmdir d1/d8/d39 39 2026-03-09T20:47:39.602 INFO:tasks.workunit.client.1.vm10.stdout:4/396: chown d1/d8/c13 211 1 2026-03-09T20:47:39.604 INFO:tasks.workunit.client.1.vm10.stdout:6/454: symlink d3/d30/d7f/l92 0 2026-03-09T20:47:39.609 INFO:tasks.workunit.client.1.vm10.stdout:1/428: write d2/da/d25/f2e [741513,130772] 0 2026-03-09T20:47:39.610 INFO:tasks.workunit.client.1.vm10.stdout:1/429: write d2/da/d25/f78 [1865661,84129] 0 2026-03-09T20:47:39.612 INFO:tasks.workunit.client.1.vm10.stdout:2/446: write d5/d2b/d32/d80/f3e [258910,109497] 0 2026-03-09T20:47:39.615 
INFO:tasks.workunit.client.1.vm10.stdout:0/414: dwrite d2/d9/da/d35/f3a [4194304,4194304] 0 2026-03-09T20:47:39.616 INFO:tasks.workunit.client.0.vm07.stdout:6/498: write d8/d26/d2a/d40/d69/f78 [99984,1130] 0 2026-03-09T20:47:39.616 INFO:tasks.workunit.client.0.vm07.stdout:7/559: write d3/da/f38 [3736517,55015] 0 2026-03-09T20:47:39.617 INFO:tasks.workunit.client.0.vm07.stdout:4/438: creat d2/d55/d5d/d3f/d4a/d4b/f74 x:0 0 0 2026-03-09T20:47:39.617 INFO:tasks.workunit.client.0.vm07.stdout:1/534: write d3/d14/d54/d3e/f72 [466672,108660] 0 2026-03-09T20:47:39.621 INFO:tasks.workunit.client.0.vm07.stdout:7/560: readlink d3/da/db/d32/d3e/dac/d1f/l71 0 2026-03-09T20:47:39.622 INFO:tasks.workunit.client.0.vm07.stdout:7/561: chown l1 343 1 2026-03-09T20:47:39.628 INFO:tasks.workunit.client.1.vm10.stdout:5/411: link d2/d1b/l29 d2/d27/d75/l9f 0 2026-03-09T20:47:39.629 INFO:tasks.workunit.client.1.vm10.stdout:3/423: symlink dc/d14/d22/l85 0 2026-03-09T20:47:39.630 INFO:tasks.workunit.client.0.vm07.stdout:7/562: dread d3/da/fb4 [0,4194304] 0 2026-03-09T20:47:39.636 INFO:tasks.workunit.client.0.vm07.stdout:7/563: dread d3/da/d83/fb6 [0,4194304] 0 2026-03-09T20:47:39.639 INFO:tasks.workunit.client.0.vm07.stdout:2/511: dread - d2/d11/d56/f8a zero size 2026-03-09T20:47:39.639 INFO:tasks.workunit.client.1.vm10.stdout:8/465: mkdir d0/d22/d25/d40/d86/d91 0 2026-03-09T20:47:39.641 INFO:tasks.workunit.client.1.vm10.stdout:6/455: rename d3/d79/c83 to d3/d30/d7f/d36/d5c/c93 0 2026-03-09T20:47:39.642 INFO:tasks.workunit.client.0.vm07.stdout:1/535: symlink d3/d14/d54/d6e/lb4 0 2026-03-09T20:47:39.647 INFO:tasks.workunit.client.0.vm07.stdout:8/452: rmdir d1/dc/d16/d26/d71 0 2026-03-09T20:47:39.648 INFO:tasks.workunit.client.0.vm07.stdout:5/574: rename d5/df/d13/d30/f64 to d5/df/d13/d6c/fc9 0 2026-03-09T20:47:39.650 INFO:tasks.workunit.client.1.vm10.stdout:3/424: mknod dc/d14/d22/d7f/c86 0 2026-03-09T20:47:39.659 INFO:tasks.workunit.client.1.vm10.stdout:1/430: write d2/da/f26 [9019787,88045] 
0 2026-03-09T20:47:39.660 INFO:tasks.workunit.client.1.vm10.stdout:1/431: chown d2/da/d25/f48 106339812 1 2026-03-09T20:47:39.663 INFO:tasks.workunit.client.1.vm10.stdout:7/435: unlink db/d28/d2b/c58 0 2026-03-09T20:47:39.664 INFO:tasks.workunit.client.0.vm07.stdout:5/575: sync 2026-03-09T20:47:39.665 INFO:tasks.workunit.client.0.vm07.stdout:3/494: dwrite d1/d5/d9/d11/d1f/f72 [0,4194304] 0 2026-03-09T20:47:39.684 INFO:tasks.workunit.client.0.vm07.stdout:6/499: mkdir d8/d16/d22/d24/da0 0 2026-03-09T20:47:39.684 INFO:tasks.workunit.client.0.vm07.stdout:7/564: write d3/da/db/d32/d3e/dac/d1f/f5d [469622,30114] 0 2026-03-09T20:47:39.684 INFO:tasks.workunit.client.1.vm10.stdout:8/466: chown d0/f13 156 1 2026-03-09T20:47:39.684 INFO:tasks.workunit.client.0.vm07.stdout:9/466: getdents d4/d8 0 2026-03-09T20:47:39.684 INFO:tasks.workunit.client.0.vm07.stdout:8/453: symlink d1/d5d/d6f/d2f/d4d/d63/l92 0 2026-03-09T20:47:39.684 INFO:tasks.workunit.client.0.vm07.stdout:1/536: creat d3/d23/d55/d56/d60/fb5 x:0 0 0 2026-03-09T20:47:39.684 INFO:tasks.workunit.client.1.vm10.stdout:9/479: getdents d2 0 2026-03-09T20:47:39.685 INFO:tasks.workunit.client.1.vm10.stdout:0/415: mknod d2/d4a/d79/c8f 0 2026-03-09T20:47:39.686 INFO:tasks.workunit.client.1.vm10.stdout:6/456: read d3/d30/d7f/f16 [275834,50766] 0 2026-03-09T20:47:39.690 INFO:tasks.workunit.client.0.vm07.stdout:0/573: link d1/d2/d33/f7e d1/d2/d33/fb5 0 2026-03-09T20:47:39.695 INFO:tasks.workunit.client.0.vm07.stdout:4/439: creat d2/df/f75 x:0 0 0 2026-03-09T20:47:39.697 INFO:tasks.workunit.client.0.vm07.stdout:2/512: creat d2/db/d1c/d8d/fa2 x:0 0 0 2026-03-09T20:47:39.705 INFO:tasks.workunit.client.1.vm10.stdout:4/397: truncate d1/d8/d1c/d2b/f7a 3173330 0 2026-03-09T20:47:39.705 INFO:tasks.workunit.client.1.vm10.stdout:4/398: write d1/d2/f43 [2687032,23230] 0 2026-03-09T20:47:39.712 INFO:tasks.workunit.client.1.vm10.stdout:5/412: truncate d2/d27/d37/d46/d5d/d5f/d84/f8b 375100 0 2026-03-09T20:47:39.715 
INFO:tasks.workunit.client.1.vm10.stdout:3/425: dread dc/d14/d20/d21/f41 [0,4194304] 0 2026-03-09T20:47:39.716 INFO:tasks.workunit.client.0.vm07.stdout:9/467: dread - d4/d8/dc/d4e/f8f zero size 2026-03-09T20:47:39.719 INFO:tasks.workunit.client.0.vm07.stdout:0/574: readlink d1/d2/d33/d35/l8b 0 2026-03-09T20:47:39.724 INFO:tasks.workunit.client.0.vm07.stdout:6/500: mkdir d8/d5d/d97/da1 0 2026-03-09T20:47:39.725 INFO:tasks.workunit.client.0.vm07.stdout:6/501: readlink d8/l80 0 2026-03-09T20:47:39.726 INFO:tasks.workunit.client.0.vm07.stdout:9/468: unlink d4/d16/d29/f7d 0 2026-03-09T20:47:39.748 INFO:tasks.workunit.client.0.vm07.stdout:0/575: dread - d1/d1f/d53/d72/f95 zero size 2026-03-09T20:47:39.748 INFO:tasks.workunit.client.0.vm07.stdout:3/495: dread d1/d5/d9/d11/f26 [0,4194304] 0 2026-03-09T20:47:39.748 INFO:tasks.workunit.client.0.vm07.stdout:3/496: mkdir d1/d5/d9/d2f/d34/d9e 0 2026-03-09T20:47:39.748 INFO:tasks.workunit.client.0.vm07.stdout:9/469: fsync d4/d16/f27 0 2026-03-09T20:47:39.748 INFO:tasks.workunit.client.0.vm07.stdout:9/470: dread d4/d8/dc/d15/f57 [0,4194304] 0 2026-03-09T20:47:39.748 INFO:tasks.workunit.client.0.vm07.stdout:3/497: rename d1/d5/d9/d11/d6d/d80/d8c to d1/d5/d9/d2f/d3d/d64/d43/d54/d9f 0 2026-03-09T20:47:39.748 INFO:tasks.workunit.client.1.vm10.stdout:8/467: unlink d0/d22/d25/d40/f5f 0 2026-03-09T20:47:39.748 INFO:tasks.workunit.client.1.vm10.stdout:9/480: truncate d2/d33/f7d 623430 0 2026-03-09T20:47:39.749 INFO:tasks.workunit.client.1.vm10.stdout:9/481: write d2/d33/f77 [1641191,112318] 0 2026-03-09T20:47:39.749 INFO:tasks.workunit.client.1.vm10.stdout:6/457: rename d3/da/d11/d31/d4c/f7b to d3/d30/d7f/d51/f94 0 2026-03-09T20:47:39.749 INFO:tasks.workunit.client.1.vm10.stdout:1/432: mkdir d2/d89 0 2026-03-09T20:47:39.749 INFO:tasks.workunit.client.1.vm10.stdout:7/436: mkdir db/d28/d86 0 2026-03-09T20:47:39.749 INFO:tasks.workunit.client.1.vm10.stdout:3/426: readlink dc/d14/d26/d37/l4d 0 2026-03-09T20:47:39.749 
INFO:tasks.workunit.client.0.vm07.stdout:9/471: fsync d4/d16/d78/f92 0 2026-03-09T20:47:39.750 INFO:tasks.workunit.client.1.vm10.stdout:2/447: getdents d5/d5b 0 2026-03-09T20:47:39.750 INFO:tasks.workunit.client.0.vm07.stdout:3/498: chown d1/d5/d9/d2f/d3d/d64/f55 13771 1 2026-03-09T20:47:39.751 INFO:tasks.workunit.client.1.vm10.stdout:0/416: rmdir d2/d9/d2a 39 2026-03-09T20:47:39.756 INFO:tasks.workunit.client.1.vm10.stdout:9/482: mkdir d2/d12/dad 0 2026-03-09T20:47:39.757 INFO:tasks.workunit.client.1.vm10.stdout:6/458: symlink d3/d30/d7f/d36/d5c/d8d/l95 0 2026-03-09T20:47:39.761 INFO:tasks.workunit.client.1.vm10.stdout:4/399: mknod d1/d2/d5c/d64/d71/c80 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:1/433: dread d2/f1c [0,4194304] 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:8/468: mkdir d0/d92 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:0/417: creat d2/d4a/d58/d82/d71/d8e/d25/d34/f90 x:0 0 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:4/400: mkdir d1/d2/d5c/d64/d6b/d81 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:4/401: chown d1/l76 1 1 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:6/459: truncate d3/da/d11/d31/f81 1126919 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:3/427: creat dc/f87 x:0 0 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:3/428: dread dc/f58 [4194304,4194304] 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:6/460: fdatasync d3/da/d11/d26/f8f 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:1/434: rename d2/l24 to d2/l8a 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:2/448: link d5/d18/f1f d5/d18/f90 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:3/429: fdatasync dc/d14/d20/d21/d3b/f6d 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:0/418: rename d2/c57 to d2/d9/c91 0 
2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:0/419: chown d2/d4a/d58/d82/d71/l1c 1516375 1 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:6/461: creat d3/f96 x:0 0 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:1/435: mknod d2/da/d25/d3e/d42/c8b 0 2026-03-09T20:47:39.799 INFO:tasks.workunit.client.1.vm10.stdout:8/469: getdents d0/d22/d25/d40/d86 0 2026-03-09T20:47:39.801 INFO:tasks.workunit.client.1.vm10.stdout:2/449: mkdir d5/d2b/d32/d91 0 2026-03-09T20:47:39.803 INFO:tasks.workunit.client.1.vm10.stdout:4/402: rename d1/d8/d1b/f42 to d1/d67/f82 0 2026-03-09T20:47:39.804 INFO:tasks.workunit.client.1.vm10.stdout:4/403: truncate d1/d2/d3/d54/f7f 467877 0 2026-03-09T20:47:39.805 INFO:tasks.workunit.client.1.vm10.stdout:0/420: fsync d2/d9/da/d11/f42 0 2026-03-09T20:47:39.806 INFO:tasks.workunit.client.1.vm10.stdout:1/436: mkdir d2/da/d25/d46/d8c 0 2026-03-09T20:47:39.810 INFO:tasks.workunit.client.1.vm10.stdout:4/404: read d1/d2/f2a [396810,86909] 0 2026-03-09T20:47:39.811 INFO:tasks.workunit.client.1.vm10.stdout:4/405: fdatasync d1/d2/d5c/d64/f51 0 2026-03-09T20:47:39.811 INFO:tasks.workunit.client.1.vm10.stdout:0/421: mkdir d2/d9/da/d11/d92 0 2026-03-09T20:47:39.814 INFO:tasks.workunit.client.1.vm10.stdout:8/470: truncate d0/f17 4362371 0 2026-03-09T20:47:39.816 INFO:tasks.workunit.client.1.vm10.stdout:4/406: creat d1/d2/d5c/d64/f83 x:0 0 0 2026-03-09T20:47:39.817 INFO:tasks.workunit.client.1.vm10.stdout:4/407: chown d1/d2/f43 970635 1 2026-03-09T20:47:39.817 INFO:tasks.workunit.client.1.vm10.stdout:0/422: dwrite d2/d9/da/d35/f3a [4194304,4194304] 0 2026-03-09T20:47:39.817 INFO:tasks.workunit.client.1.vm10.stdout:1/437: dread d2/da/d25/d3e/d42/f63 [0,4194304] 0 2026-03-09T20:47:39.820 INFO:tasks.workunit.client.1.vm10.stdout:0/423: readlink d2/d4a/d58/d82/d71/d5d/l6b 0 2026-03-09T20:47:39.824 INFO:tasks.workunit.client.1.vm10.stdout:4/408: write d1/d8/d1c/f3e [3974365,25296] 0 2026-03-09T20:47:39.825 
INFO:tasks.workunit.client.1.vm10.stdout:0/424: truncate d2/d4a/d58/d82/d71/d5d/f5f 498591 0 2026-03-09T20:47:39.826 INFO:tasks.workunit.client.1.vm10.stdout:0/425: write d2/d4a/d58/d82/d71/d5d/f67 [1282069,6515] 0 2026-03-09T20:47:39.828 INFO:tasks.workunit.client.0.vm07.stdout:0/576: sync 2026-03-09T20:47:39.831 INFO:tasks.workunit.client.1.vm10.stdout:4/409: truncate d1/d8/d39/f4b 4677307 0 2026-03-09T20:47:39.836 INFO:tasks.workunit.client.1.vm10.stdout:4/410: unlink d1/d8/d1c/f3e 0 2026-03-09T20:47:39.858 INFO:tasks.workunit.client.0.vm07.stdout:8/454: write d1/fb [761106,65739] 0 2026-03-09T20:47:39.864 INFO:tasks.workunit.client.0.vm07.stdout:8/455: fdatasync d1/d5d/d6f/d2f/d53/d76/f7f 0 2026-03-09T20:47:39.864 INFO:tasks.workunit.client.0.vm07.stdout:2/513: write d2/d46/d6e/f7a [941613,37402] 0 2026-03-09T20:47:39.876 INFO:tasks.workunit.client.0.vm07.stdout:4/440: dwrite d2/f28 [0,4194304] 0 2026-03-09T20:47:39.876 INFO:tasks.workunit.client.0.vm07.stdout:5/576: dwrite d5/df/faa [0,4194304] 0 2026-03-09T20:47:39.881 INFO:tasks.workunit.client.0.vm07.stdout:7/565: dwrite d3/da/db/d32/d3e/f65 [0,4194304] 0 2026-03-09T20:47:39.892 INFO:tasks.workunit.client.0.vm07.stdout:1/537: dwrite d3/d14/f25 [0,4194304] 0 2026-03-09T20:47:39.909 INFO:tasks.workunit.client.0.vm07.stdout:2/514: creat d2/db/d28/d90/fa3 x:0 0 0 2026-03-09T20:47:39.918 INFO:tasks.workunit.client.0.vm07.stdout:6/502: write d8/d26/f3d [1148693,11338] 0 2026-03-09T20:47:39.922 INFO:tasks.workunit.client.0.vm07.stdout:6/503: readlink d8/d26/d2a/d40/d69/l56 0 2026-03-09T20:47:39.927 INFO:tasks.workunit.client.0.vm07.stdout:4/441: truncate d2/d1f/f25 4445883 0 2026-03-09T20:47:39.929 INFO:tasks.workunit.client.0.vm07.stdout:4/442: truncate d2/df/d17/f73 1040266 0 2026-03-09T20:47:39.937 INFO:tasks.workunit.client.0.vm07.stdout:1/538: symlink d3/d14/d54/d9b/lb6 0 2026-03-09T20:47:39.942 INFO:tasks.workunit.client.0.vm07.stdout:2/515: mkdir d2/db/d28/d90/da4 0 2026-03-09T20:47:39.942 
INFO:tasks.workunit.client.0.vm07.stdout:2/516: stat d2/db/d28/d57/f75 0 2026-03-09T20:47:39.943 INFO:tasks.workunit.client.0.vm07.stdout:4/443: dwrite d2/d55/d5d/f47 [0,4194304] 0 2026-03-09T20:47:39.953 INFO:tasks.workunit.client.1.vm10.stdout:9/483: dread d2/d33/f7d [0,4194304] 0 2026-03-09T20:47:39.954 INFO:tasks.workunit.client.1.vm10.stdout:9/484: write d2/d33/f77 [2450409,63996] 0 2026-03-09T20:47:39.966 INFO:tasks.workunit.client.0.vm07.stdout:1/539: truncate d3/d14/d54/f32 383984 0 2026-03-09T20:47:39.966 INFO:tasks.workunit.client.0.vm07.stdout:2/517: stat d2/c37 0 2026-03-09T20:47:39.966 INFO:tasks.workunit.client.0.vm07.stdout:2/518: read d2/d46/d6e/f7a [577753,28231] 0 2026-03-09T20:47:39.968 INFO:tasks.workunit.client.0.vm07.stdout:9/472: dwrite f2 [4194304,4194304] 0 2026-03-09T20:47:39.980 INFO:tasks.workunit.client.0.vm07.stdout:6/504: dread d8/d26/d2a/d40/f65 [0,4194304] 0 2026-03-09T20:47:39.988 INFO:tasks.workunit.client.1.vm10.stdout:7/437: write db/d28/d2b/d36/f55 [1444675,12248] 0 2026-03-09T20:47:39.991 INFO:tasks.workunit.client.0.vm07.stdout:3/499: truncate d1/d5/d9/f3c 3886585 0 2026-03-09T20:47:39.991 INFO:tasks.workunit.client.1.vm10.stdout:5/413: dwrite d2/d1b/d54/d78/f47 [0,4194304] 0 2026-03-09T20:47:40.006 INFO:tasks.workunit.client.1.vm10.stdout:7/438: mkdir db/d21/d60/d87 0 2026-03-09T20:47:40.010 INFO:tasks.workunit.client.1.vm10.stdout:9/485: read d2/d28/d47/f58 [311692,129783] 0 2026-03-09T20:47:40.013 INFO:tasks.workunit.client.1.vm10.stdout:7/439: rename db/d54 to db/d28/d2b/d36/d3b/d88 0 2026-03-09T20:47:40.015 INFO:tasks.workunit.client.1.vm10.stdout:1/438: rmdir d2/da 39 2026-03-09T20:47:40.019 INFO:tasks.workunit.client.1.vm10.stdout:3/430: dwrite dc/d14/d27/f3f [0,4194304] 0 2026-03-09T20:47:40.019 INFO:tasks.workunit.client.1.vm10.stdout:2/450: write d5/d2b/d32/f5c [364793,124750] 0 2026-03-09T20:47:40.021 INFO:tasks.workunit.client.0.vm07.stdout:6/505: sync 2026-03-09T20:47:40.050 
INFO:tasks.workunit.client.1.vm10.stdout:8/471: write d0/d22/d25/d2e/f33 [3821429,120795] 0 2026-03-09T20:47:40.051 INFO:tasks.workunit.client.0.vm07.stdout:0/577: write d1/d1f/d30/f8e [923483,107083] 0 2026-03-09T20:47:40.053 INFO:tasks.workunit.client.1.vm10.stdout:1/439: write d2/da/d25/d46/d51/d5d/d6e/d70/f79 [3295952,72203] 0 2026-03-09T20:47:40.059 INFO:tasks.workunit.client.0.vm07.stdout:4/444: creat d2/d55/d5d/d3f/d4a/d4b/d52/d5c/f76 x:0 0 0 2026-03-09T20:47:40.060 INFO:tasks.workunit.client.1.vm10.stdout:0/426: dwrite d2/d9/da/d11/f1f [0,4194304] 0 2026-03-09T20:47:40.061 INFO:tasks.workunit.client.0.vm07.stdout:6/506: mknod d8/d16/d61/ca2 0 2026-03-09T20:47:40.065 INFO:tasks.workunit.client.1.vm10.stdout:4/411: dwrite d1/d2/f2e [0,4194304] 0 2026-03-09T20:47:40.066 INFO:tasks.workunit.client.0.vm07.stdout:9/473: symlink d4/d8/d19/d89/da7/laa 0 2026-03-09T20:47:40.067 INFO:tasks.workunit.client.1.vm10.stdout:0/427: dwrite d2/d9/f20 [0,4194304] 0 2026-03-09T20:47:40.077 INFO:tasks.workunit.client.0.vm07.stdout:0/578: dread d1/f31 [0,4194304] 0 2026-03-09T20:47:40.078 INFO:tasks.workunit.client.1.vm10.stdout:4/412: dread d1/d8/d1c/d2b/f36 [0,4194304] 0 2026-03-09T20:47:40.079 INFO:tasks.workunit.client.0.vm07.stdout:7/566: write d3/f18 [844977,4770] 0 2026-03-09T20:47:40.082 INFO:tasks.workunit.client.0.vm07.stdout:8/456: dwrite d1/dc/d16/f6d [0,4194304] 0 2026-03-09T20:47:40.104 INFO:tasks.workunit.client.1.vm10.stdout:6/462: dread - d3/da/d11/d31/d4c/f69 zero size 2026-03-09T20:47:40.104 INFO:tasks.workunit.client.0.vm07.stdout:5/577: write d5/d33/d3b/f63 [4922328,60340] 0 2026-03-09T20:47:40.116 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:39 vm07.local ceph-mon[49120]: pgmap v8: 65 pgs: 65 active+clean; 1.9 GiB data, 7.2 GiB used, 113 GiB / 120 GiB avail; 18 MiB/s rd, 54 MiB/s wr, 127 op/s 2026-03-09T20:47:40.116 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:39 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 
2026-03-09T20:47:40.116 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:39 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:40.116 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:39 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:47:40.116 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:39 vm07.local ceph-mon[49120]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:47:40.116 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:39 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:40.116 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:39 vm07.local ceph-mon[49120]: Upgrade: Need to upgrade myself (mgr.vm10.byqahe) 2026-03-09T20:47:40.116 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:39 vm07.local ceph-mon[49120]: Upgrade: Need to upgrade myself (mgr.vm10.byqahe) 2026-03-09T20:47:40.138 INFO:tasks.workunit.client.0.vm07.stdout:4/445: mknod d2/d55/d5d/d3f/d4a/d4b/d52/c77 0 2026-03-09T20:47:40.138 INFO:tasks.workunit.client.0.vm07.stdout:9/474: creat d4/d16/d29/fab x:0 0 0 2026-03-09T20:47:40.157 INFO:tasks.workunit.client.0.vm07.stdout:2/519: write d2/d11/f51 [787723,77099] 0 2026-03-09T20:47:40.161 INFO:tasks.workunit.client.0.vm07.stdout:3/500: dwrite d1/d5/d9/d2f/d3d/d64/d59/f61 [0,4194304] 0 2026-03-09T20:47:40.162 INFO:tasks.workunit.client.0.vm07.stdout:1/540: dwrite d3/d14/f33 [0,4194304] 0 2026-03-09T20:47:40.167 INFO:tasks.workunit.client.1.vm10.stdout:6/463: dread f1 [0,4194304] 0 2026-03-09T20:47:40.188 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:39 vm10.local ceph-mon[57011]: pgmap v8: 65 pgs: 65 active+clean; 1.9 GiB data, 7.2 GiB used, 113 GiB / 120 GiB avail; 
18 MiB/s rd, 54 MiB/s wr, 127 op/s 2026-03-09T20:47:40.188 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:39 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:40.188 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:39 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:40.188 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:39 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:47:40.188 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:39 vm10.local ceph-mon[57011]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:47:40.188 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:39 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:40.188 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:39 vm10.local ceph-mon[57011]: Upgrade: Need to upgrade myself (mgr.vm10.byqahe) 2026-03-09T20:47:40.188 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:39 vm10.local ceph-mon[57011]: Upgrade: Need to upgrade myself (mgr.vm10.byqahe) 2026-03-09T20:47:40.194 INFO:tasks.workunit.client.1.vm10.stdout:5/414: getdents d2/d39 0 2026-03-09T20:47:40.198 INFO:tasks.workunit.client.1.vm10.stdout:0/428: mkdir d2/d4a/d58/d82/d93 0 2026-03-09T20:47:40.199 INFO:tasks.workunit.client.1.vm10.stdout:7/440: dread db/d28/d4c/f65 [0,4194304] 0 2026-03-09T20:47:40.204 INFO:tasks.workunit.client.1.vm10.stdout:2/451: truncate d5/d2b/f36 1852335 0 2026-03-09T20:47:40.205 INFO:tasks.workunit.client.1.vm10.stdout:1/440: dwrite d2/da/d25/f28 [0,4194304] 0 2026-03-09T20:47:40.208 INFO:tasks.workunit.client.1.vm10.stdout:4/413: rename d1/d2/f12 to d1/d2/f84 0 2026-03-09T20:47:40.224 
INFO:tasks.workunit.client.0.vm07.stdout:7/567: mkdir d3/d58/dc1 0 2026-03-09T20:47:40.224 INFO:tasks.workunit.client.1.vm10.stdout:9/486: link d2/d28/d47/d6a/f7f d2/d33/d37/fae 0 2026-03-09T20:47:40.224 INFO:tasks.workunit.client.1.vm10.stdout:9/487: readlink d2/d12/l23 0 2026-03-09T20:47:40.224 INFO:tasks.workunit.client.1.vm10.stdout:3/431: creat dc/f88 x:0 0 0 2026-03-09T20:47:40.224 INFO:tasks.workunit.client.1.vm10.stdout:2/452: dwrite d5/d18/f67 [0,4194304] 0 2026-03-09T20:47:40.224 INFO:tasks.workunit.client.1.vm10.stdout:3/432: dwrite dc/ff [0,4194304] 0 2026-03-09T20:47:40.225 INFO:tasks.workunit.client.1.vm10.stdout:8/472: link d0/d22/d2f/c56 d0/d22/d25/d2e/c93 0 2026-03-09T20:47:40.225 INFO:tasks.workunit.client.1.vm10.stdout:4/414: creat d1/d2/d5c/d64/d61/f85 x:0 0 0 2026-03-09T20:47:40.228 INFO:tasks.workunit.client.0.vm07.stdout:4/446: symlink d2/df/d59/l78 0 2026-03-09T20:47:40.228 INFO:tasks.workunit.client.0.vm07.stdout:9/475: creat d4/d8/dc/d4e/d54/fac x:0 0 0 2026-03-09T20:47:40.231 INFO:tasks.workunit.client.1.vm10.stdout:5/415: sync 2026-03-09T20:47:40.231 INFO:tasks.workunit.client.1.vm10.stdout:1/441: sync 2026-03-09T20:47:40.246 INFO:tasks.workunit.client.1.vm10.stdout:0/429: creat d2/d9/d4b/d63/f94 x:0 0 0 2026-03-09T20:47:40.247 INFO:tasks.workunit.client.1.vm10.stdout:3/433: rmdir dc/d14/d20/d21 39 2026-03-09T20:47:40.257 INFO:tasks.workunit.client.1.vm10.stdout:2/453: unlink d5/c20 0 2026-03-09T20:47:40.257 INFO:tasks.workunit.client.0.vm07.stdout:1/541: mknod d3/d97/da1/cb7 0 2026-03-09T20:47:40.258 INFO:tasks.workunit.client.1.vm10.stdout:4/415: dread - d1/d2/f60 zero size 2026-03-09T20:47:40.260 INFO:tasks.workunit.client.1.vm10.stdout:7/441: read db/d28/d2b/d36/d3b/f3d [317450,84078] 0 2026-03-09T20:47:40.262 INFO:tasks.workunit.client.1.vm10.stdout:7/442: truncate db/d28/d2b/d36/d3f/f7b 596679 0 2026-03-09T20:47:40.265 INFO:tasks.workunit.client.1.vm10.stdout:4/416: dwrite d1/fe [0,4194304] 0 2026-03-09T20:47:40.269 
INFO:tasks.workunit.client.1.vm10.stdout:6/464: fdatasync d3/d30/d7f/d24/f27 0 2026-03-09T20:47:40.271 INFO:tasks.workunit.client.1.vm10.stdout:6/465: fdatasync d3/da/f15 0 2026-03-09T20:47:40.272 INFO:tasks.workunit.client.1.vm10.stdout:6/466: chown d3/d30/d7f/c1c 90991158 1 2026-03-09T20:47:40.276 INFO:tasks.workunit.client.0.vm07.stdout:7/568: mkdir d3/da/db/d32/d3e/d5c/dc2 0 2026-03-09T20:47:40.283 INFO:tasks.workunit.client.1.vm10.stdout:1/442: creat d2/da/d25/d3e/d42/f8d x:0 0 0 2026-03-09T20:47:40.284 INFO:tasks.workunit.client.1.vm10.stdout:0/430: mknod d2/d9/d69/c95 0 2026-03-09T20:47:40.286 INFO:tasks.workunit.client.0.vm07.stdout:6/507: rename d8/d26/d2a/d40/d69/d4f to d8/d16/da3 0 2026-03-09T20:47:40.293 INFO:tasks.workunit.client.0.vm07.stdout:0/579: dwrite d1/f90 [0,4194304] 0 2026-03-09T20:47:40.293 INFO:tasks.workunit.client.0.vm07.stdout:9/476: fsync d4/d11/d23/f2f 0 2026-03-09T20:47:40.300 INFO:tasks.workunit.client.0.vm07.stdout:8/457: dwrite d1/dc/f75 [0,4194304] 0 2026-03-09T20:47:40.306 INFO:tasks.workunit.client.1.vm10.stdout:8/473: dread d0/d22/d25/f34 [0,4194304] 0 2026-03-09T20:47:40.307 INFO:tasks.workunit.client.0.vm07.stdout:2/520: creat d2/db/d28/d90/da4/fa5 x:0 0 0 2026-03-09T20:47:40.308 INFO:tasks.workunit.client.1.vm10.stdout:8/474: dread d0/d22/d25/f34 [0,4194304] 0 2026-03-09T20:47:40.310 INFO:tasks.workunit.client.1.vm10.stdout:4/417: mkdir d1/d2/d3/d70/d78/d86 0 2026-03-09T20:47:40.312 INFO:tasks.workunit.client.0.vm07.stdout:7/569: rmdir d3 39 2026-03-09T20:47:40.313 INFO:tasks.workunit.client.0.vm07.stdout:6/508: unlink d8/d16/d22/l2d 0 2026-03-09T20:47:40.315 INFO:tasks.workunit.client.0.vm07.stdout:4/447: symlink d2/df/l79 0 2026-03-09T20:47:40.322 INFO:tasks.workunit.client.1.vm10.stdout:6/467: symlink d3/d30/d7f/d36/d5c/d8d/l97 0 2026-03-09T20:47:40.322 INFO:tasks.workunit.client.1.vm10.stdout:5/416: mknod d2/d27/d37/d46/d5d/d5f/d69/d96/ca0 0 2026-03-09T20:47:40.324 INFO:tasks.workunit.client.1.vm10.stdout:0/431: symlink 
d2/d9/d69/l96 0 2026-03-09T20:47:40.324 INFO:tasks.workunit.client.1.vm10.stdout:5/417: truncate d2/d58/d6c/f98 775582 0 2026-03-09T20:47:40.325 INFO:tasks.workunit.client.1.vm10.stdout:3/434: mkdir dc/d14/d26/d29/d2a/d55/d89 0 2026-03-09T20:47:40.330 INFO:tasks.workunit.client.1.vm10.stdout:7/443: fdatasync db/d28/d2b/d36/d3b/f42 0 2026-03-09T20:47:40.334 INFO:tasks.workunit.client.1.vm10.stdout:7/444: dread db/f70 [0,4194304] 0 2026-03-09T20:47:40.334 INFO:tasks.workunit.client.1.vm10.stdout:7/445: chown db/d28/f41 4788060 1 2026-03-09T20:47:40.334 INFO:tasks.workunit.client.1.vm10.stdout:9/488: getdents d2/d3/de 0 2026-03-09T20:47:40.334 INFO:tasks.workunit.client.0.vm07.stdout:4/448: write d2/d55/d5d/d3f/f68 [4059077,8324] 0 2026-03-09T20:47:40.348 INFO:tasks.workunit.client.1.vm10.stdout:4/418: dread d1/d8/d1b/f24 [0,4194304] 0 2026-03-09T20:47:40.349 INFO:tasks.workunit.client.0.vm07.stdout:4/449: dwrite d2/d55/d5d/f6f [0,4194304] 0 2026-03-09T20:47:40.350 INFO:tasks.workunit.client.1.vm10.stdout:3/435: fsync dc/d14/d26/f34 0 2026-03-09T20:47:40.356 INFO:tasks.workunit.client.0.vm07.stdout:9/477: rename d4/d11/f2c to d4/d8/d19/d26/fad 0 2026-03-09T20:47:40.356 INFO:tasks.workunit.client.0.vm07.stdout:7/570: mkdir d3/da/db/d79/dc3 0 2026-03-09T20:47:40.371 INFO:tasks.workunit.client.0.vm07.stdout:0/580: dread d1/f1a [0,4194304] 0 2026-03-09T20:47:40.377 INFO:tasks.workunit.client.0.vm07.stdout:5/578: dwrite d5/d19/f4d [0,4194304] 0 2026-03-09T20:47:40.378 INFO:tasks.workunit.client.0.vm07.stdout:0/581: readlink d1/l1c 0 2026-03-09T20:47:40.395 INFO:tasks.workunit.client.0.vm07.stdout:3/501: dwrite d1/d5/d9/d2f/d3d/d64/f22 [0,4194304] 0 2026-03-09T20:47:40.405 INFO:tasks.workunit.client.1.vm10.stdout:6/468: truncate d3/d30/d7f/d24/f27 2729874 0 2026-03-09T20:47:40.411 INFO:tasks.workunit.client.1.vm10.stdout:1/443: dwrite d2/da/f1e [0,4194304] 0 2026-03-09T20:47:40.414 INFO:tasks.workunit.client.1.vm10.stdout:2/454: dwrite d5/d18/f90 [0,4194304] 0 
2026-03-09T20:47:40.416 INFO:tasks.workunit.client.1.vm10.stdout:6/469: dwrite d3/d30/f91 [0,4194304] 0 2026-03-09T20:47:40.419 INFO:tasks.workunit.client.1.vm10.stdout:4/419: fdatasync d1/d8/d1c/d2b/f36 0 2026-03-09T20:47:40.420 INFO:tasks.workunit.client.0.vm07.stdout:1/542: dwrite d3/d14/d54/d6e/fa9 [4194304,4194304] 0 2026-03-09T20:47:40.420 INFO:tasks.workunit.client.0.vm07.stdout:8/458: dwrite d1/d5d/d6f/f18 [0,4194304] 0 2026-03-09T20:47:40.420 INFO:tasks.workunit.client.1.vm10.stdout:4/420: chown d1/d2/f43 11417 1 2026-03-09T20:47:40.424 INFO:tasks.workunit.client.1.vm10.stdout:0/432: readlink d2/d9/d2a/l64 0 2026-03-09T20:47:40.427 INFO:tasks.workunit.client.0.vm07.stdout:6/509: dwrite d8/d26/d2a/d40/d69/f9e [0,4194304] 0 2026-03-09T20:47:40.428 INFO:tasks.workunit.client.0.vm07.stdout:6/510: write d8/d26/f3d [484297,22163] 0 2026-03-09T20:47:40.429 INFO:tasks.workunit.client.0.vm07.stdout:6/511: chown d8/d26/d2a/d40/d69/f78 863 1 2026-03-09T20:47:40.431 INFO:tasks.workunit.client.1.vm10.stdout:8/475: creat d0/f94 x:0 0 0 2026-03-09T20:47:40.433 INFO:tasks.workunit.client.1.vm10.stdout:9/489: truncate d2/d3/de/d35/f38 671119 0 2026-03-09T20:47:40.435 INFO:tasks.workunit.client.0.vm07.stdout:2/521: link d2/db/l19 d2/d11/la6 0 2026-03-09T20:47:40.436 INFO:tasks.workunit.client.0.vm07.stdout:2/522: readlink d2/d11/l3f 0 2026-03-09T20:47:40.436 INFO:tasks.workunit.client.0.vm07.stdout:7/571: mknod d3/da/db/d32/d3e/d5c/dc2/cc4 0 2026-03-09T20:47:40.440 INFO:tasks.workunit.client.0.vm07.stdout:2/523: write d2/db/d1c/f93 [442637,8960] 0 2026-03-09T20:47:40.449 INFO:tasks.workunit.client.1.vm10.stdout:2/455: creat d5/d2b/d32/f92 x:0 0 0 2026-03-09T20:47:40.449 INFO:tasks.workunit.client.0.vm07.stdout:1/543: rmdir d3/d23/d55/d56 39 2026-03-09T20:47:40.457 INFO:tasks.workunit.client.1.vm10.stdout:1/444: symlink d2/da/d25/d3e/d55/l8e 0 2026-03-09T20:47:40.472 INFO:tasks.workunit.client.0.vm07.stdout:6/512: stat d8/d16/l42 0 2026-03-09T20:47:40.472 
INFO:tasks.workunit.client.0.vm07.stdout:9/478: mknod d4/d11/d2a/d84/cae 0 2026-03-09T20:47:40.473 INFO:tasks.workunit.client.0.vm07.stdout:7/572: unlink d3/da/fb4 0 2026-03-09T20:47:40.476 INFO:tasks.workunit.client.0.vm07.stdout:6/513: read - d8/d26/d2a/f41 zero size 2026-03-09T20:47:40.488 INFO:tasks.workunit.client.0.vm07.stdout:8/459: mkdir d1/d5d/d6f/d2f/d53/d76/d87/d93 0 2026-03-09T20:47:40.490 INFO:tasks.workunit.client.0.vm07.stdout:8/460: dread d1/d5d/d6f/d2f/d4d/f73 [0,4194304] 0 2026-03-09T20:47:40.493 INFO:tasks.workunit.client.0.vm07.stdout:8/461: chown d1/c3d 137884 1 2026-03-09T20:47:40.497 INFO:tasks.workunit.client.0.vm07.stdout:8/462: truncate d1/d5d/d6f/d2f/d4d/d55/f8e 896221 0 2026-03-09T20:47:40.497 INFO:tasks.workunit.client.0.vm07.stdout:1/544: read d3/d14/f4d [354899,74819] 0 2026-03-09T20:47:40.497 INFO:tasks.workunit.client.0.vm07.stdout:1/545: chown d3/d23/d67/l91 56103 1 2026-03-09T20:47:40.500 INFO:tasks.workunit.client.1.vm10.stdout:8/476: unlink d0/d22/d2f/d38/l55 0 2026-03-09T20:47:40.507 INFO:tasks.workunit.client.0.vm07.stdout:2/524: dread d2/f3e [0,4194304] 0 2026-03-09T20:47:40.518 INFO:tasks.workunit.client.0.vm07.stdout:4/450: rename d2/d55/d5d/f47 to d2/d55/d5d/d3f/d4a/d4b/f7a 0 2026-03-09T20:47:40.520 INFO:tasks.workunit.client.0.vm07.stdout:7/573: mkdir d3/da/d83/dc5 0 2026-03-09T20:47:40.521 INFO:tasks.workunit.client.1.vm10.stdout:2/456: truncate d5/d18/d27/d38/f45 1099222 0 2026-03-09T20:47:40.522 INFO:tasks.workunit.client.0.vm07.stdout:9/479: symlink d4/d11/d2a/laf 0 2026-03-09T20:47:40.527 INFO:tasks.workunit.client.1.vm10.stdout:7/446: unlink db/d21/d26/d72/c83 0 2026-03-09T20:47:40.534 INFO:tasks.workunit.client.0.vm07.stdout:1/546: dread - d3/d14/d54/fa2 zero size 2026-03-09T20:47:40.542 INFO:tasks.workunit.client.0.vm07.stdout:6/514: dread d8/d16/d22/d33/f6d [0,4194304] 0 2026-03-09T20:47:40.543 INFO:tasks.workunit.client.0.vm07.stdout:6/515: truncate d8/d16/d4b/f95 584023 0 2026-03-09T20:47:40.544 
INFO:tasks.workunit.client.0.vm07.stdout:6/516: dread d8/d16/d22/d33/f6d [0,4194304] 0 2026-03-09T20:47:40.549 INFO:tasks.workunit.client.0.vm07.stdout:6/517: dwrite d8/d16/d22/d33/f91 [0,4194304] 0 2026-03-09T20:47:40.556 INFO:tasks.workunit.client.0.vm07.stdout:9/480: dread d4/d11/f88 [0,4194304] 0 2026-03-09T20:47:40.558 INFO:tasks.workunit.client.1.vm10.stdout:7/447: mkdir db/d46/d89 0 2026-03-09T20:47:40.564 INFO:tasks.workunit.client.1.vm10.stdout:5/418: truncate d2/f2c 1292949 0 2026-03-09T20:47:40.568 INFO:tasks.workunit.client.0.vm07.stdout:1/547: unlink d3/d23/d55/c61 0 2026-03-09T20:47:40.568 INFO:tasks.workunit.client.1.vm10.stdout:5/419: dwrite d2/d27/d37/d46/d5d/d5f/f61 [4194304,4194304] 0 2026-03-09T20:47:40.575 INFO:tasks.workunit.client.0.vm07.stdout:2/525: mkdir d2/da7 0 2026-03-09T20:47:40.576 INFO:tasks.workunit.client.0.vm07.stdout:0/582: write d1/d1f/d20/f4d [693619,7111] 0 2026-03-09T20:47:40.579 INFO:tasks.workunit.client.1.vm10.stdout:3/436: link dc/d14/d22/d7f/c5f dc/c8a 0 2026-03-09T20:47:40.580 INFO:tasks.workunit.client.1.vm10.stdout:3/437: chown dc/d14/d22/d7f 3494 1 2026-03-09T20:47:40.581 INFO:tasks.workunit.client.0.vm07.stdout:2/526: dwrite d2/db/d1c/f45 [0,4194304] 0 2026-03-09T20:47:40.584 INFO:tasks.workunit.client.1.vm10.stdout:8/477: mkdir d0/d95 0 2026-03-09T20:47:40.585 INFO:tasks.workunit.client.1.vm10.stdout:9/490: rmdir d2/d28/d47/d67/da3 0 2026-03-09T20:47:40.591 INFO:tasks.workunit.client.0.vm07.stdout:3/502: dwrite d1/f36 [0,4194304] 0 2026-03-09T20:47:40.612 INFO:tasks.workunit.client.1.vm10.stdout:4/421: write d1/d2/d5c/d64/d61/f68 [998063,75190] 0 2026-03-09T20:47:40.613 INFO:tasks.workunit.client.0.vm07.stdout:8/463: write d1/f25 [817566,57157] 0 2026-03-09T20:47:40.615 INFO:tasks.workunit.client.1.vm10.stdout:1/445: dwrite d2/da/f11 [0,4194304] 0 2026-03-09T20:47:40.615 INFO:tasks.workunit.client.1.vm10.stdout:2/457: mkdir d5/d2b/d32/d80/d8d/d93 0 2026-03-09T20:47:40.616 
INFO:tasks.workunit.client.1.vm10.stdout:6/470: getdents d3/da/d11/d31/d47 0 2026-03-09T20:47:40.617 INFO:tasks.workunit.client.1.vm10.stdout:1/446: write d2/da/d25/d3e/f44 [1715239,94166] 0 2026-03-09T20:47:40.618 INFO:tasks.workunit.client.0.vm07.stdout:6/518: symlink d8/d16/d61/la4 0 2026-03-09T20:47:40.619 INFO:tasks.workunit.client.1.vm10.stdout:1/447: write d2/da/d25/d3e/d42/f8d [393179,3712] 0 2026-03-09T20:47:40.621 INFO:tasks.workunit.client.1.vm10.stdout:7/448: mkdir db/d28/d2b/d36/d40/d8a 0 2026-03-09T20:47:40.622 INFO:tasks.workunit.client.0.vm07.stdout:9/481: dread - d4/d8/d19/f7e zero size 2026-03-09T20:47:40.622 INFO:tasks.workunit.client.0.vm07.stdout:9/482: dread - d4/d11/f9d zero size 2026-03-09T20:47:40.630 INFO:tasks.workunit.client.0.vm07.stdout:9/483: dwrite d4/d8/d19/d89/f9e [0,4194304] 0 2026-03-09T20:47:40.632 INFO:tasks.workunit.client.1.vm10.stdout:0/433: getdents d2 0 2026-03-09T20:47:40.643 INFO:tasks.workunit.client.0.vm07.stdout:0/583: write d1/fa1 [952764,55888] 0 2026-03-09T20:47:40.643 INFO:tasks.workunit.client.0.vm07.stdout:1/548: mknod d3/d23/cb8 0 2026-03-09T20:47:40.643 INFO:tasks.workunit.client.0.vm07.stdout:2/527: readlink d2/d11/l21 0 2026-03-09T20:47:40.644 INFO:tasks.workunit.client.1.vm10.stdout:0/434: read - d2/d9/d4b/d63/f94 zero size 2026-03-09T20:47:40.644 INFO:tasks.workunit.client.1.vm10.stdout:0/435: chown d2/d9/d69 2 1 2026-03-09T20:47:40.644 INFO:tasks.workunit.client.1.vm10.stdout:0/436: truncate d2/d4a/d58/d82/d71/d5d/f8c 557335 0 2026-03-09T20:47:40.644 INFO:tasks.workunit.client.1.vm10.stdout:5/420: mkdir d2/d27/d37/d46/d5d/d5f/d84/d87/da1 0 2026-03-09T20:47:40.647 INFO:tasks.workunit.client.1.vm10.stdout:3/438: rename dc/d14/d26/d29/c52 to dc/d14/d26/d77/c8b 0 2026-03-09T20:47:40.650 INFO:tasks.workunit.client.0.vm07.stdout:5/579: rename d5/df/d13/d3e/d47/c74 to d5/cca 0 2026-03-09T20:47:40.650 INFO:tasks.workunit.client.0.vm07.stdout:5/580: write d5/d19/f4d [3493224,15124] 0 2026-03-09T20:47:40.654 
INFO:tasks.workunit.client.1.vm10.stdout:4/422: symlink d1/d8/d1c/d2b/l87 0 2026-03-09T20:47:40.654 INFO:tasks.workunit.client.0.vm07.stdout:8/464: mkdir d1/dc/d16/d26/d94 0 2026-03-09T20:47:40.662 INFO:tasks.workunit.client.0.vm07.stdout:6/519: mknod d8/d26/d7d/ca5 0 2026-03-09T20:47:40.669 INFO:tasks.workunit.client.1.vm10.stdout:2/458: mkdir d5/d2b/d32/d80/d47/d94 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.0.vm07.stdout:1/549: dread - d3/d23/d55/d56/d60/fb5 zero size 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.0.vm07.stdout:5/581: creat d5/d19/d73/d9c/fcb x:0 0 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.0.vm07.stdout:3/503: symlink d1/d5/d9/d2f/d99/la0 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.1.vm10.stdout:6/471: symlink d3/da/d11/d31/d4c/l98 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.1.vm10.stdout:2/459: write d5/d18/d1b/d22/f6d [1252362,121500] 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.1.vm10.stdout:1/448: symlink d2/da/d25/d46/d51/d5d/d6e/l8f 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.1.vm10.stdout:7/449: mkdir db/d28/d2b/d36/d63/d8b 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.1.vm10.stdout:0/437: mknod d2/d9/d69/c97 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.1.vm10.stdout:5/421: creat d2/d27/d37/d46/d5d/d5f/d63/fa2 x:0 0 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.1.vm10.stdout:3/439: stat dc/d14/d20/d2e/d56/f15 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.1.vm10.stdout:3/440: readlink dc/d14/d27/l4c 0 2026-03-09T20:47:40.681 INFO:tasks.workunit.client.1.vm10.stdout:3/441: chown c5 732560609 1 2026-03-09T20:47:40.682 INFO:tasks.workunit.client.0.vm07.stdout:7/574: link d3/da/db/d32/d3e/dac/d43/l69 d3/da/d53/lc6 0 2026-03-09T20:47:40.684 INFO:tasks.workunit.client.1.vm10.stdout:9/491: unlink d2/d3/f54 0 2026-03-09T20:47:40.685 INFO:tasks.workunit.client.0.vm07.stdout:1/550: dread - d3/d23/d55/f77 zero size 2026-03-09T20:47:40.685 
INFO:tasks.workunit.client.1.vm10.stdout:4/423: mknod d1/d2/d3/c88 0 2026-03-09T20:47:40.685 INFO:tasks.workunit.client.0.vm07.stdout:1/551: chown d3/d23/d67/l8b 717 1 2026-03-09T20:47:40.687 INFO:tasks.workunit.client.1.vm10.stdout:6/472: creat d3/d30/d7f/d24/f99 x:0 0 0 2026-03-09T20:47:40.689 INFO:tasks.workunit.client.0.vm07.stdout:2/528: symlink d2/db/d49/d7d/d85/la8 0 2026-03-09T20:47:40.690 INFO:tasks.workunit.client.0.vm07.stdout:7/575: write d3/da/f3b [4379051,17139] 0 2026-03-09T20:47:40.691 INFO:tasks.workunit.client.0.vm07.stdout:0/584: symlink d1/d2/dc/db1/lb6 0 2026-03-09T20:47:40.700 INFO:tasks.workunit.client.0.vm07.stdout:8/465: mkdir d1/d5d/d6f/d2f/d4d/d95 0 2026-03-09T20:47:40.701 INFO:tasks.workunit.client.0.vm07.stdout:8/466: chown d1/d5d/d6f/d2f/d53/l8a 299 1 2026-03-09T20:47:40.706 INFO:tasks.workunit.client.0.vm07.stdout:2/529: rmdir d2/db/d28/d57 39 2026-03-09T20:47:40.710 INFO:tasks.workunit.client.0.vm07.stdout:3/504: dread d1/d5/d9/d11/d6d/d80/f93 [0,4194304] 0 2026-03-09T20:47:40.712 INFO:tasks.workunit.client.0.vm07.stdout:9/484: sync 2026-03-09T20:47:40.714 INFO:tasks.workunit.client.0.vm07.stdout:5/582: sync 2026-03-09T20:47:40.720 INFO:tasks.workunit.client.0.vm07.stdout:4/451: truncate d2/df/f49 3394786 0 2026-03-09T20:47:40.733 INFO:tasks.workunit.client.1.vm10.stdout:8/478: truncate d0/d22/d25/d2e/f33 1164379 0 2026-03-09T20:47:40.736 INFO:tasks.workunit.client.0.vm07.stdout:0/585: truncate d1/d2/d33/fb5 2810410 0 2026-03-09T20:47:40.738 INFO:tasks.workunit.client.0.vm07.stdout:9/485: mknod d4/d16/d29/d24/d37/d44/d62/d8e/cb0 0 2026-03-09T20:47:40.738 INFO:tasks.workunit.client.0.vm07.stdout:6/520: dwrite d8/d16/d61/f7c [0,4194304] 0 2026-03-09T20:47:40.744 INFO:tasks.workunit.client.0.vm07.stdout:5/583: rmdir d5/d19/d73/d97 39 2026-03-09T20:47:40.756 INFO:tasks.workunit.client.0.vm07.stdout:1/552: truncate d3/d14/d54/d3e/f4a 2176022 0 2026-03-09T20:47:40.757 INFO:tasks.workunit.client.0.vm07.stdout:4/452: mknod d2/d55/d5d/d3f/c7b 
0 2026-03-09T20:47:40.757 INFO:tasks.workunit.client.0.vm07.stdout:4/453: chown d2/f43 977 1 2026-03-09T20:47:40.758 INFO:tasks.workunit.client.0.vm07.stdout:1/553: write d3/d66/f8c [1099528,65967] 0 2026-03-09T20:47:40.761 INFO:tasks.workunit.client.0.vm07.stdout:8/467: creat d1/d5d/d6f/d2f/d4d/d63/d91/f96 x:0 0 0 2026-03-09T20:47:40.762 INFO:tasks.workunit.client.0.vm07.stdout:2/530: creat d2/db/d28/d87/d96/fa9 x:0 0 0 2026-03-09T20:47:40.764 INFO:tasks.workunit.client.1.vm10.stdout:2/460: mknod d5/d2b/d32/d80/c95 0 2026-03-09T20:47:40.773 INFO:tasks.workunit.client.1.vm10.stdout:3/442: mkdir dc/d14/d26/d29/d40/d8c 0 2026-03-09T20:47:40.774 INFO:tasks.workunit.client.1.vm10.stdout:3/443: write dc/f87 [731649,48744] 0 2026-03-09T20:47:40.775 INFO:tasks.workunit.client.1.vm10.stdout:3/444: write dc/d14/d27/f3f [5123228,77637] 0 2026-03-09T20:47:40.778 INFO:tasks.workunit.client.0.vm07.stdout:9/486: fdatasync d4/d8/dc/d15/f18 0 2026-03-09T20:47:40.779 INFO:tasks.workunit.client.0.vm07.stdout:2/531: dread d2/d11/f51 [0,4194304] 0 2026-03-09T20:47:40.779 INFO:tasks.workunit.client.0.vm07.stdout:7/576: rename d3/da/d83/lbd to d3/lc7 0 2026-03-09T20:47:40.783 INFO:tasks.workunit.client.0.vm07.stdout:5/584: dread d5/d33/d39/d8d/dab/f5f [0,4194304] 0 2026-03-09T20:47:40.784 INFO:tasks.workunit.client.0.vm07.stdout:5/585: fdatasync d5/df/d13/d6c/f79 0 2026-03-09T20:47:40.784 INFO:tasks.workunit.client.0.vm07.stdout:9/487: stat d4/d16/d29 0 2026-03-09T20:47:40.786 INFO:tasks.workunit.client.1.vm10.stdout:9/492: fdatasync d2/d12/f20 0 2026-03-09T20:47:40.789 INFO:tasks.workunit.client.1.vm10.stdout:6/473: creat d3/d30/d7f/d4a/f9a x:0 0 0 2026-03-09T20:47:40.795 INFO:tasks.workunit.client.0.vm07.stdout:8/468: creat d1/d5d/d6f/d2f/d53/d76/d87/f97 x:0 0 0 2026-03-09T20:47:40.797 INFO:tasks.workunit.client.1.vm10.stdout:7/450: write f5 [397353,38514] 0 2026-03-09T20:47:40.798 INFO:tasks.workunit.client.0.vm07.stdout:3/505: creat d1/d5/d9/fa1 x:0 0 0 2026-03-09T20:47:40.798 
INFO:tasks.workunit.client.1.vm10.stdout:1/449: fdatasync d2/f14 0 2026-03-09T20:47:40.802 INFO:tasks.workunit.client.1.vm10.stdout:2/461: rename d5/d18/d1b/f2e to d5/d2b/d32/f96 0 2026-03-09T20:47:40.802 INFO:tasks.workunit.client.0.vm07.stdout:0/586: truncate d1/d1f/f63 750151 0 2026-03-09T20:47:40.804 INFO:tasks.workunit.client.1.vm10.stdout:1/450: dwrite d2/da/d25/d46/d51/d5d/d6e/d70/f83 [0,4194304] 0 2026-03-09T20:47:40.804 INFO:tasks.workunit.client.0.vm07.stdout:6/521: mkdir d8/d16/d22/d9b/da6 0 2026-03-09T20:47:40.809 INFO:tasks.workunit.client.1.vm10.stdout:0/438: mkdir d2/d4a/d58/d82/d60/d98 0 2026-03-09T20:47:40.815 INFO:tasks.workunit.client.0.vm07.stdout:7/577: fsync d3/da/db/d32/d3e/dac/d1f/d2b/f33 0 2026-03-09T20:47:40.815 INFO:tasks.workunit.client.0.vm07.stdout:8/469: sync 2026-03-09T20:47:40.816 INFO:tasks.workunit.client.1.vm10.stdout:7/451: sync 2026-03-09T20:47:40.816 INFO:tasks.workunit.client.0.vm07.stdout:8/470: chown d1/lf 63 1 2026-03-09T20:47:40.817 INFO:tasks.workunit.client.0.vm07.stdout:8/471: fsync d1/d5d/d6f/d2f/d4d/d55/f8e 0 2026-03-09T20:47:40.818 INFO:tasks.workunit.client.0.vm07.stdout:6/522: dwrite d8/d16/d4b/f95 [0,4194304] 0 2026-03-09T20:47:40.842 INFO:tasks.workunit.client.1.vm10.stdout:5/422: dwrite d2/d27/f34 [0,4194304] 0 2026-03-09T20:47:40.847 INFO:tasks.workunit.client.1.vm10.stdout:6/474: symlink d3/da/d11/d26/l9b 0 2026-03-09T20:47:40.856 INFO:tasks.workunit.client.0.vm07.stdout:1/554: mkdir d3/d23/d55/d56/d60/d9f/db9 0 2026-03-09T20:47:40.856 INFO:tasks.workunit.client.0.vm07.stdout:1/555: dwrite d3/fc [0,4194304] 0 2026-03-09T20:47:40.856 INFO:tasks.workunit.client.0.vm07.stdout:1/556: read d3/d14/d54/d3e/f75 [683496,36961] 0 2026-03-09T20:47:40.860 INFO:tasks.workunit.client.0.vm07.stdout:1/557: dwrite d3/f9 [0,4194304] 0 2026-03-09T20:47:40.863 INFO:tasks.workunit.client.1.vm10.stdout:2/462: mknod d5/d2b/d32/d80/d47/c97 0 2026-03-09T20:47:40.864 INFO:tasks.workunit.client.0.vm07.stdout:9/488: mknod 
d4/d16/d29/d24/d37/d44/d62/d8e/cb1 0 2026-03-09T20:47:40.864 INFO:tasks.workunit.client.0.vm07.stdout:3/506: mknod d1/d5/d9/d2f/d3d/d71/d76/ca2 0 2026-03-09T20:47:40.884 INFO:tasks.workunit.client.1.vm10.stdout:7/452: rename db/d21/d26/d72/f59 to db/d28/d4c/f8c 0 2026-03-09T20:47:40.884 INFO:tasks.workunit.client.1.vm10.stdout:0/439: stat d2/d9/da/d48/c6f 0 2026-03-09T20:47:40.889 INFO:tasks.workunit.client.0.vm07.stdout:2/532: unlink d2/l9f 0 2026-03-09T20:47:40.890 INFO:tasks.workunit.client.1.vm10.stdout:0/440: dwrite d2/d4a/d58/d82/d71/d5d/f5f [0,4194304] 0 2026-03-09T20:47:40.890 INFO:tasks.workunit.client.0.vm07.stdout:2/533: chown d2/db/d49/f64 2 1 2026-03-09T20:47:40.895 INFO:tasks.workunit.client.1.vm10.stdout:9/493: creat d2/d12/d5a/da7/faf x:0 0 0 2026-03-09T20:47:40.896 INFO:tasks.workunit.client.1.vm10.stdout:0/441: chown d2/d4a/d58/d82/d71/d8e/d25/d34/f77 1 1 2026-03-09T20:47:40.896 INFO:tasks.workunit.client.1.vm10.stdout:9/494: chown d2/d33/l4e 387115119 1 2026-03-09T20:47:40.897 INFO:tasks.workunit.client.1.vm10.stdout:9/495: stat d2/d12/d5a/da7 0 2026-03-09T20:47:40.897 INFO:tasks.workunit.client.1.vm10.stdout:9/496: chown d2/d3/f5 15 1 2026-03-09T20:47:40.903 INFO:tasks.workunit.client.1.vm10.stdout:5/423: creat d2/d27/d37/fa3 x:0 0 0 2026-03-09T20:47:40.917 INFO:tasks.workunit.client.0.vm07.stdout:9/489: dread d4/fa [0,4194304] 0 2026-03-09T20:47:40.928 INFO:tasks.workunit.client.1.vm10.stdout:8/479: write d0/d22/d25/f74 [101051,27970] 0 2026-03-09T20:47:40.929 INFO:tasks.workunit.client.0.vm07.stdout:5/586: mkdir d5/df/d13/d6c/db1/dcc 0 2026-03-09T20:47:40.932 INFO:tasks.workunit.client.1.vm10.stdout:7/453: creat db/d28/d2b/d36/d40/f8d x:0 0 0 2026-03-09T20:47:40.961 INFO:tasks.workunit.client.1.vm10.stdout:0/442: creat d2/f99 x:0 0 0 2026-03-09T20:47:40.967 INFO:tasks.workunit.client.0.vm07.stdout:3/507: rename d1/l94 to d1/d5/d9/d2f/d3d/d64/d43/d54/la3 0 2026-03-09T20:47:40.968 INFO:tasks.workunit.client.0.vm07.stdout:7/578: dread 
d3/da/db/d32/d3e/dac/f1a [0,4194304] 0 2026-03-09T20:47:40.970 INFO:tasks.workunit.client.0.vm07.stdout:4/454: write d2/d55/d5d/d3f/f51 [5964196,6564] 0 2026-03-09T20:47:40.970 INFO:tasks.workunit.client.1.vm10.stdout:9/497: mknod d2/d12/d5a/da7/cb0 0 2026-03-09T20:47:40.975 INFO:tasks.workunit.client.0.vm07.stdout:0/587: creat d1/d2/dc/d80/fb7 x:0 0 0 2026-03-09T20:47:40.980 INFO:tasks.workunit.client.0.vm07.stdout:2/534: rmdir d2/db/d49/d7d 39 2026-03-09T20:47:40.981 INFO:tasks.workunit.client.0.vm07.stdout:8/472: symlink d1/dc/d16/l98 0 2026-03-09T20:47:40.981 INFO:tasks.workunit.client.1.vm10.stdout:5/424: write d2/d27/d75/f9a [860884,40191] 0 2026-03-09T20:47:40.982 INFO:tasks.workunit.client.1.vm10.stdout:2/463: creat d5/d18/d27/d5f/f98 x:0 0 0 2026-03-09T20:47:40.983 INFO:tasks.workunit.client.1.vm10.stdout:1/451: truncate d2/da/f11 1626650 0 2026-03-09T20:47:40.983 INFO:tasks.workunit.client.1.vm10.stdout:6/475: write d3/da/d11/d31/d4c/d60/f77 [4804207,79022] 0 2026-03-09T20:47:40.983 INFO:tasks.workunit.client.1.vm10.stdout:4/424: write d1/d8/d1b/d57/f44 [1761443,56481] 0 2026-03-09T20:47:40.984 INFO:tasks.workunit.client.0.vm07.stdout:9/490: mknod d4/d8/d19/d26/cb2 0 2026-03-09T20:47:40.984 INFO:tasks.workunit.client.1.vm10.stdout:1/452: fsync d2/da/d25/f78 0 2026-03-09T20:47:40.986 INFO:tasks.workunit.client.1.vm10.stdout:8/480: rename d0/d22/d25/d2e/d41/d47/f5a to d0/d22/d2c/f96 0 2026-03-09T20:47:40.986 INFO:tasks.workunit.client.1.vm10.stdout:3/445: write dc/d14/d20/d21/f50 [228761,71710] 0 2026-03-09T20:47:40.989 INFO:tasks.workunit.client.0.vm07.stdout:9/491: chown d4/d8/dc/f68 1243595 1 2026-03-09T20:47:40.989 INFO:tasks.workunit.client.1.vm10.stdout:4/425: sync 2026-03-09T20:47:40.992 INFO:tasks.workunit.client.1.vm10.stdout:0/443: creat d2/d9/da/d35/d30/f9a x:0 0 0 2026-03-09T20:47:40.992 INFO:tasks.workunit.client.1.vm10.stdout:9/498: rmdir d2/d28/d47/d50 39 2026-03-09T20:47:40.993 INFO:tasks.workunit.client.0.vm07.stdout:1/558: dwrite 
d3/d23/d55/d56/d90/f93 [0,4194304] 0 2026-03-09T20:47:40.998 INFO:tasks.workunit.client.1.vm10.stdout:9/499: dwrite d2/d3/f6c [0,4194304] 0 2026-03-09T20:47:41.007 INFO:tasks.workunit.client.0.vm07.stdout:4/455: rename d2/d1f/f45 to d2/df/d59/f7c 0 2026-03-09T20:47:41.008 INFO:tasks.workunit.client.0.vm07.stdout:0/588: creat d1/d1f/d53/fb8 x:0 0 0 2026-03-09T20:47:41.009 INFO:tasks.workunit.client.1.vm10.stdout:2/464: creat d5/d18/d27/d38/d61/f99 x:0 0 0 2026-03-09T20:47:41.009 INFO:tasks.workunit.client.1.vm10.stdout:2/465: fdatasync d5/d2b/f3f 0 2026-03-09T20:47:41.015 INFO:tasks.workunit.client.0.vm07.stdout:2/535: rmdir d2/db/d28/d90 39 2026-03-09T20:47:41.016 INFO:tasks.workunit.client.0.vm07.stdout:6/523: creat d8/d26/d2a/d40/fa7 x:0 0 0 2026-03-09T20:47:41.020 INFO:tasks.workunit.client.1.vm10.stdout:3/446: rename dc/d14/d27/l4c to dc/d14/d26/d29/d2a/d55/d89/l8d 0 2026-03-09T20:47:41.022 INFO:tasks.workunit.client.1.vm10.stdout:7/454: creat db/d28/d2b/d36/d3b/d88/f8e x:0 0 0 2026-03-09T20:47:41.026 INFO:tasks.workunit.client.1.vm10.stdout:4/426: fsync d1/d8/d1c/d2b/f72 0 2026-03-09T20:47:41.028 INFO:tasks.workunit.client.1.vm10.stdout:0/444: creat d2/f9b x:0 0 0 2026-03-09T20:47:41.029 INFO:tasks.workunit.client.0.vm07.stdout:3/508: creat d1/d5/d9/d2f/d66/fa4 x:0 0 0 2026-03-09T20:47:41.031 INFO:tasks.workunit.client.0.vm07.stdout:7/579: creat d3/d58/dc1/fc8 x:0 0 0 2026-03-09T20:47:41.038 INFO:tasks.workunit.client.0.vm07.stdout:1/559: dread d3/f24 [0,4194304] 0 2026-03-09T20:47:41.040 INFO:tasks.workunit.client.0.vm07.stdout:8/473: rename d1/c5 to d1/d5d/d6f/d2f/d4d/d95/c99 0 2026-03-09T20:47:41.041 INFO:tasks.workunit.client.1.vm10.stdout:9/500: chown d2/d3/c1f 0 1 2026-03-09T20:47:41.042 INFO:tasks.workunit.client.0.vm07.stdout:0/589: mkdir d1/d2/dc/d17/da6/db9 0 2026-03-09T20:47:41.046 INFO:tasks.workunit.client.1.vm10.stdout:7/455: dread db/d21/d23/f34 [0,4194304] 0 2026-03-09T20:47:41.049 INFO:tasks.workunit.client.1.vm10.stdout:2/466: truncate 
d5/d5b/f6c 1682155 0 2026-03-09T20:47:41.049 INFO:tasks.workunit.client.1.vm10.stdout:1/453: mknod d2/d89/c90 0 2026-03-09T20:47:41.049 INFO:tasks.workunit.client.0.vm07.stdout:2/536: truncate d2/db/d1c/d4a/d88/f7f 1130955 0 2026-03-09T20:47:41.050 INFO:tasks.workunit.client.0.vm07.stdout:6/524: creat d8/d16/da3/fa8 x:0 0 0 2026-03-09T20:47:41.051 INFO:tasks.workunit.client.0.vm07.stdout:3/509: truncate d1/f65 638405 0 2026-03-09T20:47:41.059 INFO:tasks.workunit.client.0.vm07.stdout:5/587: write d5/df/d13/f1f [3226446,89356] 0 2026-03-09T20:47:41.059 INFO:tasks.workunit.client.0.vm07.stdout:5/588: fsync d5/d33/d39/fc3 0 2026-03-09T20:47:41.061 INFO:tasks.workunit.client.1.vm10.stdout:5/425: dwrite d2/d39/d4b/f4e [0,4194304] 0 2026-03-09T20:47:41.063 INFO:tasks.workunit.client.1.vm10.stdout:5/426: chown d2/d27/d37/d46/d5d/d5f 5594778 1 2026-03-09T20:47:41.067 INFO:tasks.workunit.client.1.vm10.stdout:6/476: dwrite d3/fe [0,4194304] 0 2026-03-09T20:47:41.068 INFO:tasks.workunit.client.0.vm07.stdout:4/456: dwrite d2/df/f23 [4194304,4194304] 0 2026-03-09T20:47:41.088 INFO:tasks.workunit.client.0.vm07.stdout:7/580: truncate d3/da/db/d32/d3e/dac/f1a 3015343 0 2026-03-09T20:47:41.106 INFO:tasks.workunit.client.0.vm07.stdout:1/560: mkdir d3/d14/d54/d6e/dba 0 2026-03-09T20:47:41.112 INFO:tasks.workunit.client.1.vm10.stdout:0/445: truncate d2/d4a/f5a 4204799 0 2026-03-09T20:47:41.114 INFO:tasks.workunit.client.0.vm07.stdout:0/590: mknod d1/d2/dc/d80/cba 0 2026-03-09T20:47:41.119 INFO:tasks.workunit.client.1.vm10.stdout:9/501: rename d2/d3/daa to d2/d33/db1 0 2026-03-09T20:47:41.123 INFO:tasks.workunit.client.1.vm10.stdout:9/502: dwrite d2/d12/d5a/f82 [0,4194304] 0 2026-03-09T20:47:41.124 INFO:tasks.workunit.client.1.vm10.stdout:0/446: sync 2026-03-09T20:47:41.126 INFO:tasks.workunit.client.0.vm07.stdout:8/474: dread d1/f33 [0,4194304] 0 2026-03-09T20:47:41.128 INFO:tasks.workunit.client.0.vm07.stdout:8/475: truncate d1/d5d/d6f/d2f/d53/d76/f7f 935926 0 2026-03-09T20:47:41.129 
INFO:tasks.workunit.client.0.vm07.stdout:8/476: chown d1/d5d/d6f/l40 0 1 2026-03-09T20:47:41.142 INFO:tasks.workunit.client.1.vm10.stdout:8/481: creat d0/f97 x:0 0 0 2026-03-09T20:47:41.142 INFO:tasks.workunit.client.1.vm10.stdout:8/482: readlink d0/d22/d25/d2e/d41/d47/d78/l48 0 2026-03-09T20:47:41.148 INFO:tasks.workunit.client.0.vm07.stdout:6/525: write d8/d16/d22/d24/f25 [2118232,83476] 0 2026-03-09T20:47:41.148 INFO:tasks.workunit.client.1.vm10.stdout:3/447: dwrite dc/d14/d20/d21/f41 [0,4194304] 0 2026-03-09T20:47:41.153 INFO:tasks.workunit.client.0.vm07.stdout:6/526: dwrite d8/d16/d22/d24/f25 [4194304,4194304] 0 2026-03-09T20:47:41.162 INFO:tasks.workunit.client.0.vm07.stdout:9/492: creat d4/d8/d19/fb3 x:0 0 0 2026-03-09T20:47:41.171 INFO:tasks.workunit.client.0.vm07.stdout:5/589: dread - d5/df/d13/d30/fac zero size 2026-03-09T20:47:41.180 INFO:tasks.workunit.client.0.vm07.stdout:6/527: sync 2026-03-09T20:47:41.189 INFO:tasks.workunit.client.1.vm10.stdout:4/427: mknod d1/d8/d1c/d69/c89 0 2026-03-09T20:47:41.192 INFO:tasks.workunit.client.1.vm10.stdout:1/454: rename d2/da/d25/d46/c5f to d2/da/d25/d46/d51/c91 0 2026-03-09T20:47:41.197 INFO:tasks.workunit.client.1.vm10.stdout:7/456: link db/d28/d2b/d36/f55 db/d28/d2b/f8f 0 2026-03-09T20:47:41.203 INFO:tasks.workunit.client.1.vm10.stdout:8/483: mknod d0/d22/d25/d2e/d41/d47/d63/c98 0 2026-03-09T20:47:41.207 INFO:tasks.workunit.client.1.vm10.stdout:3/448: mkdir dc/d14/d20/d21/d3b/d8e 0 2026-03-09T20:47:41.211 INFO:tasks.workunit.client.1.vm10.stdout:9/503: dread d2/d3/f5 [4194304,4194304] 0 2026-03-09T20:47:41.213 INFO:tasks.workunit.client.1.vm10.stdout:2/467: write d5/d18/d27/d38/f43 [321550,103087] 0 2026-03-09T20:47:41.214 INFO:tasks.workunit.client.1.vm10.stdout:2/468: fdatasync d5/d2b/d32/f84 0 2026-03-09T20:47:41.215 INFO:tasks.workunit.client.1.vm10.stdout:2/469: chown d5/d18/d27/d38/f43 0 1 2026-03-09T20:47:41.215 INFO:tasks.workunit.client.1.vm10.stdout:2/470: readlink d5/d18/d1b/d22/l8e 0 
2026-03-09T20:47:41.215 INFO:tasks.workunit.client.0.vm07.stdout:3/510: dwrite d1/d5/d9/d2f/d34/f68 [4194304,4194304] 0 2026-03-09T20:47:41.219 INFO:tasks.workunit.client.1.vm10.stdout:6/477: mkdir d3/d9c 0 2026-03-09T20:47:41.228 INFO:tasks.workunit.client.1.vm10.stdout:7/457: symlink db/d28/d2b/l90 0 2026-03-09T20:47:41.228 INFO:tasks.workunit.client.1.vm10.stdout:7/458: dread - db/f39 zero size 2026-03-09T20:47:41.233 INFO:tasks.workunit.client.1.vm10.stdout:3/449: mkdir dc/d14/d26/d8f 0 2026-03-09T20:47:41.233 INFO:tasks.workunit.client.1.vm10.stdout:7/459: sync 2026-03-09T20:47:41.240 INFO:tasks.workunit.client.1.vm10.stdout:5/427: creat d2/d27/d37/d46/d5d/fa4 x:0 0 0 2026-03-09T20:47:41.240 INFO:tasks.workunit.client.1.vm10.stdout:5/428: readlink d2/d27/d75/l9f 0 2026-03-09T20:47:41.245 INFO:tasks.workunit.client.1.vm10.stdout:4/428: creat d1/d2/d5c/d64/d6b/d81/f8a x:0 0 0 2026-03-09T20:47:41.252 INFO:tasks.workunit.client.1.vm10.stdout:0/447: rename d2/d9/d69/f59 to d2/d9/d4b/d63/f9c 0 2026-03-09T20:47:41.260 INFO:tasks.workunit.client.0.vm07.stdout:8/477: unlink d1/d5d/c82 0 2026-03-09T20:47:41.272 INFO:tasks.workunit.client.0.vm07.stdout:4/457: write d2/f43 [1511469,11570] 0 2026-03-09T20:47:41.277 INFO:tasks.workunit.client.0.vm07.stdout:5/590: read d5/df/d13/d30/f36 [135151,30276] 0 2026-03-09T20:47:41.284 INFO:tasks.workunit.client.1.vm10.stdout:1/455: dwrite d2/da/d25/f65 [0,4194304] 0 2026-03-09T20:47:41.285 INFO:tasks.workunit.client.0.vm07.stdout:1/561: dwrite d3/d23/f49 [0,4194304] 0 2026-03-09T20:47:41.290 INFO:tasks.workunit.client.1.vm10.stdout:8/484: write d0/d22/d25/d2e/d41/d47/f4b [4495656,38813] 0 2026-03-09T20:47:41.293 INFO:tasks.workunit.client.1.vm10.stdout:7/460: dwrite db/d1f/f5f [0,4194304] 0 2026-03-09T20:47:41.294 INFO:tasks.workunit.client.0.vm07.stdout:9/493: dwrite d4/d8/d19/f86 [0,4194304] 0 2026-03-09T20:47:41.300 INFO:tasks.workunit.client.1.vm10.stdout:7/461: dwrite db/d28/d2b/d36/d3b/d88/f71 [4194304,4194304] 0 
2026-03-09T20:47:41.305 INFO:tasks.workunit.client.1.vm10.stdout:4/429: fsync d1/d2/f60 0 2026-03-09T20:47:41.305 INFO:tasks.workunit.client.1.vm10.stdout:6/478: write d3/d30/d33/f37 [322206,13085] 0 2026-03-09T20:47:41.308 INFO:tasks.workunit.client.1.vm10.stdout:7/462: dwrite db/d46/f5a [0,4194304] 0 2026-03-09T20:47:41.314 INFO:tasks.workunit.client.0.vm07.stdout:6/528: readlink d8/l1b 0 2026-03-09T20:47:41.315 INFO:tasks.workunit.client.0.vm07.stdout:6/529: stat d8/d26/d2a/f7a 0 2026-03-09T20:47:41.315 INFO:tasks.workunit.client.0.vm07.stdout:6/530: chown d8/d16/d22 1505694047 1 2026-03-09T20:47:41.318 INFO:tasks.workunit.client.1.vm10.stdout:0/448: dwrite d2/f39 [0,4194304] 0 2026-03-09T20:47:41.333 INFO:tasks.workunit.client.1.vm10.stdout:2/471: rename d5/d18/d27/d38/f43 to d5/d18/d27/d89/f9a 0 2026-03-09T20:47:41.358 INFO:tasks.workunit.client.0.vm07.stdout:7/581: mknod d3/da/d83/dc5/cc9 0 2026-03-09T20:47:41.359 INFO:tasks.workunit.client.0.vm07.stdout:7/582: stat d3/da/db/d32/d3e/dac/d1f/d2b/d52/l80 0 2026-03-09T20:47:41.359 INFO:tasks.workunit.client.0.vm07.stdout:7/583: dread - d3/da/db/d32/d3e/dac/fb9 zero size 2026-03-09T20:47:41.375 INFO:tasks.workunit.client.0.vm07.stdout:0/591: mknod d1/d2/dc/d17/da6/db9/cbb 0 2026-03-09T20:47:41.379 INFO:tasks.workunit.client.0.vm07.stdout:2/537: link d2/f7b d2/db/d1c/faa 0 2026-03-09T20:47:41.383 INFO:tasks.workunit.client.1.vm10.stdout:1/456: mkdir d2/da/d25/d46/d51/d5d/d6e/d7f/d92 0 2026-03-09T20:47:41.389 INFO:tasks.workunit.client.0.vm07.stdout:8/478: creat d1/d3b/f9a x:0 0 0 2026-03-09T20:47:41.389 INFO:tasks.workunit.client.0.vm07.stdout:8/479: readlink d1/d5d/d6f/d2f/d4d/d55/l7c 0 2026-03-09T20:47:41.389 INFO:tasks.workunit.client.0.vm07.stdout:8/480: stat d1/d5d/d6f/d2f/f51 0 2026-03-09T20:47:41.389 INFO:tasks.workunit.client.0.vm07.stdout:8/481: write d1/d5d/f7d [1680490,11796] 0 2026-03-09T20:47:41.389 INFO:tasks.workunit.client.0.vm07.stdout:4/458: dread - d2/d55/d5d/d3f/d4a/d4b/d52/d5c/f76 zero size 
2026-03-09T20:47:41.391 INFO:tasks.workunit.client.1.vm10.stdout:8/485: dwrite d0/d22/d2c/f3f [0,4194304] 0 2026-03-09T20:47:41.399 INFO:tasks.workunit.client.0.vm07.stdout:1/562: rmdir d3/d23 39 2026-03-09T20:47:41.403 INFO:tasks.workunit.client.0.vm07.stdout:9/494: mknod d4/d8/d19/d89/da7/cb4 0 2026-03-09T20:47:41.406 INFO:tasks.workunit.client.1.vm10.stdout:6/479: unlink d3/d30/d7f/d51/f53 0 2026-03-09T20:47:41.408 INFO:tasks.workunit.client.0.vm07.stdout:6/531: creat d8/d26/d2a/fa9 x:0 0 0 2026-03-09T20:47:41.411 INFO:tasks.workunit.client.1.vm10.stdout:7/463: fsync db/d28/d4c/f65 0 2026-03-09T20:47:41.412 INFO:tasks.workunit.client.0.vm07.stdout:3/511: mkdir d1/d5/d9/d2f/d34/da5 0 2026-03-09T20:47:41.421 INFO:tasks.workunit.client.0.vm07.stdout:2/538: truncate d2/d11/f51 264458 0 2026-03-09T20:47:41.428 INFO:tasks.workunit.client.0.vm07.stdout:9/495: sync 2026-03-09T20:47:41.429 INFO:tasks.workunit.client.0.vm07.stdout:2/539: read d2/db/d1c/f3a [8139752,11406] 0 2026-03-09T20:47:41.430 INFO:tasks.workunit.client.0.vm07.stdout:2/540: chown d2/db/d1c/d4a/l8b 10394264 1 2026-03-09T20:47:41.435 INFO:tasks.workunit.client.1.vm10.stdout:5/429: rename d2/d27/d37/d46/d5d/d5f/d63/c8e to d2/d27/d75/d81/ca5 0 2026-03-09T20:47:41.435 INFO:tasks.workunit.client.1.vm10.stdout:2/472: chown d5/d2b/d32/f96 238181719 1 2026-03-09T20:47:41.438 INFO:tasks.workunit.client.0.vm07.stdout:8/482: rename d1/dc/d16/d26/l56 to d1/d5d/d6f/d80/l9b 0 2026-03-09T20:47:41.461 INFO:tasks.workunit.client.1.vm10.stdout:4/430: dread d1/d8/d1c/f3f [0,4194304] 0 2026-03-09T20:47:41.494 INFO:tasks.workunit.client.0.vm07.stdout:4/459: fsync d2/f9 0 2026-03-09T20:47:41.495 INFO:tasks.workunit.client.1.vm10.stdout:9/504: getdents d2/d3/d6d 0 2026-03-09T20:47:41.496 INFO:tasks.workunit.client.1.vm10.stdout:9/505: chown d2/d3/d85 51 1 2026-03-09T20:47:41.498 INFO:tasks.workunit.client.1.vm10.stdout:1/457: creat d2/da/d25/d46/d51/d5d/d6e/f93 x:0 0 0 2026-03-09T20:47:41.501 
INFO:tasks.workunit.client.1.vm10.stdout:3/450: dwrite dc/d14/d20/d2e/d56/f82 [0,4194304] 0
2026-03-09T20:47:41.501 INFO:tasks.workunit.client.0.vm07.stdout:1/563: chown d3/d23/fa8 214 1
2026-03-09T20:47:41.501 INFO:tasks.workunit.client.1.vm10.stdout:3/451: readlink dc/l1c 0
2026-03-09T20:47:41.503 INFO:tasks.workunit.client.1.vm10.stdout:3/452: read dc/d14/d20/d21/f50 [1273324,75848] 0
2026-03-09T20:47:41.504 INFO:tasks.workunit.client.1.vm10.stdout:3/453: write dc/d14/d22/d4a/f84 [426065,97342] 0
2026-03-09T20:47:41.519 INFO:tasks.workunit.client.1.vm10.stdout:3/454: sync
2026-03-09T20:47:41.525 INFO:tasks.workunit.client.1.vm10.stdout:8/486: symlink d0/d22/d25/d40/d86/l99 0
2026-03-09T20:47:41.526 INFO:tasks.workunit.client.1.vm10.stdout:8/487: readlink d0/d22/d2f/d38/l69 0
2026-03-09T20:47:41.532 INFO:tasks.workunit.client.1.vm10.stdout:6/480: stat d3/da/l5a 0
2026-03-09T20:47:41.533 INFO:tasks.workunit.client.0.vm07.stdout:5/591: write d5/df/d13/f41 [8536024,9518] 0
2026-03-09T20:47:41.537 INFO:tasks.workunit.client.0.vm07.stdout:5/592: dwrite d5/df/d13/d30/d56/f84 [4194304,4194304] 0
2026-03-09T20:47:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:41 vm10.local ceph-mon[57011]: pgmap v9: 65 pgs: 65 active+clean; 2.2 GiB data, 7.8 GiB used, 112 GiB / 120 GiB avail; 33 MiB/s rd, 92 MiB/s wr, 219 op/s
2026-03-09T20:47:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:41 vm10.local ceph-mon[57011]: Upgrade: Updating mgr.vm07.xjrvch
2026-03-09T20:47:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:41 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe'
2026-03-09T20:47:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:41 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.xjrvch", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T20:47:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:41 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.xjrvch", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T20:47:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:41 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T20:47:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:41 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:47:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:41 vm10.local ceph-mon[57011]: Deploying daemon mgr.vm07.xjrvch on vm07
2026-03-09T20:47:41.542 INFO:tasks.workunit.client.0.vm07.stdout:0/592: truncate d1/d1f/f63 358826 0
2026-03-09T20:47:41.545 INFO:tasks.workunit.client.1.vm10.stdout:7/464: truncate db/d28/d2b/d36/d40/f44 325591 0
2026-03-09T20:47:41.546 INFO:tasks.workunit.client.1.vm10.stdout:7/465: chown db/d28/d2b/d36/d3b/d88/f71 484877111 1
2026-03-09T20:47:41.550 INFO:tasks.workunit.client.0.vm07.stdout:9/496: truncate d4/f10 4726437 0
2026-03-09T20:47:41.556 INFO:tasks.workunit.client.0.vm07.stdout:5/593: dread d5/d19/f2c [0,4194304] 0
2026-03-09T20:47:41.557 INFO:tasks.workunit.client.1.vm10.stdout:5/430: dread - d2/d27/d75/f88 zero size
2026-03-09T20:47:41.561 INFO:tasks.workunit.client.1.vm10.stdout:2/473: rename d5/f1d to d5/d2b/d32/d80/d47/d94/f9b 0
2026-03-09T20:47:41.563 INFO:tasks.workunit.client.0.vm07.stdout:2/541: dread d2/d11/d56/f5a [4194304,4194304] 0
2026-03-09T20:47:41.568 INFO:tasks.workunit.client.1.vm10.stdout:5/431: dread d2/d27/f2d [0,4194304] 0
2026-03-09T20:47:41.568 INFO:tasks.workunit.client.0.vm07.stdout:7/584: write d3/da/db/d32/d3e/dac/d1f/d2b/f2c [1824549,68693] 0
2026-03-09T20:47:41.580 INFO:tasks.workunit.client.0.vm07.stdout:1/564: creat d3/d97/da1/fbb x:0 0 0
2026-03-09T20:47:41.584 INFO:tasks.workunit.client.0.vm07.stdout:6/532: mkdir d8/d16/d22/d24/da0/daa 0
2026-03-09T20:47:41.600 INFO:tasks.workunit.client.0.vm07.stdout:9/497: symlink d4/d8/dc/d4e/lb5 0
2026-03-09T20:47:41.601 INFO:tasks.workunit.client.0.vm07.stdout:9/498: chown d4/d8/d19/d5f/d73 641164 1
2026-03-09T20:47:41.601 INFO:tasks.workunit.client.0.vm07.stdout:9/499: write d4/d16/d29/d24/f77 [930324,111843] 0
2026-03-09T20:47:41.608 INFO:tasks.workunit.client.0.vm07.stdout:5/594: fdatasync d5/df/d13/d4f/fb7 0
2026-03-09T20:47:41.614 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:41 vm07.local ceph-mon[49120]: pgmap v9: 65 pgs: 65 active+clean; 2.2 GiB data, 7.8 GiB used, 112 GiB / 120 GiB avail; 33 MiB/s rd, 92 MiB/s wr, 219 op/s
2026-03-09T20:47:41.614 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:41 vm07.local ceph-mon[49120]: Upgrade: Updating mgr.vm07.xjrvch
2026-03-09T20:47:41.614 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:41 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe'
2026-03-09T20:47:41.614 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:41 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.xjrvch", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T20:47:41.614 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:41 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.xjrvch", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T20:47:41.614 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:41 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T20:47:41.614 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:41 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:47:41.614 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:41 vm07.local ceph-mon[49120]: Deploying daemon mgr.vm07.xjrvch on vm07
2026-03-09T20:47:41.615 INFO:tasks.workunit.client.0.vm07.stdout:2/542: dread - d2/d46/d6e/f95 zero size
2026-03-09T20:47:41.616 INFO:tasks.workunit.client.0.vm07.stdout:2/543: stat d2/d11/l30 0
2026-03-09T20:47:41.635 INFO:tasks.workunit.client.0.vm07.stdout:9/500: mkdir d4/d8/d19/d26/db6 0
2026-03-09T20:47:41.640 INFO:tasks.workunit.client.0.vm07.stdout:9/501: dwrite d4/d16/d29/d24/f77 [0,4194304] 0
2026-03-09T20:47:41.651 INFO:tasks.workunit.client.0.vm07.stdout:5/595: read - d5/d33/d39/d8d/f8e zero size
2026-03-09T20:47:41.655 INFO:tasks.workunit.client.1.vm10.stdout:7/466: fdatasync db/f7c 0
2026-03-09T20:47:41.657 INFO:tasks.workunit.client.0.vm07.stdout:5/596: dread d5/df/d13/d3e/d5e/f7c [0,4194304] 0
2026-03-09T20:47:41.658 INFO:tasks.workunit.client.0.vm07.stdout:5/597: chown d5/df/d13/d3e 93080 1
2026-03-09T20:47:41.659 INFO:tasks.workunit.client.1.vm10.stdout:5/432: mknod d2/d27/d37/d46/d5d/d5f/d63/d95/ca6 0
2026-03-09T20:47:41.668 INFO:tasks.workunit.client.0.vm07.stdout:5/598: mknod d5/d33/d75/ccd 0
2026-03-09T20:47:41.668 INFO:tasks.workunit.client.1.vm10.stdout:1/458: creat d2/da/d25/d3e/f94 x:0 0 0
2026-03-09T20:47:41.669 INFO:tasks.workunit.client.1.vm10.stdout:1/459: dwrite d2/f4c [0,4194304] 0
2026-03-09T20:47:41.669 INFO:tasks.workunit.client.1.vm10.stdout:7/467: sync
2026-03-09T20:47:41.671 INFO:tasks.workunit.client.1.vm10.stdout:4/431: getdents d1/d67 0
2026-03-09T20:47:41.672 INFO:tasks.workunit.client.1.vm10.stdout:4/432: chown d1/d2/d5c/d64/d61/f62 5 1
2026-03-09T20:47:41.672 INFO:tasks.workunit.client.0.vm07.stdout:8/483: getdents
d1/d5d/d6f/d2f/d53/d76/d87 0 2026-03-09T20:47:41.672 INFO:tasks.workunit.client.0.vm07.stdout:8/484: chown d1/dc/d16/f6d 1324384 1 2026-03-09T20:47:41.675 INFO:tasks.workunit.client.0.vm07.stdout:8/485: dwrite d1/d5d/d6f/f61 [0,4194304] 0 2026-03-09T20:47:41.684 INFO:tasks.workunit.client.1.vm10.stdout:1/460: dread d2/da/f88 [0,4194304] 0 2026-03-09T20:47:41.685 INFO:tasks.workunit.client.0.vm07.stdout:9/502: getdents d4/d16/d29/d9c 0 2026-03-09T20:47:41.693 INFO:tasks.workunit.client.0.vm07.stdout:9/503: creat d4/d8/d19/d5f/d73/fb7 x:0 0 0 2026-03-09T20:47:41.698 INFO:tasks.workunit.client.0.vm07.stdout:9/504: unlink d4/d8/dc/f68 0 2026-03-09T20:47:41.698 INFO:tasks.workunit.client.0.vm07.stdout:9/505: write d4/d16/f27 [537317,126867] 0 2026-03-09T20:47:41.701 INFO:tasks.workunit.client.0.vm07.stdout:8/486: getdents d1/dc/d16/d31 0 2026-03-09T20:47:41.702 INFO:tasks.workunit.client.0.vm07.stdout:9/506: mkdir d4/d8/d19/d5f/da5/db8 0 2026-03-09T20:47:41.705 INFO:tasks.workunit.client.0.vm07.stdout:9/507: chown d4/d16/f41 18463792 1 2026-03-09T20:47:41.705 INFO:tasks.workunit.client.0.vm07.stdout:8/487: getdents d1/dc/d16/d26/d94 0 2026-03-09T20:47:41.707 INFO:tasks.workunit.client.0.vm07.stdout:8/488: mkdir d1/d5d/d6f/d2f/d4d/d63/d9c 0 2026-03-09T20:47:41.711 INFO:tasks.workunit.client.0.vm07.stdout:8/489: dwrite d1/d5d/d6f/d2f/d53/d76/d87/f97 [0,4194304] 0 2026-03-09T20:47:41.713 INFO:tasks.workunit.client.0.vm07.stdout:8/490: read d1/dc/d16/d31/f47 [2276130,47060] 0 2026-03-09T20:47:41.727 INFO:tasks.workunit.client.0.vm07.stdout:3/512: write d1/d5/d9/d2f/d3d/d64/d59/f69 [1850257,76531] 0 2026-03-09T20:47:41.728 INFO:tasks.workunit.client.0.vm07.stdout:3/513: read d1/d5/d9/d11/f26 [961174,58298] 0 2026-03-09T20:47:41.751 INFO:tasks.workunit.client.1.vm10.stdout:1/461: unlink d2/da/d25/c82 0 2026-03-09T20:47:41.754 INFO:tasks.workunit.client.0.vm07.stdout:8/491: rmdir d1/d5d/d6f/d2f/d4d/d63/d9c 0 2026-03-09T20:47:41.756 
INFO:tasks.workunit.client.1.vm10.stdout:9/506: dwrite d2/d3/de/d35/f9c [0,4194304] 0 2026-03-09T20:47:41.757 INFO:tasks.workunit.client.0.vm07.stdout:8/492: mkdir d1/d8f/d9d 0 2026-03-09T20:47:41.757 INFO:tasks.workunit.client.1.vm10.stdout:7/468: creat db/d28/f91 x:0 0 0 2026-03-09T20:47:41.760 INFO:tasks.workunit.client.0.vm07.stdout:8/493: truncate d1/dc/d16/f4b 4386409 0 2026-03-09T20:47:41.761 INFO:tasks.workunit.client.1.vm10.stdout:2/474: link d5/d2b/d32/d80/d47/l4a d5/d18/d27/d28/d41/l9c 0 2026-03-09T20:47:41.761 INFO:tasks.workunit.client.1.vm10.stdout:1/462: mknod d2/da/c95 0 2026-03-09T20:47:41.763 INFO:tasks.workunit.client.1.vm10.stdout:7/469: dwrite db/d46/f85 [0,4194304] 0 2026-03-09T20:47:41.764 INFO:tasks.workunit.client.1.vm10.stdout:3/455: dwrite dc/d14/d26/d29/d40/f49 [0,4194304] 0 2026-03-09T20:47:41.780 INFO:tasks.workunit.client.0.vm07.stdout:8/494: symlink d1/dc/l9e 0 2026-03-09T20:47:41.780 INFO:tasks.workunit.client.0.vm07.stdout:8/495: chown d1/d3b/f9a 3922827 1 2026-03-09T20:47:41.781 INFO:tasks.workunit.client.1.vm10.stdout:6/481: write f2 [2174041,102172] 0 2026-03-09T20:47:41.787 INFO:tasks.workunit.client.1.vm10.stdout:0/449: symlink d2/d9/da/l9d 0 2026-03-09T20:47:41.787 INFO:tasks.workunit.client.1.vm10.stdout:0/450: write d2/f99 [641715,51874] 0 2026-03-09T20:47:41.788 INFO:tasks.workunit.client.0.vm07.stdout:0/593: mknod d1/d1f/cbc 0 2026-03-09T20:47:41.788 INFO:tasks.workunit.client.0.vm07.stdout:0/594: truncate d1/f57 1937098 0 2026-03-09T20:47:41.789 INFO:tasks.workunit.client.1.vm10.stdout:0/451: write d2/d9/da/d35/d30/f7f [35242,100142] 0 2026-03-09T20:47:41.793 INFO:tasks.workunit.client.0.vm07.stdout:0/595: dread d1/d1f/d53/d72/f9b [0,4194304] 0 2026-03-09T20:47:41.793 INFO:tasks.workunit.client.0.vm07.stdout:8/496: unlink d1/d5d/f7d 0 2026-03-09T20:47:41.793 INFO:tasks.workunit.client.1.vm10.stdout:2/475: symlink d5/d18/d27/d38/d61/l9d 0 2026-03-09T20:47:41.795 INFO:tasks.workunit.client.1.vm10.stdout:7/470: fsync db/f69 
0 2026-03-09T20:47:41.796 INFO:tasks.workunit.client.0.vm07.stdout:4/460: mkdir d2/d55/d5d/d3f/d4a/d7d 0 2026-03-09T20:47:41.797 INFO:tasks.workunit.client.0.vm07.stdout:4/461: write d2/d55/d5d/d3f/f68 [4300589,125599] 0 2026-03-09T20:47:41.807 INFO:tasks.workunit.client.0.vm07.stdout:7/585: dwrite d3/da/db/d32/d3e/dac/f92 [0,4194304] 0 2026-03-09T20:47:41.812 INFO:tasks.workunit.client.1.vm10.stdout:9/507: truncate d2/d3/f1c 43259 0 2026-03-09T20:47:41.817 INFO:tasks.workunit.client.0.vm07.stdout:0/596: creat d1/d1f/d20/fbd x:0 0 0 2026-03-09T20:47:41.817 INFO:tasks.workunit.client.0.vm07.stdout:0/597: chown d1/d2/d33/l81 93278 1 2026-03-09T20:47:41.826 INFO:tasks.workunit.client.0.vm07.stdout:4/462: dread - d2/df/d17/f63 zero size 2026-03-09T20:47:41.827 INFO:tasks.workunit.client.1.vm10.stdout:8/488: write d0/f11 [3389077,46229] 0 2026-03-09T20:47:41.828 INFO:tasks.workunit.client.1.vm10.stdout:8/489: write d0/f11 [2069831,26867] 0 2026-03-09T20:47:41.831 INFO:tasks.workunit.client.1.vm10.stdout:7/471: read db/d21/d26/f52 [966315,36182] 0 2026-03-09T20:47:41.832 INFO:tasks.workunit.client.1.vm10.stdout:6/482: dread d3/d30/d7f/f18 [0,4194304] 0 2026-03-09T20:47:41.836 INFO:tasks.workunit.client.0.vm07.stdout:8/497: creat d1/d5d/d6f/d2f/f9f x:0 0 0 2026-03-09T20:47:41.840 INFO:tasks.workunit.client.0.vm07.stdout:8/498: dread d1/f25 [0,4194304] 0 2026-03-09T20:47:41.846 INFO:tasks.workunit.client.1.vm10.stdout:9/508: chown d2/d3/l60 1092261440 1 2026-03-09T20:47:41.883 INFO:tasks.workunit.client.0.vm07.stdout:2/544: write d2/db/d28/d57/f68 [3381276,82880] 0 2026-03-09T20:47:41.883 INFO:tasks.workunit.client.0.vm07.stdout:2/545: chown d2/db/l77 2993 1 2026-03-09T20:47:41.883 INFO:tasks.workunit.client.0.vm07.stdout:7/586: rmdir d3/d58/d82/da8 0 2026-03-09T20:47:41.883 INFO:tasks.workunit.client.0.vm07.stdout:2/546: creat d2/db/d1c/fab x:0 0 0 2026-03-09T20:47:41.883 INFO:tasks.workunit.client.0.vm07.stdout:5/599: truncate d5/d19/f20 2806227 0 2026-03-09T20:47:41.883 
INFO:tasks.workunit.client.0.vm07.stdout:7/587: mknod d3/da/d83/d96/cca 0 2026-03-09T20:47:41.883 INFO:tasks.workunit.client.0.vm07.stdout:5/600: fsync d5/d33/d39/d8d/dab/f5f 0 2026-03-09T20:47:41.883 INFO:tasks.workunit.client.1.vm10.stdout:0/452: mkdir d2/d9/d9e 0 2026-03-09T20:47:41.883 INFO:tasks.workunit.client.1.vm10.stdout:4/433: write d1/d8/f29 [1030178,53346] 0 2026-03-09T20:47:41.883 INFO:tasks.workunit.client.1.vm10.stdout:4/434: chown d1/d67 8 1 2026-03-09T20:47:41.883 INFO:tasks.workunit.client.1.vm10.stdout:2/476: fsync d5/d2b/d32/d80/d47/d94/f9b 0 2026-03-09T20:47:41.884 INFO:tasks.workunit.client.1.vm10.stdout:2/477: chown d5/d18/d27/d28/d41 10409 1 2026-03-09T20:47:41.884 INFO:tasks.workunit.client.1.vm10.stdout:8/490: rename d0/d22/d25/d2e/d41/d47/d78/f49 to d0/d22/d25/d2e/d41/d47/d78/f9a 0 2026-03-09T20:47:41.884 INFO:tasks.workunit.client.1.vm10.stdout:7/472: mknod db/d21/c92 0 2026-03-09T20:47:41.884 INFO:tasks.workunit.client.1.vm10.stdout:9/509: symlink d2/d28/d47/d67/lb2 0 2026-03-09T20:47:41.884 INFO:tasks.workunit.client.1.vm10.stdout:0/453: truncate d2/d9/da/d35/d30/f56 915667 0 2026-03-09T20:47:41.884 INFO:tasks.workunit.client.1.vm10.stdout:4/435: creat d1/d8/d1b/f8b x:0 0 0 2026-03-09T20:47:41.884 INFO:tasks.workunit.client.1.vm10.stdout:8/491: mkdir d0/d22/d25/d6c/d9b 0 2026-03-09T20:47:41.884 INFO:tasks.workunit.client.0.vm07.stdout:3/514: rmdir d1/d5/d9/d2f/d99 39 2026-03-09T20:47:41.885 INFO:tasks.workunit.client.1.vm10.stdout:8/492: dread d0/d22/d2c/f3f [0,4194304] 0 2026-03-09T20:47:41.890 INFO:tasks.workunit.client.1.vm10.stdout:5/433: rename d2/d1b/l29 to d2/d27/la7 0 2026-03-09T20:47:41.893 INFO:tasks.workunit.client.1.vm10.stdout:0/454: unlink d2/d9/d69/f7c 0 2026-03-09T20:47:41.895 INFO:tasks.workunit.client.1.vm10.stdout:4/436: fdatasync d1/d8/d1c/d2b/f36 0 2026-03-09T20:47:41.896 INFO:tasks.workunit.client.1.vm10.stdout:4/437: readlink d1/l76 0 2026-03-09T20:47:41.896 INFO:tasks.workunit.client.1.vm10.stdout:8/493: symlink 
d0/d22/d25/d2e/d41/d85/l9c 0 2026-03-09T20:47:41.897 INFO:tasks.workunit.client.0.vm07.stdout:3/515: truncate d1/d5/d9/d2f/d3d/d64/f1a 2914839 0 2026-03-09T20:47:41.898 INFO:tasks.workunit.client.0.vm07.stdout:1/565: rename d3/d23/cb8 to d3/d97/da1/cbc 0 2026-03-09T20:47:41.899 INFO:tasks.workunit.client.0.vm07.stdout:1/566: chown d3 212227629 1 2026-03-09T20:47:41.900 INFO:tasks.workunit.client.0.vm07.stdout:3/516: read d1/d5/d9/d2f/d3d/d64/f63 [2362547,22924] 0 2026-03-09T20:47:41.903 INFO:tasks.workunit.client.1.vm10.stdout:7/473: creat db/d28/d2b/d36/d40/d8a/f93 x:0 0 0 2026-03-09T20:47:41.905 INFO:tasks.workunit.client.1.vm10.stdout:2/478: dread d5/d18/d1b/f70 [0,4194304] 0 2026-03-09T20:47:41.906 INFO:tasks.workunit.client.1.vm10.stdout:5/434: fdatasync f1 0 2026-03-09T20:47:41.908 INFO:tasks.workunit.client.0.vm07.stdout:4/463: creat d2/d55/d5d/f7e x:0 0 0 2026-03-09T20:47:41.909 INFO:tasks.workunit.client.0.vm07.stdout:4/464: truncate d2/df/d17/f73 1199346 0 2026-03-09T20:47:41.910 INFO:tasks.workunit.client.1.vm10.stdout:1/463: dwrite d2/da/d25/d3e/d55/f5c [0,4194304] 0 2026-03-09T20:47:41.911 INFO:tasks.workunit.client.0.vm07.stdout:0/598: sync 2026-03-09T20:47:41.911 INFO:tasks.workunit.client.0.vm07.stdout:7/588: sync 2026-03-09T20:47:41.912 INFO:tasks.workunit.client.1.vm10.stdout:0/455: dread d2/d4a/d58/d82/f5c [0,4194304] 0 2026-03-09T20:47:41.922 INFO:tasks.workunit.client.1.vm10.stdout:2/479: dread d5/d18/f1f [0,4194304] 0 2026-03-09T20:47:41.928 INFO:tasks.workunit.client.1.vm10.stdout:1/464: dwrite d2/da/d25/d46/d51/d5d/d6e/d70/f83 [0,4194304] 0 2026-03-09T20:47:41.929 INFO:tasks.workunit.client.1.vm10.stdout:1/465: write d2/da/d25/f78 [2550338,1333] 0 2026-03-09T20:47:41.930 INFO:tasks.workunit.client.0.vm07.stdout:6/533: rename d8/d26/d2a to d8/d16/d22/d24/da0/dab 0 2026-03-09T20:47:41.934 INFO:tasks.workunit.client.0.vm07.stdout:1/567: truncate d3/f7d 521426 0 2026-03-09T20:47:41.941 INFO:tasks.workunit.client.1.vm10.stdout:3/456: truncate 
dc/d14/d27/f3f 1650690 0 2026-03-09T20:47:41.944 INFO:tasks.workunit.client.1.vm10.stdout:4/438: mknod d1/d2/d3/c8c 0 2026-03-09T20:47:41.953 INFO:tasks.workunit.client.0.vm07.stdout:5/601: fsync d5/d19/f20 0 2026-03-09T20:47:41.956 INFO:tasks.workunit.client.1.vm10.stdout:8/494: mkdir d0/d22/d2f/d9d 0 2026-03-09T20:47:41.957 INFO:tasks.workunit.client.1.vm10.stdout:6/483: write d3/d30/d7f/d36/f4f [4655845,122038] 0 2026-03-09T20:47:41.960 INFO:tasks.workunit.client.0.vm07.stdout:8/499: write d1/dc/d16/d26/f4f [804661,98377] 0 2026-03-09T20:47:41.960 INFO:tasks.workunit.client.0.vm07.stdout:8/500: stat d1/d3b/f9a 0 2026-03-09T20:47:41.963 INFO:tasks.workunit.client.0.vm07.stdout:4/465: write d2/d55/d5d/d3f/d4a/d4b/f7a [2910351,74590] 0 2026-03-09T20:47:41.978 INFO:tasks.workunit.client.0.vm07.stdout:7/589: unlink d3/da/db/d32/d3e/dac/f6d 0 2026-03-09T20:47:41.981 INFO:tasks.workunit.client.0.vm07.stdout:2/547: write d2/db/d28/f34 [1062651,98068] 0 2026-03-09T20:47:41.991 INFO:tasks.workunit.client.0.vm07.stdout:9/508: rename d4/d11/d2a/d84 to d4/d8/db9 0 2026-03-09T20:47:41.995 INFO:tasks.workunit.client.0.vm07.stdout:9/509: dwrite d4/d16/d29/d24/d37/d44/d62/d74/fa6 [0,4194304] 0 2026-03-09T20:47:42.002 INFO:tasks.workunit.client.0.vm07.stdout:6/534: mknod d8/d16/d22/d9b/cac 0 2026-03-09T20:47:42.009 INFO:tasks.workunit.client.0.vm07.stdout:3/517: getdents d1/d5/d9/d2f/d34/da5 0 2026-03-09T20:47:42.012 INFO:tasks.workunit.client.0.vm07.stdout:3/518: stat d1/d5/d9/d2f/d3d/d64 0 2026-03-09T20:47:42.017 INFO:tasks.workunit.client.0.vm07.stdout:8/501: creat d1/dc/d16/d31/fa0 x:0 0 0 2026-03-09T20:47:42.028 INFO:tasks.workunit.client.0.vm07.stdout:4/466: creat d2/d55/d5d/d3f/d4a/f7f x:0 0 0 2026-03-09T20:47:42.030 INFO:tasks.workunit.client.1.vm10.stdout:2/480: truncate d5/d18/d1b/f70 4180703 0 2026-03-09T20:47:42.031 INFO:tasks.workunit.client.1.vm10.stdout:2/481: chown d5/d2b/d32/d80/d47/d94 3733952 1 2026-03-09T20:47:42.031 
INFO:tasks.workunit.client.1.vm10.stdout:1/466: chown d2/da/d25/d3e/d42/f57 0 1 2026-03-09T20:47:42.032 INFO:tasks.workunit.client.1.vm10.stdout:3/457: rmdir dc/d14/d22/d7f/d69 39 2026-03-09T20:47:42.032 INFO:tasks.workunit.client.1.vm10.stdout:1/467: write d2/da/f1e [4715239,93491] 0 2026-03-09T20:47:42.038 INFO:tasks.workunit.client.1.vm10.stdout:4/439: symlink d1/d8/d1c/d2b/l8d 0 2026-03-09T20:47:42.039 INFO:tasks.workunit.client.1.vm10.stdout:1/468: dwrite d2/da/f50 [4194304,4194304] 0 2026-03-09T20:47:42.041 INFO:tasks.workunit.client.0.vm07.stdout:1/568: rename d3/d23/d55/d56/d60/l65 to d3/d14/d54/lbd 0 2026-03-09T20:47:42.052 INFO:tasks.workunit.client.0.vm07.stdout:2/548: dread d2/d11/f44 [0,4194304] 0 2026-03-09T20:47:42.053 INFO:tasks.workunit.client.0.vm07.stdout:2/549: stat d2/da7 0 2026-03-09T20:47:42.057 INFO:tasks.workunit.client.1.vm10.stdout:7/474: creat db/d28/d86/f94 x:0 0 0 2026-03-09T20:47:42.059 INFO:tasks.workunit.client.0.vm07.stdout:6/535: creat d8/d16/d22/d33/d85/fad x:0 0 0 2026-03-09T20:47:42.060 INFO:tasks.workunit.client.0.vm07.stdout:6/536: truncate d8/d16/f92 198875 0 2026-03-09T20:47:42.060 INFO:tasks.workunit.client.0.vm07.stdout:6/537: dread - d8/d16/d22/d33/d85/fad zero size 2026-03-09T20:47:42.064 INFO:tasks.workunit.client.1.vm10.stdout:9/510: getdents d2/d12/d5a/da7 0 2026-03-09T20:47:42.067 INFO:tasks.workunit.client.1.vm10.stdout:9/511: read d2/d3/f5 [2428802,30311] 0 2026-03-09T20:47:42.069 INFO:tasks.workunit.client.0.vm07.stdout:4/467: truncate d2/f19 578883 0 2026-03-09T20:47:42.070 INFO:tasks.workunit.client.0.vm07.stdout:4/468: dread - d2/d55/d5d/d3f/d4a/f7f zero size 2026-03-09T20:47:42.078 INFO:tasks.workunit.client.1.vm10.stdout:2/482: creat d5/f9e x:0 0 0 2026-03-09T20:47:42.079 INFO:tasks.workunit.client.0.vm07.stdout:7/590: unlink d3/da/db/d32/d3e/dac/d1f/d2b/c34 0 2026-03-09T20:47:42.079 INFO:tasks.workunit.client.1.vm10.stdout:8/495: dwrite d0/d22/d2c/f6b [0,4194304] 0 2026-03-09T20:47:42.080 
INFO:tasks.workunit.client.1.vm10.stdout:8/496: chown d0/d22/d2f/d38/l69 161 1 2026-03-09T20:47:42.086 INFO:tasks.workunit.client.1.vm10.stdout:0/456: truncate d2/d4a/d58/d82/d71/d5d/f5f 1731519 0 2026-03-09T20:47:42.091 INFO:tasks.workunit.client.0.vm07.stdout:1/569: fsync d3/f2b 0 2026-03-09T20:47:42.093 INFO:tasks.workunit.client.0.vm07.stdout:5/602: rename d5/df/c11 to d5/d33/d39/d8d/dab/cce 0 2026-03-09T20:47:42.097 INFO:tasks.workunit.client.1.vm10.stdout:3/458: rename dc/d14/d26/d77 to dc/d14/d90 0 2026-03-09T20:47:42.101 INFO:tasks.workunit.client.1.vm10.stdout:1/469: fsync d2/da/d25/d3e/d42/f57 0 2026-03-09T20:47:42.102 INFO:tasks.workunit.client.1.vm10.stdout:1/470: truncate d2/da/f35 1060221 0 2026-03-09T20:47:42.105 INFO:tasks.workunit.client.0.vm07.stdout:6/538: fdatasync d8/f15 0 2026-03-09T20:47:42.106 INFO:tasks.workunit.client.0.vm07.stdout:6/539: readlink d8/d16/d22/d24/l9d 0 2026-03-09T20:47:42.107 INFO:tasks.workunit.client.1.vm10.stdout:1/471: dwrite d2/da/d25/f48 [8388608,4194304] 0 2026-03-09T20:47:42.112 INFO:tasks.workunit.client.1.vm10.stdout:7/475: mkdir db/d21/d95 0 2026-03-09T20:47:42.115 INFO:tasks.workunit.client.1.vm10.stdout:7/476: dwrite db/d21/d60/f7e [0,4194304] 0 2026-03-09T20:47:42.118 INFO:tasks.workunit.client.1.vm10.stdout:7/477: dread - db/d28/d2b/d36/d3b/d88/f8e zero size 2026-03-09T20:47:42.118 INFO:tasks.workunit.client.0.vm07.stdout:4/469: creat d2/df/d17/f80 x:0 0 0 2026-03-09T20:47:42.142 INFO:tasks.workunit.client.1.vm10.stdout:5/435: creat d2/d80/fa8 x:0 0 0 2026-03-09T20:47:42.147 INFO:tasks.workunit.client.0.vm07.stdout:0/599: link d1/d2/dc/f56 d1/d2/dc/d80/fbe 0 2026-03-09T20:47:42.154 INFO:tasks.workunit.client.0.vm07.stdout:7/591: rmdir d3/da/db/d32/d3e/d5c/dc2 39 2026-03-09T20:47:42.156 INFO:tasks.workunit.client.1.vm10.stdout:8/497: symlink d0/d22/d25/d6c/l9e 0 2026-03-09T20:47:42.157 INFO:tasks.workunit.client.1.vm10.stdout:8/498: chown d0/d22/d2c/l3a 0 1 2026-03-09T20:47:42.166 
INFO:tasks.workunit.client.1.vm10.stdout:1/472: fdatasync d2/f3c 0 2026-03-09T20:47:42.169 INFO:tasks.workunit.client.1.vm10.stdout:7/478: unlink db/d28/d2b/d36/d3b/d88/f68 0 2026-03-09T20:47:42.171 INFO:tasks.workunit.client.1.vm10.stdout:5/436: mkdir d2/d27/d37/d46/d5d/d5f/da9 0 2026-03-09T20:47:42.172 INFO:tasks.workunit.client.1.vm10.stdout:5/437: stat d2/d27/d75/d81/c86 0 2026-03-09T20:47:42.174 INFO:tasks.workunit.client.1.vm10.stdout:7/479: dwrite db/d28/d2b/d36/d3f/f7d [0,4194304] 0 2026-03-09T20:47:42.177 INFO:tasks.workunit.client.1.vm10.stdout:2/483: mkdir d5/d18/d9f 0 2026-03-09T20:47:42.179 INFO:tasks.workunit.client.1.vm10.stdout:8/499: stat d0/c16 0 2026-03-09T20:47:42.190 INFO:tasks.workunit.client.1.vm10.stdout:9/512: truncate d2/d3/f2e 1234362 0 2026-03-09T20:47:42.192 INFO:tasks.workunit.client.1.vm10.stdout:3/459: mkdir dc/d14/d22/d7f/d69/d75/d91 0 2026-03-09T20:47:42.193 INFO:tasks.workunit.client.1.vm10.stdout:1/473: fdatasync d2/da/f88 0 2026-03-09T20:47:42.196 INFO:tasks.workunit.client.1.vm10.stdout:3/460: dread dc/d14/d26/d29/d40/f49 [0,4194304] 0 2026-03-09T20:47:42.210 INFO:tasks.workunit.client.1.vm10.stdout:1/474: dwrite d2/da/d25/f78 [0,4194304] 0 2026-03-09T20:47:42.211 INFO:tasks.workunit.client.1.vm10.stdout:4/440: rename d1/d8/f77 to d1/d8/d1b/d57/f8e 0 2026-03-09T20:47:42.211 INFO:tasks.workunit.client.1.vm10.stdout:0/457: rename d2/d9 to d2/d9/da/d11/d9f 22 2026-03-09T20:47:42.211 INFO:tasks.workunit.client.1.vm10.stdout:2/484: sync 2026-03-09T20:47:42.213 INFO:tasks.workunit.client.1.vm10.stdout:3/461: dread dc/d14/d26/d29/f51 [0,4194304] 0 2026-03-09T20:47:42.216 INFO:tasks.workunit.client.1.vm10.stdout:9/513: creat d2/d33/fb3 x:0 0 0 2026-03-09T20:47:42.216 INFO:tasks.workunit.client.1.vm10.stdout:2/485: dread d5/d2b/d32/d80/d47/d94/f9b [0,4194304] 0 2026-03-09T20:47:42.220 INFO:tasks.workunit.client.0.vm07.stdout:3/519: rename d1/d5/d9/d11/d1f/f72 to d1/d5/d9/d2f/d99/fa6 0 2026-03-09T20:47:42.224 
INFO:tasks.workunit.client.0.vm07.stdout:5/603: mknod d5/d19/d73/dbc/ccf 0 2026-03-09T20:47:42.229 INFO:tasks.workunit.client.1.vm10.stdout:5/438: creat d2/d27/d37/d46/d5d/d5f/d84/d87/da1/faa x:0 0 0 2026-03-09T20:47:42.229 INFO:tasks.workunit.client.1.vm10.stdout:8/500: truncate d0/d22/d25/d2e/f33 61446 0 2026-03-09T20:47:42.230 INFO:tasks.workunit.client.0.vm07.stdout:6/540: chown d8/d16/d4b/c64 222848 1 2026-03-09T20:47:42.232 INFO:tasks.workunit.client.0.vm07.stdout:6/541: dread d8/d16/d22/d24/da0/dab/d40/f65 [0,4194304] 0 2026-03-09T20:47:42.232 INFO:tasks.workunit.client.0.vm07.stdout:6/542: chown d8/d16/d22/d33/l8e 1882 1 2026-03-09T20:47:42.234 INFO:tasks.workunit.client.0.vm07.stdout:4/470: creat d2/df/d59/f81 x:0 0 0 2026-03-09T20:47:42.236 INFO:tasks.workunit.client.1.vm10.stdout:6/484: rename d3/d30/d7f/d36/d5c/d8d/l97 to d3/d79/l9d 0 2026-03-09T20:47:42.241 INFO:tasks.workunit.client.1.vm10.stdout:7/480: write db/d21/d23/f1a [3171099,1967] 0 2026-03-09T20:47:42.241 INFO:tasks.workunit.client.1.vm10.stdout:0/458: symlink d2/d9/d4b/la0 0 2026-03-09T20:47:42.242 INFO:tasks.workunit.client.1.vm10.stdout:3/462: rmdir dc/d14/d26/d29 39 2026-03-09T20:47:42.243 INFO:tasks.workunit.client.0.vm07.stdout:0/600: fdatasync d1/d2/dc/f10 0 2026-03-09T20:47:42.245 INFO:tasks.workunit.client.0.vm07.stdout:1/570: symlink d3/lbe 0 2026-03-09T20:47:42.245 INFO:tasks.workunit.client.0.vm07.stdout:1/571: readlink d3/d14/d54/l36 0 2026-03-09T20:47:42.249 INFO:tasks.workunit.client.1.vm10.stdout:2/486: dread - d5/d18/d27/d28/f5a zero size 2026-03-09T20:47:42.252 INFO:tasks.workunit.client.0.vm07.stdout:3/520: mkdir d1/d5/d9/d11/d60/da7 0 2026-03-09T20:47:42.253 INFO:tasks.workunit.client.0.vm07.stdout:5/604: mknod d5/df/d13/d30/d56/cd0 0 2026-03-09T20:47:42.255 INFO:tasks.workunit.client.1.vm10.stdout:5/439: mknod d2/d27/d37/d46/d5d/d77/cab 0 2026-03-09T20:47:42.256 INFO:tasks.workunit.client.0.vm07.stdout:3/521: dwrite d1/d5/d9/d2f/d66/fa4 [0,4194304] 0 
2026-03-09T20:47:42.266 INFO:tasks.workunit.client.0.vm07.stdout:2/550: creat d2/d11/fac x:0 0 0 2026-03-09T20:47:42.266 INFO:tasks.workunit.client.1.vm10.stdout:8/501: chown d0/fe 60099 1 2026-03-09T20:47:42.266 INFO:tasks.workunit.client.0.vm07.stdout:9/510: mknod d4/d8/d19/cba 0 2026-03-09T20:47:42.268 INFO:tasks.workunit.client.1.vm10.stdout:4/441: truncate d1/d8/d1c/d2b/f7a 1522348 0 2026-03-09T20:47:42.270 INFO:tasks.workunit.client.0.vm07.stdout:6/543: unlink d8/d16/da3/fa8 0 2026-03-09T20:47:42.273 INFO:tasks.workunit.client.1.vm10.stdout:1/475: rename d2/da/d25/d3e/d55/f5c to d2/d89/f96 0 2026-03-09T20:47:42.275 INFO:tasks.workunit.client.0.vm07.stdout:8/502: creat d1/d5d/d6f/d2f/d53/d76/fa1 x:0 0 0 2026-03-09T20:47:42.276 INFO:tasks.workunit.client.1.vm10.stdout:1/476: dwrite d2/da/d25/d3e/f94 [0,4194304] 0 2026-03-09T20:47:42.286 INFO:tasks.workunit.client.1.vm10.stdout:3/463: truncate dc/d14/d20/d2e/d56/f15 371746 0 2026-03-09T20:47:42.286 INFO:tasks.workunit.client.1.vm10.stdout:3/464: stat dc/d14/d20/d2e/d56/f23 0 2026-03-09T20:47:42.287 INFO:tasks.workunit.client.1.vm10.stdout:1/477: sync 2026-03-09T20:47:42.287 INFO:tasks.workunit.client.1.vm10.stdout:3/465: chown dc/d14/d20/d21/d3b/c54 2538760 1 2026-03-09T20:47:42.287 INFO:tasks.workunit.client.1.vm10.stdout:1/478: chown d2/l4 30218 1 2026-03-09T20:47:42.289 INFO:tasks.workunit.client.0.vm07.stdout:0/601: mknod d1/d1f/d53/cbf 0 2026-03-09T20:47:42.289 INFO:tasks.workunit.client.1.vm10.stdout:1/479: write d2/da/d25/d3e/f44 [825071,28811] 0 2026-03-09T20:47:42.295 INFO:tasks.workunit.client.1.vm10.stdout:9/514: mkdir d2/d3/db4 0 2026-03-09T20:47:42.297 INFO:tasks.workunit.client.1.vm10.stdout:2/487: creat d5/d18/d27/d38/d61/fa0 x:0 0 0 2026-03-09T20:47:42.299 INFO:tasks.workunit.client.1.vm10.stdout:5/440: chown d2/d27/d37/d46/d99/c9d 6776585 1 2026-03-09T20:47:42.308 INFO:tasks.workunit.client.1.vm10.stdout:6/485: write d3/d30/d7f/f18 [3643772,6297] 0 2026-03-09T20:47:42.309 
INFO:tasks.workunit.client.0.vm07.stdout:4/471: write d2/df/d59/f60 [17146,35715] 0 2026-03-09T20:47:42.312 INFO:tasks.workunit.client.0.vm07.stdout:4/472: dwrite d2/df/d59/f60 [0,4194304] 0 2026-03-09T20:47:42.313 INFO:tasks.workunit.client.0.vm07.stdout:4/473: read d2/f3 [44812,15965] 0 2026-03-09T20:47:42.318 INFO:tasks.workunit.client.0.vm07.stdout:4/474: dread d2/f9 [0,4194304] 0 2026-03-09T20:47:42.322 INFO:tasks.workunit.client.0.vm07.stdout:4/475: read d2/f4c [4170154,77416] 0 2026-03-09T20:47:42.323 INFO:tasks.workunit.client.0.vm07.stdout:3/522: mknod d1/d5/d9/d11/d6d/ca8 0 2026-03-09T20:47:42.323 INFO:tasks.workunit.client.1.vm10.stdout:0/459: write d2/d9/da/d35/f7e [105891,85409] 0 2026-03-09T20:47:42.329 INFO:tasks.workunit.client.0.vm07.stdout:7/592: dwrite d3/da/db/d32/d3e/dac/f3a [0,4194304] 0 2026-03-09T20:47:42.333 INFO:tasks.workunit.client.1.vm10.stdout:2/488: dread d5/d2b/f3f [0,4194304] 0 2026-03-09T20:47:42.334 INFO:tasks.workunit.client.1.vm10.stdout:2/489: chown d5/l14 36407 1 2026-03-09T20:47:42.340 INFO:tasks.workunit.client.0.vm07.stdout:5/605: write d5/df/d13/d4f/f9b [3972193,40799] 0 2026-03-09T20:47:42.345 INFO:tasks.workunit.client.0.vm07.stdout:6/544: symlink d8/d26/d7d/lae 0 2026-03-09T20:47:42.349 INFO:tasks.workunit.client.0.vm07.stdout:8/503: rename d1/dc/d16/l98 to d1/dc/d16/d26/la2 0 2026-03-09T20:47:42.355 INFO:tasks.workunit.client.1.vm10.stdout:9/515: creat d2/d3/de/d8f/fb5 x:0 0 0 2026-03-09T20:47:42.361 INFO:tasks.workunit.client.0.vm07.stdout:7/593: mknod d3/da4/ccb 0 2026-03-09T20:47:42.361 INFO:tasks.workunit.client.1.vm10.stdout:6/486: mkdir d3/d30/d7f/d24/d39/d9e 0 2026-03-09T20:47:42.361 INFO:tasks.workunit.client.1.vm10.stdout:0/460: unlink d2/d9/d4b/d63/f94 0 2026-03-09T20:47:42.362 INFO:tasks.workunit.client.1.vm10.stdout:2/490: dread - d5/d2b/d32/d80/f71 zero size 2026-03-09T20:47:42.362 INFO:tasks.workunit.client.0.vm07.stdout:5/606: creat d5/df/d13/d30/d56/fd1 x:0 0 0 2026-03-09T20:47:42.365 
INFO:tasks.workunit.client.0.vm07.stdout:0/602: dread d1/d2/dc/d17/f23 [4194304,4194304] 0 2026-03-09T20:47:42.366 INFO:tasks.workunit.client.1.vm10.stdout:9/516: sync 2026-03-09T20:47:42.367 INFO:tasks.workunit.client.1.vm10.stdout:2/491: sync 2026-03-09T20:47:42.370 INFO:tasks.workunit.client.1.vm10.stdout:4/442: rename d1/d8/d1c/d2b/f7a to d1/d67/f8f 0 2026-03-09T20:47:42.375 INFO:tasks.workunit.client.1.vm10.stdout:8/502: dwrite d0/d22/f27 [0,4194304] 0 2026-03-09T20:47:42.375 INFO:tasks.workunit.client.1.vm10.stdout:7/481: truncate db/d28/d2b/d36/d3b/d88/f71 2215898 0 2026-03-09T20:47:42.379 INFO:tasks.workunit.client.1.vm10.stdout:7/482: read db/d28/d2b/d36/d3f/f7b [356610,112381] 0 2026-03-09T20:47:42.381 INFO:tasks.workunit.client.1.vm10.stdout:5/441: write d2/d27/d37/d46/f94 [90332,80344] 0 2026-03-09T20:47:42.381 INFO:tasks.workunit.client.1.vm10.stdout:4/443: dread d1/f9 [0,4194304] 0 2026-03-09T20:47:42.385 INFO:tasks.workunit.client.1.vm10.stdout:8/503: dwrite d0/d22/d25/d2e/d41/d47/f87 [0,4194304] 0 2026-03-09T20:47:42.386 INFO:tasks.workunit.client.1.vm10.stdout:8/504: chown d0/d22/d2f/d38/c42 5455737 1 2026-03-09T20:47:42.388 INFO:tasks.workunit.client.0.vm07.stdout:6/545: write d8/d16/d22/d24/da0/dab/d40/f65 [191594,28596] 0 2026-03-09T20:47:42.394 INFO:tasks.workunit.client.1.vm10.stdout:7/483: dwrite db/d28/d2b/d36/d63/f6c [0,4194304] 0 2026-03-09T20:47:42.408 INFO:tasks.workunit.client.1.vm10.stdout:1/480: unlink d2/l15 0 2026-03-09T20:47:42.408 INFO:tasks.workunit.client.0.vm07.stdout:1/572: link d3/d23/d55/d56/d60/c7f d3/d14/d54/d9b/cbf 0 2026-03-09T20:47:42.408 INFO:tasks.workunit.client.1.vm10.stdout:1/481: fsync d2/da/d25/f78 0 2026-03-09T20:47:42.411 INFO:tasks.workunit.client.1.vm10.stdout:1/482: dread d2/da/d25/f28 [0,4194304] 0 2026-03-09T20:47:42.416 INFO:tasks.workunit.client.0.vm07.stdout:3/523: creat d1/d5/d9/d2f/d34/da5/fa9 x:0 0 0 2026-03-09T20:47:42.419 INFO:tasks.workunit.client.1.vm10.stdout:6/487: fdatasync 
d3/da/d11/d26/d5b/f48 0 2026-03-09T20:47:42.422 INFO:tasks.workunit.client.0.vm07.stdout:2/551: creat d2/db/d49/fad x:0 0 0 2026-03-09T20:47:42.422 INFO:tasks.workunit.client.1.vm10.stdout:6/488: dread d3/d30/d7f/f18 [0,4194304] 0 2026-03-09T20:47:42.426 INFO:tasks.workunit.client.0.vm07.stdout:8/504: dread d1/d5d/d6f/d2f/d4d/f67 [0,4194304] 0 2026-03-09T20:47:42.426 INFO:tasks.workunit.client.0.vm07.stdout:8/505: dwrite d1/fb [0,4194304] 0 2026-03-09T20:47:42.435 INFO:tasks.workunit.client.1.vm10.stdout:9/517: mknod d2/d28/cb6 0 2026-03-09T20:47:42.437 INFO:tasks.workunit.client.0.vm07.stdout:5/607: creat d5/df/d13/d3e/d47/fd2 x:0 0 0 2026-03-09T20:47:42.440 INFO:tasks.workunit.client.0.vm07.stdout:5/608: dwrite d5/d50/f61 [4194304,4194304] 0 2026-03-09T20:47:42.441 INFO:tasks.workunit.client.0.vm07.stdout:5/609: readlink d5/l1b 0 2026-03-09T20:47:42.447 INFO:tasks.workunit.client.1.vm10.stdout:2/492: mknod d5/d2b/d32/d80/d47/ca1 0 2026-03-09T20:47:42.451 INFO:tasks.workunit.client.0.vm07.stdout:6/546: creat d8/d16/d22/d24/da0/faf x:0 0 0 2026-03-09T20:47:42.452 INFO:tasks.workunit.client.0.vm07.stdout:6/547: chown d8/l1b 0 1 2026-03-09T20:47:42.452 INFO:tasks.workunit.client.0.vm07.stdout:6/548: chown d8/d16/d22/d24/da0/dab/c96 74 1 2026-03-09T20:47:42.455 INFO:tasks.workunit.client.0.vm07.stdout:1/573: mkdir d3/d14/d54/d6e/dc0 0 2026-03-09T20:47:42.458 INFO:tasks.workunit.client.1.vm10.stdout:5/442: mknod d2/d27/d37/d46/d5d/d5f/d63/d95/cac 0 2026-03-09T20:47:42.461 INFO:tasks.workunit.client.0.vm07.stdout:8/506: symlink d1/d5d/d6f/d2f/la3 0 2026-03-09T20:47:42.465 INFO:tasks.workunit.client.0.vm07.stdout:5/610: symlink d5/df/d13/d3e/d47/ld3 0 2026-03-09T20:47:42.466 INFO:tasks.workunit.client.0.vm07.stdout:2/552: dread d2/f2c [0,4194304] 0 2026-03-09T20:47:42.469 INFO:tasks.workunit.client.0.vm07.stdout:2/553: dwrite d2/db/d28/d57/f68 [0,4194304] 0 2026-03-09T20:47:42.484 INFO:tasks.workunit.client.0.vm07.stdout:9/511: rename d4/d8/d19/d26 to d4/d8/dc/dbb 0 
2026-03-09T20:47:42.487 INFO:tasks.workunit.client.0.vm07.stdout:1/574: mknod d3/d14/d54/d9b/cc1 0 2026-03-09T20:47:42.488 INFO:tasks.workunit.client.0.vm07.stdout:6/549: dread d8/d16/d22/d24/da0/dab/f6e [0,4194304] 0 2026-03-09T20:47:42.489 INFO:tasks.workunit.client.0.vm07.stdout:6/550: stat d8/d26 0 2026-03-09T20:47:42.489 INFO:tasks.workunit.client.0.vm07.stdout:6/551: read - d8/d16/d4b/f9c zero size 2026-03-09T20:47:42.494 INFO:tasks.workunit.client.0.vm07.stdout:4/476: getdents d2/df/d17 0 2026-03-09T20:47:42.496 INFO:tasks.workunit.client.0.vm07.stdout:8/507: mknod d1/d3b/ca4 0 2026-03-09T20:47:42.498 INFO:tasks.workunit.client.0.vm07.stdout:5/611: creat d5/d69/fd4 x:0 0 0 2026-03-09T20:47:42.506 INFO:tasks.workunit.client.1.vm10.stdout:4/444: rename d1/c34 to d1/d2/d5c/d64/c90 0 2026-03-09T20:47:42.507 INFO:tasks.workunit.client.1.vm10.stdout:4/445: stat d1/d2/d5c/d64/d6b/d81/f8a 0 2026-03-09T20:47:42.509 INFO:tasks.workunit.client.0.vm07.stdout:3/524: dread d1/d5/d9/d11/d60/f89 [0,4194304] 0 2026-03-09T20:47:42.509 INFO:tasks.workunit.client.0.vm07.stdout:2/554: truncate d2/d11/d56/f98 962438 0 2026-03-09T20:47:42.510 INFO:tasks.workunit.client.0.vm07.stdout:2/555: chown d2/db/d1c/faa 461706 1 2026-03-09T20:47:42.510 INFO:tasks.workunit.client.0.vm07.stdout:3/525: dread - d1/d5/d9/d11/f9b zero size 2026-03-09T20:47:42.514 INFO:tasks.workunit.client.0.vm07.stdout:3/526: dread d1/d5/d9/d2f/d3d/d64/f7b [0,4194304] 0 2026-03-09T20:47:42.521 INFO:tasks.workunit.client.0.vm07.stdout:6/552: truncate d8/d16/d22/d24/da0/dab/f6e 1747958 0 2026-03-09T20:47:42.521 INFO:tasks.workunit.client.0.vm07.stdout:4/477: creat d2/d55/d5d/d3f/d4a/d4b/d52/f82 x:0 0 0 2026-03-09T20:47:42.521 INFO:tasks.workunit.client.1.vm10.stdout:9/518: dread d2/d28/f79 [0,4194304] 0 2026-03-09T20:47:42.522 INFO:tasks.workunit.client.0.vm07.stdout:7/594: getdents d3/d58/d82 0 2026-03-09T20:47:42.524 INFO:tasks.workunit.client.1.vm10.stdout:3/466: link dc/d14/d22/d7f/c5f dc/d14/d90/c92 0 
2026-03-09T20:47:42.526 INFO:tasks.workunit.client.0.vm07.stdout:8/508: unlink d1/dc/d16/d31/l6c 0 2026-03-09T20:47:42.526 INFO:tasks.workunit.client.0.vm07.stdout:8/509: dread - d1/dc/d16/d26/f59 zero size 2026-03-09T20:47:42.528 INFO:tasks.workunit.client.0.vm07.stdout:8/510: write d1/d5d/d6f/d2f/d53/f89 [744262,3473] 0 2026-03-09T20:47:42.542 INFO:tasks.workunit.client.1.vm10.stdout:0/461: write d2/d4e/f5b [971599,63011] 0 2026-03-09T20:47:42.543 INFO:tasks.workunit.client.1.vm10.stdout:0/462: chown d2/d9/da/d11/l6c 517287395 1 2026-03-09T20:47:42.551 INFO:tasks.workunit.client.1.vm10.stdout:9/519: mkdir d2/d3/d6d/db7 0 2026-03-09T20:47:42.551 INFO:tasks.workunit.client.1.vm10.stdout:7/484: write db/d28/d2b/d36/d3b/f3d [2930295,106433] 0 2026-03-09T20:47:42.551 INFO:tasks.workunit.client.0.vm07.stdout:9/512: fsync d4/d11/f8a 0 2026-03-09T20:47:42.552 INFO:tasks.workunit.client.1.vm10.stdout:9/520: chown d2/d33/d37/l53 29648276 1 2026-03-09T20:47:42.552 INFO:tasks.workunit.client.1.vm10.stdout:8/505: dwrite d0/d22/d25/d2e/d41/f80 [0,4194304] 0 2026-03-09T20:47:42.558 INFO:tasks.workunit.client.0.vm07.stdout:0/603: truncate d1/d2/d33/d35/f45 3443396 0 2026-03-09T20:47:42.565 INFO:tasks.workunit.client.0.vm07.stdout:4/478: mkdir d2/df/d17/d83 0 2026-03-09T20:47:42.565 INFO:tasks.workunit.client.0.vm07.stdout:4/479: dread - d2/d55/d5d/d3f/d4a/f7f zero size 2026-03-09T20:47:42.567 INFO:tasks.workunit.client.1.vm10.stdout:8/506: symlink d0/d22/d25/d2e/d41/l9f 0 2026-03-09T20:47:42.567 INFO:tasks.workunit.client.1.vm10.stdout:3/467: sync 2026-03-09T20:47:42.568 INFO:tasks.workunit.client.0.vm07.stdout:6/553: rename d8/d26/d7d/ca5 to d8/d16/d22/d24/da0/cb0 0 2026-03-09T20:47:42.568 INFO:tasks.workunit.client.1.vm10.stdout:3/468: write dc/d14/d20/d21/d3b/f4f [3215934,38542] 0 2026-03-09T20:47:42.571 INFO:tasks.workunit.client.0.vm07.stdout:8/511: truncate d1/d5d/d6f/d2f/f51 688522 0 2026-03-09T20:47:42.573 INFO:tasks.workunit.client.1.vm10.stdout:0/463: creat 
d2/d4a/d58/d82/d93/fa1 x:0 0 0 2026-03-09T20:47:42.573 INFO:tasks.workunit.client.1.vm10.stdout:7/485: symlink db/l96 0 2026-03-09T20:47:42.573 INFO:tasks.workunit.client.1.vm10.stdout:8/507: rmdir d0/d22/d25 39 2026-03-09T20:47:42.574 INFO:tasks.workunit.client.1.vm10.stdout:7/486: truncate db/d28/d2b/d36/f35 4666797 0 2026-03-09T20:47:42.574 INFO:tasks.workunit.client.0.vm07.stdout:9/513: fsync d4/d8/dc/d4e/f8f 0 2026-03-09T20:47:42.574 INFO:tasks.workunit.client.1.vm10.stdout:7/487: stat db/d28/d86 0 2026-03-09T20:47:42.579 INFO:tasks.workunit.client.1.vm10.stdout:8/508: write d0/d22/d25/d2e/d41/f80 [531496,89007] 0 2026-03-09T20:47:42.580 INFO:tasks.workunit.client.1.vm10.stdout:8/509: readlink d0/d22/d2f/d38/l69 0 2026-03-09T20:47:42.587 INFO:tasks.workunit.client.0.vm07.stdout:8/512: dread d1/f25 [0,4194304] 0 2026-03-09T20:47:42.588 INFO:tasks.workunit.client.0.vm07.stdout:8/513: chown d1/dc/f4c 10893206 1 2026-03-09T20:47:42.590 INFO:tasks.workunit.client.0.vm07.stdout:3/527: sync 2026-03-09T20:47:42.592 INFO:tasks.workunit.client.0.vm07.stdout:9/514: mkdir d4/d8/d19/d5f/d73/dbc 0 2026-03-09T20:47:42.593 INFO:tasks.workunit.client.1.vm10.stdout:8/510: dread d0/d22/d25/f2d [0,4194304] 0 2026-03-09T20:47:42.598 INFO:tasks.workunit.client.0.vm07.stdout:0/604: mkdir d1/dc0 0 2026-03-09T20:47:42.615 INFO:tasks.workunit.client.1.vm10.stdout:2/493: dwrite d5/d2b/f69 [0,4194304] 0 2026-03-09T20:47:42.639 INFO:tasks.workunit.client.1.vm10.stdout:3/469: rename dc/d14/d26/d29/d2a/d55/d89 to dc/d14/d26/d29/d93 0 2026-03-09T20:47:42.642 INFO:tasks.workunit.client.1.vm10.stdout:8/511: unlink d0/l24 0 2026-03-09T20:47:42.644 INFO:tasks.workunit.client.1.vm10.stdout:5/443: write d2/d39/d4b/f60 [2643544,110549] 0 2026-03-09T20:47:42.650 INFO:tasks.workunit.client.1.vm10.stdout:6/489: truncate d3/da/f76 2748713 0 2026-03-09T20:47:42.651 INFO:tasks.workunit.client.0.vm07.stdout:8/514: truncate d1/dc/d16/f6e 763355 0 2026-03-09T20:47:42.652 
INFO:tasks.workunit.client.1.vm10.stdout:1/483: dwrite d2/da/f34 [0,4194304] 0 2026-03-09T20:47:42.654 INFO:tasks.workunit.client.0.vm07.stdout:8/515: chown d1/d5d/d6f/d2f/f34 107 1 2026-03-09T20:47:42.659 INFO:tasks.workunit.client.1.vm10.stdout:1/484: sync 2026-03-09T20:47:42.659 INFO:tasks.workunit.client.0.vm07.stdout:8/516: dwrite d1/d5d/d6f/f64 [4194304,4194304] 0 2026-03-09T20:47:42.663 INFO:tasks.workunit.client.0.vm07.stdout:8/517: dread d1/dc/f42 [0,4194304] 0 2026-03-09T20:47:42.673 INFO:tasks.workunit.client.1.vm10.stdout:4/446: write d1/d47/f4f [346706,66599] 0 2026-03-09T20:47:42.673 INFO:tasks.workunit.client.1.vm10.stdout:4/447: dread - d1/d2/d5c/d64/f83 zero size 2026-03-09T20:47:42.674 INFO:tasks.workunit.client.1.vm10.stdout:4/448: fdatasync d1/d2/d5c/d64/d61/f68 0 2026-03-09T20:47:42.675 INFO:tasks.workunit.client.0.vm07.stdout:5/612: dwrite d5/df/d13/d3e/d5e/f7c [4194304,4194304] 0 2026-03-09T20:47:42.690 INFO:tasks.workunit.client.1.vm10.stdout:7/488: getdents db/d21/d23 0 2026-03-09T20:47:42.695 INFO:tasks.workunit.client.0.vm07.stdout:3/528: mknod d1/d5/d9/d2f/d99/caa 0 2026-03-09T20:47:42.695 INFO:tasks.workunit.client.0.vm07.stdout:9/515: unlink d4/d11/d23/f52 0 2026-03-09T20:47:42.695 INFO:tasks.workunit.client.0.vm07.stdout:2/556: write d2/db/d28/d57/f65 [1043212,121417] 0 2026-03-09T20:47:42.704 INFO:tasks.workunit.client.1.vm10.stdout:2/494: rename d5/d18/c34 to d5/d2b/ca2 0 2026-03-09T20:47:42.706 INFO:tasks.workunit.client.0.vm07.stdout:3/529: dread d1/d5/d9/d11/d1f/f27 [0,4194304] 0 2026-03-09T20:47:42.708 INFO:tasks.workunit.client.0.vm07.stdout:1/575: getdents d3/d14/d94 0 2026-03-09T20:47:42.708 INFO:tasks.workunit.client.0.vm07.stdout:1/576: dread - d3/d23/d55/d56/d60/fb5 zero size 2026-03-09T20:47:42.710 INFO:tasks.workunit.client.1.vm10.stdout:9/521: write d2/d28/d47/d67/f99 [4564666,71546] 0 2026-03-09T20:47:42.712 INFO:tasks.workunit.client.1.vm10.stdout:4/449: dread d1/fe [0,4194304] 0 2026-03-09T20:47:42.723 
INFO:tasks.workunit.client.0.vm07.stdout:7/595: write d3/da/db/f1e [271967,108561] 0 2026-03-09T20:47:42.723 INFO:tasks.workunit.client.0.vm07.stdout:5/613: rename d5/df/d13/d3e/fc4 to d5/df/d13/d3e/d5e/fd5 0 2026-03-09T20:47:42.723 INFO:tasks.workunit.client.1.vm10.stdout:8/512: symlink d0/d22/d25/d40/la0 0 2026-03-09T20:47:42.723 INFO:tasks.workunit.client.1.vm10.stdout:0/464: dwrite d2/d4a/d58/d82/f5c [4194304,4194304] 0 2026-03-09T20:47:42.728 INFO:tasks.workunit.client.0.vm07.stdout:2/557: sync 2026-03-09T20:47:42.733 INFO:tasks.workunit.client.0.vm07.stdout:6/554: dwrite d8/d16/d22/d33/d85/f53 [0,4194304] 0 2026-03-09T20:47:42.744 INFO:tasks.workunit.client.1.vm10.stdout:0/465: read d2/d9/da/d35/f3a [4063430,103883] 0 2026-03-09T20:47:42.744 INFO:tasks.workunit.client.0.vm07.stdout:4/480: truncate d2/d55/d5d/d3f/f51 1482346 0 2026-03-09T20:47:42.745 INFO:tasks.workunit.client.0.vm07.stdout:4/481: stat d2/df/d17/c3b 0 2026-03-09T20:47:42.756 INFO:tasks.workunit.client.0.vm07.stdout:0/605: mknod d1/d2/d33/d35/cc1 0 2026-03-09T20:47:42.759 INFO:tasks.workunit.client.0.vm07.stdout:5/614: dread d5/df/d13/f1f [0,4194304] 0 2026-03-09T20:47:42.760 INFO:tasks.workunit.client.0.vm07.stdout:1/577: dread - d3/d23/d67/f69 zero size 2026-03-09T20:47:42.763 INFO:tasks.workunit.client.0.vm07.stdout:7/596: creat d3/d58/dc1/fcc x:0 0 0 2026-03-09T20:47:42.767 INFO:tasks.workunit.client.0.vm07.stdout:7/597: dwrite d3/da/db/f1e [4194304,4194304] 0 2026-03-09T20:47:42.774 INFO:tasks.workunit.client.0.vm07.stdout:8/518: mknod d1/ca5 0 2026-03-09T20:47:42.774 INFO:tasks.workunit.client.0.vm07.stdout:8/519: chown d1/dc/fe 126763536 1 2026-03-09T20:47:42.780 INFO:tasks.workunit.client.1.vm10.stdout:5/444: write d2/d27/d37/f57 [2062944,128141] 0 2026-03-09T20:47:42.790 INFO:tasks.workunit.client.0.vm07.stdout:2/558: creat d2/db/d28/d5c/fae x:0 0 0 2026-03-09T20:47:42.794 INFO:tasks.workunit.client.0.vm07.stdout:2/559: dwrite d2/db/d28/d57/f65 [0,4194304] 0 2026-03-09T20:47:42.808 
INFO:tasks.workunit.client.0.vm07.stdout:0/606: dread - d1/d1f/d9f/fa7 zero size 2026-03-09T20:47:42.814 INFO:tasks.workunit.client.0.vm07.stdout:3/530: mknod d1/d5/d9/d2f/d34/cab 0 2026-03-09T20:47:42.819 INFO:tasks.workunit.client.1.vm10.stdout:3/470: mknod dc/d14/d26/c94 0 2026-03-09T20:47:42.824 INFO:tasks.workunit.client.0.vm07.stdout:1/578: unlink d3/f2b 0 2026-03-09T20:47:42.827 INFO:tasks.workunit.client.0.vm07.stdout:4/482: write d2/d55/d5d/d3f/d4a/d4b/d52/f5a [714540,20669] 0 2026-03-09T20:47:42.838 INFO:tasks.workunit.client.1.vm10.stdout:9/522: rename d2/d3/de/d35/d44 to d2/db8 0 2026-03-09T20:47:42.839 INFO:tasks.workunit.client.0.vm07.stdout:4/483: dwrite d2/d55/f62 [0,4194304] 0 2026-03-09T20:47:42.841 INFO:tasks.workunit.client.1.vm10.stdout:2/495: write d5/fb [1379629,101401] 0 2026-03-09T20:47:42.841 INFO:tasks.workunit.client.0.vm07.stdout:7/598: fsync d3/da/db/d32/d3e/d5c/f64 0 2026-03-09T20:47:42.845 INFO:tasks.workunit.client.0.vm07.stdout:4/484: dread - d2/d55/d5d/d3f/d4a/d4b/d52/d5c/f76 zero size 2026-03-09T20:47:42.847 INFO:tasks.workunit.client.1.vm10.stdout:2/496: read d5/d2b/d32/f5c [489099,92701] 0 2026-03-09T20:47:42.847 INFO:tasks.workunit.client.0.vm07.stdout:7/599: dwrite d3/da/db/d79/f98 [0,4194304] 0 2026-03-09T20:47:42.851 INFO:tasks.workunit.client.1.vm10.stdout:2/497: dwrite d5/fb [0,4194304] 0 2026-03-09T20:47:42.864 INFO:tasks.workunit.client.1.vm10.stdout:8/513: mknod d0/d22/d25/d2e/d41/d85/ca1 0 2026-03-09T20:47:42.878 INFO:tasks.workunit.client.1.vm10.stdout:6/490: mknod d3/da/d11/c9f 0 2026-03-09T20:47:42.878 INFO:tasks.workunit.client.1.vm10.stdout:0/466: mknod d2/d9/da/d35/d30/ca2 0 2026-03-09T20:47:42.879 INFO:tasks.workunit.client.1.vm10.stdout:0/467: chown d2/f99 164421 1 2026-03-09T20:47:42.879 INFO:tasks.workunit.client.1.vm10.stdout:5/445: fdatasync d2/f40 0 2026-03-09T20:47:42.883 INFO:tasks.workunit.client.0.vm07.stdout:6/555: mkdir d8/d16/d22/db1 0 2026-03-09T20:47:42.890 
INFO:tasks.workunit.client.0.vm07.stdout:0/607: unlink d1/d1f/d30/f50 0 2026-03-09T20:47:42.890 INFO:tasks.workunit.client.0.vm07.stdout:1/579: symlink d3/d23/d55/d56/d90/lc2 0 2026-03-09T20:47:42.894 INFO:tasks.workunit.client.1.vm10.stdout:0/468: dread d2/d4e/f5b [0,4194304] 0 2026-03-09T20:47:42.894 INFO:tasks.workunit.client.1.vm10.stdout:7/489: getdents db/d21/d95 0 2026-03-09T20:47:42.895 INFO:tasks.workunit.client.1.vm10.stdout:0/469: chown d2/d9/da/d11/c1e 15 1 2026-03-09T20:47:42.911 INFO:tasks.workunit.client.0.vm07.stdout:1/580: dread d3/d23/d55/d56/d60/f8e [0,4194304] 0 2026-03-09T20:47:42.911 INFO:tasks.workunit.client.0.vm07.stdout:1/581: chown d3/d14/f33 18897497 1 2026-03-09T20:47:42.914 INFO:tasks.workunit.client.0.vm07.stdout:7/600: creat d3/da/db/d32/d3e/dac/d43/d62/fcd x:0 0 0 2026-03-09T20:47:42.917 INFO:tasks.workunit.client.0.vm07.stdout:8/520: mknod d1/dc/ca6 0 2026-03-09T20:47:42.921 INFO:tasks.workunit.client.0.vm07.stdout:7/601: dwrite d3/d58/dc1/fcc [0,4194304] 0 2026-03-09T20:47:42.943 INFO:tasks.workunit.client.0.vm07.stdout:9/516: rename d4/d8/dc/d4e/d54/l72 to d4/d8/dc/lbd 0 2026-03-09T20:47:42.953 INFO:tasks.workunit.client.1.vm10.stdout:6/491: stat d3/f21 0 2026-03-09T20:47:42.954 INFO:tasks.workunit.client.1.vm10.stdout:6/492: stat d3/d30/d7f/d4a/f4b 0 2026-03-09T20:47:42.958 INFO:tasks.workunit.client.0.vm07.stdout:7/602: sync 2026-03-09T20:47:42.961 INFO:tasks.workunit.client.0.vm07.stdout:7/603: dread d3/d58/dc1/fcc [0,4194304] 0 2026-03-09T20:47:42.980 INFO:tasks.workunit.client.1.vm10.stdout:4/450: write d1/d67/f8f [1715658,45133] 0 2026-03-09T20:47:42.983 INFO:tasks.workunit.client.1.vm10.stdout:8/514: dwrite d0/f13 [0,4194304] 0 2026-03-09T20:47:42.994 INFO:tasks.workunit.client.0.vm07.stdout:5/615: creat d5/df/fd6 x:0 0 0 2026-03-09T20:47:43.004 INFO:tasks.workunit.client.1.vm10.stdout:1/485: write d2/da/d25/d3e/d42/f57 [3137092,70878] 0 2026-03-09T20:47:43.005 INFO:tasks.workunit.client.0.vm07.stdout:6/556: write 
d8/d26/f87 [1017519,112428] 0 2026-03-09T20:47:43.006 INFO:tasks.workunit.client.0.vm07.stdout:6/557: write d8/d16/d22/d24/da0/dab/d40/fa7 [49510,52316] 0 2026-03-09T20:47:43.017 INFO:tasks.workunit.client.1.vm10.stdout:5/446: read - d2/d27/d37/d46/d5d/d5f/f6a zero size 2026-03-09T20:47:43.035 INFO:tasks.workunit.client.1.vm10.stdout:3/471: symlink dc/d14/d26/d8f/l95 0 2026-03-09T20:47:43.075 INFO:tasks.workunit.client.1.vm10.stdout:7/490: write db/f70 [648638,89378] 0 2026-03-09T20:47:43.083 INFO:tasks.workunit.client.1.vm10.stdout:0/470: rmdir d2/d4a/d58/d82/d71/d8e/d25/d34 39 2026-03-09T20:47:43.108 INFO:tasks.workunit.client.0.vm07.stdout:3/531: rename d1/d5/d9/d2f/d3d/d71/d76/c7d to d1/d5/d9/d2f/d3d/cac 0 2026-03-09T20:47:43.123 INFO:tasks.workunit.client.1.vm10.stdout:6/493: mknod d3/d30/d7f/d51/ca0 0 2026-03-09T20:47:43.124 INFO:tasks.workunit.client.1.vm10.stdout:6/494: write d3/d30/d7f/f28 [932922,74116] 0 2026-03-09T20:47:43.129 INFO:tasks.workunit.client.0.vm07.stdout:9/517: dwrite d4/d16/d78/f92 [0,4194304] 0 2026-03-09T20:47:43.136 INFO:tasks.workunit.client.1.vm10.stdout:8/515: truncate d0/d22/d25/f2b 4456314 0 2026-03-09T20:47:43.140 INFO:tasks.workunit.client.0.vm07.stdout:5/616: mkdir d5/d33/d39/d8d/dd7 0 2026-03-09T20:47:43.141 INFO:tasks.workunit.client.1.vm10.stdout:1/486: symlink d2/d89/l97 0 2026-03-09T20:47:43.144 INFO:tasks.workunit.client.0.vm07.stdout:4/485: link d2/df/f2e d2/d55/d5d/d3f/d4a/f84 0 2026-03-09T20:47:43.146 INFO:tasks.workunit.client.1.vm10.stdout:3/472: read dc/d14/d26/d37/f3e [22874,35416] 0 2026-03-09T20:47:43.149 INFO:tasks.workunit.client.1.vm10.stdout:7/491: creat db/d28/d4c/f97 x:0 0 0 2026-03-09T20:47:43.150 INFO:tasks.workunit.client.1.vm10.stdout:7/492: write f5 [489119,100806] 0 2026-03-09T20:47:43.164 INFO:tasks.workunit.client.1.vm10.stdout:0/471: mkdir d2/d9/da/d11/da3 0 2026-03-09T20:47:43.164 INFO:tasks.workunit.client.1.vm10.stdout:0/472: chown d2/d9/da/d48/l4f 0 1 2026-03-09T20:47:43.165 
INFO:tasks.workunit.client.0.vm07.stdout:6/558: write d8/d16/d22/d24/da0/dab/f81 [1336300,53724] 0 2026-03-09T20:47:43.166 INFO:tasks.workunit.client.0.vm07.stdout:6/559: dread d8/d16/f92 [0,4194304] 0 2026-03-09T20:47:43.183 INFO:tasks.workunit.client.1.vm10.stdout:8/516: dread d0/f19 [0,4194304] 0 2026-03-09T20:47:43.191 INFO:tasks.workunit.client.1.vm10.stdout:1/487: write d2/da/d25/d3e/d42/f62 [731794,90028] 0 2026-03-09T20:47:43.203 INFO:tasks.workunit.client.1.vm10.stdout:9/523: write d2/d28/d47/d50/f64 [215369,127086] 0 2026-03-09T20:47:43.205 INFO:tasks.workunit.client.1.vm10.stdout:3/473: dwrite dc/d14/d20/d2e/d56/f23 [0,4194304] 0 2026-03-09T20:47:43.210 INFO:tasks.workunit.client.1.vm10.stdout:2/498: link d5/d18/d27/d28/d41/d77/l79 d5/d18/la3 0 2026-03-09T20:47:43.237 INFO:tasks.workunit.client.1.vm10.stdout:6/495: rename d3/d30/d7f/d36/d5c/c93 to d3/da/d11/d31/d47/ca1 0 2026-03-09T20:47:43.241 INFO:tasks.workunit.client.1.vm10.stdout:6/496: dwrite d3/da/d11/d26/d5b/f55 [0,4194304] 0 2026-03-09T20:47:43.251 INFO:tasks.workunit.client.1.vm10.stdout:4/451: creat d1/d8/d1c/f91 x:0 0 0 2026-03-09T20:47:43.254 INFO:tasks.workunit.client.1.vm10.stdout:4/452: stat d1/d8/d1c/l21 0 2026-03-09T20:47:43.254 INFO:tasks.workunit.client.1.vm10.stdout:8/517: fdatasync d0/d22/d25/d6c/f5c 0 2026-03-09T20:47:43.254 INFO:tasks.workunit.client.1.vm10.stdout:8/518: write d0/f17 [3149425,2069] 0 2026-03-09T20:47:43.255 INFO:tasks.workunit.client.1.vm10.stdout:7/493: creat db/d21/d60/d87/f98 x:0 0 0 2026-03-09T20:47:43.256 INFO:tasks.workunit.client.1.vm10.stdout:7/494: fsync db/d28/d2b/d36/d63/f6c 0 2026-03-09T20:47:43.257 INFO:tasks.workunit.client.0.vm07.stdout:2/560: link d2/db/f76 d2/db/faf 0 2026-03-09T20:47:43.257 INFO:tasks.workunit.client.1.vm10.stdout:7/495: stat db/d21/d60/d78 0 2026-03-09T20:47:43.265 INFO:tasks.workunit.client.1.vm10.stdout:0/473: dread - d2/d4a/d58/d82/d71/d8e/d25/d34/f77 zero size 2026-03-09T20:47:43.266 
INFO:tasks.workunit.client.1.vm10.stdout:0/474: chown d2/d4a/d58/d82/d60 54145316 1 2026-03-09T20:47:43.266 INFO:tasks.workunit.client.0.vm07.stdout:7/604: mknod d3/da/d53/db7/cce 0 2026-03-09T20:47:43.278 INFO:tasks.workunit.client.0.vm07.stdout:2/561: dread d2/f3e [0,4194304] 0 2026-03-09T20:47:43.302 INFO:tasks.workunit.client.1.vm10.stdout:8/519: stat d0/cc 0 2026-03-09T20:47:43.302 INFO:tasks.workunit.client.1.vm10.stdout:5/447: getdents d2/d27/d37/d46/d5d/d5f/d63/d95 0 2026-03-09T20:47:43.303 INFO:tasks.workunit.client.1.vm10.stdout:1/488: symlink d2/da/d25/d46/d51/d7e/l98 0 2026-03-09T20:47:43.304 INFO:tasks.workunit.client.1.vm10.stdout:5/448: truncate d2/d1b/d54/d78/f47 5317590 0 2026-03-09T20:47:43.305 INFO:tasks.workunit.client.0.vm07.stdout:9/518: mknod d4/d16/d29/d24/d37/d44/d62/d8e/cbe 0 2026-03-09T20:47:43.306 INFO:tasks.workunit.client.0.vm07.stdout:4/486: mkdir d2/d55/d5d/d3f/d4a/d85 0 2026-03-09T20:47:43.306 INFO:tasks.workunit.client.0.vm07.stdout:6/560: truncate d8/f15 3789079 0 2026-03-09T20:47:43.308 INFO:tasks.workunit.client.0.vm07.stdout:9/519: write d4/d8/dc/d4e/d54/fac [543919,104009] 0 2026-03-09T20:47:43.309 INFO:tasks.workunit.client.0.vm07.stdout:3/532: dwrite d1/d5/d9/f1c [0,4194304] 0 2026-03-09T20:47:43.316 INFO:tasks.workunit.client.1.vm10.stdout:6/497: dwrite d3/d30/d33/f4e [0,4194304] 0 2026-03-09T20:47:43.324 INFO:tasks.workunit.client.1.vm10.stdout:5/449: sync 2026-03-09T20:47:43.341 INFO:tasks.workunit.client.1.vm10.stdout:7/496: symlink db/d28/d2b/d36/d3f/l99 0 2026-03-09T20:47:43.345 INFO:tasks.workunit.client.1.vm10.stdout:9/524: write d2/d33/f3f [85160,62924] 0 2026-03-09T20:47:43.345 INFO:tasks.workunit.client.1.vm10.stdout:9/525: chown d2/d33/c5e 5565 1 2026-03-09T20:47:43.345 INFO:tasks.workunit.client.1.vm10.stdout:0/475: mknod d2/d4a/d58/d82/ca4 0 2026-03-09T20:47:43.345 INFO:tasks.workunit.client.1.vm10.stdout:0/476: fdatasync d2/d9/da/d35/d30/f7f 0 2026-03-09T20:47:43.348 
INFO:tasks.workunit.client.0.vm07.stdout:2/562: readlink d2/l3d 0 2026-03-09T20:47:43.348 INFO:tasks.workunit.client.0.vm07.stdout:0/608: getdents d1/d1f 0 2026-03-09T20:47:43.350 INFO:tasks.workunit.client.0.vm07.stdout:0/609: chown d1/d1f/c74 868929 1 2026-03-09T20:47:43.364 INFO:tasks.workunit.client.1.vm10.stdout:4/453: mkdir d1/d2/d5c/d64/d6b/d79/d92 0 2026-03-09T20:47:43.364 INFO:tasks.workunit.client.0.vm07.stdout:1/582: link d3/d14/d94/f95 d3/d23/d55/d56/fc3 0 2026-03-09T20:47:43.365 INFO:tasks.workunit.client.1.vm10.stdout:4/454: sync 2026-03-09T20:47:43.365 INFO:tasks.workunit.client.0.vm07.stdout:8/521: getdents d1/d5d/d6f/d2f 0 2026-03-09T20:47:43.366 INFO:tasks.workunit.client.1.vm10.stdout:4/455: write d1/d8/d1c/f91 [65541,13276] 0 2026-03-09T20:47:43.367 INFO:tasks.workunit.client.1.vm10.stdout:4/456: readlink d1/l76 0 2026-03-09T20:47:43.375 INFO:tasks.workunit.client.1.vm10.stdout:1/489: rmdir d2/d89 39 2026-03-09T20:47:43.376 INFO:tasks.workunit.client.1.vm10.stdout:1/490: readlink d2/da/d25/d46/d51/l54 0 2026-03-09T20:47:43.378 INFO:tasks.workunit.client.0.vm07.stdout:6/561: unlink d8/d16/l58 0 2026-03-09T20:47:43.378 INFO:tasks.workunit.client.0.vm07.stdout:6/562: readlink d8/d16/d61/la4 0 2026-03-09T20:47:43.391 INFO:tasks.workunit.client.0.vm07.stdout:6/563: dread d8/d16/d22/d33/f6d [0,4194304] 0 2026-03-09T20:47:43.395 INFO:tasks.workunit.client.0.vm07.stdout:5/617: truncate d5/d33/d3b/f63 869969 0 2026-03-09T20:47:43.396 INFO:tasks.workunit.client.0.vm07.stdout:5/618: stat d5/d19/d73/d94 0 2026-03-09T20:47:43.401 INFO:tasks.workunit.client.0.vm07.stdout:3/533: mkdir d1/d5/d9/d2f/d3d/d64/d43/d54/dad 0 2026-03-09T20:47:43.402 INFO:tasks.workunit.client.1.vm10.stdout:5/450: creat d2/d1b/d54/d78/fad x:0 0 0 2026-03-09T20:47:43.403 INFO:tasks.workunit.client.1.vm10.stdout:8/520: dwrite d0/d22/d25/d2e/f79 [0,4194304] 0 2026-03-09T20:47:43.407 INFO:tasks.workunit.client.1.vm10.stdout:8/521: dwrite d0/f13 [4194304,4194304] 0 2026-03-09T20:47:43.409 
INFO:tasks.workunit.client.1.vm10.stdout:8/522: sync 2026-03-09T20:47:43.423 INFO:tasks.workunit.client.1.vm10.stdout:6/498: dread d3/d30/d7f/d36/d5c/f78 [0,4194304] 0 2026-03-09T20:47:43.433 INFO:tasks.workunit.client.1.vm10.stdout:7/497: dwrite db/d28/d2b/d36/f35 [0,4194304] 0 2026-03-09T20:47:43.443 INFO:tasks.workunit.client.0.vm07.stdout:9/520: write d4/d8/dc/d4e/f82 [21358,85448] 0 2026-03-09T20:47:43.449 INFO:tasks.workunit.client.0.vm07.stdout:9/521: dread d4/d16/d29/d24/d37/d44/d62/d74/fa6 [0,4194304] 0 2026-03-09T20:47:43.465 INFO:tasks.workunit.client.0.vm07.stdout:2/563: mkdir d2/d46/db0 0 2026-03-09T20:47:43.469 INFO:tasks.workunit.client.0.vm07.stdout:0/610: mkdir d1/d1f/dc2 0 2026-03-09T20:47:43.482 INFO:tasks.workunit.client.1.vm10.stdout:2/499: link f1 d5/d18/d27/d38/d61/fa4 0 2026-03-09T20:47:43.483 INFO:tasks.workunit.client.1.vm10.stdout:2/500: chown d5/d18/d27/d89 6581919 1 2026-03-09T20:47:43.484 INFO:tasks.workunit.client.0.vm07.stdout:4/487: mkdir d2/d55/d5d/d86 0 2026-03-09T20:47:43.484 INFO:tasks.workunit.client.0.vm07.stdout:4/488: chown d2/df/d17/f46 2809683 1 2026-03-09T20:47:43.484 INFO:tasks.workunit.client.1.vm10.stdout:4/457: rename d1/d2/d5c/d64/c50 to d1/d2/c93 0 2026-03-09T20:47:43.486 INFO:tasks.workunit.client.1.vm10.stdout:1/491: symlink d2/da/d25/d46/d51/l99 0 2026-03-09T20:47:43.487 INFO:tasks.workunit.client.0.vm07.stdout:6/564: chown d8/d16/d22/d24/da0/dab/c8c 0 1 2026-03-09T20:47:43.491 INFO:tasks.workunit.client.0.vm07.stdout:3/534: readlink d1/d5/d9/d2f/d34/d46/d5d/l5f 0 2026-03-09T20:47:43.496 INFO:tasks.workunit.client.0.vm07.stdout:6/565: dread d8/d16/d22/d24/f25 [0,4194304] 0 2026-03-09T20:47:43.499 INFO:tasks.workunit.client.0.vm07.stdout:1/583: write d3/f10 [3013643,117928] 0 2026-03-09T20:47:43.501 INFO:tasks.workunit.client.0.vm07.stdout:5/619: write d5/d69/f82 [5226101,76096] 0 2026-03-09T20:47:43.502 INFO:tasks.workunit.client.1.vm10.stdout:5/451: write d2/f71 [1041060,80414] 0 2026-03-09T20:47:43.505 
INFO:tasks.workunit.client.1.vm10.stdout:6/499: truncate d3/d30/d7f/d24/d39/f88 602236 0 2026-03-09T20:47:43.506 INFO:tasks.workunit.client.1.vm10.stdout:7/498: creat db/d21/f9a x:0 0 0 2026-03-09T20:47:43.507 INFO:tasks.workunit.client.1.vm10.stdout:3/474: getdents dc/d14/d20/d2e/d56 0 2026-03-09T20:47:43.507 INFO:tasks.workunit.client.1.vm10.stdout:3/475: stat dc/d14/d26 0 2026-03-09T20:47:43.508 INFO:tasks.workunit.client.1.vm10.stdout:9/526: rmdir d2/d3/de/d8f 39 2026-03-09T20:47:43.510 INFO:tasks.workunit.client.1.vm10.stdout:6/500: sync 2026-03-09T20:47:43.510 INFO:tasks.workunit.client.1.vm10.stdout:6/501: stat d3/da/d11/d26/f8f 0 2026-03-09T20:47:43.511 INFO:tasks.workunit.client.1.vm10.stdout:0/477: mknod d2/d4a/d58/d82/d71/d8e/d25/ca5 0 2026-03-09T20:47:43.512 INFO:tasks.workunit.client.0.vm07.stdout:8/522: creat d1/d8f/d9d/fa7 x:0 0 0 2026-03-09T20:47:43.515 INFO:tasks.workunit.client.1.vm10.stdout:8/523: link d0/d22/d25/d2e/d41/d47/f87 d0/d22/d25/d8f/fa2 0 2026-03-09T20:47:43.518 INFO:tasks.workunit.client.1.vm10.stdout:8/524: dwrite d0/d22/d2c/f32 [0,4194304] 0 2026-03-09T20:47:43.529 INFO:tasks.workunit.client.1.vm10.stdout:0/478: dread d2/d9/da/d11/f1f [0,4194304] 0 2026-03-09T20:47:43.533 INFO:tasks.workunit.client.1.vm10.stdout:5/452: creat d2/d27/d37/fae x:0 0 0 2026-03-09T20:47:43.536 INFO:tasks.workunit.client.0.vm07.stdout:7/605: getdents d3/da 0 2026-03-09T20:47:43.539 INFO:tasks.workunit.client.0.vm07.stdout:1/584: unlink d3/d14/f25 0 2026-03-09T20:47:43.542 INFO:tasks.workunit.client.0.vm07.stdout:5/620: rmdir d5/df 39 2026-03-09T20:47:43.544 INFO:tasks.workunit.client.1.vm10.stdout:2/501: mkdir d5/d2b/d32/d80/d8d/d93/da5 0 2026-03-09T20:47:43.554 INFO:tasks.workunit.client.0.vm07.stdout:9/522: mkdir d4/d8/dbf 0 2026-03-09T20:47:43.556 INFO:tasks.workunit.client.0.vm07.stdout:9/523: read d4/d8/dc/d4e/f82 [1659,106164] 0 2026-03-09T20:47:43.557 INFO:tasks.workunit.client.0.vm07.stdout:9/524: read d4/d16/d29/d24/f77 [3471281,90991] 0 
2026-03-09T20:47:43.572 INFO:tasks.workunit.client.0.vm07.stdout:4/489: write d2/df/d17/f2a [5227,40394] 0 2026-03-09T20:47:43.574 INFO:tasks.workunit.client.0.vm07.stdout:8/523: fdatasync d1/f20 0 2026-03-09T20:47:43.576 INFO:tasks.workunit.client.1.vm10.stdout:1/492: write d2/f59 [593820,101401] 0 2026-03-09T20:47:43.583 INFO:tasks.workunit.client.1.vm10.stdout:5/453: readlink d2/lf 0 2026-03-09T20:47:43.590 INFO:tasks.workunit.client.0.vm07.stdout:3/535: symlink d1/d5/d9/d2f/d34/lae 0 2026-03-09T20:47:43.593 INFO:tasks.workunit.client.1.vm10.stdout:7/499: dwrite db/d21/d26/f52 [0,4194304] 0 2026-03-09T20:47:43.593 INFO:tasks.workunit.client.0.vm07.stdout:6/566: dwrite d8/d50/f55 [0,4194304] 0 2026-03-09T20:47:43.627 INFO:tasks.workunit.client.1.vm10.stdout:6/502: rename d3/d30/d7f/d36/d5c/c57 to d3/d30/d7f/d24/ca2 0 2026-03-09T20:47:43.636 INFO:tasks.workunit.client.1.vm10.stdout:4/458: creat d1/f94 x:0 0 0 2026-03-09T20:47:43.637 INFO:tasks.workunit.client.1.vm10.stdout:4/459: truncate d1/d8/d1c/f91 181907 0 2026-03-09T20:47:43.645 INFO:tasks.workunit.client.1.vm10.stdout:1/493: fdatasync d2/da/d25/f27 0 2026-03-09T20:47:43.648 INFO:tasks.workunit.client.1.vm10.stdout:4/460: dread d1/d2/d3/f18 [0,4194304] 0 2026-03-09T20:47:43.655 INFO:tasks.workunit.client.0.vm07.stdout:2/564: rename d2/db/d28/l4c to d2/db/d28/d90/da4/lb1 0 2026-03-09T20:47:43.657 INFO:tasks.workunit.client.1.vm10.stdout:2/502: write d5/d18/d27/f74 [976982,88508] 0 2026-03-09T20:47:43.659 INFO:tasks.workunit.client.1.vm10.stdout:2/503: write d5/d18/d1b/d22/f4f [3836609,103671] 0 2026-03-09T20:47:43.663 INFO:tasks.workunit.client.1.vm10.stdout:9/527: dwrite d2/f6 [0,4194304] 0 2026-03-09T20:47:43.668 INFO:tasks.workunit.client.0.vm07.stdout:0/611: mkdir d1/d1f/dc3 0 2026-03-09T20:47:43.668 INFO:tasks.workunit.client.1.vm10.stdout:2/504: dwrite d5/d18/d1b/d22/f6d [0,4194304] 0 2026-03-09T20:47:43.680 INFO:tasks.workunit.client.1.vm10.stdout:5/454: mkdir d2/d1b/d54/d7b/daf 0 
2026-03-09T20:47:43.694 INFO:tasks.workunit.client.0.vm07.stdout:8/524: dwrite d1/f20 [0,4194304] 0 2026-03-09T20:47:43.707 INFO:tasks.workunit.client.0.vm07.stdout:7/606: creat d3/da/db/d32/d3e/d5c/dc2/fcf x:0 0 0 2026-03-09T20:47:43.713 INFO:tasks.workunit.client.0.vm07.stdout:7/607: dwrite d3/da/db/f9a [0,4194304] 0 2026-03-09T20:47:43.717 INFO:tasks.workunit.client.1.vm10.stdout:8/525: rename d0/d22/f27 to d0/d22/d25/d2e/d41/fa3 0 2026-03-09T20:47:43.728 INFO:tasks.workunit.client.1.vm10.stdout:1/494: dwrite d2/da/f35 [0,4194304] 0 2026-03-09T20:47:43.733 INFO:tasks.workunit.client.1.vm10.stdout:1/495: readlink d2/l6 0 2026-03-09T20:47:43.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:43 vm07.local ceph-mon[49120]: pgmap v10: 65 pgs: 65 active+clean; 2.2 GiB data, 7.8 GiB used, 112 GiB / 120 GiB avail; 30 MiB/s rd, 84 MiB/s wr, 201 op/s 2026-03-09T20:47:43.748 INFO:tasks.workunit.client.0.vm07.stdout:1/585: truncate d3/d14/d54/d6e/fa9 1911671 0 2026-03-09T20:47:43.750 INFO:tasks.workunit.client.1.vm10.stdout:4/461: dwrite d1/d2/d5c/f48 [0,4194304] 0 2026-03-09T20:47:43.771 INFO:tasks.workunit.client.1.vm10.stdout:4/462: sync 2026-03-09T20:47:43.775 INFO:tasks.workunit.client.1.vm10.stdout:0/479: link d2/d9/da/c66 d2/d9/d2a/ca6 0 2026-03-09T20:47:43.783 INFO:tasks.workunit.client.1.vm10.stdout:9/528: readlink d2/d12/l23 0 2026-03-09T20:47:43.806 INFO:tasks.workunit.client.0.vm07.stdout:6/567: symlink d8/d16/da3/d9a/lb2 0 2026-03-09T20:47:43.809 INFO:tasks.workunit.client.0.vm07.stdout:6/568: write d8/d16/d22/f98 [775466,127327] 0 2026-03-09T20:47:43.823 INFO:tasks.workunit.client.0.vm07.stdout:4/490: truncate d2/d55/d5d/d3f/d4a/d4b/d52/f5a 412799 0 2026-03-09T20:47:43.824 INFO:tasks.workunit.client.0.vm07.stdout:8/525: read d1/dc/d16/d26/f48 [6193,87541] 0 2026-03-09T20:47:43.824 INFO:tasks.workunit.client.0.vm07.stdout:9/525: write d4/d16/d29/f6e [715980,99568] 0 2026-03-09T20:47:43.828 INFO:tasks.workunit.client.0.vm07.stdout:2/565: dwrite 
d2/db/f7c [0,4194304] 0 2026-03-09T20:47:43.831 INFO:tasks.workunit.client.0.vm07.stdout:8/526: chown d1/dc/d16/d31/fa0 60 1 2026-03-09T20:47:43.833 INFO:tasks.workunit.client.0.vm07.stdout:4/491: write d2/df/d17/f6d [1590630,56189] 0 2026-03-09T20:47:43.838 INFO:tasks.workunit.client.0.vm07.stdout:7/608: creat d3/da/db/d79/fd0 x:0 0 0 2026-03-09T20:47:43.838 INFO:tasks.workunit.client.0.vm07.stdout:5/621: mknod d5/df/d13/d3e/cd8 0 2026-03-09T20:47:43.852 INFO:tasks.workunit.client.0.vm07.stdout:9/526: unlink d4/d8/d19/f69 0 2026-03-09T20:47:43.854 INFO:tasks.workunit.client.1.vm10.stdout:5/455: symlink d2/d58/lb0 0 2026-03-09T20:47:43.855 INFO:tasks.workunit.client.1.vm10.stdout:3/476: getdents dc/d14/d22/d4a 0 2026-03-09T20:47:43.858 INFO:tasks.workunit.client.1.vm10.stdout:7/500: truncate db/d28/d30/f73 1266389 0 2026-03-09T20:47:43.860 INFO:tasks.workunit.client.0.vm07.stdout:2/566: unlink d2/d11/l21 0 2026-03-09T20:47:43.866 INFO:tasks.workunit.client.0.vm07.stdout:9/527: dread d4/d16/d29/d24/f8c [0,4194304] 0 2026-03-09T20:47:43.867 INFO:tasks.workunit.client.1.vm10.stdout:6/503: link d3/d30/d7f/d24/f99 d3/d30/d7f/d24/d39/fa3 0 2026-03-09T20:47:43.868 INFO:tasks.workunit.client.1.vm10.stdout:6/504: fdatasync d3/d79/f90 0 2026-03-09T20:47:43.871 INFO:tasks.workunit.client.0.vm07.stdout:1/586: write d3/d14/f6a [4939908,46103] 0 2026-03-09T20:47:43.873 INFO:tasks.workunit.client.0.vm07.stdout:5/622: mknod d5/df/d13/d3e/d47/cd9 0 2026-03-09T20:47:43.874 INFO:tasks.workunit.client.0.vm07.stdout:5/623: stat d5/df/d13/d3e/d5e/fd5 0 2026-03-09T20:47:43.877 INFO:tasks.workunit.client.0.vm07.stdout:3/536: rename d1/d5/d9/d2f/d3d/d64/d43/d54 to d1/d5/d9/daf 0 2026-03-09T20:47:43.881 INFO:tasks.workunit.client.0.vm07.stdout:6/569: mkdir d8/db3 0 2026-03-09T20:47:43.881 INFO:tasks.workunit.client.0.vm07.stdout:6/570: write d8/d16/f23 [5820626,107420] 0 2026-03-09T20:47:43.881 INFO:tasks.workunit.client.1.vm10.stdout:6/505: dread d3/da/d11/d26/d5b/f48 [0,4194304] 0 
2026-03-09T20:47:43.881 INFO:tasks.workunit.client.1.vm10.stdout:6/506: dread - d3/d79/f90 zero size 2026-03-09T20:47:43.881 INFO:tasks.workunit.client.1.vm10.stdout:4/463: truncate d1/d8/d39/f56 1071748 0 2026-03-09T20:47:43.883 INFO:tasks.workunit.client.0.vm07.stdout:7/609: sync 2026-03-09T20:47:43.887 INFO:tasks.workunit.client.0.vm07.stdout:7/610: write d3/da/db/d32/d3e/dac/d1f/f5d [1540369,65194] 0 2026-03-09T20:47:43.894 INFO:tasks.workunit.client.0.vm07.stdout:8/527: dread d1/d5d/d6f/d2f/d53/d76/d87/f97 [0,4194304] 0 2026-03-09T20:47:43.894 INFO:tasks.workunit.client.1.vm10.stdout:8/526: dwrite d0/d22/d25/d2e/d41/d47/f87 [0,4194304] 0 2026-03-09T20:47:43.894 INFO:tasks.workunit.client.1.vm10.stdout:2/505: mkdir d5/d18/d27/da6 0 2026-03-09T20:47:43.895 INFO:tasks.workunit.client.1.vm10.stdout:2/506: write d5/d2b/d32/f92 [876111,91989] 0 2026-03-09T20:47:43.899 INFO:tasks.workunit.client.1.vm10.stdout:2/507: sync 2026-03-09T20:47:43.901 INFO:tasks.workunit.client.0.vm07.stdout:7/611: dwrite d3/da/db/f9a [0,4194304] 0 2026-03-09T20:47:43.906 INFO:tasks.workunit.client.0.vm07.stdout:7/612: readlink d3/da/db/d32/d3e/dac/l5a 0 2026-03-09T20:47:43.907 INFO:tasks.workunit.client.0.vm07.stdout:7/613: write d3/d58/d82/d90/fbe [726457,22953] 0 2026-03-09T20:47:43.916 INFO:tasks.workunit.client.0.vm07.stdout:0/612: truncate d1/d2/d4b/f70 3231727 0 2026-03-09T20:47:43.918 INFO:tasks.workunit.client.1.vm10.stdout:3/477: creat dc/d14/d20/d21/f96 x:0 0 0 2026-03-09T20:47:43.918 INFO:tasks.workunit.client.0.vm07.stdout:1/587: fdatasync d3/f3f 0 2026-03-09T20:47:43.918 INFO:tasks.workunit.client.1.vm10.stdout:3/478: stat dc/d14/d20/d21/d3b/d8e 0 2026-03-09T20:47:43.921 INFO:tasks.workunit.client.0.vm07.stdout:7/614: dwrite d3/da/f38 [0,4194304] 0 2026-03-09T20:47:43.922 INFO:tasks.workunit.client.0.vm07.stdout:7/615: truncate d3/da/db/d32/d3e/dac/f3a 4804261 0 2026-03-09T20:47:43.937 INFO:tasks.workunit.client.0.vm07.stdout:6/571: fdatasync d8/d16/d22/d33/d85/f83 0 
2026-03-09T20:47:43.953 INFO:tasks.workunit.client.0.vm07.stdout:3/537: dread d1/d5/d9/d11/d60/f89 [0,4194304] 0 2026-03-09T20:47:43.957 INFO:tasks.workunit.client.1.vm10.stdout:6/507: rename d3/da/c68 to d3/d30/d6a/ca4 0 2026-03-09T20:47:43.957 INFO:tasks.workunit.client.1.vm10.stdout:6/508: readlink d3/d30/d7f/l34 0 2026-03-09T20:47:43.960 INFO:tasks.workunit.client.1.vm10.stdout:9/529: dwrite d2/d28/f79 [0,4194304] 0 2026-03-09T20:47:43.983 INFO:tasks.workunit.client.1.vm10.stdout:5/456: dwrite d2/f64 [0,4194304] 0 2026-03-09T20:47:43.984 INFO:tasks.workunit.client.0.vm07.stdout:5/624: write d5/df/d13/d30/fac [814509,119863] 0 2026-03-09T20:47:43.987 INFO:tasks.workunit.client.0.vm07.stdout:4/492: dwrite d2/d1f/f25 [0,4194304] 0 2026-03-09T20:47:43.995 INFO:tasks.workunit.client.0.vm07.stdout:9/528: mknod d4/d8/d59/cc0 0 2026-03-09T20:47:44.001 INFO:tasks.workunit.client.1.vm10.stdout:8/527: creat d0/d54/fa4 x:0 0 0 2026-03-09T20:47:44.001 INFO:tasks.workunit.client.1.vm10.stdout:7/501: write db/d46/f47 [5293390,107677] 0 2026-03-09T20:47:44.003 INFO:tasks.workunit.client.1.vm10.stdout:2/508: creat d5/d2b/d32/d80/fa7 x:0 0 0 2026-03-09T20:47:44.007 INFO:tasks.workunit.client.0.vm07.stdout:1/588: creat d3/d23/d67/fc4 x:0 0 0 2026-03-09T20:47:44.007 INFO:tasks.workunit.client.0.vm07.stdout:1/589: chown d3/l3b 461780714 1 2026-03-09T20:47:44.012 INFO:tasks.workunit.client.1.vm10.stdout:1/496: dwrite d2/da/d25/d3e/f69 [0,4194304] 0 2026-03-09T20:47:44.014 INFO:tasks.workunit.client.1.vm10.stdout:4/464: write d1/d8/d1c/f3f [329243,104763] 0 2026-03-09T20:47:44.015 INFO:tasks.workunit.client.1.vm10.stdout:4/465: fdatasync d1/d8/d1c/d2b/f72 0 2026-03-09T20:47:44.027 INFO:tasks.workunit.client.1.vm10.stdout:6/509: fdatasync d3/da/d11/d26/d5b/f48 0 2026-03-09T20:47:44.030 INFO:tasks.workunit.client.0.vm07.stdout:6/572: dread d8/d16/f92 [0,4194304] 0 2026-03-09T20:47:44.034 INFO:tasks.workunit.client.1.vm10.stdout:0/480: creat d2/d9/da/fa7 x:0 0 0 2026-03-09T20:47:44.034 
INFO:tasks.workunit.client.0.vm07.stdout:3/538: creat d1/d5/d9/d2f/d3d/d71/fb0 x:0 0 0 2026-03-09T20:47:44.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:43 vm10.local ceph-mon[57011]: pgmap v10: 65 pgs: 65 active+clean; 2.2 GiB data, 7.8 GiB used, 112 GiB / 120 GiB avail; 30 MiB/s rd, 84 MiB/s wr, 201 op/s 2026-03-09T20:47:44.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:43 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:44.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:43 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:44.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:43 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:44.044 INFO:tasks.workunit.client.1.vm10.stdout:8/528: read d0/d22/d25/d2e/d41/f67 [548680,15712] 0 2026-03-09T20:47:44.071 INFO:tasks.workunit.client.1.vm10.stdout:4/466: dwrite d1/d2/d5c/f53 [4194304,4194304] 0 2026-03-09T20:47:44.075 INFO:tasks.workunit.client.0.vm07.stdout:2/567: rename d2/db/d28/d5c/f66 to d2/db/d49/fb2 0 2026-03-09T20:47:44.094 INFO:tasks.workunit.client.1.vm10.stdout:5/457: write d2/d27/d37/d46/d99/f9c [975655,22552] 0 2026-03-09T20:47:44.097 INFO:tasks.workunit.client.1.vm10.stdout:3/479: dwrite dc/d14/d20/d21/f50 [0,4194304] 0 2026-03-09T20:47:44.097 INFO:tasks.workunit.client.1.vm10.stdout:2/509: write d5/d18/f2c [1165944,100549] 0 2026-03-09T20:47:44.097 INFO:tasks.workunit.client.1.vm10.stdout:3/480: chown dc/d14/d26/d29/d40 6815 1 2026-03-09T20:47:44.097 INFO:tasks.workunit.client.0.vm07.stdout:7/616: write d3/f3f [3438217,103631] 0 2026-03-09T20:47:44.097 INFO:tasks.workunit.client.0.vm07.stdout:6/573: chown d8/d16/d22/d33/c63 64 1 2026-03-09T20:47:44.097 INFO:tasks.workunit.client.0.vm07.stdout:3/539: unlink d1/d5/d9/d11/d6d/d80/f81 0 2026-03-09T20:47:44.101 
INFO:tasks.workunit.client.0.vm07.stdout:5/625: write d5/d33/d39/d8d/f8e [918673,81654] 0 2026-03-09T20:47:44.105 INFO:tasks.workunit.client.0.vm07.stdout:1/590: write d3/f6f [2736228,66818] 0 2026-03-09T20:47:44.105 INFO:tasks.workunit.client.1.vm10.stdout:1/497: dwrite d2/da/d25/d3e/d42/f86 [0,4194304] 0 2026-03-09T20:47:44.108 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:43 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:44.109 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:43 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:44.109 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:43 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:44.115 INFO:tasks.workunit.client.1.vm10.stdout:7/502: mknod db/d28/c9b 0 2026-03-09T20:47:44.118 INFO:tasks.workunit.client.0.vm07.stdout:4/493: mknod d2/d55/d5d/d3f/d4a/d4b/c87 0 2026-03-09T20:47:44.121 INFO:tasks.workunit.client.0.vm07.stdout:9/529: mkdir d4/d8/d19/d5f/da5/db8/dc1 0 2026-03-09T20:47:44.122 INFO:tasks.workunit.client.0.vm07.stdout:0/613: rename d1 to d1/dc4 22 2026-03-09T20:47:44.132 INFO:tasks.workunit.client.1.vm10.stdout:4/467: rename d1/d8/l65 to d1/d8/d39/l95 0 2026-03-09T20:47:44.135 INFO:tasks.workunit.client.1.vm10.stdout:6/510: unlink d3/d30/d7f/f8e 0 2026-03-09T20:47:44.140 INFO:tasks.workunit.client.1.vm10.stdout:9/530: getdents d2/da6 0 2026-03-09T20:47:44.141 INFO:tasks.workunit.client.1.vm10.stdout:9/531: write d2/d28/f79 [1206579,52779] 0 2026-03-09T20:47:44.141 INFO:tasks.workunit.client.1.vm10.stdout:9/532: readlink d2/d12/l23 0 2026-03-09T20:47:44.142 INFO:tasks.workunit.client.0.vm07.stdout:4/494: dread d2/df/d17/f1b [0,4194304] 0 2026-03-09T20:47:44.144 INFO:tasks.workunit.client.1.vm10.stdout:5/458: truncate f1 6119836 0 2026-03-09T20:47:44.146 
INFO:tasks.workunit.client.0.vm07.stdout:4/495: dread d2/d1f/f2c [0,4194304] 0 2026-03-09T20:47:44.149 INFO:tasks.workunit.client.1.vm10.stdout:3/481: rmdir dc/d14/d20/d21 39 2026-03-09T20:47:44.151 INFO:tasks.workunit.client.1.vm10.stdout:9/533: sync 2026-03-09T20:47:44.154 INFO:tasks.workunit.client.1.vm10.stdout:2/510: mknod d5/d18/d27/d89/ca8 0 2026-03-09T20:47:44.157 INFO:tasks.workunit.client.1.vm10.stdout:1/498: chown d2/da/f34 8 1 2026-03-09T20:47:44.158 INFO:tasks.workunit.client.1.vm10.stdout:1/499: write d2/da/d25/d3e/d42/f57 [2576741,40541] 0 2026-03-09T20:47:44.173 INFO:tasks.workunit.client.1.vm10.stdout:8/529: truncate d0/f11 2636444 0 2026-03-09T20:47:44.175 INFO:tasks.workunit.client.1.vm10.stdout:5/459: creat d2/d27/d37/d46/d5d/d5f/d69/d96/fb1 x:0 0 0 2026-03-09T20:47:44.176 INFO:tasks.workunit.client.1.vm10.stdout:7/503: dwrite db/d28/d2b/f8f [0,4194304] 0 2026-03-09T20:47:44.177 INFO:tasks.workunit.client.1.vm10.stdout:3/482: creat dc/d14/d26/d29/d2a/d76/f97 x:0 0 0 2026-03-09T20:47:44.177 INFO:tasks.workunit.client.0.vm07.stdout:7/617: creat d3/da/db/d79/fd1 x:0 0 0 2026-03-09T20:47:44.178 INFO:tasks.workunit.client.1.vm10.stdout:9/534: mknod d2/d12/d5a/da7/cb9 0 2026-03-09T20:47:44.179 INFO:tasks.workunit.client.1.vm10.stdout:8/530: dwrite d0/d22/d25/d8f/fa2 [0,4194304] 0 2026-03-09T20:47:44.197 INFO:tasks.workunit.client.1.vm10.stdout:2/511: dread d5/d5b/f6c [0,4194304] 0 2026-03-09T20:47:44.198 INFO:tasks.workunit.client.1.vm10.stdout:1/500: creat d2/da/d25/d3e/d55/f9a x:0 0 0 2026-03-09T20:47:44.198 INFO:tasks.workunit.client.0.vm07.stdout:8/528: getdents d1/d8f 0 2026-03-09T20:47:44.200 INFO:tasks.workunit.client.1.vm10.stdout:0/481: getdents d2/d4a/d58/d82/d71/d8e/d25/d34 0 2026-03-09T20:47:44.201 INFO:tasks.workunit.client.1.vm10.stdout:2/512: dwrite d5/d18/d27/d38/d61/fa0 [0,4194304] 0 2026-03-09T20:47:44.202 INFO:tasks.workunit.client.1.vm10.stdout:0/482: write d2/f99 [285418,113911] 0 2026-03-09T20:47:44.202 
INFO:tasks.workunit.client.0.vm07.stdout:7/618: readlink d3/da/db/l4b 0 2026-03-09T20:47:44.202 INFO:tasks.workunit.client.1.vm10.stdout:5/460: creat d2/d80/fb2 x:0 0 0 2026-03-09T20:47:44.205 INFO:tasks.workunit.client.1.vm10.stdout:8/531: creat d0/d22/d2f/d38/fa5 x:0 0 0 2026-03-09T20:47:44.206 INFO:tasks.workunit.client.1.vm10.stdout:8/532: stat d0/d22/d25/d2e/d41/d47/d63/c98 0 2026-03-09T20:47:44.208 INFO:tasks.workunit.client.1.vm10.stdout:8/533: readlink d0/d22/d2f/l83 0 2026-03-09T20:47:44.208 INFO:tasks.workunit.client.1.vm10.stdout:6/511: creat d3/d30/d7f/d36/d5c/fa5 x:0 0 0 2026-03-09T20:47:44.209 INFO:tasks.workunit.client.0.vm07.stdout:7/619: dwrite d3/da/d83/fa9 [0,4194304] 0 2026-03-09T20:47:44.209 INFO:tasks.workunit.client.1.vm10.stdout:8/534: chown d0/d22/l4e 216413236 1 2026-03-09T20:47:44.211 INFO:tasks.workunit.client.1.vm10.stdout:0/483: read d2/d9/da/d11/f42 [2860071,18976] 0 2026-03-09T20:47:44.212 INFO:tasks.workunit.client.1.vm10.stdout:8/535: dwrite d0/d22/d25/d2e/d41/d47/f87 [0,4194304] 0 2026-03-09T20:47:44.219 INFO:tasks.workunit.client.0.vm07.stdout:7/620: dwrite d3/da/db/f9a [4194304,4194304] 0 2026-03-09T20:47:44.222 INFO:tasks.workunit.client.0.vm07.stdout:7/621: chown d3/da/db/d32/d3e/dac/d1f/d2b/f2c 2292 1 2026-03-09T20:47:44.223 INFO:tasks.workunit.client.1.vm10.stdout:6/512: sync 2026-03-09T20:47:44.245 INFO:tasks.workunit.client.1.vm10.stdout:1/501: rename d2/da/d25/d3e/d55/l8e to d2/da/d25/d46/d51/l9b 0 2026-03-09T20:47:44.245 INFO:tasks.workunit.client.1.vm10.stdout:1/502: stat d2/da/d25/d46/f61 0 2026-03-09T20:47:44.250 INFO:tasks.workunit.client.0.vm07.stdout:6/574: rename d8/d16/d22/d33 to d8/d16/db4 0 2026-03-09T20:47:44.251 INFO:tasks.workunit.client.0.vm07.stdout:5/626: rename d5/d33/d39/d8d to d5/d33/d39/d8d/dd7/dda 22 2026-03-09T20:47:44.256 INFO:tasks.workunit.client.0.vm07.stdout:5/627: chown d5/df/d13/d3e/d47 117893 1 2026-03-09T20:47:44.259 INFO:tasks.workunit.client.1.vm10.stdout:4/468: write d1/d8/d1b/d57/f75 
[231029,86392] 0 2026-03-09T20:47:44.261 INFO:tasks.workunit.client.0.vm07.stdout:2/568: write d2/f7 [287889,81518] 0 2026-03-09T20:47:44.262 INFO:tasks.workunit.client.0.vm07.stdout:9/530: write d4/d16/d29/d24/f85 [2099253,111032] 0 2026-03-09T20:47:44.262 INFO:tasks.workunit.client.0.vm07.stdout:4/496: truncate d2/f43 2598174 0 2026-03-09T20:47:44.265 INFO:tasks.workunit.client.0.vm07.stdout:0/614: dwrite d1/d2/d4b/f70 [0,4194304] 0 2026-03-09T20:47:44.278 INFO:tasks.workunit.client.1.vm10.stdout:3/483: symlink dc/d14/d26/d29/d40/d8c/l98 0 2026-03-09T20:47:44.278 INFO:tasks.workunit.client.0.vm07.stdout:3/540: getdents d1/d5/d9/d11/d60 0 2026-03-09T20:47:44.286 INFO:tasks.workunit.client.1.vm10.stdout:7/504: link db/f19 db/d21/f9c 0 2026-03-09T20:47:44.290 INFO:tasks.workunit.client.0.vm07.stdout:1/591: rename d3/d23/d55/d56 to d3/d97/da1/dc5 0 2026-03-09T20:47:44.290 INFO:tasks.workunit.client.0.vm07.stdout:1/592: readlink d3/d97/da1/dc5/d90/lc2 0 2026-03-09T20:47:44.293 INFO:tasks.workunit.client.0.vm07.stdout:5/628: dwrite d5/d19/f20 [0,4194304] 0 2026-03-09T20:47:44.313 INFO:tasks.workunit.client.1.vm10.stdout:5/461: rename d2/d27/d37/d46/d5d/d5f to d2/d27/d75/d81/db3 0 2026-03-09T20:47:44.316 INFO:tasks.workunit.client.1.vm10.stdout:4/469: creat d1/d47/f96 x:0 0 0 2026-03-09T20:47:44.318 INFO:tasks.workunit.client.0.vm07.stdout:8/529: getdents d1/d5d 0 2026-03-09T20:47:44.323 INFO:tasks.workunit.client.1.vm10.stdout:3/484: creat dc/d14/d26/d8f/f99 x:0 0 0 2026-03-09T20:47:44.324 INFO:tasks.workunit.client.0.vm07.stdout:3/541: symlink d1/d5/d9/d2f/d66/lb1 0 2026-03-09T20:47:44.324 INFO:tasks.workunit.client.1.vm10.stdout:7/505: symlink db/d28/d2b/d36/d40/l9d 0 2026-03-09T20:47:44.324 INFO:tasks.workunit.client.1.vm10.stdout:7/506: chown db/d21/l4b 2854790 1 2026-03-09T20:47:44.325 INFO:tasks.workunit.client.0.vm07.stdout:4/497: dwrite d2/f7 [0,4194304] 0 2026-03-09T20:47:44.326 INFO:tasks.workunit.client.1.vm10.stdout:7/507: write db/d28/d2b/d36/d40/f44 
[443137,125534] 0 2026-03-09T20:47:44.326 INFO:tasks.workunit.client.1.vm10.stdout:0/484: symlink d2/d9/da/d11/da3/la8 0 2026-03-09T20:47:44.327 INFO:tasks.workunit.client.1.vm10.stdout:7/508: readlink db/d28/d4c/l75 0 2026-03-09T20:47:44.327 INFO:tasks.workunit.client.0.vm07.stdout:6/575: mknod d8/d16/d22/d24/da0/daa/cb5 0 2026-03-09T20:47:44.328 INFO:tasks.workunit.client.1.vm10.stdout:7/509: write db/d21/f9a [335816,36378] 0 2026-03-09T20:47:44.328 INFO:tasks.workunit.client.0.vm07.stdout:6/576: stat d8/d5d 0 2026-03-09T20:47:44.329 INFO:tasks.workunit.client.0.vm07.stdout:6/577: truncate d8/d16/d22/f98 1573785 0 2026-03-09T20:47:44.330 INFO:tasks.workunit.client.0.vm07.stdout:6/578: chown d8/d16/c6c 954 1 2026-03-09T20:47:44.330 INFO:tasks.workunit.client.1.vm10.stdout:9/535: dread d2/d3/de/d35/f78 [0,4194304] 0 2026-03-09T20:47:44.348 INFO:tasks.workunit.client.1.vm10.stdout:4/470: creat d1/d8/d39/f97 x:0 0 0 2026-03-09T20:47:44.349 INFO:tasks.workunit.client.0.vm07.stdout:2/569: mkdir d2/d46/db0/db3 0 2026-03-09T20:47:44.350 INFO:tasks.workunit.client.0.vm07.stdout:4/498: mknod d2/df/d59/c88 0 2026-03-09T20:47:44.351 INFO:tasks.workunit.client.0.vm07.stdout:4/499: truncate d2/df/d17/f80 887515 0 2026-03-09T20:47:44.353 INFO:tasks.workunit.client.1.vm10.stdout:4/471: dwrite d1/d2/d5c/d64/d6b/d81/f8a [0,4194304] 0 2026-03-09T20:47:44.359 INFO:tasks.workunit.client.1.vm10.stdout:4/472: dwrite d1/d8/d1b/f8b [0,4194304] 0 2026-03-09T20:47:44.365 INFO:tasks.workunit.client.1.vm10.stdout:7/510: read db/d21/d23/f22 [948857,68393] 0 2026-03-09T20:47:44.366 INFO:tasks.workunit.client.1.vm10.stdout:8/536: link d0/d22/f35 d0/d22/d25/d8f/fa6 0 2026-03-09T20:47:44.367 INFO:tasks.workunit.client.0.vm07.stdout:5/629: creat d5/d19/d73/d97/fdb x:0 0 0 2026-03-09T20:47:44.367 INFO:tasks.workunit.client.1.vm10.stdout:7/511: write db/d28/d2b/d36/d63/d6d/f82 [88027,45553] 0 2026-03-09T20:47:44.368 INFO:tasks.workunit.client.1.vm10.stdout:8/537: write d0/d22/d2f/d38/fa5 
[474591,2655] 0 2026-03-09T20:47:44.368 INFO:tasks.workunit.client.1.vm10.stdout:6/513: link d3/da/d11/d31/d4c/l73 d3/d30/d7f/d36/d6d/la6 0 2026-03-09T20:47:44.371 INFO:tasks.workunit.client.0.vm07.stdout:7/622: link d3/c9 d3/da/db/d32/d7a/cd2 0 2026-03-09T20:47:44.371 INFO:tasks.workunit.client.0.vm07.stdout:8/530: unlink d1/dc/c5a 0 2026-03-09T20:47:44.372 INFO:tasks.workunit.client.0.vm07.stdout:3/542: dread d1/d5/d9/d2f/d3d/d64/f30 [0,4194304] 0 2026-03-09T20:47:44.373 INFO:tasks.workunit.client.1.vm10.stdout:2/513: getdents d5/d2b/d32/d80 0 2026-03-09T20:47:44.376 INFO:tasks.workunit.client.0.vm07.stdout:8/531: symlink d1/d5d/d6f/d2f/d53/la8 0 2026-03-09T20:47:44.382 INFO:tasks.workunit.client.1.vm10.stdout:8/538: sync 2026-03-09T20:47:44.404 INFO:tasks.workunit.client.0.vm07.stdout:2/570: dread d2/db/d28/f58 [0,4194304] 0 2026-03-09T20:47:44.407 INFO:tasks.workunit.client.0.vm07.stdout:4/500: mknod d2/d55/d5d/c89 0 2026-03-09T20:47:44.413 INFO:tasks.workunit.client.0.vm07.stdout:0/615: write d1/f1a [1526554,75076] 0 2026-03-09T20:47:44.417 INFO:tasks.workunit.client.0.vm07.stdout:6/579: read d8/d26/f4d [917774,115006] 0 2026-03-09T20:47:44.420 INFO:tasks.workunit.client.1.vm10.stdout:1/503: write d2/da/f3d [785048,7768] 0 2026-03-09T20:47:44.425 INFO:tasks.workunit.client.0.vm07.stdout:9/531: dwrite d4/d8/dc/ff [0,4194304] 0 2026-03-09T20:47:44.429 INFO:tasks.workunit.client.0.vm07.stdout:1/593: dwrite d3/d66/f7e [0,4194304] 0 2026-03-09T20:47:44.434 INFO:tasks.workunit.client.0.vm07.stdout:6/580: dwrite d8/d16/d22/d24/da0/dab/d40/f65 [4194304,4194304] 0 2026-03-09T20:47:44.443 INFO:tasks.workunit.client.1.vm10.stdout:3/485: write dc/d14/d26/d29/f60 [4474353,45158] 0 2026-03-09T20:47:44.449 INFO:tasks.workunit.client.1.vm10.stdout:3/486: dwrite dc/f88 [0,4194304] 0 2026-03-09T20:47:44.453 INFO:tasks.workunit.client.0.vm07.stdout:5/630: write d5/d33/fb6 [462112,18615] 0 2026-03-09T20:47:44.454 INFO:tasks.workunit.client.1.vm10.stdout:9/536: write 
d2/d3/de/d8f/f9d [4735630,81306] 0 2026-03-09T20:47:44.455 INFO:tasks.workunit.client.1.vm10.stdout:0/485: dwrite d2/d9/da/d35/f68 [4194304,4194304] 0 2026-03-09T20:47:44.456 INFO:tasks.workunit.client.0.vm07.stdout:7/623: write d3/da/db/d32/f3d [1848910,59986] 0 2026-03-09T20:47:44.457 INFO:tasks.workunit.client.1.vm10.stdout:6/514: unlink d3/d30/d7f/f16 0 2026-03-09T20:47:44.458 INFO:tasks.workunit.client.1.vm10.stdout:2/514: symlink d5/d18/d27/d89/la9 0 2026-03-09T20:47:44.461 INFO:tasks.workunit.client.1.vm10.stdout:5/462: dwrite d2/f2c [0,4194304] 0 2026-03-09T20:47:44.462 INFO:tasks.workunit.client.0.vm07.stdout:8/532: mknod d1/d5d/d6f/ca9 0 2026-03-09T20:47:44.463 INFO:tasks.workunit.client.1.vm10.stdout:6/515: chown d3/d30/d33/l7d 0 1 2026-03-09T20:47:44.463 INFO:tasks.workunit.client.1.vm10.stdout:6/516: stat d3/d9c 0 2026-03-09T20:47:44.463 INFO:tasks.workunit.client.1.vm10.stdout:5/463: stat d2/d27/d37/d46/c72 0 2026-03-09T20:47:44.482 INFO:tasks.workunit.client.0.vm07.stdout:4/501: unlink d2/d55/d5d/f7e 0 2026-03-09T20:47:44.507 INFO:tasks.workunit.client.1.vm10.stdout:3/487: write dc/d14/d20/d21/f50 [2522431,5449] 0 2026-03-09T20:47:44.507 INFO:tasks.workunit.client.1.vm10.stdout:7/512: truncate db/d21/d26/f2f 829961 0 2026-03-09T20:47:44.514 INFO:tasks.workunit.client.1.vm10.stdout:3/488: dread dc/d14/d20/d2e/d56/f15 [0,4194304] 0 2026-03-09T20:47:44.518 INFO:tasks.workunit.client.1.vm10.stdout:0/486: creat d2/d9/da/d35/d30/fa9 x:0 0 0 2026-03-09T20:47:44.520 INFO:tasks.workunit.client.0.vm07.stdout:6/581: unlink d8/d16/c6c 0 2026-03-09T20:47:44.521 INFO:tasks.workunit.client.0.vm07.stdout:1/594: dread d3/d14/d54/d3e/f75 [0,4194304] 0 2026-03-09T20:47:44.523 INFO:tasks.workunit.client.0.vm07.stdout:7/624: symlink d3/da/d53/ld3 0 2026-03-09T20:47:44.525 INFO:tasks.workunit.client.1.vm10.stdout:2/515: symlink d5/d18/d27/d38/d61/laa 0 2026-03-09T20:47:44.528 INFO:tasks.workunit.client.1.vm10.stdout:5/464: rmdir d2/d27/d75/d81/db3/d66 39 
2026-03-09T20:47:44.529 INFO:tasks.workunit.client.0.vm07.stdout:3/543: mknod d1/d5/d9/cb2 0 2026-03-09T20:47:44.531 INFO:tasks.workunit.client.0.vm07.stdout:7/625: dread d3/d58/dc1/fcc [0,4194304] 0 2026-03-09T20:47:44.535 INFO:tasks.workunit.client.1.vm10.stdout:1/504: getdents d2/da/d25/d46/d8c 0 2026-03-09T20:47:44.535 INFO:tasks.workunit.client.1.vm10.stdout:3/489: mkdir dc/d14/d20/d21/d9a 0 2026-03-09T20:47:44.535 INFO:tasks.workunit.client.1.vm10.stdout:0/487: creat d2/d9/d69/faa x:0 0 0 2026-03-09T20:47:44.537 INFO:tasks.workunit.client.0.vm07.stdout:4/502: chown d2/c1a 4611 1 2026-03-09T20:47:44.539 INFO:tasks.workunit.client.0.vm07.stdout:5/631: symlink d5/d19/d73/ldc 0 2026-03-09T20:47:44.539 INFO:tasks.workunit.client.1.vm10.stdout:2/516: creat d5/d2b/d32/fab x:0 0 0 2026-03-09T20:47:44.540 INFO:tasks.workunit.client.0.vm07.stdout:4/503: write d2/df/d17/f73 [1179760,47767] 0 2026-03-09T20:47:44.540 INFO:tasks.workunit.client.1.vm10.stdout:0/488: dwrite d2/d4a/d58/d82/d93/fa1 [0,4194304] 0 2026-03-09T20:47:44.542 INFO:tasks.workunit.client.1.vm10.stdout:5/465: symlink d2/d27/d75/d81/db3/d84/lb4 0 2026-03-09T20:47:44.545 INFO:tasks.workunit.client.0.vm07.stdout:7/626: unlink d3/da/db/d32/d3e/dac/c78 0 2026-03-09T20:47:44.548 INFO:tasks.workunit.client.1.vm10.stdout:0/489: dwrite d2/f39 [0,4194304] 0 2026-03-09T20:47:44.551 INFO:tasks.workunit.client.1.vm10.stdout:0/490: fdatasync d2/d9/da/fa7 0 2026-03-09T20:47:44.552 INFO:tasks.workunit.client.1.vm10.stdout:0/491: write d2/d9/da/d35/f68 [2620837,58758] 0 2026-03-09T20:47:44.553 INFO:tasks.workunit.client.1.vm10.stdout:0/492: stat d2/d4a/d58/d82/d71/d5d/f76 0 2026-03-09T20:47:44.553 INFO:tasks.workunit.client.1.vm10.stdout:0/493: chown d2/d9/da/d11/da3 158 1 2026-03-09T20:47:44.554 INFO:tasks.workunit.client.1.vm10.stdout:0/494: readlink d2/d4a/d58/d82/d71/l18 0 2026-03-09T20:47:44.559 INFO:tasks.workunit.client.1.vm10.stdout:1/505: creat d2/da/d25/d46/d51/d5d/d6e/f9c x:0 0 0 2026-03-09T20:47:44.560 
INFO:tasks.workunit.client.1.vm10.stdout:1/506: chown d2/da/d25/d46/d51/d5d/d6e/d70/f79 55421635 1 2026-03-09T20:47:44.561 INFO:tasks.workunit.client.1.vm10.stdout:1/507: rename d2 to d2/da/d25/d46/d51/d5d/d6e/d9d 22 2026-03-09T20:47:44.563 INFO:tasks.workunit.client.0.vm07.stdout:2/571: write d2/d11/f60 [1031582,113598] 0 2026-03-09T20:47:44.564 INFO:tasks.workunit.client.1.vm10.stdout:4/473: dwrite d1/d8/d1c/f23 [0,4194304] 0 2026-03-09T20:47:44.564 INFO:tasks.workunit.client.0.vm07.stdout:9/532: rename d4/d8/dc/d15/f57 to d4/d8/d19/fc2 0 2026-03-09T20:47:44.572 INFO:tasks.workunit.client.0.vm07.stdout:6/582: unlink d8/f15 0 2026-03-09T20:47:44.572 INFO:tasks.workunit.client.1.vm10.stdout:1/508: sync 2026-03-09T20:47:44.577 INFO:tasks.workunit.client.0.vm07.stdout:1/595: dread d3/d14/d54/d3e/f4a [0,4194304] 0 2026-03-09T20:47:44.581 INFO:tasks.workunit.client.1.vm10.stdout:8/539: write d0/d22/d25/d2e/d41/d47/d63/f8c [3529560,7208] 0 2026-03-09T20:47:44.585 INFO:tasks.workunit.client.0.vm07.stdout:0/616: dwrite d1/d2/dc/d17/da6/fae [0,4194304] 0 2026-03-09T20:47:44.608 INFO:tasks.workunit.client.1.vm10.stdout:5/466: creat d2/d27/d37/fb5 x:0 0 0 2026-03-09T20:47:44.608 INFO:tasks.workunit.client.1.vm10.stdout:5/467: readlink d2/d27/l44 0 2026-03-09T20:47:44.609 INFO:tasks.workunit.client.1.vm10.stdout:5/468: fdatasync d2/d80/fb2 0 2026-03-09T20:47:44.613 INFO:tasks.workunit.client.1.vm10.stdout:9/537: dwrite d2/d12/f69 [0,4194304] 0 2026-03-09T20:47:44.614 INFO:tasks.workunit.client.1.vm10.stdout:5/469: dread d2/d27/d75/d81/db3/d69/f76 [0,4194304] 0 2026-03-09T20:47:44.618 INFO:tasks.workunit.client.1.vm10.stdout:6/517: write d3/d30/d7f/d4a/f4b [845340,60094] 0 2026-03-09T20:47:44.625 INFO:tasks.workunit.client.1.vm10.stdout:6/518: dwrite d3/d30/f91 [0,4194304] 0 2026-03-09T20:47:44.629 INFO:tasks.workunit.client.0.vm07.stdout:8/533: link d1/dc/fd d1/d5d/d6f/d80/faa 0 2026-03-09T20:47:44.639 INFO:tasks.workunit.client.1.vm10.stdout:5/470: read d2/d39/d4b/f51 
[209778,74774] 0 2026-03-09T20:47:44.647 INFO:tasks.workunit.client.0.vm07.stdout:3/544: dwrite d1/f78 [0,4194304] 0 2026-03-09T20:47:44.661 INFO:tasks.workunit.client.0.vm07.stdout:4/504: dwrite d2/df/f6b [0,4194304] 0 2026-03-09T20:47:44.665 INFO:tasks.workunit.client.0.vm07.stdout:3/545: dread d1/f78 [0,4194304] 0 2026-03-09T20:47:44.672 INFO:tasks.workunit.client.0.vm07.stdout:7/627: rename d3/da/f45 to d3/da/d83/dc5/fd4 0 2026-03-09T20:47:44.680 INFO:tasks.workunit.client.0.vm07.stdout:7/628: fdatasync d3/d58/d82/d90/fbe 0 2026-03-09T20:47:44.684 INFO:tasks.workunit.client.1.vm10.stdout:3/490: creat dc/d14/d22/d7f/d69/d75/d91/f9b x:0 0 0 2026-03-09T20:47:44.688 INFO:tasks.workunit.client.0.vm07.stdout:9/533: read d4/d16/d29/d24/d37/f51 [1970806,23443] 0 2026-03-09T20:47:44.700 INFO:tasks.workunit.client.0.vm07.stdout:5/632: symlink d5/df/ldd 0 2026-03-09T20:47:44.709 INFO:tasks.workunit.client.0.vm07.stdout:6/583: creat d8/d16/d4b/d88/fb6 x:0 0 0 2026-03-09T20:47:44.711 INFO:tasks.workunit.client.0.vm07.stdout:6/584: readlink d8/le 0 2026-03-09T20:47:44.714 INFO:tasks.workunit.client.1.vm10.stdout:1/509: stat d2/da/f50 0 2026-03-09T20:47:44.718 INFO:tasks.workunit.client.1.vm10.stdout:8/540: creat d0/d22/d25/d2e/d41/d85/fa7 x:0 0 0 2026-03-09T20:47:44.723 INFO:tasks.workunit.client.1.vm10.stdout:9/538: creat d2/d12/d5a/fba x:0 0 0 2026-03-09T20:47:44.728 INFO:tasks.workunit.client.1.vm10.stdout:6/519: symlink d3/da/d11/d26/la7 0 2026-03-09T20:47:44.739 INFO:tasks.workunit.client.0.vm07.stdout:4/505: mkdir d2/df/d59/d8a 0 2026-03-09T20:47:44.740 INFO:tasks.workunit.client.0.vm07.stdout:3/546: fsync d1/d5/d9/d11/d6d/d80/f93 0 2026-03-09T20:47:44.744 INFO:tasks.workunit.client.0.vm07.stdout:3/547: readlink d1/d5/d9/d2f/d34/lae 0 2026-03-09T20:47:44.754 INFO:tasks.workunit.client.0.vm07.stdout:2/572: mkdir d2/da7/db4 0 2026-03-09T20:47:44.754 INFO:tasks.workunit.client.0.vm07.stdout:5/633: truncate d5/d69/fc5 473974 0 2026-03-09T20:47:44.756 
INFO:tasks.workunit.client.1.vm10.stdout:6/520: sync 2026-03-09T20:47:44.760 INFO:tasks.workunit.client.0.vm07.stdout:7/629: sync 2026-03-09T20:47:44.760 INFO:tasks.workunit.client.0.vm07.stdout:6/585: sync 2026-03-09T20:47:44.760 INFO:tasks.workunit.client.0.vm07.stdout:1/596: mkdir d3/dc6 0 2026-03-09T20:47:44.761 INFO:tasks.workunit.client.0.vm07.stdout:0/617: read d1/f48 [781298,114869] 0 2026-03-09T20:47:44.772 INFO:tasks.workunit.client.0.vm07.stdout:1/597: dwrite d3/f82 [0,4194304] 0 2026-03-09T20:47:44.775 INFO:tasks.workunit.client.1.vm10.stdout:0/495: write d2/d9/da/d35/f3a [5251907,19663] 0 2026-03-09T20:47:44.776 INFO:tasks.workunit.client.0.vm07.stdout:1/598: write d3/f6f [942688,97997] 0 2026-03-09T20:47:44.778 INFO:tasks.workunit.client.1.vm10.stdout:0/496: read d2/d9/da/d35/f84 [3899198,127532] 0 2026-03-09T20:47:44.779 INFO:tasks.workunit.client.1.vm10.stdout:0/497: dread - d2/d9/d69/faa zero size 2026-03-09T20:47:44.790 INFO:tasks.workunit.client.0.vm07.stdout:8/534: dwrite d1/d5d/d6f/d2f/d4d/d63/f77 [0,4194304] 0 2026-03-09T20:47:44.794 INFO:tasks.workunit.client.0.vm07.stdout:8/535: dread - d1/dc/d16/d31/fa0 zero size 2026-03-09T20:47:44.808 INFO:tasks.workunit.client.0.vm07.stdout:9/534: write d4/d11/d2a/f5d [2292124,92422] 0 2026-03-09T20:47:44.818 INFO:tasks.workunit.client.0.vm07.stdout:4/506: mkdir d2/d55/d8b 0 2026-03-09T20:47:44.821 INFO:tasks.workunit.client.0.vm07.stdout:3/548: mkdir d1/d5/d9/d11/d6d/d80/db3 0 2026-03-09T20:47:44.834 INFO:tasks.workunit.client.0.vm07.stdout:2/573: write d2/db/d28/d5c/f8c [947830,91432] 0 2026-03-09T20:47:44.854 INFO:tasks.workunit.client.1.vm10.stdout:3/491: mkdir dc/d14/d26/d29/d40/d8c/d9c 0 2026-03-09T20:47:44.857 INFO:tasks.workunit.client.0.vm07.stdout:0/618: mknod d1/d2/d4b/cc5 0 2026-03-09T20:47:44.859 INFO:tasks.workunit.client.1.vm10.stdout:2/517: link d5/d2b/d32/d80/d47/d94/f9b d5/d18/d27/da6/fac 0 2026-03-09T20:47:44.859 INFO:tasks.workunit.client.1.vm10.stdout:2/518: stat d5/d18/d27/da6/fac 0 
2026-03-09T20:47:44.863 INFO:tasks.workunit.client.1.vm10.stdout:4/474: link d1/d67/f8f d1/d8/d1c/d2b/d4a/f98 0 2026-03-09T20:47:44.870 INFO:tasks.workunit.client.0.vm07.stdout:8/536: mknod d1/d5d/d6f/d80/cab 0 2026-03-09T20:47:44.871 INFO:tasks.workunit.client.0.vm07.stdout:9/535: rmdir d4/d16/d29 39 2026-03-09T20:47:44.872 INFO:tasks.workunit.client.0.vm07.stdout:8/537: chown d1/dc/d16/l1a 60 1 2026-03-09T20:47:44.872 INFO:tasks.workunit.client.0.vm07.stdout:8/538: chown d1/d3b/f3e 411 1 2026-03-09T20:47:44.876 INFO:tasks.workunit.client.1.vm10.stdout:1/510: dwrite d2/f21 [4194304,4194304] 0 2026-03-09T20:47:44.880 INFO:tasks.workunit.client.0.vm07.stdout:4/507: rename d2/df/d17/f37 to d2/d55/d5d/d3f/d4a/d85/f8c 0 2026-03-09T20:47:44.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:44 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:44.883 INFO:tasks.workunit.client.0.vm07.stdout:4/508: write d2/f4c [2284662,103939] 0 2026-03-09T20:47:44.903 INFO:tasks.workunit.client.1.vm10.stdout:0/498: rename d2/d9/da/d11/l45 to d2/d9/d2a/lab 0 2026-03-09T20:47:44.903 INFO:tasks.workunit.client.1.vm10.stdout:3/492: symlink dc/d14/d26/d29/d2a/d76/l9d 0 2026-03-09T20:47:44.904 INFO:tasks.workunit.client.1.vm10.stdout:4/475: mkdir d1/d2/d3/d70/d99 0 2026-03-09T20:47:44.905 INFO:tasks.workunit.client.0.vm07.stdout:1/599: creat d3/d23/d52/da7/fc7 x:0 0 0 2026-03-09T20:47:44.908 INFO:tasks.workunit.client.0.vm07.stdout:6/586: read d8/d16/d22/d24/da0/dab/d40/d69/f78 [2003043,108999] 0 2026-03-09T20:47:44.910 INFO:tasks.workunit.client.1.vm10.stdout:8/541: creat d0/d22/d25/d40/d86/d91/fa8 x:0 0 0 2026-03-09T20:47:44.910 INFO:tasks.workunit.client.1.vm10.stdout:8/542: fsync d0/d22/d25/d2e/d41/d85/fa7 0 2026-03-09T20:47:44.911 INFO:tasks.workunit.client.1.vm10.stdout:8/543: dread - d0/d54/fa4 zero size 2026-03-09T20:47:44.911 
INFO:tasks.workunit.client.1.vm10.stdout:8/544: write d0/d22/d25/d2e/d41/d47/f4b [2502722,80427] 0 2026-03-09T20:47:44.913 INFO:tasks.workunit.client.1.vm10.stdout:9/539: creat d2/d3/d6d/db7/fbb x:0 0 0 2026-03-09T20:47:44.920 INFO:tasks.workunit.client.1.vm10.stdout:8/545: dread d0/d22/d25/d2e/d41/d47/d63/f8c [0,4194304] 0 2026-03-09T20:47:44.924 INFO:tasks.workunit.client.1.vm10.stdout:7/513: truncate db/d21/d26/f2f 70629 0 2026-03-09T20:47:44.927 INFO:tasks.workunit.client.1.vm10.stdout:8/546: dread d0/d22/d2c/f6b [0,4194304] 0 2026-03-09T20:47:44.928 INFO:tasks.workunit.client.1.vm10.stdout:8/547: write d0/d22/d25/f74 [1093199,22823] 0 2026-03-09T20:47:44.929 INFO:tasks.workunit.client.0.vm07.stdout:3/549: dwrite d1/d5/d9/d11/d1f/f7f [4194304,4194304] 0 2026-03-09T20:47:44.932 INFO:tasks.workunit.client.0.vm07.stdout:3/550: fsync d1/d5/d9/fe 0 2026-03-09T20:47:44.944 INFO:tasks.workunit.client.0.vm07.stdout:5/634: dwrite d5/d33/d3b/f63 [0,4194304] 0 2026-03-09T20:47:44.959 INFO:tasks.workunit.client.1.vm10.stdout:5/471: creat d2/d27/d75/d81/db3/fb6 x:0 0 0 2026-03-09T20:47:44.962 INFO:tasks.workunit.client.1.vm10.stdout:5/472: chown d2/d58/d6c/f98 247014395 1 2026-03-09T20:47:44.963 INFO:tasks.workunit.client.1.vm10.stdout:1/511: dwrite d2/f8 [0,4194304] 0 2026-03-09T20:47:44.966 INFO:tasks.workunit.client.1.vm10.stdout:6/521: mknod d3/da/d11/d31/d47/d87/ca8 0 2026-03-09T20:47:44.970 INFO:tasks.workunit.client.1.vm10.stdout:0/499: rmdir d2/d4a/d79 39 2026-03-09T20:47:44.973 INFO:tasks.workunit.client.1.vm10.stdout:2/519: link d5/d2b/f69 d5/d18/d27/d5f/fad 0 2026-03-09T20:47:44.974 INFO:tasks.workunit.client.1.vm10.stdout:0/500: dread d2/d9/da/d35/d30/f7f [0,4194304] 0 2026-03-09T20:47:44.974 INFO:tasks.workunit.client.1.vm10.stdout:0/501: write d2/d9/da/fa7 [496154,89480] 0 2026-03-09T20:47:44.975 INFO:tasks.workunit.client.0.vm07.stdout:7/630: rename d3/da/db/f9a to d3/da/db/d79/dc3/fd5 0 2026-03-09T20:47:44.976 INFO:tasks.workunit.client.1.vm10.stdout:0/502: 
write d2/d4a/d58/d82/d71/d5d/f8c [1293174,6031] 0 2026-03-09T20:47:44.980 INFO:tasks.workunit.client.1.vm10.stdout:3/493: unlink dc/d14/d22/d7f/d69/d75/d91/f9b 0 2026-03-09T20:47:44.987 INFO:tasks.workunit.client.0.vm07.stdout:8/539: truncate d1/dc/d6a/f62 600413 0 2026-03-09T20:47:44.992 INFO:tasks.workunit.client.1.vm10.stdout:9/540: mkdir d2/d3/de/d8f/dbc 0 2026-03-09T20:47:44.995 INFO:tasks.workunit.client.0.vm07.stdout:8/540: dwrite d1/d5d/d6f/d2f/d53/f89 [0,4194304] 0 2026-03-09T20:47:45.014 INFO:tasks.workunit.client.1.vm10.stdout:8/548: read - d0/d22/f71 zero size 2026-03-09T20:47:45.015 INFO:tasks.workunit.client.0.vm07.stdout:3/551: rename d1/d5/d9/d11/d6d/ca8 to d1/d5/d9/d11/d6d/cb4 0 2026-03-09T20:47:45.016 INFO:tasks.workunit.client.0.vm07.stdout:5/635: creat d5/df/d13/d6c/fde x:0 0 0 2026-03-09T20:47:45.026 INFO:tasks.workunit.client.1.vm10.stdout:1/512: mkdir d2/da/d25/d46/d51/d7e/d9e 0 2026-03-09T20:47:45.027 INFO:tasks.workunit.client.1.vm10.stdout:1/513: chown d2/da/d25/d46/d51 0 1 2026-03-09T20:47:45.037 INFO:tasks.workunit.client.1.vm10.stdout:6/522: read d3/da/d11/d31/f82 [42916,107102] 0 2026-03-09T20:47:45.037 INFO:tasks.workunit.client.0.vm07.stdout:7/631: mknod d3/da/db/d79/dc3/cd6 0 2026-03-09T20:47:45.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:44 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:45.037 INFO:tasks.workunit.client.1.vm10.stdout:1/514: sync 2026-03-09T20:47:45.040 INFO:tasks.workunit.client.0.vm07.stdout:1/600: symlink d3/d14/lc8 0 2026-03-09T20:47:45.041 INFO:tasks.workunit.client.1.vm10.stdout:0/503: mkdir d2/d9/da/d48/dac 0 2026-03-09T20:47:45.042 INFO:tasks.workunit.client.1.vm10.stdout:2/520: write d5/d18/d27/f29 [1384145,26248] 0 2026-03-09T20:47:45.043 INFO:tasks.workunit.client.1.vm10.stdout:2/521: fdatasync d5/fb 0 2026-03-09T20:47:45.043 
INFO:tasks.workunit.client.1.vm10.stdout:2/522: chown d5/d2b/d32/d80/d8d 92650073 1 2026-03-09T20:47:45.044 INFO:tasks.workunit.client.1.vm10.stdout:2/523: chown d5/d2b/d32/d80/d8d 262355575 1 2026-03-09T20:47:45.044 INFO:tasks.workunit.client.1.vm10.stdout:4/476: link d1/fe d1/d67/f9a 0 2026-03-09T20:47:45.045 INFO:tasks.workunit.client.0.vm07.stdout:6/587: mknod d8/d16/d22/d24/da0/dab/d40/cb7 0 2026-03-09T20:47:45.048 INFO:tasks.workunit.client.0.vm07.stdout:8/541: creat d1/d5d/d6f/d2f/d4d/d55/fac x:0 0 0 2026-03-09T20:47:45.052 INFO:tasks.workunit.client.0.vm07.stdout:5/636: creat d5/df/d13/d6c/db1/fdf x:0 0 0 2026-03-09T20:47:45.056 INFO:tasks.workunit.client.1.vm10.stdout:7/514: getdents db/d21/d23 0 2026-03-09T20:47:45.056 INFO:tasks.workunit.client.0.vm07.stdout:4/509: link d2/l30 d2/d55/d8b/l8d 0 2026-03-09T20:47:45.056 INFO:tasks.workunit.client.1.vm10.stdout:7/515: chown db/d28/d4c 8 1 2026-03-09T20:47:45.057 INFO:tasks.workunit.client.0.vm07.stdout:4/510: chown d2/d55/d5d/d3f/d4a/d85/f8c 189494164 1 2026-03-09T20:47:45.059 INFO:tasks.workunit.client.1.vm10.stdout:9/541: dwrite d2/d28/f32 [0,4194304] 0 2026-03-09T20:47:45.060 INFO:tasks.workunit.client.1.vm10.stdout:9/542: chown d2/d3/de 4 1 2026-03-09T20:47:45.061 INFO:tasks.workunit.client.0.vm07.stdout:3/552: rmdir d1/d5/d9/d11/d60 39 2026-03-09T20:47:45.061 INFO:tasks.workunit.client.1.vm10.stdout:9/543: write d2/d3/de/d8f/fb5 [881970,57134] 0 2026-03-09T20:47:45.077 INFO:tasks.workunit.client.0.vm07.stdout:7/632: fdatasync d3/da/db/d32/d3e/dac/d1f/d2b/d52/f73 0 2026-03-09T20:47:45.083 INFO:tasks.workunit.client.0.vm07.stdout:1/601: creat d3/d23/fc9 x:0 0 0 2026-03-09T20:47:45.094 INFO:tasks.workunit.client.0.vm07.stdout:6/588: write d8/d16/d22/d24/f25 [4317348,83111] 0 2026-03-09T20:47:45.096 INFO:tasks.workunit.client.1.vm10.stdout:5/473: truncate d2/d27/d37/f57 2803769 0 2026-03-09T20:47:45.096 INFO:tasks.workunit.client.1.vm10.stdout:5/474: chown d2/lf 163 1 2026-03-09T20:47:45.099 
INFO:tasks.workunit.client.1.vm10.stdout:0/504: creat d2/d9/d4b/d63/fad x:0 0 0 2026-03-09T20:47:45.104 INFO:tasks.workunit.client.1.vm10.stdout:3/494: mkdir dc/d9e 0 2026-03-09T20:47:45.108 INFO:tasks.workunit.client.0.vm07.stdout:3/553: chown d1/d5/d9/daf/c52 231 1 2026-03-09T20:47:45.110 INFO:tasks.workunit.client.1.vm10.stdout:2/524: mknod d5/d18/d27/d89/cae 0 2026-03-09T20:47:45.111 INFO:tasks.workunit.client.0.vm07.stdout:1/602: sync 2026-03-09T20:47:45.113 INFO:tasks.workunit.client.0.vm07.stdout:2/574: link d2/db/d28/d5c/c84 d2/d11/d56/cb5 0 2026-03-09T20:47:45.113 INFO:tasks.workunit.client.0.vm07.stdout:2/575: fdatasync d2/db/d49/fad 0 2026-03-09T20:47:45.116 INFO:tasks.workunit.client.0.vm07.stdout:4/511: write d2/df/d17/f63 [30639,85284] 0 2026-03-09T20:47:45.123 INFO:tasks.workunit.client.1.vm10.stdout:4/477: mkdir d1/d8/d1c/d2b/d4a/d9b 0 2026-03-09T20:47:45.124 INFO:tasks.workunit.client.0.vm07.stdout:5/637: dwrite d5/df/d13/f1f [0,4194304] 0 2026-03-09T20:47:45.126 INFO:tasks.workunit.client.1.vm10.stdout:4/478: sync 2026-03-09T20:47:45.131 INFO:tasks.workunit.client.0.vm07.stdout:5/638: chown d5/d19/d73/dbc/ccf 23326 1 2026-03-09T20:47:45.131 INFO:tasks.workunit.client.0.vm07.stdout:5/639: readlink d5/d33/l68 0 2026-03-09T20:47:45.135 INFO:tasks.workunit.client.0.vm07.stdout:5/640: truncate d5/d19/f20 5005341 0 2026-03-09T20:47:45.145 INFO:tasks.workunit.client.1.vm10.stdout:7/516: mknod db/d21/d26/d72/c9e 0 2026-03-09T20:47:45.145 INFO:tasks.workunit.client.1.vm10.stdout:7/517: chown db/f19 195635 1 2026-03-09T20:47:45.151 INFO:tasks.workunit.client.0.vm07.stdout:9/536: getdents d4/d16/d29/d24/d37/d44/d62 0 2026-03-09T20:47:45.154 INFO:tasks.workunit.client.1.vm10.stdout:8/549: symlink d0/d22/d25/d2e/d41/d85/d8b/la9 0 2026-03-09T20:47:45.160 INFO:tasks.workunit.client.1.vm10.stdout:5/475: chown d2/f3e 13 1 2026-03-09T20:47:45.162 INFO:tasks.workunit.client.1.vm10.stdout:6/523: write d3/da/d11/d31/f81 [1980859,89932] 0 2026-03-09T20:47:45.164 
INFO:tasks.workunit.client.1.vm10.stdout:0/505: unlink d2/f39 0 2026-03-09T20:47:45.166 INFO:tasks.workunit.client.1.vm10.stdout:0/506: sync 2026-03-09T20:47:45.172 INFO:tasks.workunit.client.1.vm10.stdout:2/525: creat d5/d2b/d32/d80/d47/d7b/faf x:0 0 0 2026-03-09T20:47:45.176 INFO:tasks.workunit.client.1.vm10.stdout:4/479: dread - d1/d8/d1c/d2b/d4a/f73 zero size 2026-03-09T20:47:45.176 INFO:tasks.workunit.client.1.vm10.stdout:4/480: stat d1/d8/d39/c63 0 2026-03-09T20:47:45.183 INFO:tasks.workunit.client.1.vm10.stdout:9/544: getdents d2/d33/db1 0 2026-03-09T20:47:45.186 INFO:tasks.workunit.client.1.vm10.stdout:8/550: truncate d0/fe 3320412 0 2026-03-09T20:47:45.193 INFO:tasks.workunit.client.1.vm10.stdout:1/515: creat d2/da/f9f x:0 0 0 2026-03-09T20:47:45.194 INFO:tasks.workunit.client.1.vm10.stdout:1/516: readlink d2/da/d25/d3e/d55/l75 0 2026-03-09T20:47:45.194 INFO:tasks.workunit.client.1.vm10.stdout:1/517: chown d2/c2c 272638 1 2026-03-09T20:47:45.195 INFO:tasks.workunit.client.1.vm10.stdout:1/518: write d2/da/f34 [3021882,70101] 0 2026-03-09T20:47:45.202 INFO:tasks.workunit.client.1.vm10.stdout:6/524: mkdir d3/da/d11/d31/d4c/da9 0 2026-03-09T20:47:45.210 INFO:tasks.workunit.client.1.vm10.stdout:1/519: dread d2/da/d25/f65 [0,4194304] 0 2026-03-09T20:47:45.211 INFO:tasks.workunit.client.1.vm10.stdout:1/520: write d2/da/f50 [2332764,58755] 0 2026-03-09T20:47:45.225 INFO:tasks.workunit.client.1.vm10.stdout:0/507: write d2/d9/da/d11/f15 [1792246,21534] 0 2026-03-09T20:47:45.225 INFO:tasks.workunit.client.1.vm10.stdout:3/495: getdents dc/d14/d20/d21/d3b/d8e 0 2026-03-09T20:47:45.227 INFO:tasks.workunit.client.1.vm10.stdout:0/508: fdatasync d2/d9/da/d35/d30/f9a 0 2026-03-09T20:47:45.230 INFO:tasks.workunit.client.1.vm10.stdout:2/526: mkdir d5/d2b/db0 0 2026-03-09T20:47:45.234 INFO:tasks.workunit.client.1.vm10.stdout:2/527: dread d5/d18/f24 [0,4194304] 0 2026-03-09T20:47:45.235 INFO:tasks.workunit.client.1.vm10.stdout:4/481: creat d1/d2/d5c/f9c x:0 0 0 
2026-03-09T20:47:45.248 INFO:tasks.workunit.client.1.vm10.stdout:7/518: dwrite f3 [0,4194304] 0 2026-03-09T20:47:45.250 INFO:tasks.workunit.client.1.vm10.stdout:7/519: stat db/d21/d26/d72/l76 0 2026-03-09T20:47:45.251 INFO:tasks.workunit.client.1.vm10.stdout:7/520: chown db/d21/d60/d87 64992235 1 2026-03-09T20:47:45.251 INFO:tasks.workunit.client.1.vm10.stdout:9/545: truncate d2/d33/d37/f4c 5925988 0 2026-03-09T20:47:45.287 INFO:tasks.workunit.client.1.vm10.stdout:8/551: dwrite d0/d22/d25/d6c/f68 [0,4194304] 0 2026-03-09T20:47:45.290 INFO:tasks.workunit.client.0.vm07.stdout:6/589: mkdir d8/d16/da3/db8 0 2026-03-09T20:47:45.317 INFO:tasks.workunit.client.0.vm07.stdout:2/576: mkdir d2/db/d1c/d4a/db6 0 2026-03-09T20:47:45.319 INFO:tasks.workunit.client.0.vm07.stdout:1/603: dread d3/d14/d54/d3e/f72 [0,4194304] 0 2026-03-09T20:47:45.324 INFO:tasks.workunit.client.0.vm07.stdout:4/512: dread d2/d55/d5d/d3f/d4a/d4b/d52/f5a [0,4194304] 0 2026-03-09T20:47:45.329 INFO:tasks.workunit.client.1.vm10.stdout:4/482: rmdir d1/d2/d3/d54 39 2026-03-09T20:47:45.338 INFO:tasks.workunit.client.0.vm07.stdout:0/619: getdents d1/d1f/d30 0 2026-03-09T20:47:45.339 INFO:tasks.workunit.client.0.vm07.stdout:0/620: truncate d1/d2/dc/d80/fb7 1018352 0 2026-03-09T20:47:45.346 INFO:tasks.workunit.client.0.vm07.stdout:0/621: dread d1/f48 [0,4194304] 0 2026-03-09T20:47:45.348 INFO:tasks.workunit.client.0.vm07.stdout:7/633: truncate d3/da/db/d32/d3e/dac/f92 2838899 0 2026-03-09T20:47:45.352 INFO:tasks.workunit.client.1.vm10.stdout:7/521: creat db/d46/f9f x:0 0 0 2026-03-09T20:47:45.355 INFO:tasks.workunit.client.0.vm07.stdout:9/537: mknod d4/d11/d23/cc3 0 2026-03-09T20:47:45.367 INFO:tasks.workunit.client.0.vm07.stdout:9/538: dread d4/d16/d29/d24/f85 [0,4194304] 0 2026-03-09T20:47:45.368 INFO:tasks.workunit.client.0.vm07.stdout:9/539: stat d4/d16/d29/d24/d37/d44/d62/d8e 0 2026-03-09T20:47:45.368 INFO:tasks.workunit.client.0.vm07.stdout:9/540: chown d4/d8/dc/dbb 0 1 2026-03-09T20:47:45.372 
INFO:tasks.workunit.client.0.vm07.stdout:6/590: dread d8/d16/d22/d24/da0/dab/d40/d69/f78 [0,4194304] 0 2026-03-09T20:47:45.375 INFO:tasks.workunit.client.1.vm10.stdout:5/476: rename d2/d27/d37/d46/d99/f9c to d2/d27/d37/d46/fb7 0 2026-03-09T20:47:45.376 INFO:tasks.workunit.client.0.vm07.stdout:8/542: rename d1/d5d/d6f/d2f/d53/d76 to d1/dc/d16/dad 0 2026-03-09T20:47:45.382 INFO:tasks.workunit.client.0.vm07.stdout:2/577: rmdir d2/db/d28/d90 39 2026-03-09T20:47:45.382 INFO:tasks.workunit.client.0.vm07.stdout:1/604: mknod d3/d14/d54/cca 0 2026-03-09T20:47:45.383 INFO:tasks.workunit.client.0.vm07.stdout:9/541: dwrite d4/d16/f27 [0,4194304] 0 2026-03-09T20:47:45.384 INFO:tasks.workunit.client.1.vm10.stdout:2/528: symlink d5/d2b/d32/d80/d8d/d93/da5/lb1 0 2026-03-09T20:47:45.384 INFO:tasks.workunit.client.1.vm10.stdout:9/546: fsync d2/d3/de/f7c 0 2026-03-09T20:47:45.384 INFO:tasks.workunit.client.1.vm10.stdout:9/547: stat d2/d3/d6d/db7 0 2026-03-09T20:47:45.385 INFO:tasks.workunit.client.1.vm10.stdout:9/548: dread - d2/d12/d5a/da7/faf zero size 2026-03-09T20:47:45.395 INFO:tasks.workunit.client.0.vm07.stdout:5/641: unlink d5/d33/d75/c9f 0 2026-03-09T20:47:45.400 INFO:tasks.workunit.client.1.vm10.stdout:1/521: rename d2/da/d25/d46/d51/d5d/d6e/d7f to d2/da/d25/d46/d80/da0 0 2026-03-09T20:47:45.401 INFO:tasks.workunit.client.1.vm10.stdout:1/522: chown d2/da/d25/d3e/d55/l75 1117564 1 2026-03-09T20:47:45.402 INFO:tasks.workunit.client.0.vm07.stdout:6/591: mknod d8/d16/d22/d9b/cb9 0 2026-03-09T20:47:45.405 INFO:tasks.workunit.client.0.vm07.stdout:3/554: dwrite d1/d5/d9/d11/f58 [4194304,4194304] 0 2026-03-09T20:47:45.406 INFO:tasks.workunit.client.0.vm07.stdout:2/578: sync 2026-03-09T20:47:45.417 INFO:tasks.workunit.client.1.vm10.stdout:1/523: dread d2/f19 [8388608,4194304] 0 2026-03-09T20:47:45.420 INFO:tasks.workunit.client.1.vm10.stdout:0/509: creat d2/d4a/fae x:0 0 0 2026-03-09T20:47:45.421 INFO:tasks.workunit.client.0.vm07.stdout:1/605: creat d3/d97/da1/dc5/d60/fcb x:0 0 0 
2026-03-09T20:47:45.421 INFO:tasks.workunit.client.0.vm07.stdout:0/622: creat d1/d1f/d20/fc6 x:0 0 0 2026-03-09T20:47:45.421 INFO:tasks.workunit.client.0.vm07.stdout:7/634: mknod d3/da/d53/db7/cd7 0 2026-03-09T20:47:45.422 INFO:tasks.workunit.client.0.vm07.stdout:9/542: mkdir d4/d16/d78/dc4 0 2026-03-09T20:47:45.422 INFO:tasks.workunit.client.0.vm07.stdout:5/642: dread - d5/d19/d73/fa3 zero size 2026-03-09T20:47:45.424 INFO:tasks.workunit.client.0.vm07.stdout:9/543: read - d4/d8/dc/d4e/f8f zero size 2026-03-09T20:47:45.438 INFO:tasks.workunit.client.0.vm07.stdout:6/592: symlink d8/d16/da3/lba 0 2026-03-09T20:47:45.439 INFO:tasks.workunit.client.0.vm07.stdout:6/593: chown d8/d16/d22/d9b/da6 563974260 1 2026-03-09T20:47:45.441 INFO:tasks.workunit.client.1.vm10.stdout:5/477: truncate d2/d27/d37/f57 3355657 0 2026-03-09T20:47:45.452 INFO:tasks.workunit.client.1.vm10.stdout:1/524: dwrite d2/da/f1e [0,4194304] 0 2026-03-09T20:47:45.463 INFO:tasks.workunit.client.0.vm07.stdout:3/555: dread d1/d5/d9/d2f/d34/d46/f8a [0,4194304] 0 2026-03-09T20:47:45.467 INFO:tasks.workunit.client.1.vm10.stdout:6/525: getdents d3/da/d11/d31/d4c/d60 0 2026-03-09T20:47:45.468 INFO:tasks.workunit.client.1.vm10.stdout:6/526: chown d3/d30/d7f/l84 1519944 1 2026-03-09T20:47:45.469 INFO:tasks.workunit.client.0.vm07.stdout:8/543: symlink d1/d5d/d6f/d2f/d4d/lae 0 2026-03-09T20:47:45.475 INFO:tasks.workunit.client.1.vm10.stdout:2/529: creat d5/d2b/db0/fb2 x:0 0 0 2026-03-09T20:47:45.484 INFO:tasks.workunit.client.1.vm10.stdout:8/552: write d0/f21 [1564624,16732] 0 2026-03-09T20:47:45.484 INFO:tasks.workunit.client.1.vm10.stdout:8/553: stat d0/d22 0 2026-03-09T20:47:45.489 INFO:tasks.workunit.client.1.vm10.stdout:8/554: dread d0/d22/d25/f34 [0,4194304] 0 2026-03-09T20:47:45.506 INFO:tasks.workunit.client.1.vm10.stdout:4/483: getdents d1/d2/d5c/d64 0 2026-03-09T20:47:45.509 INFO:tasks.workunit.client.0.vm07.stdout:3/556: chown d1/d5/d9/d11/f84 28574 1 2026-03-09T20:47:45.511 
INFO:tasks.workunit.client.1.vm10.stdout:2/530: mkdir d5/d18/d27/d28/d41/d77/db3 0 2026-03-09T20:47:45.511 INFO:tasks.workunit.client.0.vm07.stdout:2/579: write d2/db/d1c/f9d [699960,47760] 0 2026-03-09T20:47:45.517 INFO:tasks.workunit.client.1.vm10.stdout:3/496: rename dc/d14/d26/d29/d40/c81 to dc/d14/d26/c9f 0 2026-03-09T20:47:45.521 INFO:tasks.workunit.client.0.vm07.stdout:1/606: dwrite d3/d23/f39 [0,4194304] 0 2026-03-09T20:47:45.521 INFO:tasks.workunit.client.0.vm07.stdout:3/557: sync 2026-03-09T20:47:45.533 INFO:tasks.workunit.client.1.vm10.stdout:9/549: link d2/d12/d5a/fac d2/d3/de/d35/fbd 0 2026-03-09T20:47:45.534 INFO:tasks.workunit.client.0.vm07.stdout:7/635: write d3/da/db/d32/d3e/d5c/f64 [2205712,115535] 0 2026-03-09T20:47:45.539 INFO:tasks.workunit.client.0.vm07.stdout:4/513: getdents d2/d1f 0 2026-03-09T20:47:45.544 INFO:tasks.workunit.client.0.vm07.stdout:9/544: dwrite d4/d11/d2a/f3b [0,4194304] 0 2026-03-09T20:47:45.569 INFO:tasks.workunit.client.1.vm10.stdout:6/527: mkdir d3/d30/d7f/d36/d5c/daa 0 2026-03-09T20:47:45.570 INFO:tasks.workunit.client.1.vm10.stdout:6/528: stat d3/d30/d7f 0 2026-03-09T20:47:45.574 INFO:tasks.workunit.client.1.vm10.stdout:6/529: dwrite d3/d30/d7f/d4a/f4b [0,4194304] 0 2026-03-09T20:47:45.574 INFO:tasks.workunit.client.0.vm07.stdout:5/643: mknod d5/df/ce0 0 2026-03-09T20:47:45.594 INFO:tasks.workunit.client.0.vm07.stdout:6/594: mkdir d8/d16/dbb 0 2026-03-09T20:47:45.595 INFO:tasks.workunit.client.0.vm07.stdout:6/595: dread - d8/d16/db4/d85/fad zero size 2026-03-09T20:47:45.598 INFO:tasks.workunit.client.1.vm10.stdout:7/522: rename db/d28/d2b/l67 to db/d1f/la0 0 2026-03-09T20:47:45.599 INFO:tasks.workunit.client.0.vm07.stdout:2/580: write d2/db/d28/d90/da4/fa5 [509341,80856] 0 2026-03-09T20:47:45.611 INFO:tasks.workunit.client.1.vm10.stdout:9/550: rmdir d2/d33 39 2026-03-09T20:47:45.618 INFO:tasks.workunit.client.1.vm10.stdout:5/478: link d2/d39/d4b/f85 d2/d1b/d54/fb8 0 2026-03-09T20:47:45.620 
INFO:tasks.workunit.client.1.vm10.stdout:1/525: creat d2/da/fa1 x:0 0 0 2026-03-09T20:47:45.621 INFO:tasks.workunit.client.0.vm07.stdout:1/607: creat d3/d14/d54/fcc x:0 0 0 2026-03-09T20:47:45.624 INFO:tasks.workunit.client.1.vm10.stdout:1/526: dwrite d2/da/d25/d3e/d42/f8d [0,4194304] 0 2026-03-09T20:47:45.637 INFO:tasks.workunit.client.0.vm07.stdout:1/608: dread d3/d23/f49 [0,4194304] 0 2026-03-09T20:47:45.638 INFO:tasks.workunit.client.0.vm07.stdout:1/609: dread - d3/d23/fc9 zero size 2026-03-09T20:47:45.640 INFO:tasks.workunit.client.0.vm07.stdout:7/636: creat d3/d58/d82/fd8 x:0 0 0 2026-03-09T20:47:45.642 INFO:tasks.workunit.client.1.vm10.stdout:2/531: mkdir d5/d18/d27/db4 0 2026-03-09T20:47:45.644 INFO:tasks.workunit.client.0.vm07.stdout:4/514: fdatasync d2/d55/d5d/d3f/d4a/d4b/f74 0 2026-03-09T20:47:45.646 INFO:tasks.workunit.client.1.vm10.stdout:4/484: mknod d1/d2/d3/d54/c9d 0 2026-03-09T20:47:45.650 INFO:tasks.workunit.client.1.vm10.stdout:4/485: dread - d1/d8/d39/f97 zero size 2026-03-09T20:47:45.654 INFO:tasks.workunit.client.0.vm07.stdout:0/623: creat d1/fc7 x:0 0 0 2026-03-09T20:47:45.665 INFO:tasks.workunit.client.0.vm07.stdout:5/644: dread d5/f25 [0,4194304] 0 2026-03-09T20:47:45.665 INFO:tasks.workunit.client.0.vm07.stdout:5/645: chown d5/d33/db2 1 1 2026-03-09T20:47:45.671 INFO:tasks.workunit.client.0.vm07.stdout:9/545: write d4/d11/d2a/f39 [2985821,17602] 0 2026-03-09T20:47:45.671 INFO:tasks.workunit.client.0.vm07.stdout:9/546: chown d4/d8/d19/f7e 11 1 2026-03-09T20:47:45.675 INFO:tasks.workunit.client.0.vm07.stdout:6/596: creat d8/d16/d4b/fbc x:0 0 0 2026-03-09T20:47:45.677 INFO:tasks.workunit.client.1.vm10.stdout:3/497: truncate dc/d14/d20/d2e/d56/f23 2318301 0 2026-03-09T20:47:45.677 INFO:tasks.workunit.client.0.vm07.stdout:2/581: truncate d2/db/f67 846483 0 2026-03-09T20:47:45.682 INFO:tasks.workunit.client.0.vm07.stdout:3/558: unlink d1/d5/d9/d11/f2a 0 2026-03-09T20:47:45.683 INFO:tasks.workunit.client.1.vm10.stdout:4/486: rmdir 
d1/d2/d5c/d64/d6b 39 2026-03-09T20:47:45.684 INFO:tasks.workunit.client.1.vm10.stdout:0/510: rename d2/d4a/d58/d82/d71/d8e/c22 to d2/d4a/d58/d82/d71/caf 0 2026-03-09T20:47:45.685 INFO:tasks.workunit.client.1.vm10.stdout:0/511: write d2/d9/da/d35/f3a [6244512,1414] 0 2026-03-09T20:47:45.686 INFO:tasks.workunit.client.0.vm07.stdout:4/515: creat d2/d55/d5d/d3f/d4a/d85/f8e x:0 0 0 2026-03-09T20:47:45.686 INFO:tasks.workunit.client.1.vm10.stdout:0/512: chown d2/d9/da/d35/d30/ca2 2 1 2026-03-09T20:47:45.689 INFO:tasks.workunit.client.1.vm10.stdout:3/498: mknod dc/d14/ca0 0 2026-03-09T20:47:45.690 INFO:tasks.workunit.client.0.vm07.stdout:4/516: read d2/d55/f62 [1622372,8884] 0 2026-03-09T20:47:45.695 INFO:tasks.workunit.client.1.vm10.stdout:5/479: write d2/d58/d6c/f8c [728057,83842] 0 2026-03-09T20:47:45.695 INFO:tasks.workunit.client.1.vm10.stdout:1/527: write d2/f2a [1937319,39418] 0 2026-03-09T20:47:45.707 INFO:tasks.workunit.client.0.vm07.stdout:1/610: dread d3/d23/f2c [0,4194304] 0 2026-03-09T20:47:45.711 INFO:tasks.workunit.client.0.vm07.stdout:9/547: rename d4/d8/d19/d89/da7/laa to d4/d8/d19/d5f/d73/dbc/lc5 0 2026-03-09T20:47:45.718 INFO:tasks.workunit.client.0.vm07.stdout:7/637: dread d3/da/db/d32/d3e/dac/d43/f68 [0,4194304] 0 2026-03-09T20:47:45.722 INFO:tasks.workunit.client.0.vm07.stdout:7/638: chown d3/d58/dc1 3267 1 2026-03-09T20:47:45.722 INFO:tasks.workunit.client.0.vm07.stdout:8/544: getdents d1/dc/d16/dad 0 2026-03-09T20:47:45.724 INFO:tasks.workunit.client.0.vm07.stdout:8/545: fsync d1/d5d/d6f/d2f/d4d/d55/fac 0 2026-03-09T20:47:45.724 INFO:tasks.workunit.client.0.vm07.stdout:8/546: readlink d1/dc/l28 0 2026-03-09T20:47:45.726 INFO:tasks.workunit.client.0.vm07.stdout:6/597: dread d8/d16/d22/d24/da0/dab/f6e [0,4194304] 0 2026-03-09T20:47:45.731 INFO:tasks.workunit.client.0.vm07.stdout:3/559: mkdir d1/d5/d9/d2f/d3d/d71/db5 0 2026-03-09T20:47:45.731 INFO:tasks.workunit.client.0.vm07.stdout:0/624: mknod d1/d1f/dc3/cc8 0 2026-03-09T20:47:45.731 
INFO:tasks.workunit.client.0.vm07.stdout:4/517: chown d2/f19 243893 1 2026-03-09T20:47:45.732 INFO:tasks.workunit.client.0.vm07.stdout:5/646: truncate d5/df/d13/d3e/d5e/f98 2850605 0 2026-03-09T20:47:45.732 INFO:tasks.workunit.client.0.vm07.stdout:6/598: stat d8/f5f 0 2026-03-09T20:47:45.733 INFO:tasks.workunit.client.0.vm07.stdout:5/647: fdatasync d5/d33/d39/d8d/f8e 0 2026-03-09T20:47:45.735 INFO:tasks.workunit.client.0.vm07.stdout:7/639: sync 2026-03-09T20:47:45.759 INFO:tasks.workunit.client.0.vm07.stdout:3/560: unlink d1/d5/d9/d2f/d3d/l7e 0 2026-03-09T20:47:45.759 INFO:tasks.workunit.client.1.vm10.stdout:4/487: truncate d1/d2/f2a 660590 0 2026-03-09T20:47:45.759 INFO:tasks.workunit.client.0.vm07.stdout:6/599: rmdir d8/d26 39 2026-03-09T20:47:45.760 INFO:tasks.workunit.client.0.vm07.stdout:1/611: dwrite d3/d23/fa8 [0,4194304] 0 2026-03-09T20:47:45.764 INFO:tasks.workunit.client.0.vm07.stdout:3/561: readlink d1/d5/d9/l1e 0 2026-03-09T20:47:45.766 INFO:tasks.workunit.client.1.vm10.stdout:8/555: rename d0/d22/d25/d2e/d41/d85/l9c to d0/d22/d2f/d9d/laa 0 2026-03-09T20:47:45.771 INFO:tasks.workunit.client.1.vm10.stdout:8/556: dwrite d0/d22/d25/d2e/d41/d85/fa7 [0,4194304] 0 2026-03-09T20:47:45.771 INFO:tasks.workunit.client.1.vm10.stdout:8/557: stat d0/d22/d2f/c60 0 2026-03-09T20:47:45.772 INFO:tasks.workunit.client.0.vm07.stdout:9/548: creat d4/d8/dc/dbb/db6/fc6 x:0 0 0 2026-03-09T20:47:45.776 INFO:tasks.workunit.client.0.vm07.stdout:2/582: creat d2/db/fb7 x:0 0 0 2026-03-09T20:47:45.786 INFO:tasks.workunit.client.1.vm10.stdout:9/551: creat d2/fbe x:0 0 0 2026-03-09T20:47:45.786 INFO:tasks.workunit.client.1.vm10.stdout:3/499: mknod dc/d14/d27/ca1 0 2026-03-09T20:47:45.789 INFO:tasks.workunit.client.1.vm10.stdout:5/480: creat d2/d58/fb9 x:0 0 0 2026-03-09T20:47:45.790 INFO:tasks.workunit.client.1.vm10.stdout:1/528: chown d2/da/d25/d46/d51/l85 723398 1 2026-03-09T20:47:45.790 INFO:tasks.workunit.client.0.vm07.stdout:8/547: creat d1/dc/d16/d26/d94/faf x:0 0 0 
2026-03-09T20:47:45.799 INFO:tasks.workunit.client.1.vm10.stdout:4/488: symlink d1/d2/d5c/l9e 0 2026-03-09T20:47:45.801 INFO:tasks.workunit.client.0.vm07.stdout:4/518: mknod d2/d55/d5d/d3f/d4a/d7d/c8f 0 2026-03-09T20:47:45.806 INFO:tasks.workunit.client.0.vm07.stdout:0/625: mknod d1/d2/dc/cc9 0 2026-03-09T20:47:45.815 INFO:tasks.workunit.client.1.vm10.stdout:0/513: dwrite d2/d9/f61 [0,4194304] 0 2026-03-09T20:47:45.816 INFO:tasks.workunit.client.1.vm10.stdout:8/558: mknod d0/d22/d25/d2e/d41/d85/d8b/cab 0 2026-03-09T20:47:45.816 INFO:tasks.workunit.client.1.vm10.stdout:7/523: getdents db/d21/d60 0 2026-03-09T20:47:45.817 INFO:tasks.workunit.client.0.vm07.stdout:6/600: mkdir d8/d50/dbd 0 2026-03-09T20:47:45.817 INFO:tasks.workunit.client.0.vm07.stdout:3/562: mkdir d1/d5/d9/d2f/d3d/d71/d76/db6 0 2026-03-09T20:47:45.818 INFO:tasks.workunit.client.0.vm07.stdout:1/612: truncate d3/d97/da1/dab/fb0 927758 0 2026-03-09T20:47:45.823 INFO:tasks.workunit.client.1.vm10.stdout:7/524: dread db/d21/f9a [0,4194304] 0 2026-03-09T20:47:45.823 INFO:tasks.workunit.client.1.vm10.stdout:7/525: stat db/d28/d2b/d36/f35 0 2026-03-09T20:47:45.826 INFO:tasks.workunit.client.0.vm07.stdout:5/648: mkdir d5/df/d13/d3e/de1 0 2026-03-09T20:47:45.827 INFO:tasks.workunit.client.1.vm10.stdout:9/552: creat d2/d3/de/d8f/fbf x:0 0 0 2026-03-09T20:47:45.831 INFO:tasks.workunit.client.0.vm07.stdout:4/519: mkdir d2/d55/d5d/d3f/d4a/d4b/d52/d5c/d90 0 2026-03-09T20:47:45.837 INFO:tasks.workunit.client.0.vm07.stdout:6/601: fsync d8/d16/db4/d85/f5a 0 2026-03-09T20:47:45.850 INFO:tasks.workunit.client.1.vm10.stdout:4/489: write d1/d2/d5c/d64/f51 [989502,62516] 0 2026-03-09T20:47:45.851 INFO:tasks.workunit.client.1.vm10.stdout:4/490: chown d1/d8/d1b/d57/f44 867 1 2026-03-09T20:47:45.855 INFO:tasks.workunit.client.1.vm10.stdout:7/526: creat db/d28/d4c/fa1 x:0 0 0 2026-03-09T20:47:45.855 INFO:tasks.workunit.client.1.vm10.stdout:7/527: chown f3 165616 1 2026-03-09T20:47:45.856 
INFO:tasks.workunit.client.1.vm10.stdout:7/528: stat db/d21/d23/f14 0 2026-03-09T20:47:45.857 INFO:tasks.workunit.client.1.vm10.stdout:1/529: mkdir d2/da/d25/d46/d51/d7e/d9e/da2 0 2026-03-09T20:47:45.858 INFO:tasks.workunit.client.0.vm07.stdout:7/640: creat d3/da/db/d32/d3e/fd9 x:0 0 0 2026-03-09T20:47:45.858 INFO:tasks.workunit.client.0.vm07.stdout:5/649: symlink d5/d19/d73/d97/le2 0 2026-03-09T20:47:45.858 INFO:tasks.workunit.client.1.vm10.stdout:1/530: write d2/da/d25/d3e/d55/f9a [431093,84587] 0 2026-03-09T20:47:45.860 INFO:tasks.workunit.client.0.vm07.stdout:6/602: dwrite d8/d16/d22/d24/da0/faf [0,4194304] 0 2026-03-09T20:47:45.861 INFO:tasks.workunit.client.0.vm07.stdout:4/520: dread d2/fa [0,4194304] 0 2026-03-09T20:47:45.861 INFO:tasks.workunit.client.0.vm07.stdout:8/548: fdatasync d1/d5d/d6f/d80/faa 0 2026-03-09T20:47:45.862 INFO:tasks.workunit.client.0.vm07.stdout:8/549: stat d1/d8f 0 2026-03-09T20:47:45.865 INFO:tasks.workunit.client.0.vm07.stdout:6/603: fsync d8/d16/d22/d24/da0/dab/d40/fa7 0 2026-03-09T20:47:45.874 INFO:tasks.workunit.client.0.vm07.stdout:4/521: dread d2/df/d17/f6d [0,4194304] 0 2026-03-09T20:47:45.878 INFO:tasks.workunit.client.0.vm07.stdout:1/613: dwrite d3/d23/f37 [4194304,4194304] 0 2026-03-09T20:47:45.896 INFO:tasks.workunit.client.1.vm10.stdout:3/500: dwrite dc/d14/d27/f3f [0,4194304] 0 2026-03-09T20:47:45.898 INFO:tasks.workunit.client.1.vm10.stdout:6/530: rename d3/da/d11/d26/l9b to d3/da/d11/d31/lab 0 2026-03-09T20:47:45.900 INFO:tasks.workunit.client.1.vm10.stdout:0/514: creat d2/d9/da/d11/d92/fb0 x:0 0 0 2026-03-09T20:47:45.902 INFO:tasks.workunit.client.0.vm07.stdout:2/583: creat d2/db/d28/fb8 x:0 0 0 2026-03-09T20:47:45.903 INFO:tasks.workunit.client.1.vm10.stdout:9/553: fsync d2/d28/da2/fa9 0 2026-03-09T20:47:45.905 INFO:tasks.workunit.client.0.vm07.stdout:7/641: chown d3/da/d83/d96/cab 2640564 1 2026-03-09T20:47:45.905 INFO:tasks.workunit.client.1.vm10.stdout:9/554: chown d2/d3/de/d8f/fbf 166755908 1 
2026-03-09T20:47:45.909 INFO:tasks.workunit.client.0.vm07.stdout:5/650: fsync d5/df/d13/f3d 0 2026-03-09T20:47:45.913 INFO:tasks.workunit.client.1.vm10.stdout:5/481: creat d2/d27/d37/d46/fba x:0 0 0 2026-03-09T20:47:45.913 INFO:tasks.workunit.client.0.vm07.stdout:5/651: fdatasync d5/df/d13/d3e/d5e/f7c 0 2026-03-09T20:47:45.932 INFO:tasks.workunit.client.1.vm10.stdout:2/532: rename d5/d2b to d5/d18/d27/d28/d41/d77/db3/db5 0 2026-03-09T20:47:45.943 INFO:tasks.workunit.client.1.vm10.stdout:6/531: dread d3/fe [0,4194304] 0 2026-03-09T20:47:45.947 INFO:tasks.workunit.client.1.vm10.stdout:3/501: write dc/f11 [2178561,109662] 0 2026-03-09T20:47:45.948 INFO:tasks.workunit.client.0.vm07.stdout:8/550: write d1/f13 [3923602,17799] 0 2026-03-09T20:47:45.948 INFO:tasks.workunit.client.0.vm07.stdout:3/563: dwrite d1/d5/d9/d11/f21 [0,4194304] 0 2026-03-09T20:47:45.950 INFO:tasks.workunit.client.1.vm10.stdout:7/529: dwrite db/f19 [0,4194304] 0 2026-03-09T20:47:45.954 INFO:tasks.workunit.client.1.vm10.stdout:4/491: symlink d1/d2/d5c/d64/d6b/d81/l9f 0 2026-03-09T20:47:45.955 INFO:tasks.workunit.client.1.vm10.stdout:0/515: mkdir d2/d4a/d58/d82/d93/db1 0 2026-03-09T20:47:45.962 INFO:tasks.workunit.client.1.vm10.stdout:1/531: mknod d2/da/d25/d46/d8c/ca3 0 2026-03-09T20:47:45.962 INFO:tasks.workunit.client.1.vm10.stdout:0/516: dwrite d2/d4a/fae [0,4194304] 0 2026-03-09T20:47:45.962 INFO:tasks.workunit.client.0.vm07.stdout:9/549: link d4/d8/d19/d89/da7/cb4 d4/d8/cc7 0 2026-03-09T20:47:45.962 INFO:tasks.workunit.client.0.vm07.stdout:7/642: fsync d3/da/db/d32/d3e/dac/f2a 0 2026-03-09T20:47:45.962 INFO:tasks.workunit.client.0.vm07.stdout:2/584: creat d2/db/d1c/d8d/fb9 x:0 0 0 2026-03-09T20:47:45.964 INFO:tasks.workunit.client.1.vm10.stdout:6/532: sync 2026-03-09T20:47:45.965 INFO:tasks.workunit.client.1.vm10.stdout:6/533: write d3/d30/d7f/f28 [2777546,34532] 0 2026-03-09T20:47:45.969 INFO:tasks.workunit.client.0.vm07.stdout:1/614: dread d3/d97/da1/dab/fb0 [0,4194304] 0 
2026-03-09T20:47:45.981 INFO:tasks.workunit.client.0.vm07.stdout:5/652: creat d5/df/d13/d30/fe3 x:0 0 0 2026-03-09T20:47:45.982 INFO:tasks.workunit.client.0.vm07.stdout:5/653: chown d5/df/d13/l57 94307 1 2026-03-09T20:47:45.989 INFO:tasks.workunit.client.1.vm10.stdout:2/533: fdatasync d5/d18/f90 0 2026-03-09T20:47:45.997 INFO:tasks.workunit.client.0.vm07.stdout:5/654: read d5/df/f2b [2921425,69372] 0 2026-03-09T20:47:46.003 INFO:tasks.workunit.client.1.vm10.stdout:3/502: mkdir dc/d14/d26/d29/d40/da2 0 2026-03-09T20:47:46.014 INFO:tasks.workunit.client.0.vm07.stdout:2/585: dread d2/db/d28/d57/f65 [0,4194304] 0 2026-03-09T20:47:46.017 INFO:tasks.workunit.client.1.vm10.stdout:7/530: dread db/d28/f31 [0,4194304] 0 2026-03-09T20:47:46.018 INFO:tasks.workunit.client.1.vm10.stdout:7/531: chown db/d46/f85 36995 1 2026-03-09T20:47:46.019 INFO:tasks.workunit.client.1.vm10.stdout:4/492: unlink d1/d2/d5c/d64/d71/c80 0 2026-03-09T20:47:46.035 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:45 vm07.local ceph-mon[49120]: pgmap v11: 65 pgs: 65 active+clean; 2.3 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail; 40 MiB/s rd, 107 MiB/s wr, 283 op/s 2026-03-09T20:47:46.040 INFO:tasks.workunit.client.0.vm07.stdout:4/522: write d2/d55/d5d/d3f/d4a/d4b/d52/f5a [1200026,105601] 0 2026-03-09T20:47:46.050 INFO:tasks.workunit.client.0.vm07.stdout:9/550: symlink d4/d11/d2a/lc8 0 2026-03-09T20:47:46.052 INFO:tasks.workunit.client.0.vm07.stdout:6/604: dwrite d8/d16/d22/d24/f43 [4194304,4194304] 0 2026-03-09T20:47:46.055 INFO:tasks.workunit.client.1.vm10.stdout:9/555: write d2/d33/f77 [2941953,23266] 0 2026-03-09T20:47:46.059 INFO:tasks.workunit.client.0.vm07.stdout:7/643: mknod d3/da/d53/cda 0 2026-03-09T20:47:46.065 INFO:tasks.workunit.client.0.vm07.stdout:3/564: dwrite d1/d5/d9/d2f/d3d/f74 [0,4194304] 0 2026-03-09T20:47:46.078 INFO:tasks.workunit.client.1.vm10.stdout:0/517: creat d2/d4a/d58/d82/d71/fb2 x:0 0 0 2026-03-09T20:47:46.080 INFO:tasks.workunit.client.0.vm07.stdout:1/615: 
rename d3/d97/da1/dc5/d60/fcb to d3/d9c/fcd 0 2026-03-09T20:47:46.084 INFO:tasks.workunit.client.1.vm10.stdout:6/534: creat d3/d30/d7f/d36/d5c/d8d/fac x:0 0 0 2026-03-09T20:47:46.084 INFO:tasks.workunit.client.0.vm07.stdout:7/644: dread d3/da/db/f1e [4194304,4194304] 0 2026-03-09T20:47:46.085 INFO:tasks.workunit.client.0.vm07.stdout:0/626: getdents d1/d1f/d53/d72 0 2026-03-09T20:47:46.088 INFO:tasks.workunit.client.1.vm10.stdout:8/559: rename d0/l20 to d0/d22/d25/d40/lac 0 2026-03-09T20:47:46.094 INFO:tasks.workunit.client.0.vm07.stdout:2/586: dread - d2/db/d28/d90/fa3 zero size 2026-03-09T20:47:46.097 INFO:tasks.workunit.client.0.vm07.stdout:5/655: dread d5/df/d13/d3e/d5e/fc6 [0,4194304] 0 2026-03-09T20:47:46.097 INFO:tasks.workunit.client.0.vm07.stdout:8/551: mkdir d1/db0 0 2026-03-09T20:47:46.098 INFO:tasks.workunit.client.1.vm10.stdout:7/532: creat db/d28/d2b/d36/d40/fa2 x:0 0 0 2026-03-09T20:47:46.098 INFO:tasks.workunit.client.1.vm10.stdout:7/533: chown db/d21/d60 4 1 2026-03-09T20:47:46.098 INFO:tasks.workunit.client.1.vm10.stdout:7/534: chown db/f39 757788 1 2026-03-09T20:47:46.098 INFO:tasks.workunit.client.1.vm10.stdout:0/518: sync 2026-03-09T20:47:46.099 INFO:tasks.workunit.client.1.vm10.stdout:0/519: write d2/f9b [726058,66930] 0 2026-03-09T20:47:46.107 INFO:tasks.workunit.client.0.vm07.stdout:4/523: mknod d2/d55/d5d/d3f/d4a/d85/c91 0 2026-03-09T20:47:46.107 INFO:tasks.workunit.client.1.vm10.stdout:1/532: symlink d2/da/la4 0 2026-03-09T20:47:46.108 INFO:tasks.workunit.client.1.vm10.stdout:5/482: rmdir d2/d1b/d54/d7b/daf 0 2026-03-09T20:47:46.109 INFO:tasks.workunit.client.0.vm07.stdout:4/524: fdatasync d2/d55/d5d/d3f/d4a/d4b/d52/f82 0 2026-03-09T20:47:46.114 INFO:tasks.workunit.client.1.vm10.stdout:9/556: rename d2/d28/da2/fa9 to d2/d28/d47/d6a/fc0 0 2026-03-09T20:47:46.118 INFO:tasks.workunit.client.1.vm10.stdout:4/493: symlink d1/d2/d3/d70/d99/la0 0 2026-03-09T20:47:46.120 INFO:tasks.workunit.client.1.vm10.stdout:7/535: symlink db/d28/d2b/d36/d63/la3 
0 2026-03-09T20:47:46.121 INFO:tasks.workunit.client.1.vm10.stdout:7/536: fdatasync db/d28/f41 0 2026-03-09T20:47:46.121 INFO:tasks.workunit.client.1.vm10.stdout:2/534: dwrite d5/d18/d27/d28/f5a [0,4194304] 0 2026-03-09T20:47:46.124 INFO:tasks.workunit.client.0.vm07.stdout:1/616: fsync d3/d23/d52/f73 0 2026-03-09T20:47:46.127 INFO:tasks.workunit.client.1.vm10.stdout:2/535: dread d5/d18/d27/da6/fac [0,4194304] 0 2026-03-09T20:47:46.129 INFO:tasks.workunit.client.1.vm10.stdout:1/533: fsync d2/da/d25/f40 0 2026-03-09T20:47:46.129 INFO:tasks.workunit.client.0.vm07.stdout:7/645: truncate d3/f67 4896425 0 2026-03-09T20:47:46.136 INFO:tasks.workunit.client.1.vm10.stdout:7/537: dread f5 [0,4194304] 0 2026-03-09T20:47:46.151 INFO:tasks.workunit.client.1.vm10.stdout:5/483: symlink d2/d27/d75/d81/db3/d84/d87/da1/lbb 0 2026-03-09T20:47:46.154 INFO:tasks.workunit.client.1.vm10.stdout:8/560: write d0/d22/d2c/f36 [3830541,105811] 0 2026-03-09T20:47:46.156 INFO:tasks.workunit.client.1.vm10.stdout:0/520: write d2/d4a/d58/d82/d71/f38 [5532567,123028] 0 2026-03-09T20:47:46.156 INFO:tasks.workunit.client.1.vm10.stdout:8/561: fsync d0/d22/d2c/f32 0 2026-03-09T20:47:46.156 INFO:tasks.workunit.client.1.vm10.stdout:0/521: read - d2/d9/da/d35/d30/f9a zero size 2026-03-09T20:47:46.164 INFO:tasks.workunit.client.1.vm10.stdout:6/535: mkdir d3/d30/d7f/d36/d5c/dad 0 2026-03-09T20:47:46.164 INFO:tasks.workunit.client.0.vm07.stdout:0/627: mkdir d1/d1f/dc3/dca 0 2026-03-09T20:47:46.167 INFO:tasks.workunit.client.1.vm10.stdout:3/503: link dc/d14/d26/d29/f5c dc/d14/d26/d29/d2a/d55/fa3 0 2026-03-09T20:47:46.170 INFO:tasks.workunit.client.1.vm10.stdout:4/494: mkdir d1/d8/d1b/da1 0 2026-03-09T20:47:46.178 INFO:tasks.workunit.client.1.vm10.stdout:3/504: dwrite dc/d14/d26/d29/d2a/f66 [0,4194304] 0 2026-03-09T20:47:46.180 INFO:tasks.workunit.client.1.vm10.stdout:3/505: read dc/d14/d20/d2e/d56/f68 [739886,54788] 0 2026-03-09T20:47:46.197 INFO:tasks.workunit.client.1.vm10.stdout:1/534: creat 
d2/da/d25/d46/d51/d7e/d9e/fa5 x:0 0 0 2026-03-09T20:47:46.204 INFO:tasks.workunit.client.1.vm10.stdout:5/484: symlink d2/d27/d37/lbc 0 2026-03-09T20:47:46.206 INFO:tasks.workunit.client.1.vm10.stdout:8/562: mkdir d0/d22/d25/d2e/d41/d47/d63/dad 0 2026-03-09T20:47:46.210 INFO:tasks.workunit.client.1.vm10.stdout:8/563: dwrite d0/d22/d25/d40/d86/d91/fa8 [0,4194304] 0 2026-03-09T20:47:46.223 INFO:tasks.workunit.client.1.vm10.stdout:8/564: dread d0/d22/d25/d2e/d41/f80 [0,4194304] 0 2026-03-09T20:47:46.227 INFO:tasks.workunit.client.1.vm10.stdout:0/522: truncate d2/d9/da/fd 4834960 0 2026-03-09T20:47:46.228 INFO:tasks.workunit.client.1.vm10.stdout:9/557: link d2/d12/d5a/da7/faf d2/d28/d47/d50/dab/fc1 0 2026-03-09T20:47:46.234 INFO:tasks.workunit.client.1.vm10.stdout:7/538: dwrite db/f39 [0,4194304] 0 2026-03-09T20:47:46.245 INFO:tasks.workunit.client.1.vm10.stdout:3/506: creat dc/d14/d26/d29/d2a/fa4 x:0 0 0 2026-03-09T20:47:46.268 INFO:tasks.workunit.client.1.vm10.stdout:8/565: dread d0/f11 [0,4194304] 0 2026-03-09T20:47:46.269 INFO:tasks.workunit.client.1.vm10.stdout:0/523: symlink d2/d9/d4b/d63/lb3 0 2026-03-09T20:47:46.273 INFO:tasks.workunit.client.1.vm10.stdout:9/558: mknod d2/d3/d6d/cc2 0 2026-03-09T20:47:46.275 INFO:tasks.workunit.client.1.vm10.stdout:6/536: creat d3/d30/d7f/d36/d5c/daa/fae x:0 0 0 2026-03-09T20:47:46.287 INFO:tasks.workunit.client.1.vm10.stdout:6/537: dread d3/da/d11/d31/d4c/d60/f77 [0,4194304] 0 2026-03-09T20:47:46.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:45 vm10.local ceph-mon[57011]: pgmap v11: 65 pgs: 65 active+clean; 2.3 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail; 40 MiB/s rd, 107 MiB/s wr, 283 op/s 2026-03-09T20:47:46.288 INFO:tasks.workunit.client.1.vm10.stdout:6/538: fdatasync d3/d30/d33/f4e 0 2026-03-09T20:47:46.290 INFO:tasks.workunit.client.1.vm10.stdout:6/539: read d3/f21 [500754,30481] 0 2026-03-09T20:47:46.290 INFO:tasks.workunit.client.1.vm10.stdout:6/540: readlink d3/da/d11/d31/d4c/l71 0 
2026-03-09T20:47:46.292 INFO:tasks.workunit.client.0.vm07.stdout:2/587: rename d2/d46/d72/d82/c9e to d2/db/d28/d5c/cba 0 2026-03-09T20:47:46.293 INFO:tasks.workunit.client.1.vm10.stdout:2/536: write d5/fd [3492084,74743] 0 2026-03-09T20:47:46.294 INFO:tasks.workunit.client.1.vm10.stdout:2/537: chown d5/f15 239 1 2026-03-09T20:47:46.298 INFO:tasks.workunit.client.1.vm10.stdout:4/495: dwrite d1/d67/f9a [0,4194304] 0 2026-03-09T20:47:46.300 INFO:tasks.workunit.client.1.vm10.stdout:7/539: rename db/d28/d2b/d36/d40/d8a/f93 to db/d28/d30/fa4 0 2026-03-09T20:47:46.300 INFO:tasks.workunit.client.1.vm10.stdout:4/496: read - d1/d47/f96 zero size 2026-03-09T20:47:46.324 INFO:tasks.workunit.client.0.vm07.stdout:9/551: creat d4/d16/d29/d9c/fc9 x:0 0 0 2026-03-09T20:47:46.332 INFO:tasks.workunit.client.1.vm10.stdout:5/485: symlink d2/d39/d89/lbd 0 2026-03-09T20:47:46.333 INFO:tasks.workunit.client.0.vm07.stdout:9/552: dwrite d4/d16/f27 [0,4194304] 0 2026-03-09T20:47:46.340 INFO:tasks.workunit.client.0.vm07.stdout:4/525: chown d2/c11 1937963679 1 2026-03-09T20:47:46.341 INFO:tasks.workunit.client.0.vm07.stdout:9/553: write d4/d8/d19/d5f/d73/fb7 [745955,5240] 0 2026-03-09T20:47:46.362 INFO:tasks.workunit.client.0.vm07.stdout:5/656: write d5/d19/f95 [617858,112670] 0 2026-03-09T20:47:46.371 INFO:tasks.workunit.client.1.vm10.stdout:3/507: mknod dc/d14/d22/d7f/d69/d75/ca5 0 2026-03-09T20:47:46.371 INFO:tasks.workunit.client.1.vm10.stdout:8/566: rmdir d0/d22/d25/d40/d86 39 2026-03-09T20:47:46.377 INFO:tasks.workunit.client.1.vm10.stdout:1/535: dwrite d2/da/d25/d3e/f41 [0,4194304] 0 2026-03-09T20:47:46.378 INFO:tasks.workunit.client.0.vm07.stdout:6/605: write d8/d26/d7d/f8b [3185835,90113] 0 2026-03-09T20:47:46.379 INFO:tasks.workunit.client.0.vm07.stdout:6/606: chown d8/d16/d4b/l76 28 1 2026-03-09T20:47:46.389 INFO:tasks.workunit.client.0.vm07.stdout:7/646: creat d3/da/d83/dc5/fdb x:0 0 0 2026-03-09T20:47:46.401 INFO:tasks.workunit.client.1.vm10.stdout:2/538: truncate d5/d18/d27/f2a 
3986045 0 2026-03-09T20:47:46.401 INFO:tasks.workunit.client.1.vm10.stdout:2/539: chown d5/f7 722599 1 2026-03-09T20:47:46.401 INFO:tasks.workunit.client.1.vm10.stdout:4/497: mknod d1/d8/d1c/d2b/ca2 0 2026-03-09T20:47:46.403 INFO:tasks.workunit.client.1.vm10.stdout:5/486: creat d2/d27/d75/d81/db3/d63/fbe x:0 0 0 2026-03-09T20:47:46.403 INFO:tasks.workunit.client.1.vm10.stdout:2/540: chown d5/d18/d27/d28/d41/d77/db3/db5/d32/d80/d47/f65 55628124 1 2026-03-09T20:47:46.406 INFO:tasks.workunit.client.0.vm07.stdout:5/657: rmdir d5/d19 39 2026-03-09T20:47:46.416 INFO:tasks.workunit.client.0.vm07.stdout:3/565: creat d1/fb7 x:0 0 0 2026-03-09T20:47:46.419 INFO:tasks.workunit.client.1.vm10.stdout:0/524: write d2/d4a/d58/d82/d71/f13 [3716522,64651] 0 2026-03-09T20:47:46.428 INFO:tasks.workunit.client.1.vm10.stdout:2/541: dread d5/fb [0,4194304] 0 2026-03-09T20:47:46.434 INFO:tasks.workunit.client.0.vm07.stdout:7/647: truncate d3/da/db/d32/d3e/dac/d1f/d2b/d52/f74 202833 0 2026-03-09T20:47:46.439 INFO:tasks.workunit.client.1.vm10.stdout:9/559: dwrite d2/d28/d47/d50/f59 [0,4194304] 0 2026-03-09T20:47:46.443 INFO:tasks.workunit.client.0.vm07.stdout:0/628: dwrite d1/d1f/d53/d72/f9b [0,4194304] 0 2026-03-09T20:47:46.469 INFO:tasks.workunit.client.0.vm07.stdout:0/629: dread d1/d2/dc/d17/f23 [4194304,4194304] 0 2026-03-09T20:47:46.474 INFO:tasks.workunit.client.0.vm07.stdout:8/552: write d1/f33 [2512782,13953] 0 2026-03-09T20:47:46.475 INFO:tasks.workunit.client.0.vm07.stdout:8/553: read d1/f13 [3020253,44495] 0 2026-03-09T20:47:46.519 INFO:tasks.workunit.client.1.vm10.stdout:7/540: dwrite db/d46/f66 [0,4194304] 0 2026-03-09T20:47:46.519 INFO:tasks.workunit.client.1.vm10.stdout:7/541: chown db/l1b 556568 1 2026-03-09T20:47:46.520 INFO:tasks.workunit.client.1.vm10.stdout:7/542: write db/d28/f41 [2977193,85050] 0 2026-03-09T20:47:46.521 INFO:tasks.workunit.client.1.vm10.stdout:7/543: read - db/f69 zero size 2026-03-09T20:47:46.522 INFO:tasks.workunit.client.1.vm10.stdout:7/544: write 
db/d28/d2b/d36/d3f/f7d [1926144,91227] 0 2026-03-09T20:47:46.526 INFO:tasks.workunit.client.1.vm10.stdout:7/545: dread db/d28/d2b/f8f [0,4194304] 0 2026-03-09T20:47:46.566 INFO:tasks.workunit.client.1.vm10.stdout:3/508: dwrite dc/d14/d26/f34 [0,4194304] 0 2026-03-09T20:47:46.567 INFO:tasks.workunit.client.1.vm10.stdout:1/536: dwrite d2/da/d25/f65 [0,4194304] 0 2026-03-09T20:47:46.571 INFO:tasks.workunit.client.1.vm10.stdout:6/541: dwrite d3/f5e [0,4194304] 0 2026-03-09T20:47:46.581 INFO:tasks.workunit.client.1.vm10.stdout:4/498: symlink d1/d2/d3/d70/d78/la3 0 2026-03-09T20:47:46.581 INFO:tasks.workunit.client.0.vm07.stdout:6/607: write d8/d16/db4/d85/f3c [1201787,41484] 0 2026-03-09T20:47:46.613 INFO:tasks.workunit.client.1.vm10.stdout:6/542: sync 2026-03-09T20:47:46.615 INFO:tasks.workunit.client.1.vm10.stdout:5/487: rename d2/d27/d75/d81/db3 to d2/d39/dbf 0 2026-03-09T20:47:46.621 INFO:tasks.workunit.client.0.vm07.stdout:9/554: symlink d4/d8/d19/d5f/da5/db8/dc1/lca 0 2026-03-09T20:47:46.627 INFO:tasks.workunit.client.1.vm10.stdout:8/567: mknod d0/d22/d25/d40/d86/d91/cae 0 2026-03-09T20:47:46.631 INFO:tasks.workunit.client.0.vm07.stdout:3/566: creat d1/d5/d9/d2f/d34/d46/d5d/fb8 x:0 0 0 2026-03-09T20:47:46.632 INFO:tasks.workunit.client.1.vm10.stdout:0/525: mkdir d2/d4e/db4 0 2026-03-09T20:47:46.651 INFO:tasks.workunit.client.1.vm10.stdout:1/537: dread - d2/da/d25/d46/d51/d5d/d6e/f76 zero size 2026-03-09T20:47:46.651 INFO:tasks.workunit.client.0.vm07.stdout:6/608: chown d8/c11 127702273 1 2026-03-09T20:47:46.651 INFO:tasks.workunit.client.1.vm10.stdout:1/538: chown d2/da/d25/d46/d80/da0/d92 188 1 2026-03-09T20:47:46.652 INFO:tasks.workunit.client.0.vm07.stdout:9/555: symlink d4/d8/d19/d5f/da5/lcb 0 2026-03-09T20:47:46.652 INFO:tasks.workunit.client.0.vm07.stdout:9/556: stat d4/d16/d29/d24/f8c 0 2026-03-09T20:47:46.653 INFO:tasks.workunit.client.0.vm07.stdout:9/557: stat d4/d8/dc/d4e/f82 0 2026-03-09T20:47:46.653 INFO:tasks.workunit.client.1.vm10.stdout:4/499: 
fdatasync d1/d2/f43 0 2026-03-09T20:47:46.654 INFO:tasks.workunit.client.0.vm07.stdout:9/558: chown d4/d11/f8a 578934889 1 2026-03-09T20:47:46.656 INFO:tasks.workunit.client.0.vm07.stdout:1/617: getdents d3/d97/da1/dc5/d60 0 2026-03-09T20:47:46.657 INFO:tasks.workunit.client.1.vm10.stdout:6/543: mknod d3/d30/d7f/d51/caf 0 2026-03-09T20:47:46.660 INFO:tasks.workunit.client.1.vm10.stdout:2/542: rename d5/d18/d27/d28 to d5/d18/d27/d89/db6 0 2026-03-09T20:47:46.660 INFO:tasks.workunit.client.1.vm10.stdout:2/543: stat d5/d5b 0 2026-03-09T20:47:46.675 INFO:tasks.workunit.client.0.vm07.stdout:3/567: dread d1/d5/d9/f1b [4194304,4194304] 0 2026-03-09T20:47:46.676 INFO:tasks.workunit.client.1.vm10.stdout:8/568: readlink d0/l26 0 2026-03-09T20:47:46.691 INFO:tasks.workunit.client.1.vm10.stdout:0/526: truncate d2/d9/da/f2f 8476093 0 2026-03-09T20:47:46.691 INFO:tasks.workunit.client.1.vm10.stdout:0/527: fdatasync d2/d9/da/d35/d30/f9a 0 2026-03-09T20:47:46.699 INFO:tasks.workunit.client.1.vm10.stdout:1/539: rmdir d2/da/d25/d46/d51/d5d/d6e 39 2026-03-09T20:47:46.700 INFO:tasks.workunit.client.1.vm10.stdout:1/540: chown d2/da/d25/d3e/d55/l75 1385 1 2026-03-09T20:47:46.700 INFO:tasks.workunit.client.0.vm07.stdout:1/618: mknod d3/d14/d54/d9b/cce 0 2026-03-09T20:47:46.701 INFO:tasks.workunit.client.0.vm07.stdout:9/559: fsync d4/d8/dc/d15/f18 0 2026-03-09T20:47:46.703 INFO:tasks.workunit.client.0.vm07.stdout:5/658: mknod d5/df/d13/d6c/ce4 0 2026-03-09T20:47:46.706 INFO:tasks.workunit.client.1.vm10.stdout:6/544: dread - d3/d30/d7f/d51/f94 zero size 2026-03-09T20:47:46.707 INFO:tasks.workunit.client.1.vm10.stdout:6/545: stat d3/da/l2e 0 2026-03-09T20:47:46.709 INFO:tasks.workunit.client.1.vm10.stdout:5/488: rename d2/d27/d37/d46/f7c to d2/d39/d4b/d7a/fc0 0 2026-03-09T20:47:46.716 INFO:tasks.workunit.client.1.vm10.stdout:8/569: creat d0/d22/d2f/d38/d64/faf x:0 0 0 2026-03-09T20:47:46.720 INFO:tasks.workunit.client.0.vm07.stdout:9/560: stat d4/d16/c79 0 2026-03-09T20:47:46.726 
INFO:tasks.workunit.client.0.vm07.stdout:3/568: symlink d1/d5/d9/d2f/d34/d9e/lb9 0 2026-03-09T20:47:46.729 INFO:tasks.workunit.client.0.vm07.stdout:3/569: dread d1/d5/d9/d2f/d3d/d64/f30 [0,4194304] 0 2026-03-09T20:47:46.740 INFO:tasks.workunit.client.0.vm07.stdout:3/570: creat d1/d5/d9/d2f/d3d/d64/d95/fba x:0 0 0 2026-03-09T20:47:46.748 INFO:tasks.workunit.client.0.vm07.stdout:3/571: truncate d1/f78 5226423 0 2026-03-09T20:47:46.749 INFO:tasks.workunit.client.0.vm07.stdout:1/619: link d3/d14/d54/d3e/c46 d3/d97/da1/ccf 0 2026-03-09T20:47:46.751 INFO:tasks.workunit.client.0.vm07.stdout:4/526: truncate d2/fa 2757771 0 2026-03-09T20:47:46.754 INFO:tasks.workunit.client.0.vm07.stdout:4/527: write d2/df/d59/f60 [3176407,101999] 0 2026-03-09T20:47:46.756 INFO:tasks.workunit.client.0.vm07.stdout:1/620: mkdir d3/d97/da1/dc5/d60/d9f/dd0 0 2026-03-09T20:47:46.764 INFO:tasks.workunit.client.0.vm07.stdout:3/572: creat d1/d5/d9/d2f/d86/fbb x:0 0 0 2026-03-09T20:47:46.768 INFO:tasks.workunit.client.1.vm10.stdout:9/560: write d2/d28/f63 [3010803,29075] 0 2026-03-09T20:47:46.774 INFO:tasks.workunit.client.1.vm10.stdout:9/561: chown d2/d12/l22 7 1 2026-03-09T20:47:46.774 INFO:tasks.workunit.client.0.vm07.stdout:7/648: write d3/da/db/d32/d3e/dac/d1f/d2b/d52/fc0 [920953,110980] 0 2026-03-09T20:47:46.774 INFO:tasks.workunit.client.0.vm07.stdout:7/649: readlink d3/da/db/d32/d3e/dac/d1f/d2b/l42 0 2026-03-09T20:47:46.781 INFO:tasks.workunit.client.0.vm07.stdout:0/630: dwrite d1/d1f/d30/f7a [0,4194304] 0 2026-03-09T20:47:46.786 INFO:tasks.workunit.client.0.vm07.stdout:9/561: sync 2026-03-09T20:47:46.787 INFO:tasks.workunit.client.0.vm07.stdout:1/621: dread d3/d23/d52/f79 [0,4194304] 0 2026-03-09T20:47:46.808 INFO:tasks.workunit.client.0.vm07.stdout:3/573: symlink d1/d5/d9/d11/d60/lbc 0 2026-03-09T20:47:46.812 INFO:tasks.workunit.client.0.vm07.stdout:8/554: dwrite d1/d5d/d6f/d2f/f51 [0,4194304] 0 2026-03-09T20:47:46.816 INFO:tasks.workunit.client.0.vm07.stdout:7/650: readlink d3/da/d53/lc6 
0 2026-03-09T20:47:46.818 INFO:tasks.workunit.client.0.vm07.stdout:8/555: stat d1/d5d/d6f/d2f/d4d/f73 0 2026-03-09T20:47:46.837 INFO:tasks.workunit.client.0.vm07.stdout:9/562: mkdir d4/d16/d29/d24/d37/d8d/dcc 0 2026-03-09T20:47:46.838 INFO:tasks.workunit.client.1.vm10.stdout:0/528: mknod d2/d4a/d58/d82/d93/cb5 0 2026-03-09T20:47:46.844 INFO:tasks.workunit.client.0.vm07.stdout:6/609: write d8/d16/d22/d24/f7b [341046,6730] 0 2026-03-09T20:47:46.856 INFO:tasks.workunit.client.0.vm07.stdout:1/622: dread d3/d23/d55/f7b [0,4194304] 0 2026-03-09T20:47:46.856 INFO:tasks.workunit.client.1.vm10.stdout:7/546: creat db/d28/fa5 x:0 0 0 2026-03-09T20:47:46.863 INFO:tasks.workunit.client.0.vm07.stdout:5/659: dwrite d5/fa6 [0,4194304] 0 2026-03-09T20:47:46.895 INFO:tasks.workunit.client.1.vm10.stdout:8/570: mknod d0/d22/d2f/cb0 0 2026-03-09T20:47:46.905 INFO:tasks.workunit.client.1.vm10.stdout:9/562: creat d2/d28/d47/d6a/fc3 x:0 0 0 2026-03-09T20:47:46.905 INFO:tasks.workunit.client.0.vm07.stdout:9/563: creat d4/d8/d19/d5f/d73/fcd x:0 0 0 2026-03-09T20:47:46.906 INFO:tasks.workunit.client.0.vm07.stdout:6/610: write d8/d16/db4/d85/f4a [6080234,100851] 0 2026-03-09T20:47:46.920 INFO:tasks.workunit.client.1.vm10.stdout:7/547: unlink db/d28/d2b/d36/d63/f6c 0 2026-03-09T20:47:46.920 INFO:tasks.workunit.client.1.vm10.stdout:7/548: stat db/d28/d4c/f65 0 2026-03-09T20:47:46.922 INFO:tasks.workunit.client.0.vm07.stdout:0/631: write d1/d2/d33/d35/f64 [409842,127365] 0 2026-03-09T20:47:46.943 INFO:tasks.workunit.client.0.vm07.stdout:4/528: write d2/d55/f71 [1226939,33311] 0 2026-03-09T20:47:46.943 INFO:tasks.workunit.client.1.vm10.stdout:1/541: mkdir d2/da/d25/d46/d51/d5d/da6 0 2026-03-09T20:47:46.943 INFO:tasks.workunit.client.1.vm10.stdout:1/542: readlink d2/da/d25/l49 0 2026-03-09T20:47:46.946 INFO:tasks.workunit.client.0.vm07.stdout:8/556: dwrite d1/dc/f29 [0,4194304] 0 2026-03-09T20:47:46.956 INFO:tasks.workunit.client.1.vm10.stdout:6/546: truncate d3/d30/d7f/d36/d5c/f5f 2566278 0 
2026-03-09T20:47:46.964 INFO:tasks.workunit.client.0.vm07.stdout:3/574: write d1/d5/d9/f3c [2773872,128673] 0 2026-03-09T20:47:46.965 INFO:tasks.workunit.client.1.vm10.stdout:4/500: rename d1/d2/f84 to d1/d8/d66/fa4 0 2026-03-09T20:47:46.965 INFO:tasks.workunit.client.0.vm07.stdout:3/575: readlink d1/l8 0 2026-03-09T20:47:46.967 INFO:tasks.workunit.client.1.vm10.stdout:8/571: chown d0/d22/d2f/l75 0 1 2026-03-09T20:47:46.973 INFO:tasks.workunit.client.0.vm07.stdout:2/588: link d2/l59 d2/lbb 0 2026-03-09T20:47:46.973 INFO:tasks.workunit.client.1.vm10.stdout:9/563: symlink d2/da6/lc4 0 2026-03-09T20:47:46.973 INFO:tasks.workunit.client.1.vm10.stdout:3/509: link dc/d14/d26/c94 dc/d14/ca6 0 2026-03-09T20:47:46.973 INFO:tasks.workunit.client.1.vm10.stdout:1/543: creat d2/da/d25/d46/fa7 x:0 0 0 2026-03-09T20:47:46.973 INFO:tasks.workunit.client.1.vm10.stdout:1/544: stat d2/da/d25/d46/d51/d5d 0 2026-03-09T20:47:46.975 INFO:tasks.workunit.client.0.vm07.stdout:9/564: truncate d4/d16/d29/d24/d37/d44/d62/d74/fa2 935496 0 2026-03-09T20:47:46.980 INFO:tasks.workunit.client.0.vm07.stdout:8/557: dread - d1/d5d/d6f/d2f/d4d/d63/d91/f96 zero size 2026-03-09T20:47:46.982 INFO:tasks.workunit.client.0.vm07.stdout:5/660: truncate d5/df/d13/d3e/d5e/f98 1848145 0 2026-03-09T20:47:46.985 INFO:tasks.workunit.client.1.vm10.stdout:2/544: rename d5/d18/f2c to d5/d5b/fb7 0 2026-03-09T20:47:46.986 INFO:tasks.workunit.client.1.vm10.stdout:5/489: truncate d2/d1b/d54/d78/f47 4775074 0 2026-03-09T20:47:46.987 INFO:tasks.workunit.client.0.vm07.stdout:0/632: write d1/f31 [2447059,122852] 0 2026-03-09T20:47:46.987 INFO:tasks.workunit.client.1.vm10.stdout:5/490: chown d2/d27/d37/d46/d5d/d77/f93 48658896 1 2026-03-09T20:47:47.001 INFO:tasks.workunit.client.0.vm07.stdout:7/651: rename d3/da/d83/fb6 to d3/da/db/d32/d3e/dac/fdc 0 2026-03-09T20:47:47.003 INFO:tasks.workunit.client.0.vm07.stdout:7/652: chown d3/da/d83/dc5/fd4 7485465 1 2026-03-09T20:47:47.010 INFO:tasks.workunit.client.0.vm07.stdout:6/611: 
creat d8/d50/dbd/fbe x:0 0 0 2026-03-09T20:47:47.011 INFO:tasks.workunit.client.0.vm07.stdout:8/558: dread d1/f33 [0,4194304] 0 2026-03-09T20:47:47.015 INFO:tasks.workunit.client.0.vm07.stdout:4/529: dwrite d2/f9 [0,4194304] 0 2026-03-09T20:47:47.016 INFO:tasks.workunit.client.0.vm07.stdout:2/589: unlink d2/d11/f36 0 2026-03-09T20:47:47.020 INFO:tasks.workunit.client.1.vm10.stdout:7/549: mknod db/d21/d95/ca6 0 2026-03-09T20:47:47.028 INFO:tasks.workunit.client.0.vm07.stdout:4/530: stat d2/d55/d5d/d3f/d4a/d4b/d52/d5c/d90 0 2026-03-09T20:47:47.038 INFO:tasks.workunit.client.0.vm07.stdout:9/565: symlink d4/d16/d29/d9c/lce 0 2026-03-09T20:47:47.041 INFO:tasks.workunit.client.1.vm10.stdout:6/547: creat d3/da/d11/d89/fb0 x:0 0 0 2026-03-09T20:47:47.044 INFO:tasks.workunit.client.1.vm10.stdout:0/529: rename d2/d9/d9e to d2/d9/da/d11/d92/db6 0 2026-03-09T20:47:47.048 INFO:tasks.workunit.client.1.vm10.stdout:6/548: dwrite d3/d79/f90 [0,4194304] 0 2026-03-09T20:47:47.062 INFO:tasks.workunit.client.1.vm10.stdout:2/545: truncate d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f5d 347913 0 2026-03-09T20:47:47.062 INFO:tasks.workunit.client.0.vm07.stdout:5/661: creat d5/d33/d39/fe5 x:0 0 0 2026-03-09T20:47:47.069 INFO:tasks.workunit.client.1.vm10.stdout:1/545: dwrite d2/da/d25/d46/d51/d5d/f67 [0,4194304] 0 2026-03-09T20:47:47.070 INFO:tasks.workunit.client.0.vm07.stdout:1/623: rename d3/d97/da1/dc5/d60/c9e to d3/d23/d67/d8a/cd1 0 2026-03-09T20:47:47.070 INFO:tasks.workunit.client.0.vm07.stdout:7/653: creat d3/da/d53/db7/fdd x:0 0 0 2026-03-09T20:47:47.089 INFO:tasks.workunit.client.0.vm07.stdout:8/559: symlink d1/d5d/lb1 0 2026-03-09T20:47:47.093 INFO:tasks.workunit.client.1.vm10.stdout:9/564: rename d2/fbe to d2/d12/d5a/fc5 0 2026-03-09T20:47:47.093 INFO:tasks.workunit.client.1.vm10.stdout:9/565: chown d2/d12 124829073 1 2026-03-09T20:47:47.093 INFO:tasks.workunit.client.0.vm07.stdout:2/590: creat d2/da7/fbc x:0 0 0 2026-03-09T20:47:47.096 
INFO:tasks.workunit.client.1.vm10.stdout:1/546: dread d2/da/d25/d3e/d42/f62 [0,4194304] 0 2026-03-09T20:47:47.100 INFO:tasks.workunit.client.0.vm07.stdout:2/591: chown d2/db/d28/d57/f68 0 1 2026-03-09T20:47:47.102 INFO:tasks.workunit.client.0.vm07.stdout:9/566: mkdir d4/d8/d19/d5f/dcf 0 2026-03-09T20:47:47.102 INFO:tasks.workunit.client.0.vm07.stdout:6/612: dwrite d8/d16/d22/d24/da0/dab/f81 [0,4194304] 0 2026-03-09T20:47:47.104 INFO:tasks.workunit.client.0.vm07.stdout:6/613: chown d8/d16/d4b/d88 6 1 2026-03-09T20:47:47.110 INFO:tasks.workunit.client.1.vm10.stdout:5/491: getdents d2/d39/dbf/d63/d95 0 2026-03-09T20:47:47.111 INFO:tasks.workunit.client.1.vm10.stdout:5/492: dread - d2/d80/fa8 zero size 2026-03-09T20:47:47.113 INFO:tasks.workunit.client.0.vm07.stdout:5/662: creat d5/d69/fe6 x:0 0 0 2026-03-09T20:47:47.119 INFO:tasks.workunit.client.1.vm10.stdout:9/566: dread d2/d33/f3f [0,4194304] 0 2026-03-09T20:47:47.120 INFO:tasks.workunit.client.1.vm10.stdout:4/501: rmdir d1/d2/d5c/d64/d71 0 2026-03-09T20:47:47.122 INFO:tasks.workunit.client.0.vm07.stdout:5/663: write d5/df/d13/d4f/f9b [4937546,15587] 0 2026-03-09T20:47:47.122 INFO:tasks.workunit.client.0.vm07.stdout:5/664: chown d5/d33/d39/d8d 34325893 1 2026-03-09T20:47:47.131 INFO:tasks.workunit.client.0.vm07.stdout:3/576: rename d1/d5/d9/d2f/d34/cab to d1/d5/d9/d2f/d3d/d71/db5/cbd 0 2026-03-09T20:47:47.139 INFO:tasks.workunit.client.0.vm07.stdout:3/577: dwrite d1/d5/d9/d2f/d3d/d64/f55 [4194304,4194304] 0 2026-03-09T20:47:47.140 INFO:tasks.workunit.client.0.vm07.stdout:1/624: read d3/d14/f33 [2229767,37554] 0 2026-03-09T20:47:47.145 INFO:tasks.workunit.client.1.vm10.stdout:2/546: dread d5/d18/d27/f2a [0,4194304] 0 2026-03-09T20:47:47.145 INFO:tasks.workunit.client.1.vm10.stdout:2/547: chown d5/d18/d27/d89/db6/c6a 257821 1 2026-03-09T20:47:47.166 INFO:tasks.workunit.client.0.vm07.stdout:6/614: dread d8/f5f [0,4194304] 0 2026-03-09T20:47:47.167 INFO:tasks.workunit.client.0.vm07.stdout:6/615: stat d8/d26/d7d/f8b 0 
2026-03-09T20:47:47.167 INFO:tasks.workunit.client.0.vm07.stdout:6/616: fsync d8/d26/f87 0 2026-03-09T20:47:47.168 INFO:tasks.workunit.client.0.vm07.stdout:6/617: write d8/d16/d22/d24/f43 [2411410,102258] 0 2026-03-09T20:47:47.169 INFO:tasks.workunit.client.0.vm07.stdout:6/618: fdatasync d8/d16/d22/d24/da0/dab/d40/f65 0 2026-03-09T20:47:47.172 INFO:tasks.workunit.client.0.vm07.stdout:4/531: write d2/f69 [1023144,78639] 0 2026-03-09T20:47:47.186 INFO:tasks.workunit.client.1.vm10.stdout:7/550: symlink db/d28/d2b/d36/d63/d8b/la7 0 2026-03-09T20:47:47.186 INFO:tasks.workunit.client.1.vm10.stdout:7/551: fsync db/d28/f91 0 2026-03-09T20:47:47.189 INFO:tasks.workunit.client.1.vm10.stdout:3/510: rename dc/f1f to dc/d14/d27/fa7 0 2026-03-09T20:47:47.190 INFO:tasks.workunit.client.1.vm10.stdout:3/511: readlink dc/d14/d22/l85 0 2026-03-09T20:47:47.190 INFO:tasks.workunit.client.1.vm10.stdout:1/547: mknod d2/da/d25/d3e/d55/ca8 0 2026-03-09T20:47:47.191 INFO:tasks.workunit.client.1.vm10.stdout:0/530: mkdir d2/d4a/d58/d82/d71/d8e/d25/db7 0 2026-03-09T20:47:47.193 INFO:tasks.workunit.client.1.vm10.stdout:4/502: creat d1/d67/fa5 x:0 0 0 2026-03-09T20:47:47.195 INFO:tasks.workunit.client.1.vm10.stdout:9/567: creat d2/d28/d47/d67/fc6 x:0 0 0 2026-03-09T20:47:47.200 INFO:tasks.workunit.client.1.vm10.stdout:2/548: rmdir d5/d18/d1b 39 2026-03-09T20:47:47.200 INFO:tasks.workunit.client.1.vm10.stdout:2/549: fsync d5/d18/d27/f74 0 2026-03-09T20:47:47.200 INFO:tasks.workunit.client.1.vm10.stdout:8/572: link d0/d22/d25/d8f/fa6 d0/d22/d25/d6c/fb1 0 2026-03-09T20:47:47.205 INFO:tasks.workunit.client.1.vm10.stdout:4/503: dread d1/d2/d5c/f53 [0,4194304] 0 2026-03-09T20:47:47.207 INFO:tasks.workunit.client.1.vm10.stdout:4/504: truncate d1/d47/f4f 881263 0 2026-03-09T20:47:47.209 INFO:tasks.workunit.client.1.vm10.stdout:6/549: rename d3/f5e to d3/da/d11/d31/d4c/d60/fb1 0 2026-03-09T20:47:47.224 INFO:tasks.workunit.client.1.vm10.stdout:5/493: write d2/d1b/f41 [3875609,67111] 0 
2026-03-09T20:47:47.236 INFO:tasks.workunit.client.1.vm10.stdout:9/568: chown d2/d28/d47/d6a/fc0 19554122 1 2026-03-09T20:47:47.238 INFO:tasks.workunit.client.1.vm10.stdout:7/552: write f5 [8471,915] 0 2026-03-09T20:47:47.258 INFO:tasks.workunit.client.1.vm10.stdout:4/505: creat d1/d2/d5c/d64/d6b/d79/fa6 x:0 0 0 2026-03-09T20:47:47.259 INFO:tasks.workunit.client.1.vm10.stdout:3/512: rename dc/d14/d22/d7f to dc/d14/d26/d29/d40/da8 0 2026-03-09T20:47:47.259 INFO:tasks.workunit.client.0.vm07.stdout:5/665: fdatasync d5/f25 0 2026-03-09T20:47:47.273 INFO:tasks.workunit.client.1.vm10.stdout:5/494: symlink d2/d58/lc1 0 2026-03-09T20:47:47.282 INFO:tasks.workunit.client.0.vm07.stdout:3/578: read d1/d5/d9/f15 [3127849,124346] 0 2026-03-09T20:47:47.301 INFO:tasks.workunit.client.1.vm10.stdout:9/569: truncate d2/d12/f2a 3604754 0 2026-03-09T20:47:47.301 INFO:tasks.workunit.client.1.vm10.stdout:9/570: truncate d2/d33/fb3 628808 0 2026-03-09T20:47:47.305 INFO:tasks.workunit.client.0.vm07.stdout:2/592: write d2/db/d1c/f93 [579648,72093] 0 2026-03-09T20:47:47.305 INFO:tasks.workunit.client.0.vm07.stdout:2/593: readlink d2/l3d 0 2026-03-09T20:47:47.306 INFO:tasks.workunit.client.1.vm10.stdout:7/553: creat db/d28/d2b/d36/d63/d6d/fa8 x:0 0 0 2026-03-09T20:47:47.307 INFO:tasks.workunit.client.1.vm10.stdout:7/554: dread - db/d21/d60/d87/f98 zero size 2026-03-09T20:47:47.307 INFO:tasks.workunit.client.0.vm07.stdout:2/594: dread d2/f3e [4194304,4194304] 0 2026-03-09T20:47:47.309 INFO:tasks.workunit.client.1.vm10.stdout:8/573: write d0/d22/d25/d2e/d41/d47/d63/f8c [359360,61670] 0 2026-03-09T20:47:47.314 INFO:tasks.workunit.client.0.vm07.stdout:9/567: dwrite d4/d8/d19/d89/f93 [0,4194304] 0 2026-03-09T20:47:47.315 INFO:tasks.workunit.client.1.vm10.stdout:2/550: mkdir d5/d18/d27/db8 0 2026-03-09T20:47:47.323 INFO:tasks.workunit.client.0.vm07.stdout:6/619: rename d8/d16/db4/l8e to d8/d50/dbd/lbf 0 2026-03-09T20:47:47.325 INFO:tasks.workunit.client.0.vm07.stdout:6/620: dread 
d8/d16/d22/d24/da0/dab/d40/fa7 [0,4194304] 0 2026-03-09T20:47:47.327 INFO:tasks.workunit.client.1.vm10.stdout:4/506: symlink d1/d2/d5c/la7 0 2026-03-09T20:47:47.327 INFO:tasks.workunit.client.1.vm10.stdout:4/507: readlink d1/d8/d39/l4e 0 2026-03-09T20:47:47.330 INFO:tasks.workunit.client.1.vm10.stdout:3/513: symlink dc/d14/d20/d21/la9 0 2026-03-09T20:47:47.330 INFO:tasks.workunit.client.1.vm10.stdout:6/550: dwrite d3/d30/d7f/d51/f7c [0,4194304] 0 2026-03-09T20:47:47.330 INFO:tasks.workunit.client.0.vm07.stdout:7/654: write d3/da/db/d32/d3e/dac/f92 [1642602,97770] 0 2026-03-09T20:47:47.333 INFO:tasks.workunit.client.0.vm07.stdout:0/633: getdents d1/d2/dc/db1 0 2026-03-09T20:47:47.335 INFO:tasks.workunit.client.1.vm10.stdout:9/571: creat d2/d3/d6d/db7/fc7 x:0 0 0 2026-03-09T20:47:47.336 INFO:tasks.workunit.client.1.vm10.stdout:7/555: mknod db/d28/d2b/d36/d3f/ca9 0 2026-03-09T20:47:47.354 INFO:tasks.workunit.client.1.vm10.stdout:2/551: symlink d5/d5b/lb9 0 2026-03-09T20:47:47.372 INFO:tasks.workunit.client.0.vm07.stdout:4/532: write d2/df/d17/f46 [1212435,51572] 0 2026-03-09T20:47:47.373 INFO:tasks.workunit.client.1.vm10.stdout:8/574: dwrite d0/f11 [0,4194304] 0 2026-03-09T20:47:47.379 INFO:tasks.workunit.client.0.vm07.stdout:2/595: write d2/db/d28/f32 [1025136,22552] 0 2026-03-09T20:47:47.381 INFO:tasks.workunit.client.0.vm07.stdout:2/596: write d2/db/d28/f32 [896121,43671] 0 2026-03-09T20:47:47.385 INFO:tasks.workunit.client.0.vm07.stdout:2/597: dwrite d2/db/d1c/d8d/fb9 [0,4194304] 0 2026-03-09T20:47:47.392 INFO:tasks.workunit.client.1.vm10.stdout:5/495: mknod d2/d27/d75/cc2 0 2026-03-09T20:47:47.393 INFO:tasks.workunit.client.1.vm10.stdout:1/548: getdents d2/da/d25/d46/d51/d5d 0 2026-03-09T20:47:47.393 INFO:tasks.workunit.client.0.vm07.stdout:8/560: mknod d1/d5d/d6f/cb2 0 2026-03-09T20:47:47.393 INFO:tasks.workunit.client.1.vm10.stdout:5/496: readlink d2/d27/d37/l4f 0 2026-03-09T20:47:47.399 INFO:tasks.workunit.client.1.vm10.stdout:9/572: unlink d2/d3/d6d/cc2 0 
2026-03-09T20:47:47.415 INFO:tasks.workunit.client.0.vm07.stdout:5/666: creat d5/df/d13/d3e/de1/fe7 x:0 0 0 2026-03-09T20:47:47.416 INFO:tasks.workunit.client.0.vm07.stdout:1/625: link d3/d97/da1/dc5/d60/f8e d3/d9c/fd2 0 2026-03-09T20:47:47.422 INFO:tasks.workunit.client.1.vm10.stdout:0/531: rename d2/d4e to d2/d9/db8 0 2026-03-09T20:47:47.423 INFO:tasks.workunit.client.1.vm10.stdout:0/532: read d2/d9/da/d11/f1f [1177692,126151] 0 2026-03-09T20:47:47.429 INFO:tasks.workunit.client.1.vm10.stdout:5/497: creat d2/d27/d75/d81/fc3 x:0 0 0 2026-03-09T20:47:47.430 INFO:tasks.workunit.client.1.vm10.stdout:5/498: dread - d2/d39/dbf/f6a zero size 2026-03-09T20:47:47.431 INFO:tasks.workunit.client.1.vm10.stdout:5/499: readlink d2/d1b/l53 0 2026-03-09T20:47:47.434 INFO:tasks.workunit.client.0.vm07.stdout:6/621: creat d8/d16/da3/db8/fc0 x:0 0 0 2026-03-09T20:47:47.437 INFO:tasks.workunit.client.1.vm10.stdout:0/533: dread d2/f65 [0,4194304] 0 2026-03-09T20:47:47.437 INFO:tasks.workunit.client.1.vm10.stdout:5/500: read d2/f7 [4650444,106447] 0 2026-03-09T20:47:47.438 INFO:tasks.workunit.client.1.vm10.stdout:5/501: dread - d2/d1b/d54/d78/fad zero size 2026-03-09T20:47:47.447 INFO:tasks.workunit.client.1.vm10.stdout:3/514: rmdir dc/d14/d20/d21/d3b/d8e 0 2026-03-09T20:47:47.448 INFO:tasks.workunit.client.0.vm07.stdout:1/626: mkdir d3/d97/da1/dc5/d90/dd3 0 2026-03-09T20:47:47.448 INFO:tasks.workunit.client.1.vm10.stdout:0/534: dwrite d2/d4a/d58/d82/d71/d5d/f8c [0,4194304] 0 2026-03-09T20:47:47.448 INFO:tasks.workunit.client.0.vm07.stdout:1/627: chown d3/d97/da1/fbb 1018 1 2026-03-09T20:47:47.451 INFO:tasks.workunit.client.1.vm10.stdout:5/502: dwrite d2/d1b/d54/d78/fad [0,4194304] 0 2026-03-09T20:47:47.451 INFO:tasks.workunit.client.1.vm10.stdout:3/515: write dc/d14/d26/d29/d2a/d76/f97 [563755,124670] 0 2026-03-09T20:47:47.452 INFO:tasks.workunit.client.1.vm10.stdout:5/503: readlink d2/l6 0 2026-03-09T20:47:47.457 INFO:tasks.workunit.client.1.vm10.stdout:0/535: dwrite d2/d4a/fae 
[4194304,4194304] 0 2026-03-09T20:47:47.457 INFO:tasks.workunit.client.1.vm10.stdout:5/504: write d2/d39/dbf/fb6 [821930,51066] 0 2026-03-09T20:47:47.459 INFO:tasks.workunit.client.1.vm10.stdout:3/516: write dc/d14/d20/d21/d3b/f4f [3083431,12951] 0 2026-03-09T20:47:47.478 INFO:tasks.workunit.client.1.vm10.stdout:4/508: truncate d1/d8/f29 984556 0 2026-03-09T20:47:47.479 INFO:tasks.workunit.client.0.vm07.stdout:0/634: write d1/d1f/d20/f2c [5189641,41198] 0 2026-03-09T20:47:47.481 INFO:tasks.workunit.client.1.vm10.stdout:2/552: dwrite d5/f15 [0,4194304] 0 2026-03-09T20:47:47.484 INFO:tasks.workunit.client.0.vm07.stdout:7/655: dwrite d3/f4f [0,4194304] 0 2026-03-09T20:47:47.487 INFO:tasks.workunit.client.0.vm07.stdout:2/598: dwrite d2/db/f76 [0,4194304] 0 2026-03-09T20:47:47.488 INFO:tasks.workunit.client.1.vm10.stdout:6/551: dwrite d3/da/d11/f1d [0,4194304] 0 2026-03-09T20:47:47.489 INFO:tasks.workunit.client.0.vm07.stdout:2/599: write d2/db/d28/f32 [4617707,44228] 0 2026-03-09T20:47:47.489 INFO:tasks.workunit.client.1.vm10.stdout:8/575: dwrite d0/f94 [0,4194304] 0 2026-03-09T20:47:47.491 INFO:tasks.workunit.client.1.vm10.stdout:9/573: dwrite d2/d28/d47/f58 [0,4194304] 0 2026-03-09T20:47:47.496 INFO:tasks.workunit.client.0.vm07.stdout:9/568: creat d4/d16/fd0 x:0 0 0 2026-03-09T20:47:47.496 INFO:tasks.workunit.client.1.vm10.stdout:7/556: rename db/d21/d26/l79 to db/d28/d2b/d36/d40/laa 0 2026-03-09T20:47:47.498 INFO:tasks.workunit.client.1.vm10.stdout:7/557: chown db/d46/l74 2385 1 2026-03-09T20:47:47.499 INFO:tasks.workunit.client.1.vm10.stdout:7/558: chown db/d28/d2b/d36/d3b/d88/f57 60099191 1 2026-03-09T20:47:47.499 INFO:tasks.workunit.client.1.vm10.stdout:7/559: fsync db/f70 0 2026-03-09T20:47:47.504 INFO:tasks.workunit.client.1.vm10.stdout:7/560: dwrite db/d28/d2b/d36/d63/d6d/fa8 [0,4194304] 0 2026-03-09T20:47:47.514 INFO:tasks.workunit.client.0.vm07.stdout:8/561: creat d1/dc/d16/dad/d87/d93/fb3 x:0 0 0 2026-03-09T20:47:47.535 
INFO:tasks.workunit.client.1.vm10.stdout:5/505: fsync d2/d39/d4b/d7a/f92 0 2026-03-09T20:47:47.535 INFO:tasks.workunit.client.1.vm10.stdout:5/506: fdatasync d2/d39/dbf/d84/d87/da1/faa 0 2026-03-09T20:47:47.535 INFO:tasks.workunit.client.0.vm07.stdout:3/579: getdents d1/d5/d9/daf 0 2026-03-09T20:47:47.535 INFO:tasks.workunit.client.0.vm07.stdout:4/533: link d2/df/d59/l78 d2/d55/d5d/d3f/d4a/d4b/d52/d5c/d90/l92 0 2026-03-09T20:47:47.540 INFO:tasks.workunit.client.0.vm07.stdout:0/635: unlink d1/d2/d33/d35/f46 0 2026-03-09T20:47:47.549 INFO:tasks.workunit.client.1.vm10.stdout:3/517: dread dc/d14/d26/d29/f51 [0,4194304] 0 2026-03-09T20:47:47.551 INFO:tasks.workunit.client.1.vm10.stdout:6/552: dread d3/da/d11/d31/d4c/d60/f63 [0,4194304] 0 2026-03-09T20:47:47.552 INFO:tasks.workunit.client.1.vm10.stdout:6/553: truncate d3/da/d11/d89/fb0 296093 0 2026-03-09T20:47:47.559 INFO:tasks.workunit.client.0.vm07.stdout:6/622: getdents d8/d16/dbb 0 2026-03-09T20:47:47.561 INFO:tasks.workunit.client.0.vm07.stdout:6/623: chown d8/d16/d4b/d88/f70 21 1 2026-03-09T20:47:47.562 INFO:tasks.workunit.client.1.vm10.stdout:8/576: sync 2026-03-09T20:47:47.572 INFO:tasks.workunit.client.1.vm10.stdout:2/553: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/fba x:0 0 0 2026-03-09T20:47:47.572 INFO:tasks.workunit.client.1.vm10.stdout:8/577: dread d0/d22/d2c/f36 [0,4194304] 0 2026-03-09T20:47:47.573 INFO:tasks.workunit.client.1.vm10.stdout:8/578: write d0/d22/d25/d2e/d41/d47/f87 [2067809,56570] 0 2026-03-09T20:47:47.574 INFO:tasks.workunit.client.1.vm10.stdout:8/579: chown d0/d22/d2f/d38/l44 895464376 1 2026-03-09T20:47:47.574 INFO:tasks.workunit.client.1.vm10.stdout:9/574: write d2/d3/f7 [2424674,60425] 0 2026-03-09T20:47:47.583 INFO:tasks.workunit.client.0.vm07.stdout:2/600: unlink d2/db/d1c/f26 0 2026-03-09T20:47:47.589 INFO:tasks.workunit.client.0.vm07.stdout:2/601: read - d2/db/d28/d90/fa3 zero size 2026-03-09T20:47:47.589 INFO:tasks.workunit.client.0.vm07.stdout:8/562: mkdir d1/dc/d16/d31/db4 0 
2026-03-09T20:47:47.600 INFO:tasks.workunit.client.1.vm10.stdout:3/518: truncate dc/d14/d27/f3c 2629615 0 2026-03-09T20:47:47.603 INFO:tasks.workunit.client.1.vm10.stdout:6/554: symlink d3/da/d11/d26/d5b/lb2 0 2026-03-09T20:47:47.604 INFO:tasks.workunit.client.0.vm07.stdout:2/602: truncate d2/db/d1c/fab 860804 0 2026-03-09T20:47:47.606 INFO:tasks.workunit.client.0.vm07.stdout:9/569: symlink d4/d8/db9/ld1 0 2026-03-09T20:47:47.607 INFO:tasks.workunit.client.1.vm10.stdout:6/555: dwrite d3/d30/d7f/d36/d5c/d8d/fac [0,4194304] 0 2026-03-09T20:47:47.613 INFO:tasks.workunit.client.0.vm07.stdout:3/580: rename d1/l2c to d1/d5/d9/d2f/d34/d46/d5d/lbe 0 2026-03-09T20:47:47.617 INFO:tasks.workunit.client.0.vm07.stdout:4/534: mkdir d2/d55/d5d/d93 0 2026-03-09T20:47:47.621 INFO:tasks.workunit.client.1.vm10.stdout:1/549: rename d2/da/l39 to d2/da/d25/d46/d80/da0/d92/la9 0 2026-03-09T20:47:47.651 INFO:tasks.workunit.client.1.vm10.stdout:8/580: truncate d0/d22/d25/d2e/d41/d47/d78/f9a 1832270 0 2026-03-09T20:47:47.652 INFO:tasks.workunit.client.0.vm07.stdout:9/570: symlink d4/d8/d19/ld2 0 2026-03-09T20:47:47.655 INFO:tasks.workunit.client.1.vm10.stdout:9/575: creat d2/db8/fc8 x:0 0 0 2026-03-09T20:47:47.658 INFO:tasks.workunit.client.1.vm10.stdout:9/576: dwrite d2/d28/d47/d50/f64 [0,4194304] 0 2026-03-09T20:47:47.662 INFO:tasks.workunit.client.0.vm07.stdout:8/563: link d1/d5d/d6f/d2f/d4d/d63/f77 d1/fb5 0 2026-03-09T20:47:47.664 INFO:tasks.workunit.client.1.vm10.stdout:2/554: rename d5/c30 to d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/cbb 0 2026-03-09T20:47:47.668 INFO:tasks.workunit.client.0.vm07.stdout:3/581: creat d1/d5/d9/d2f/d3d/d64/d43/fbf x:0 0 0 2026-03-09T20:47:47.668 INFO:tasks.workunit.client.0.vm07.stdout:3/582: fdatasync d1/fb7 0 2026-03-09T20:47:47.674 INFO:tasks.workunit.client.1.vm10.stdout:2/555: rmdir d5 39 2026-03-09T20:47:47.675 INFO:tasks.workunit.client.0.vm07.stdout:9/571: getdents d4/d16/d78 0 2026-03-09T20:47:47.676 
INFO:tasks.workunit.client.0.vm07.stdout:9/572: chown d4/d8/dc/d4e/lb5 54 1 2026-03-09T20:47:47.677 INFO:tasks.workunit.client.1.vm10.stdout:3/519: rmdir dc/d14/d20/d21/d9a 0 2026-03-09T20:47:47.680 INFO:tasks.workunit.client.0.vm07.stdout:3/583: truncate d1/d5/d9/d11/f4d 923861 0 2026-03-09T20:47:47.686 INFO:tasks.workunit.client.1.vm10.stdout:1/550: creat d2/da/faa x:0 0 0 2026-03-09T20:47:47.689 INFO:tasks.workunit.client.1.vm10.stdout:3/520: sync 2026-03-09T20:47:47.689 INFO:tasks.workunit.client.1.vm10.stdout:3/521: stat dc/d14/d26/d29/d40/da8 0 2026-03-09T20:47:47.690 INFO:tasks.workunit.client.1.vm10.stdout:9/577: dread d2/d28/f51 [0,4194304] 0 2026-03-09T20:47:47.695 INFO:tasks.workunit.client.0.vm07.stdout:3/584: mkdir d1/d5/d9/d2f/d66/dc0 0 2026-03-09T20:47:47.705 INFO:tasks.workunit.client.0.vm07.stdout:3/585: read d1/d5/d9/d11/f58 [783583,39472] 0 2026-03-09T20:47:47.705 INFO:tasks.workunit.client.0.vm07.stdout:9/573: fdatasync d4/d8/f1c 0 2026-03-09T20:47:47.706 INFO:tasks.workunit.client.0.vm07.stdout:3/586: truncate d1/d5/d9/d11/f26 1097154 0 2026-03-09T20:47:47.708 INFO:tasks.workunit.client.0.vm07.stdout:9/574: fdatasync d4/d8/dc/dbb/fad 0 2026-03-09T20:47:47.709 INFO:tasks.workunit.client.0.vm07.stdout:9/575: chown d4/d8/d19/d5f/dcf 2492 1 2026-03-09T20:47:47.717 INFO:tasks.workunit.client.0.vm07.stdout:9/576: dread d4/d8/d19/d89/f93 [0,4194304] 0 2026-03-09T20:47:47.719 INFO:tasks.workunit.client.0.vm07.stdout:9/577: stat d4/d8/dc/ff 0 2026-03-09T20:47:47.723 INFO:tasks.workunit.client.0.vm07.stdout:5/667: dread d5/d19/f4d [0,4194304] 0 2026-03-09T20:47:47.731 INFO:tasks.workunit.client.0.vm07.stdout:5/668: truncate d5/d19/f2c 3417045 0 2026-03-09T20:47:47.732 INFO:tasks.workunit.client.0.vm07.stdout:3/587: dread d1/f36 [0,4194304] 0 2026-03-09T20:47:47.733 INFO:tasks.workunit.client.0.vm07.stdout:3/588: dread d1/d5/d9/d2f/d34/d46/f8a [0,4194304] 0 2026-03-09T20:47:47.735 INFO:tasks.workunit.client.0.vm07.stdout:3/589: dread 
d1/d5/d9/d2f/d3d/d64/f30 [0,4194304] 0 2026-03-09T20:47:47.740 INFO:tasks.workunit.client.0.vm07.stdout:3/590: readlink d1/d5/d9/d2f/d34/l97 0 2026-03-09T20:47:47.744 INFO:tasks.workunit.client.0.vm07.stdout:3/591: creat d1/d5/d9/d2f/d3d/d64/fc1 x:0 0 0 2026-03-09T20:47:47.745 INFO:tasks.workunit.client.1.vm10.stdout:3/522: fsync dc/d14/d20/d2e/d56/f15 0 2026-03-09T20:47:47.746 INFO:tasks.workunit.client.1.vm10.stdout:9/578: creat d2/d3/d6d/db7/fc9 x:0 0 0 2026-03-09T20:47:47.750 INFO:tasks.workunit.client.0.vm07.stdout:3/592: readlink d1/l6a 0 2026-03-09T20:47:47.750 INFO:tasks.workunit.client.1.vm10.stdout:8/581: getdents d0/d22/d2f/d38/d64 0 2026-03-09T20:47:47.755 INFO:tasks.workunit.client.0.vm07.stdout:3/593: rmdir d1/d5/d9/d2f/d3d/d71 39 2026-03-09T20:47:47.757 INFO:tasks.workunit.client.1.vm10.stdout:2/556: creat d5/d18/d27/db8/fbc x:0 0 0 2026-03-09T20:47:47.758 INFO:tasks.workunit.client.1.vm10.stdout:2/557: chown d5/d18/d27/d89/db6/d41/f76 36590306 1 2026-03-09T20:47:47.763 INFO:tasks.workunit.client.0.vm07.stdout:3/594: symlink d1/d5/d9/d2f/d99/lc2 0 2026-03-09T20:47:47.763 INFO:tasks.workunit.client.1.vm10.stdout:4/509: dread d1/d2/f43 [0,4194304] 0 2026-03-09T20:47:47.763 INFO:tasks.workunit.client.1.vm10.stdout:4/510: dread - d1/d8/d39/f97 zero size 2026-03-09T20:47:47.763 INFO:tasks.workunit.client.1.vm10.stdout:8/582: mknod d0/d22/d25/d2e/d41/d85/d8b/cb2 0 2026-03-09T20:47:47.763 INFO:tasks.workunit.client.1.vm10.stdout:8/583: write d0/d22/d2f/d38/d64/faf [15754,111836] 0 2026-03-09T20:47:47.764 INFO:tasks.workunit.client.1.vm10.stdout:8/584: chown d0/d22/d2f/d38/fa5 678 1 2026-03-09T20:47:47.764 INFO:tasks.workunit.client.1.vm10.stdout:9/579: sync 2026-03-09T20:47:47.764 INFO:tasks.workunit.client.1.vm10.stdout:2/558: sync 2026-03-09T20:47:47.764 INFO:tasks.workunit.client.1.vm10.stdout:6/556: dread d3/d30/d7f/d24/f27 [0,4194304] 0 2026-03-09T20:47:47.765 INFO:tasks.workunit.client.1.vm10.stdout:2/559: stat d5/d5b 0 2026-03-09T20:47:47.765 
INFO:tasks.workunit.client.1.vm10.stdout:9/580: stat d2/d3/c15 0 2026-03-09T20:47:47.768 INFO:tasks.workunit.client.0.vm07.stdout:8/564: sync 2026-03-09T20:47:47.771 INFO:tasks.workunit.client.0.vm07.stdout:3/595: creat d1/d5/d9/d2f/d3d/d71/fc3 x:0 0 0 2026-03-09T20:47:47.779 INFO:tasks.workunit.client.0.vm07.stdout:8/565: mknod d1/dc/d16/dad/d87/d93/cb6 0 2026-03-09T20:47:47.780 INFO:tasks.workunit.client.1.vm10.stdout:6/557: dwrite f1 [4194304,4194304] 0 2026-03-09T20:47:47.780 INFO:tasks.workunit.client.0.vm07.stdout:3/596: rename d1/d5/d9/d11/d1f/f5e to d1/d5/d9/d2f/d3d/d71/fc4 0 2026-03-09T20:47:47.783 INFO:tasks.workunit.client.1.vm10.stdout:9/581: unlink d2/d3/f7 0 2026-03-09T20:47:47.788 INFO:tasks.workunit.client.1.vm10.stdout:4/511: symlink d1/d8/la8 0 2026-03-09T20:47:47.788 INFO:tasks.workunit.client.1.vm10.stdout:4/512: fdatasync d1/d8/d1b/f8b 0 2026-03-09T20:47:47.789 INFO:tasks.workunit.client.1.vm10.stdout:4/513: chown d1/d8/d1c/d2b/l8d 555 1 2026-03-09T20:47:47.791 INFO:tasks.workunit.client.1.vm10.stdout:2/560: symlink d5/d18/d27/d38/lbd 0 2026-03-09T20:47:47.792 INFO:tasks.workunit.client.1.vm10.stdout:2/561: chown d5/d18/d27/d89/db6/d41/d77/db3/db5/f69 567 1 2026-03-09T20:47:47.798 INFO:tasks.workunit.client.0.vm07.stdout:3/597: read d1/d5/d9/d2f/d34/f4b [3371879,57000] 0 2026-03-09T20:47:47.798 INFO:tasks.workunit.client.0.vm07.stdout:8/566: rename d1/dc/d16/d26/f4e to d1/d5d/fb7 0 2026-03-09T20:47:47.799 INFO:tasks.workunit.client.0.vm07.stdout:8/567: write d1/dc/d16/d31/fa0 [238446,25984] 0 2026-03-09T20:47:47.808 INFO:tasks.workunit.client.1.vm10.stdout:3/523: link dc/d14/d20/d2e/d56/f82 dc/d14/d26/faa 0 2026-03-09T20:47:47.814 INFO:tasks.workunit.client.0.vm07.stdout:3/598: unlink d1/d5/d9/d2f/d3d/d64/f22 0 2026-03-09T20:47:47.822 INFO:tasks.workunit.client.1.vm10.stdout:0/536: dread d2/d4a/d58/d82/d71/d5d/f67 [0,4194304] 0 2026-03-09T20:47:47.828 INFO:tasks.workunit.client.1.vm10.stdout:7/561: dread db/d28/d2b/d36/f35 [0,4194304] 0 
2026-03-09T20:47:47.828 INFO:tasks.workunit.client.1.vm10.stdout:7/562: dread db/d46/f66 [0,4194304] 0 2026-03-09T20:47:47.831 INFO:tasks.workunit.client.0.vm07.stdout:7/656: dread d3/da/f11 [0,4194304] 0 2026-03-09T20:47:47.837 INFO:tasks.workunit.client.1.vm10.stdout:9/582: dread d2/d3/de/f42 [0,4194304] 0 2026-03-09T20:47:47.838 INFO:tasks.workunit.client.1.vm10.stdout:9/583: read d2/d28/f51 [4729274,67942] 0 2026-03-09T20:47:47.839 INFO:tasks.workunit.client.1.vm10.stdout:9/584: chown d2/d3/d6d/db7/fbb 89133737 1 2026-03-09T20:47:47.851 INFO:tasks.workunit.client.0.vm07.stdout:1/628: dwrite d3/f24 [0,4194304] 0 2026-03-09T20:47:47.865 INFO:tasks.workunit.client.1.vm10.stdout:5/507: write d2/d27/d75/f9a [1168354,65252] 0 2026-03-09T20:47:47.866 INFO:tasks.workunit.client.0.vm07.stdout:0/636: write d1/d2/ff [771635,86875] 0 2026-03-09T20:47:47.868 INFO:tasks.workunit.client.0.vm07.stdout:0/637: readlink d1/d2/d33/l85 0 2026-03-09T20:47:47.871 INFO:tasks.workunit.client.0.vm07.stdout:2/603: dwrite d2/d46/d6e/f7a [0,4194304] 0 2026-03-09T20:47:47.876 INFO:tasks.workunit.client.0.vm07.stdout:6/624: dwrite d8/d16/d22/d24/da0/dab/f6e [0,4194304] 0 2026-03-09T20:47:47.889 INFO:tasks.workunit.client.1.vm10.stdout:2/562: mknod d5/d18/d27/d89/cbe 0 2026-03-09T20:47:47.889 INFO:tasks.workunit.client.0.vm07.stdout:3/599: readlink d1/d5/d9/d11/l85 0 2026-03-09T20:47:47.889 INFO:tasks.workunit.client.0.vm07.stdout:7/657: rename d3/da/d83 to d3/da/d53/db7/dde 0 2026-03-09T20:47:47.894 INFO:tasks.workunit.client.0.vm07.stdout:1/629: mknod d3/d23/d67/d8a/cd4 0 2026-03-09T20:47:47.900 INFO:tasks.workunit.client.0.vm07.stdout:4/535: write d2/f21 [2946558,64950] 0 2026-03-09T20:47:47.911 INFO:tasks.workunit.client.0.vm07.stdout:4/536: dwrite d2/df/d17/f6a [0,4194304] 0 2026-03-09T20:47:47.929 INFO:tasks.workunit.client.1.vm10.stdout:0/537: creat d2/d9/da/d48/fb9 x:0 0 0 2026-03-09T20:47:47.948 INFO:tasks.workunit.client.1.vm10.stdout:7/563: mkdir db/d46/dab 0 
2026-03-09T20:47:47.958 INFO:tasks.workunit.client.1.vm10.stdout:5/508: creat d2/d58/d6c/fc4 x:0 0 0 2026-03-09T20:47:47.979 INFO:tasks.workunit.client.0.vm07.stdout:7/658: dread d3/da/db/d32/d3e/dac/fdc [0,4194304] 0 2026-03-09T20:47:47.981 INFO:tasks.workunit.client.0.vm07.stdout:9/578: dwrite d4/d11/d23/f2f [4194304,4194304] 0 2026-03-09T20:47:48.000 INFO:tasks.workunit.client.1.vm10.stdout:0/538: mknod d2/d9/da/d48/cba 0 2026-03-09T20:47:48.004 INFO:tasks.workunit.client.0.vm07.stdout:5/669: dwrite d5/d50/f52 [0,4194304] 0 2026-03-09T20:47:48.032 INFO:tasks.workunit.client.0.vm07.stdout:8/568: link d1/d5d/d6f/d2f/d4d/d63/d91/f96 d1/dc/d16/dad/fb8 0 2026-03-09T20:47:48.035 INFO:tasks.workunit.client.1.vm10.stdout:1/551: write d2/da/d25/d46/f61 [197469,9384] 0 2026-03-09T20:47:48.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:47 vm10.local ceph-mon[57011]: pgmap v12: 65 pgs: 65 active+clean; 2.4 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 38 MiB/s rd, 101 MiB/s wr, 233 op/s 2026-03-09T20:47:48.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:47 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:48.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:47 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:48.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:47 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:48.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:47 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:48.038 INFO:tasks.workunit.client.1.vm10.stdout:5/509: unlink d2/d27/d75/d81/fc3 0 2026-03-09T20:47:48.045 INFO:tasks.workunit.client.0.vm07.stdout:5/670: sync 2026-03-09T20:47:48.051 INFO:tasks.workunit.client.1.vm10.stdout:8/585: write d0/f19 [41586,97163] 0 2026-03-09T20:47:48.052 INFO:tasks.workunit.client.0.vm07.stdout:6/625: mkdir 
d8/d16/d22/d24/da0/dab/dc1 0 2026-03-09T20:47:48.052 INFO:tasks.workunit.client.1.vm10.stdout:8/586: readlink d0/d22/d25/d2e/d41/l9f 0 2026-03-09T20:47:48.065 INFO:tasks.workunit.client.1.vm10.stdout:0/539: unlink d2/d4a/d58/d82/d71/d8e/d25/c6e 0 2026-03-09T20:47:48.065 INFO:tasks.workunit.client.1.vm10.stdout:0/540: chown d2/d4a/d58/d82/d71/d8e/d25/d34/f77 7862432 1 2026-03-09T20:47:48.080 INFO:tasks.workunit.client.1.vm10.stdout:9/585: creat d2/d3/de/d35/fca x:0 0 0 2026-03-09T20:47:48.083 INFO:tasks.workunit.client.1.vm10.stdout:1/552: mknod d2/da/d25/d3e/d55/cab 0 2026-03-09T20:47:48.085 INFO:tasks.workunit.client.1.vm10.stdout:4/514: dwrite d1/d8/d1b/f24 [0,4194304] 0 2026-03-09T20:47:48.096 INFO:tasks.workunit.client.0.vm07.stdout:7/659: dread d3/f18 [0,4194304] 0 2026-03-09T20:47:48.097 INFO:tasks.workunit.client.0.vm07.stdout:7/660: readlink d3/da/db/d32/d3e/dac/d1f/d2b/d52/l81 0 2026-03-09T20:47:48.106 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:47 vm07.local ceph-mon[49120]: pgmap v12: 65 pgs: 65 active+clean; 2.4 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 38 MiB/s rd, 101 MiB/s wr, 233 op/s 2026-03-09T20:47:48.106 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:47 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:48.106 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:47 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:48.107 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:47 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:48.107 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:47 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:48.107 INFO:tasks.workunit.client.0.vm07.stdout:9/579: dread d4/d8/d19/d89/f9e [0,4194304] 0 2026-03-09T20:47:48.115 INFO:tasks.workunit.client.0.vm07.stdout:4/537: symlink d2/d55/d5d/l94 0 2026-03-09T20:47:48.118 
INFO:tasks.workunit.client.1.vm10.stdout:5/510: rmdir d2/d27/d37/d46/d99 39 2026-03-09T20:47:48.134 INFO:tasks.workunit.client.1.vm10.stdout:6/558: truncate f1 4413867 0 2026-03-09T20:47:48.137 INFO:tasks.workunit.client.1.vm10.stdout:3/524: link dc/d14/d20/d21/d3b/l7a dc/d14/d26/d29/d93/lab 0 2026-03-09T20:47:48.154 INFO:tasks.workunit.client.0.vm07.stdout:0/638: write d1/f3d [7377149,129873] 0 2026-03-09T20:47:48.156 INFO:tasks.workunit.client.0.vm07.stdout:6/626: dread d8/d16/db4/d85/f83 [0,4194304] 0 2026-03-09T20:47:48.157 INFO:tasks.workunit.client.0.vm07.stdout:6/627: fdatasync d8/d16/d4b/d88/fb6 0 2026-03-09T20:47:48.174 INFO:tasks.workunit.client.1.vm10.stdout:7/564: creat db/d28/fac x:0 0 0 2026-03-09T20:47:48.193 INFO:tasks.workunit.client.0.vm07.stdout:2/604: dwrite d2/d46/f7e [0,4194304] 0 2026-03-09T20:47:48.195 INFO:tasks.workunit.client.1.vm10.stdout:1/553: creat d2/da/d25/d46/d80/da0/d92/fac x:0 0 0 2026-03-09T20:47:48.233 INFO:tasks.workunit.client.1.vm10.stdout:4/515: fsync d1/d2/d5c/d64/d61/f62 0 2026-03-09T20:47:48.264 INFO:tasks.workunit.client.1.vm10.stdout:8/587: creat d0/d92/fb3 x:0 0 0 2026-03-09T20:47:48.267 INFO:tasks.workunit.client.0.vm07.stdout:3/600: dwrite d1/d5/d9/d2f/d3d/d64/f7b [0,4194304] 0 2026-03-09T20:47:48.269 INFO:tasks.workunit.client.0.vm07.stdout:3/601: dread d1/d5/d9/d11/f21 [0,4194304] 0 2026-03-09T20:47:48.273 INFO:tasks.workunit.client.1.vm10.stdout:6/559: fsync d3/da/f42 0 2026-03-09T20:47:48.275 INFO:tasks.workunit.client.0.vm07.stdout:3/602: read d1/d5/d9/f15 [3529125,106321] 0 2026-03-09T20:47:48.280 INFO:tasks.workunit.client.0.vm07.stdout:1/630: dwrite d3/f28 [0,4194304] 0 2026-03-09T20:47:48.281 INFO:tasks.workunit.client.0.vm07.stdout:1/631: readlink d3/d9c/laa 0 2026-03-09T20:47:48.289 INFO:tasks.workunit.client.1.vm10.stdout:3/525: fsync dc/d14/d26/d37/f3e 0 2026-03-09T20:47:48.299 INFO:tasks.workunit.client.1.vm10.stdout:2/563: truncate d5/fd 1744586 0 2026-03-09T20:47:48.306 
INFO:tasks.workunit.client.1.vm10.stdout:9/586: mknod d2/ccb 0 2026-03-09T20:47:48.309 INFO:tasks.workunit.client.1.vm10.stdout:4/516: rename d1/d8/d66 to d1/d2/d5c/d64/d6b/d81/da9 0 2026-03-09T20:47:48.309 INFO:tasks.workunit.client.1.vm10.stdout:4/517: chown d1/l6f 1867690958 1 2026-03-09T20:47:48.312 INFO:tasks.workunit.client.1.vm10.stdout:3/526: sync 2026-03-09T20:47:48.317 INFO:tasks.workunit.client.0.vm07.stdout:5/671: write d5/df/f34 [2569206,89048] 0 2026-03-09T20:47:48.317 INFO:tasks.workunit.client.1.vm10.stdout:8/588: readlink d0/d22/d2f/l30 0 2026-03-09T20:47:48.317 INFO:tasks.workunit.client.1.vm10.stdout:8/589: fdatasync d0/d22/d25/d2e/d41/d85/fa7 0 2026-03-09T20:47:48.320 INFO:tasks.workunit.client.1.vm10.stdout:6/560: write f1 [5000921,47586] 0 2026-03-09T20:47:48.321 INFO:tasks.workunit.client.0.vm07.stdout:7/661: write d3/da/db/f1e [6499472,70962] 0 2026-03-09T20:47:48.324 INFO:tasks.workunit.client.1.vm10.stdout:3/527: dread dc/f5a [0,4194304] 0 2026-03-09T20:47:48.326 INFO:tasks.workunit.client.1.vm10.stdout:3/528: chown dc/d14/d26/d29/d40/f6c 0 1 2026-03-09T20:47:48.327 INFO:tasks.workunit.client.1.vm10.stdout:2/564: stat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/l4a 0 2026-03-09T20:47:48.329 INFO:tasks.workunit.client.1.vm10.stdout:5/511: dwrite d2/d27/d75/f88 [0,4194304] 0 2026-03-09T20:47:48.330 INFO:tasks.workunit.client.1.vm10.stdout:5/512: chown d2/d27/d37/d46/fb7 16411154 1 2026-03-09T20:47:48.345 INFO:tasks.workunit.client.1.vm10.stdout:1/554: creat d2/da/d25/d46/d51/d7e/d9e/da2/fad x:0 0 0 2026-03-09T20:47:48.346 INFO:tasks.workunit.client.0.vm07.stdout:8/569: dread - d1/dc/d16/dad/fa1 zero size 2026-03-09T20:47:48.353 INFO:tasks.workunit.client.1.vm10.stdout:6/561: fdatasync d3/da/d11/d26/f8f 0 2026-03-09T20:47:48.362 INFO:tasks.workunit.client.0.vm07.stdout:0/639: dread d1/f90 [0,4194304] 0 2026-03-09T20:47:48.366 INFO:tasks.workunit.client.1.vm10.stdout:2/565: mknod d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/cbf 0 
2026-03-09T20:47:48.366 INFO:tasks.workunit.client.1.vm10.stdout:2/566: chown d5/d18/d27/f74 1302632 1 2026-03-09T20:47:48.369 INFO:tasks.workunit.client.0.vm07.stdout:9/580: write d4/d8/d19/f42 [585556,55844] 0 2026-03-09T20:47:48.371 INFO:tasks.workunit.client.0.vm07.stdout:9/581: write d4/d11/d2a/f39 [1033839,93930] 0 2026-03-09T20:47:48.371 INFO:tasks.workunit.client.1.vm10.stdout:9/587: dwrite d2/d3/de/d35/f78 [0,4194304] 0 2026-03-09T20:47:48.401 INFO:tasks.workunit.client.0.vm07.stdout:5/672: mkdir d5/d33/db2/de8 0 2026-03-09T20:47:48.402 INFO:tasks.workunit.client.0.vm07.stdout:7/662: fdatasync d3/f61 0 2026-03-09T20:47:48.402 INFO:tasks.workunit.client.0.vm07.stdout:1/632: write d3/d23/d52/f79 [1023524,93090] 0 2026-03-09T20:47:48.417 INFO:tasks.workunit.client.1.vm10.stdout:0/541: link d2/d9/da/d48/l4d d2/d4a/lbb 0 2026-03-09T20:47:48.417 INFO:tasks.workunit.client.1.vm10.stdout:7/565: link db/d46/f66 db/d46/d89/fad 0 2026-03-09T20:47:48.419 INFO:tasks.workunit.client.1.vm10.stdout:8/590: symlink d0/d22/d25/d6c/d9b/lb4 0 2026-03-09T20:47:48.419 INFO:tasks.workunit.client.0.vm07.stdout:2/605: write d2/db/d28/d90/da4/fa5 [507546,94689] 0 2026-03-09T20:47:48.419 INFO:tasks.workunit.client.1.vm10.stdout:6/562: rename d3/da/d11/c13 to d3/d30/d7f/d36/cb3 0 2026-03-09T20:47:48.423 INFO:tasks.workunit.client.1.vm10.stdout:6/563: stat d3/d30/d7f/d36/d5c 0 2026-03-09T20:47:48.426 INFO:tasks.workunit.client.1.vm10.stdout:3/529: dwrite dc/d14/d26/f6f [8388608,4194304] 0 2026-03-09T20:47:48.427 INFO:tasks.workunit.client.1.vm10.stdout:3/530: readlink l8 0 2026-03-09T20:47:48.427 INFO:tasks.workunit.client.1.vm10.stdout:4/518: dwrite d1/d8/d39/f56 [0,4194304] 0 2026-03-09T20:47:48.429 INFO:tasks.workunit.client.1.vm10.stdout:2/567: rmdir d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47 39 2026-03-09T20:47:48.431 INFO:tasks.workunit.client.1.vm10.stdout:9/588: readlink d2/l27 0 2026-03-09T20:47:48.434 INFO:tasks.workunit.client.1.vm10.stdout:8/591: dwrite 
d0/d22/d25/d2e/d41/d85/fa7 [0,4194304] 0 2026-03-09T20:47:48.438 INFO:tasks.workunit.client.1.vm10.stdout:8/592: fsync d0/d22/d25/d2e/f79 0 2026-03-09T20:47:48.455 INFO:tasks.workunit.client.0.vm07.stdout:3/603: truncate d1/f65 536713 0 2026-03-09T20:47:48.456 INFO:tasks.workunit.client.0.vm07.stdout:3/604: dread - d1/d5/d9/d11/f84 zero size 2026-03-09T20:47:48.459 INFO:tasks.workunit.client.1.vm10.stdout:1/555: mknod d2/da/d25/d3e/cae 0 2026-03-09T20:47:48.464 INFO:tasks.workunit.client.1.vm10.stdout:1/556: dwrite d2/f59 [0,4194304] 0 2026-03-09T20:47:48.465 INFO:tasks.workunit.client.0.vm07.stdout:5/673: read - d5/d33/d39/fc3 zero size 2026-03-09T20:47:48.486 INFO:tasks.workunit.client.1.vm10.stdout:0/542: rename d2/d4a/d58/d82/d71/f13 to d2/d4a/d58/d82/d93/fbc 0 2026-03-09T20:47:48.487 INFO:tasks.workunit.client.1.vm10.stdout:0/543: chown d2/d4a/lbb 14156 1 2026-03-09T20:47:48.489 INFO:tasks.workunit.client.0.vm07.stdout:4/538: link d2/d55/d5d/d3f/d4a/d7d/c8f d2/df/d17/d83/c95 0 2026-03-09T20:47:48.496 INFO:tasks.workunit.client.0.vm07.stdout:4/539: dwrite d2/df/d17/f2a [0,4194304] 0 2026-03-09T20:47:48.511 INFO:tasks.workunit.client.1.vm10.stdout:4/519: read - d1/d2/d5c/d64/d61/f85 zero size 2026-03-09T20:47:48.516 INFO:tasks.workunit.client.1.vm10.stdout:8/593: mkdir d0/d22/d2f/d38/d64/db5 0 2026-03-09T20:47:48.518 INFO:tasks.workunit.client.0.vm07.stdout:5/674: creat d5/df/d13/d4f/fe9 x:0 0 0 2026-03-09T20:47:48.518 INFO:tasks.workunit.client.0.vm07.stdout:5/675: readlink d5/d19/d73/ldc 0 2026-03-09T20:47:48.520 INFO:tasks.workunit.client.0.vm07.stdout:6/628: truncate d8/d16/d22/f98 1349639 0 2026-03-09T20:47:48.525 INFO:tasks.workunit.client.1.vm10.stdout:1/557: creat d2/da/d25/d3e/d55/faf x:0 0 0 2026-03-09T20:47:48.529 INFO:tasks.workunit.client.1.vm10.stdout:0/544: rename d2/d9/da/d35/f84 to d2/d9/da/d48/dac/fbd 0 2026-03-09T20:47:48.529 INFO:tasks.workunit.client.1.vm10.stdout:6/564: symlink d3/lb4 0 2026-03-09T20:47:48.529 
INFO:tasks.workunit.client.1.vm10.stdout:6/565: chown d3/d30/c38 0 1 2026-03-09T20:47:48.529 INFO:tasks.workunit.client.1.vm10.stdout:3/531: symlink dc/d14/d26/d29/lac 0 2026-03-09T20:47:48.529 INFO:tasks.workunit.client.0.vm07.stdout:0/640: dwrite d1/f3b [0,4194304] 0 2026-03-09T20:47:48.531 INFO:tasks.workunit.client.0.vm07.stdout:0/641: dread d1/d2/dc/d80/f87 [0,4194304] 0 2026-03-09T20:47:48.532 INFO:tasks.workunit.client.1.vm10.stdout:5/513: getdents d2/d39/dbf/d84/d87 0 2026-03-09T20:47:48.534 INFO:tasks.workunit.client.0.vm07.stdout:6/629: dread - d8/d16/d4b/f9c zero size 2026-03-09T20:47:48.534 INFO:tasks.workunit.client.1.vm10.stdout:4/520: truncate d1/d8/d1b/f5f 876011 0 2026-03-09T20:47:48.537 INFO:tasks.workunit.client.1.vm10.stdout:2/568: truncate d5/d18/d27/d89/db6/d41/d77/db3/db5/f36 2065500 0 2026-03-09T20:47:48.537 INFO:tasks.workunit.client.1.vm10.stdout:1/558: sync 2026-03-09T20:47:48.542 INFO:tasks.workunit.client.0.vm07.stdout:9/582: write d4/d8/dc/d15/fa3 [288494,27940] 0 2026-03-09T20:47:48.543 INFO:tasks.workunit.client.0.vm07.stdout:8/570: getdents d1/dc 0 2026-03-09T20:47:48.543 INFO:tasks.workunit.client.1.vm10.stdout:8/594: read d0/d22/d25/d6c/f82 [2316676,36386] 0 2026-03-09T20:47:48.545 INFO:tasks.workunit.client.1.vm10.stdout:2/569: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/fb2 [0,4194304] 0 2026-03-09T20:47:48.547 INFO:tasks.workunit.client.1.vm10.stdout:1/559: dwrite d2/f8 [0,4194304] 0 2026-03-09T20:47:48.557 INFO:tasks.workunit.client.0.vm07.stdout:7/663: dwrite d3/f88 [4194304,4194304] 0 2026-03-09T20:47:48.562 INFO:tasks.workunit.client.0.vm07.stdout:2/606: creat d2/fbd x:0 0 0 2026-03-09T20:47:48.571 INFO:tasks.workunit.client.0.vm07.stdout:3/605: creat d1/d5/fc5 x:0 0 0 2026-03-09T20:47:48.572 INFO:tasks.workunit.client.0.vm07.stdout:6/630: sync 2026-03-09T20:47:48.584 INFO:tasks.workunit.client.1.vm10.stdout:3/532: unlink dc/d14/d26/d8f/f99 0 2026-03-09T20:47:48.591 INFO:tasks.workunit.client.1.vm10.stdout:9/589: creat 
d2/d28/da2/fcc x:0 0 0 2026-03-09T20:47:48.596 INFO:tasks.workunit.client.0.vm07.stdout:1/633: getdents d3/d97/da1/dc5/d60 0 2026-03-09T20:47:48.597 INFO:tasks.workunit.client.0.vm07.stdout:1/634: truncate d3/d23/d67/fc4 593633 0 2026-03-09T20:47:48.618 INFO:tasks.workunit.client.0.vm07.stdout:8/571: symlink d1/d5d/d6f/lb9 0 2026-03-09T20:47:48.625 INFO:tasks.workunit.client.1.vm10.stdout:7/566: getdents db/d28/d2b/d36 0 2026-03-09T20:47:48.626 INFO:tasks.workunit.client.0.vm07.stdout:7/664: fsync d3/da/db/d79/faf 0 2026-03-09T20:47:48.627 INFO:tasks.workunit.client.0.vm07.stdout:7/665: read d3/da/db/d32/d3e/dac/d43/f68 [3916184,77380] 0 2026-03-09T20:47:48.630 INFO:tasks.workunit.client.0.vm07.stdout:2/607: write d2/db/d28/d57/f65 [3614474,98692] 0 2026-03-09T20:47:48.644 INFO:tasks.workunit.client.0.vm07.stdout:9/583: write d4/d8/dc/d4e/f53 [1913830,43195] 0 2026-03-09T20:47:48.648 INFO:tasks.workunit.client.0.vm07.stdout:4/540: dread d2/f2b [0,4194304] 0 2026-03-09T20:47:48.649 INFO:tasks.workunit.client.1.vm10.stdout:8/595: chown d0/d22/d25/d40/lac 7247 1 2026-03-09T20:47:48.649 INFO:tasks.workunit.client.1.vm10.stdout:8/596: truncate d0/f11 4929240 0 2026-03-09T20:47:48.651 INFO:tasks.workunit.client.0.vm07.stdout:3/606: dwrite d1/f36 [4194304,4194304] 0 2026-03-09T20:47:48.668 INFO:tasks.workunit.client.0.vm07.stdout:5/676: creat d5/d19/d73/fea x:0 0 0 2026-03-09T20:47:48.671 INFO:tasks.workunit.client.0.vm07.stdout:0/642: symlink d1/d2/dc/lcb 0 2026-03-09T20:47:48.688 INFO:tasks.workunit.client.1.vm10.stdout:5/514: mknod d2/d39/dbf/cc5 0 2026-03-09T20:47:48.689 INFO:tasks.workunit.client.1.vm10.stdout:5/515: fdatasync d2/d27/d37/d46/fba 0 2026-03-09T20:47:48.690 INFO:tasks.workunit.client.0.vm07.stdout:7/666: dread - d3/da/db/d32/d3e/dac/d1f/faa zero size 2026-03-09T20:47:48.693 INFO:tasks.workunit.client.0.vm07.stdout:8/572: dwrite d1/d5d/d6f/d2f/f34 [0,4194304] 0 2026-03-09T20:47:48.700 INFO:tasks.workunit.client.0.vm07.stdout:7/667: sync 
2026-03-09T20:47:48.702 INFO:tasks.workunit.client.1.vm10.stdout:7/567: creat db/d28/d2b/d36/d3f/fae x:0 0 0 2026-03-09T20:47:48.703 INFO:tasks.workunit.client.1.vm10.stdout:7/568: write db/f19 [3477596,28950] 0 2026-03-09T20:47:48.712 INFO:tasks.workunit.client.1.vm10.stdout:2/570: symlink d5/d18/d9f/lc0 0 2026-03-09T20:47:48.717 INFO:tasks.workunit.client.0.vm07.stdout:7/668: sync 2026-03-09T20:47:48.718 INFO:tasks.workunit.client.0.vm07.stdout:7/669: stat d3/da/db/d32/d3e/dac/d43/d62/ca2 0 2026-03-09T20:47:48.719 INFO:tasks.workunit.client.1.vm10.stdout:1/560: symlink d2/d89/lb0 0 2026-03-09T20:47:48.720 INFO:tasks.workunit.client.1.vm10.stdout:1/561: dread - d2/da/d25/d46/d51/d7e/d9e/da2/fad zero size 2026-03-09T20:47:48.721 INFO:tasks.workunit.client.1.vm10.stdout:1/562: readlink d2/da/d25/d3e/d55/l66 0 2026-03-09T20:47:48.724 INFO:tasks.workunit.client.1.vm10.stdout:0/545: rename d2/d4a/d58/d82/d93/fa1 to d2/d4a/fbe 0 2026-03-09T20:47:48.724 INFO:tasks.workunit.client.0.vm07.stdout:6/631: mkdir d8/d16/d22/db1/dc2 0 2026-03-09T20:47:48.738 INFO:tasks.workunit.client.0.vm07.stdout:5/677: rename d5/df/d13/d4f/fb7 to d5/d33/d3b/feb 0 2026-03-09T20:47:48.739 INFO:tasks.workunit.client.1.vm10.stdout:3/533: truncate dc/f88 3382938 0 2026-03-09T20:47:48.739 INFO:tasks.workunit.client.1.vm10.stdout:6/566: link d3/da/d11/d31/d47/c72 d3/da/d11/d26/d5b/cb5 0 2026-03-09T20:47:48.740 INFO:tasks.workunit.client.1.vm10.stdout:3/534: write dc/d14/d26/d29/d2a/d76/f97 [709635,72507] 0 2026-03-09T20:47:48.740 INFO:tasks.workunit.client.1.vm10.stdout:6/567: readlink d3/da/d11/d31/d4c/l98 0 2026-03-09T20:47:48.740 INFO:tasks.workunit.client.1.vm10.stdout:6/568: chown d3/da/d11/d31 6 1 2026-03-09T20:47:48.741 INFO:tasks.workunit.client.0.vm07.stdout:1/635: truncate d3/d14/f17 5085256 0 2026-03-09T20:47:48.760 INFO:tasks.workunit.client.0.vm07.stdout:3/607: dread d1/d5/d9/d2f/d34/f68 [0,4194304] 0 2026-03-09T20:47:48.778 INFO:tasks.workunit.client.0.vm07.stdout:9/584: fsync 
d4/d8/d19/fc2 0 2026-03-09T20:47:48.784 INFO:tasks.workunit.client.0.vm07.stdout:4/541: unlink d2/f43 0 2026-03-09T20:47:48.789 INFO:tasks.workunit.client.0.vm07.stdout:8/573: dwrite d1/d5d/d6f/d2f/d4d/d55/f78 [0,4194304] 0 2026-03-09T20:47:48.798 INFO:tasks.workunit.client.0.vm07.stdout:4/542: dread d2/d1f/f25 [0,4194304] 0 2026-03-09T20:47:48.798 INFO:tasks.workunit.client.0.vm07.stdout:4/543: readlink d2/df/d59/l72 0 2026-03-09T20:47:48.801 INFO:tasks.workunit.client.0.vm07.stdout:4/544: read d2/d1f/f53 [2476337,99387] 0 2026-03-09T20:47:48.835 INFO:tasks.workunit.client.0.vm07.stdout:7/670: symlink d3/da/d53/db7/dde/dc5/ldf 0 2026-03-09T20:47:48.836 INFO:tasks.workunit.client.1.vm10.stdout:7/569: symlink db/d1f/laf 0 2026-03-09T20:47:48.836 INFO:tasks.workunit.client.1.vm10.stdout:7/570: fsync db/d21/f9c 0 2026-03-09T20:47:48.836 INFO:tasks.workunit.client.0.vm07.stdout:2/608: rename d2/db/d28/d87 to d2/d46/d6e/dbe 0 2026-03-09T20:47:48.836 INFO:tasks.workunit.client.0.vm07.stdout:5/678: mkdir d5/d33/d39/d8d/dec 0 2026-03-09T20:47:48.836 INFO:tasks.workunit.client.0.vm07.stdout:0/643: mkdir d1/dc0/dcc 0 2026-03-09T20:47:48.836 INFO:tasks.workunit.client.0.vm07.stdout:1/636: mknod d3/d23/d52/cd5 0 2026-03-09T20:47:48.837 INFO:tasks.workunit.client.0.vm07.stdout:3/608: fdatasync d1/d5/d9/d2f/d34/f8f 0 2026-03-09T20:47:48.842 INFO:tasks.workunit.client.0.vm07.stdout:2/609: dread d2/db/d1c/d8d/fb9 [0,4194304] 0 2026-03-09T20:47:48.842 INFO:tasks.workunit.client.1.vm10.stdout:2/571: truncate d5/d18/d27/d89/db6/d41/f4b 2730889 0 2026-03-09T20:47:48.854 INFO:tasks.workunit.client.0.vm07.stdout:8/574: mkdir d1/dc/dba 0 2026-03-09T20:47:48.862 INFO:tasks.workunit.client.1.vm10.stdout:8/597: write d0/d22/d25/d6c/fb1 [938416,80663] 0 2026-03-09T20:47:48.865 INFO:tasks.workunit.client.0.vm07.stdout:4/545: truncate d2/df/d59/f81 659713 0 2026-03-09T20:47:48.865 INFO:tasks.workunit.client.0.vm07.stdout:9/585: dwrite d4/fa [0,4194304] 0 2026-03-09T20:47:48.867 
INFO:tasks.workunit.client.1.vm10.stdout:5/516: rename d2/d58/d6c/f98 to d2/d39/dbf/d69/d96/fc6 0 2026-03-09T20:47:48.872 INFO:tasks.workunit.client.1.vm10.stdout:3/535: symlink dc/d14/d27/lad 0 2026-03-09T20:47:48.882 INFO:tasks.workunit.client.1.vm10.stdout:6/569: dread d3/f1f [0,4194304] 0 2026-03-09T20:47:48.887 INFO:tasks.workunit.client.0.vm07.stdout:1/637: rename d3/d14/d94/ca4 to d3/d23/cd6 0 2026-03-09T20:47:48.890 INFO:tasks.workunit.client.0.vm07.stdout:6/632: write d8/d16/d22/d24/da0/dab/f7a [253046,19966] 0 2026-03-09T20:47:48.896 INFO:tasks.workunit.client.1.vm10.stdout:4/521: getdents d1/d2/d5c/d64/d6b/d79 0 2026-03-09T20:47:48.897 INFO:tasks.workunit.client.0.vm07.stdout:7/671: write d3/da/db/d32/d3e/dac/f2a [1997625,76634] 0 2026-03-09T20:47:48.897 INFO:tasks.workunit.client.0.vm07.stdout:7/672: dread - d3/da/db/d32/d3e/fd9 zero size 2026-03-09T20:47:48.902 INFO:tasks.workunit.client.1.vm10.stdout:7/571: mknod db/d21/d23/cb0 0 2026-03-09T20:47:48.917 INFO:tasks.workunit.client.0.vm07.stdout:4/546: symlink d2/df/d17/l96 0 2026-03-09T20:47:48.920 INFO:tasks.workunit.client.1.vm10.stdout:8/598: write d0/d22/f76 [1271069,91401] 0 2026-03-09T20:47:48.922 INFO:tasks.workunit.client.0.vm07.stdout:2/610: dwrite d2/db/d1c/f45 [0,4194304] 0 2026-03-09T20:47:48.922 INFO:tasks.workunit.client.1.vm10.stdout:5/517: chown d2/c2b 184711 1 2026-03-09T20:47:48.924 INFO:tasks.workunit.client.0.vm07.stdout:9/586: symlink d4/d8/d19/d89/da7/ld3 0 2026-03-09T20:47:48.924 INFO:tasks.workunit.client.1.vm10.stdout:5/518: chown d2/d27/d37/d46/d5d/d77/f93 1753 1 2026-03-09T20:47:48.924 INFO:tasks.workunit.client.0.vm07.stdout:9/587: fsync d4/d16/d29/f6e 0 2026-03-09T20:47:48.934 INFO:tasks.workunit.client.0.vm07.stdout:8/575: dwrite d1/f85 [0,4194304] 0 2026-03-09T20:47:48.934 INFO:tasks.workunit.client.0.vm07.stdout:8/576: stat d1/d3b/f3e 0 2026-03-09T20:47:48.949 INFO:tasks.workunit.client.0.vm07.stdout:1/638: truncate d3/d23/d67/f92 5126016 0 2026-03-09T20:47:48.952 
INFO:tasks.workunit.client.0.vm07.stdout:9/588: sync 2026-03-09T20:47:48.957 INFO:tasks.workunit.client.1.vm10.stdout:3/536: write dc/d14/d26/d29/d2a/f57 [511113,36141] 0 2026-03-09T20:47:48.958 INFO:tasks.workunit.client.1.vm10.stdout:6/570: mknod d3/d30/d33/cb6 0 2026-03-09T20:47:48.962 INFO:tasks.workunit.client.1.vm10.stdout:9/590: getdents d2/d28 0 2026-03-09T20:47:48.968 INFO:tasks.workunit.client.1.vm10.stdout:4/522: chown d1/d2/d5c/l74 394153 1 2026-03-09T20:47:48.969 INFO:tasks.workunit.client.1.vm10.stdout:7/572: creat db/d21/d60/d87/fb1 x:0 0 0 2026-03-09T20:47:48.971 INFO:tasks.workunit.client.0.vm07.stdout:7/673: chown d3/da/db/d32/d3e/dac/d1f/d2b/d52/f74 6052 1 2026-03-09T20:47:48.972 INFO:tasks.workunit.client.0.vm07.stdout:6/633: write d8/d16/d22/d24/da0/dab/d40/d69/f78 [3635092,31242] 0 2026-03-09T20:47:48.973 INFO:tasks.workunit.client.1.vm10.stdout:2/572: readlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/l7a 0 2026-03-09T20:47:48.977 INFO:tasks.workunit.client.0.vm07.stdout:6/634: dwrite d8/d16/d4b/fbc [0,4194304] 0 2026-03-09T20:47:48.979 INFO:tasks.workunit.client.0.vm07.stdout:6/635: stat d8/d16/d4b/d88/f70 0 2026-03-09T20:47:48.983 INFO:tasks.workunit.client.0.vm07.stdout:6/636: write d8/d16/d22/d24/da0/dab/d40/d69/f78 [5200924,80087] 0 2026-03-09T20:47:48.983 INFO:tasks.workunit.client.1.vm10.stdout:2/573: dread d5/d18/d27/d38/d61/fa0 [0,4194304] 0 2026-03-09T20:47:48.999 INFO:tasks.workunit.client.1.vm10.stdout:1/563: rename d2/da/faa to d2/da/fb1 0 2026-03-09T20:47:49.000 INFO:tasks.workunit.client.1.vm10.stdout:1/564: chown d2/da/d25/d46/d51/c60 111405 1 2026-03-09T20:47:49.029 INFO:tasks.workunit.client.1.vm10.stdout:0/546: mknod d2/d4a/d79/cbf 0 2026-03-09T20:47:49.047 INFO:tasks.workunit.client.1.vm10.stdout:3/537: symlink dc/d14/d20/d2e/d56/lae 0 2026-03-09T20:47:49.048 INFO:tasks.workunit.client.0.vm07.stdout:0/644: creat d1/d1f/fcd x:0 0 0 2026-03-09T20:47:49.048 INFO:tasks.workunit.client.0.vm07.stdout:1/639: fsync 
d3/d97/da1/dc5/d60/fb3 0 2026-03-09T20:47:49.048 INFO:tasks.workunit.client.0.vm07.stdout:9/589: mkdir d4/d16/d29/d24/d37/d44/d62/d8e/dd4 0 2026-03-09T20:47:49.048 INFO:tasks.workunit.client.0.vm07.stdout:7/674: mkdir d3/da/db/d32/d3e/dac/d43/d62/de0 0 2026-03-09T20:47:49.048 INFO:tasks.workunit.client.1.vm10.stdout:9/591: chown d2/d3/de/c5c 3186725 1 2026-03-09T20:47:49.048 INFO:tasks.workunit.client.1.vm10.stdout:9/592: truncate d2/d12/d5a/da7/faf 223264 0 2026-03-09T20:47:49.048 INFO:tasks.workunit.client.1.vm10.stdout:7/573: unlink db/d1f/laf 0 2026-03-09T20:47:49.048 INFO:tasks.workunit.client.1.vm10.stdout:2/574: rename d5/d18/d27/d89/db6/d41/d77/l79 to d5/d18/d27/d89/db6/d41/d77/db3/db5/lc1 0 2026-03-09T20:47:49.048 INFO:tasks.workunit.client.1.vm10.stdout:1/565: mkdir d2/da/d25/d46/d80/db2 0 2026-03-09T20:47:49.048 INFO:tasks.workunit.client.1.vm10.stdout:5/519: creat d2/d39/dbf/d66/fc7 x:0 0 0 2026-03-09T20:47:49.050 INFO:tasks.workunit.client.0.vm07.stdout:6/637: mkdir d8/d16/d4b/d88/dc3 0 2026-03-09T20:47:49.050 INFO:tasks.workunit.client.0.vm07.stdout:5/679: link d5/d19/d73/d9c/la2 d5/d33/d39/d8d/dd7/led 0 2026-03-09T20:47:49.052 INFO:tasks.workunit.client.0.vm07.stdout:6/638: dread - d8/d16/d61/f68 zero size 2026-03-09T20:47:49.056 INFO:tasks.workunit.client.1.vm10.stdout:3/538: mkdir dc/d14/d20/d21/daf 0 2026-03-09T20:47:49.058 INFO:tasks.workunit.client.0.vm07.stdout:3/609: link d1/d5/d9/d11/d1f/l98 d1/d5/d9/d2f/d3d/d71/lc6 0 2026-03-09T20:47:49.060 INFO:tasks.workunit.client.1.vm10.stdout:0/547: mknod d2/d9/da/d48/dac/cc0 0 2026-03-09T20:47:49.065 INFO:tasks.workunit.client.1.vm10.stdout:8/599: dwrite d0/f13 [4194304,4194304] 0 2026-03-09T20:47:49.070 INFO:tasks.workunit.client.1.vm10.stdout:2/575: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f96 [0,4194304] 0 2026-03-09T20:47:49.073 INFO:tasks.workunit.client.0.vm07.stdout:6/639: sync 2026-03-09T20:47:49.073 INFO:tasks.workunit.client.0.vm07.stdout:6/640: stat d8/d50/l8a 0 2026-03-09T20:47:49.092 
INFO:tasks.workunit.client.0.vm07.stdout:8/577: write d1/f33 [2015728,12629] 0 2026-03-09T20:47:49.094 INFO:tasks.workunit.client.1.vm10.stdout:4/523: dwrite d1/d2/f60 [0,4194304] 0 2026-03-09T20:47:49.109 INFO:tasks.workunit.client.0.vm07.stdout:2/611: truncate d2/db/d1c/f45 1692123 0 2026-03-09T20:47:49.113 INFO:tasks.workunit.client.1.vm10.stdout:9/593: write d2/d12/f62 [4257705,79950] 0 2026-03-09T20:47:49.114 INFO:tasks.workunit.client.1.vm10.stdout:9/594: chown d2/d33/d37/c5f 7275 1 2026-03-09T20:47:49.123 INFO:tasks.workunit.client.1.vm10.stdout:1/566: truncate d2/da/d25/d46/d51/d5d/d6e/f76 812212 0 2026-03-09T20:47:49.126 INFO:tasks.workunit.client.0.vm07.stdout:0/645: write d1/d2/d33/d35/f5c [325372,21605] 0 2026-03-09T20:47:49.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:48 vm07.local ceph-mon[49120]: pgmap v13: 65 pgs: 65 active+clean; 2.4 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 35 MiB/s rd, 93 MiB/s wr, 222 op/s 2026-03-09T20:47:49.135 INFO:tasks.workunit.client.1.vm10.stdout:6/571: link d3/d30/d7f/d36/f64 d3/d79/fb7 0 2026-03-09T20:47:49.138 INFO:tasks.workunit.client.0.vm07.stdout:9/590: symlink d4/d8/d19/d5f/da5/ld5 0 2026-03-09T20:47:49.142 INFO:tasks.workunit.client.1.vm10.stdout:0/548: mkdir d2/d9/da/d11/d92/dc1 0 2026-03-09T20:47:49.171 INFO:tasks.workunit.client.1.vm10.stdout:4/524: mkdir d1/d2/d3/d54/daa 0 2026-03-09T20:47:49.176 INFO:tasks.workunit.client.1.vm10.stdout:7/574: rename db/d21/d60/d87/fb1 to db/d21/fb2 0 2026-03-09T20:47:49.176 INFO:tasks.workunit.client.1.vm10.stdout:7/575: chown db/d46/d89/fad 842695811 1 2026-03-09T20:47:49.186 INFO:tasks.workunit.client.1.vm10.stdout:1/567: mkdir d2/da/d25/d46/d51/d5d/d6e/d70/db3 0 2026-03-09T20:47:49.194 INFO:tasks.workunit.client.0.vm07.stdout:6/641: rename d8/d50/dbd to d8/d5d/d97/dc4 0 2026-03-09T20:47:49.194 INFO:tasks.workunit.client.0.vm07.stdout:4/547: creat d2/df/f97 x:0 0 0 2026-03-09T20:47:49.204 INFO:tasks.workunit.client.0.vm07.stdout:4/548: sync 
2026-03-09T20:47:49.212 INFO:tasks.workunit.client.1.vm10.stdout:6/572: truncate d3/d30/d7f/d36/f56 4530473 0 2026-03-09T20:47:49.213 INFO:tasks.workunit.client.0.vm07.stdout:5/680: write d5/df/f2b [4717605,106637] 0 2026-03-09T20:47:49.223 INFO:tasks.workunit.client.0.vm07.stdout:4/549: dread d2/df/d17/f73 [0,4194304] 0 2026-03-09T20:47:49.224 INFO:tasks.workunit.client.0.vm07.stdout:4/550: write d2/d55/d5d/d3f/d4a/d85/f8c [1689635,28782] 0 2026-03-09T20:47:49.225 INFO:tasks.workunit.client.0.vm07.stdout:4/551: chown d2/f21 31 1 2026-03-09T20:47:49.227 INFO:tasks.workunit.client.0.vm07.stdout:4/552: sync 2026-03-09T20:47:49.239 INFO:tasks.workunit.client.1.vm10.stdout:3/539: symlink dc/d14/d26/d29/d40/da8/lb0 0 2026-03-09T20:47:49.266 INFO:tasks.workunit.client.1.vm10.stdout:4/525: rmdir d1/d2/d5c 39 2026-03-09T20:47:49.281 INFO:tasks.workunit.client.1.vm10.stdout:5/520: rename d2/d39/dbf/d84/d87 to d2/d27/d37/dc8 0 2026-03-09T20:47:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:48 vm10.local ceph-mon[57011]: pgmap v13: 65 pgs: 65 active+clean; 2.4 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 35 MiB/s rd, 93 MiB/s wr, 222 op/s 2026-03-09T20:47:49.289 INFO:tasks.workunit.client.1.vm10.stdout:7/576: fdatasync db/d28/d2b/d36/d3b/f3d 0 2026-03-09T20:47:49.317 INFO:tasks.workunit.client.1.vm10.stdout:6/573: dread d3/da/f1b [4194304,4194304] 0 2026-03-09T20:47:49.317 INFO:tasks.workunit.client.1.vm10.stdout:6/574: stat d3/da/d11/d26/d5b/c62 0 2026-03-09T20:47:49.318 INFO:tasks.workunit.client.1.vm10.stdout:3/540: symlink dc/d14/d26/d29/d40/da8/d69/d75/lb1 0 2026-03-09T20:47:49.318 INFO:tasks.workunit.client.1.vm10.stdout:8/600: creat d0/d22/fb6 x:0 0 0 2026-03-09T20:47:49.319 INFO:tasks.workunit.client.1.vm10.stdout:8/601: fsync d0/d22/d25/d8f/fa2 0 2026-03-09T20:47:49.320 INFO:tasks.workunit.client.1.vm10.stdout:8/602: truncate d0/f17 4720227 0 2026-03-09T20:47:49.324 INFO:tasks.workunit.client.1.vm10.stdout:5/521: symlink d2/d39/dbf/d69/d96/lc9 0 
2026-03-09T20:47:49.325 INFO:tasks.workunit.client.0.vm07.stdout:2/612: unlink d2/db/d49/fad 0 2026-03-09T20:47:49.327 INFO:tasks.workunit.client.1.vm10.stdout:7/577: symlink db/d28/d2b/d36/d3b/d88/lb3 0 2026-03-09T20:47:49.337 INFO:tasks.workunit.client.1.vm10.stdout:2/576: getdents d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47 0 2026-03-09T20:47:49.337 INFO:tasks.workunit.client.1.vm10.stdout:4/526: mknod d1/d2/d3/cab 0 2026-03-09T20:47:49.340 INFO:tasks.workunit.client.1.vm10.stdout:3/541: creat dc/d14/d20/d2e/fb2 x:0 0 0 2026-03-09T20:47:49.341 INFO:tasks.workunit.client.0.vm07.stdout:9/591: truncate d4/d11/d2a/f65 132835 0 2026-03-09T20:47:49.352 INFO:tasks.workunit.client.1.vm10.stdout:9/595: rename d2/c3e to d2/d3/de/d8f/ccd 0 2026-03-09T20:47:49.353 INFO:tasks.workunit.client.1.vm10.stdout:9/596: chown d2/d28/d47/d50/l57 1 1 2026-03-09T20:47:49.355 INFO:tasks.workunit.client.0.vm07.stdout:1/640: write d3/fc [479761,59920] 0 2026-03-09T20:47:49.355 INFO:tasks.workunit.client.1.vm10.stdout:6/575: write d3/d30/d7f/d36/f6e [91943,94621] 0 2026-03-09T20:47:49.357 INFO:tasks.workunit.client.0.vm07.stdout:0/646: dwrite d1/d2/dc/d80/fbe [0,4194304] 0 2026-03-09T20:47:49.369 INFO:tasks.workunit.client.1.vm10.stdout:5/522: symlink d2/d39/d89/lca 0 2026-03-09T20:47:49.372 INFO:tasks.workunit.client.0.vm07.stdout:6/642: dread d8/d16/f17 [0,4194304] 0 2026-03-09T20:47:49.373 INFO:tasks.workunit.client.0.vm07.stdout:6/643: readlink d8/d16/db4/d85/l4c 0 2026-03-09T20:47:49.374 INFO:tasks.workunit.client.1.vm10.stdout:7/578: symlink db/d28/d30/lb4 0 2026-03-09T20:47:49.375 INFO:tasks.workunit.client.0.vm07.stdout:5/681: fsync d5/df/d13/f38 0 2026-03-09T20:47:49.377 INFO:tasks.workunit.client.1.vm10.stdout:4/527: stat d1/d2/d5c/d64/c90 0 2026-03-09T20:47:49.379 INFO:tasks.workunit.client.1.vm10.stdout:2/577: dwrite d5/d5b/fb7 [0,4194304] 0 2026-03-09T20:47:49.392 INFO:tasks.workunit.client.1.vm10.stdout:8/603: unlink d0/d22/d25/l3e 0 2026-03-09T20:47:49.394 
INFO:tasks.workunit.client.0.vm07.stdout:2/613: unlink d2/db/d28/d90/l9a 0 2026-03-09T20:47:49.403 INFO:tasks.workunit.client.1.vm10.stdout:9/597: creat d2/d3/d6d/fce x:0 0 0 2026-03-09T20:47:49.404 INFO:tasks.workunit.client.1.vm10.stdout:9/598: truncate d2/d3/de/d8f/fbf 232423 0 2026-03-09T20:47:49.404 INFO:tasks.workunit.client.1.vm10.stdout:9/599: stat d2/d33/d37/c40 0 2026-03-09T20:47:49.409 INFO:tasks.workunit.client.0.vm07.stdout:9/592: fdatasync d4/d11/f9d 0 2026-03-09T20:47:49.410 INFO:tasks.workunit.client.0.vm07.stdout:9/593: stat d4/d8/d59/f66 0 2026-03-09T20:47:49.413 INFO:tasks.workunit.client.1.vm10.stdout:5/523: unlink d2/d39/dbf/d66/c68 0 2026-03-09T20:47:49.413 INFO:tasks.workunit.client.1.vm10.stdout:1/568: getdents d2/da/d25/d46/d80/da0 0 2026-03-09T20:47:49.415 INFO:tasks.workunit.client.1.vm10.stdout:4/528: read - d1/d8/d1c/d69/f6a zero size 2026-03-09T20:47:49.416 INFO:tasks.workunit.client.1.vm10.stdout:5/524: dwrite d2/d27/d37/dc8/da1/faa [0,4194304] 0 2026-03-09T20:47:49.416 INFO:tasks.workunit.client.1.vm10.stdout:2/578: mkdir d5/d18/d9f/dc2 0 2026-03-09T20:47:49.417 INFO:tasks.workunit.client.0.vm07.stdout:0/647: rename d1/d1f/d20/f4d to d1/d1f/dc2/fce 0 2026-03-09T20:47:49.417 INFO:tasks.workunit.client.1.vm10.stdout:2/579: chown d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/l33 14897167 1 2026-03-09T20:47:49.420 INFO:tasks.workunit.client.0.vm07.stdout:1/641: dread d3/f3f [0,4194304] 0 2026-03-09T20:47:49.421 INFO:tasks.workunit.client.1.vm10.stdout:9/600: mkdir d2/d33/dcf 0 2026-03-09T20:47:49.422 INFO:tasks.workunit.client.1.vm10.stdout:9/601: dread d2/d33/f7d [0,4194304] 0 2026-03-09T20:47:49.422 INFO:tasks.workunit.client.1.vm10.stdout:6/576: mknod d3/da/d11/d31/d4c/da9/cb8 0 2026-03-09T20:47:49.423 INFO:tasks.workunit.client.1.vm10.stdout:6/577: stat d3/da/f15 0 2026-03-09T20:47:49.429 INFO:tasks.workunit.client.0.vm07.stdout:5/682: fdatasync d5/df/d13/f38 0 2026-03-09T20:47:49.448 INFO:tasks.workunit.client.1.vm10.stdout:0/549: rename 
d2/d9/d4b/l50 to d2/d4a/d58/d82/d71/d8e/lc2 0 2026-03-09T20:47:49.451 INFO:tasks.workunit.client.1.vm10.stdout:5/525: dread d2/d27/d37/f57 [0,4194304] 0 2026-03-09T20:47:49.454 INFO:tasks.workunit.client.1.vm10.stdout:5/526: dread d2/d27/d75/f9a [0,4194304] 0 2026-03-09T20:47:49.454 INFO:tasks.workunit.client.0.vm07.stdout:7/675: getdents d3/da/db/d32/d3e/dac/d43/d62 0 2026-03-09T20:47:49.462 INFO:tasks.workunit.client.0.vm07.stdout:3/610: getdents d1/d5/d9/d2f/d34/d9e 0 2026-03-09T20:47:49.478 INFO:tasks.workunit.client.1.vm10.stdout:9/602: dread d2/f30 [0,4194304] 0 2026-03-09T20:47:49.478 INFO:tasks.workunit.client.1.vm10.stdout:9/603: readlink d2/d33/d37/l53 0 2026-03-09T20:47:49.479 INFO:tasks.workunit.client.0.vm07.stdout:9/594: write d4/d8/dc/d15/f18 [4592991,2675] 0 2026-03-09T20:47:49.482 INFO:tasks.workunit.client.0.vm07.stdout:4/553: dwrite d2/df/d59/f7c [4194304,4194304] 0 2026-03-09T20:47:49.484 INFO:tasks.workunit.client.0.vm07.stdout:4/554: write d2/df/f2e [649891,856] 0 2026-03-09T20:47:49.491 INFO:tasks.workunit.client.0.vm07.stdout:8/578: link d1/d5d/d6f/d80/l88 d1/d5d/d6f/d2f/lbb 0 2026-03-09T20:47:49.491 INFO:tasks.workunit.client.0.vm07.stdout:6/644: symlink d8/d16/d22/db1/dc2/lc5 0 2026-03-09T20:47:49.491 INFO:tasks.workunit.client.1.vm10.stdout:8/604: creat d0/d22/d25/d2e/d41/d47/fb7 x:0 0 0 2026-03-09T20:47:49.491 INFO:tasks.workunit.client.1.vm10.stdout:8/605: dread - d0/d54/fa4 zero size 2026-03-09T20:47:49.492 INFO:tasks.workunit.client.0.vm07.stdout:1/642: mkdir d3/d97/da1/dd7 0 2026-03-09T20:47:49.499 INFO:tasks.workunit.client.1.vm10.stdout:3/542: rename dc/d14/d27/ca1 to dc/d14/cb3 0 2026-03-09T20:47:49.499 INFO:tasks.workunit.client.0.vm07.stdout:5/683: chown d5/d69/fc5 2606532 1 2026-03-09T20:47:49.501 INFO:tasks.workunit.client.0.vm07.stdout:5/684: truncate d5/df/d13/d3e/de1/fe7 447340 0 2026-03-09T20:47:49.503 INFO:tasks.workunit.client.0.vm07.stdout:2/614: symlink d2/db/d28/lbf 0 2026-03-09T20:47:49.503 
INFO:tasks.workunit.client.1.vm10.stdout:2/580: dread d5/d18/d1b/d22/f4f [0,4194304] 0
2026-03-09T20:47:49.504 INFO:tasks.workunit.client.1.vm10.stdout:5/527: creat d2/d39/dbf/d66/fcb x:0 0 0
2026-03-09T20:47:49.504 INFO:tasks.workunit.client.1.vm10.stdout:6/578: truncate d3/f52 563960 0
2026-03-09T20:47:49.504 INFO:tasks.workunit.client.0.vm07.stdout:7/676: fdatasync d3/f3f 0
2026-03-09T20:47:49.505 INFO:tasks.workunit.client.0.vm07.stdout:7/677: fsync d3/da/d53/db7/dde/dc5/fd4 0
2026-03-09T20:47:49.507 INFO:tasks.workunit.client.1.vm10.stdout:7/579: getdents db/d21/d26 0
2026-03-09T20:47:49.508 INFO:tasks.workunit.client.0.vm07.stdout:3/611: write d1/d5/d9/d11/f21 [826501,74577] 0
2026-03-09T20:47:49.510 INFO:tasks.workunit.client.1.vm10.stdout:9/604: creat d2/d3/de/d8f/fd0 x:0 0 0
2026-03-09T20:47:49.510 INFO:tasks.workunit.client.0.vm07.stdout:9/595: truncate d4/d11/f8a 819583 0
2026-03-09T20:47:49.511 INFO:tasks.workunit.client.1.vm10.stdout:9/605: read d2/d28/d47/d50/f64 [2499569,28450] 0
2026-03-09T20:47:49.512 INFO:tasks.workunit.client.1.vm10.stdout:9/606: write d2/d3/de/d35/fca [727536,21313] 0
2026-03-09T20:47:49.517 INFO:tasks.workunit.client.1.vm10.stdout:1/569: rmdir d2/da/d25/d46/d80/db2 0
2026-03-09T20:47:49.522 INFO:tasks.workunit.client.0.vm07.stdout:1/643: sync
2026-03-09T20:47:49.526 INFO:tasks.workunit.client.0.vm07.stdout:9/596: dread d4/d8/dc/ff [0,4194304] 0
2026-03-09T20:47:49.537 INFO:tasks.workunit.client.1.vm10.stdout:8/606: write d0/d22/d25/d6c/f5c [5234803,76387] 0
2026-03-09T20:47:49.538 INFO:tasks.workunit.client.0.vm07.stdout:0/648: mknod d1/dc0/dcc/ccf 0
2026-03-09T20:47:49.546 INFO:tasks.workunit.client.1.vm10.stdout:4/529: rename d1/d8 to d1/d2/d5c/d64/d6b/d81/dac 0
2026-03-09T20:47:49.552 INFO:tasks.workunit.client.1.vm10.stdout:0/550: mknod d2/d9/cc3 0
2026-03-09T20:47:49.560 INFO:tasks.workunit.client.1.vm10.stdout:0/551: dread d2/d9/da/d11/f42 [0,4194304] 0
2026-03-09T20:47:49.573 INFO:tasks.workunit.client.0.vm07.stdout:6/645: dwrite d8/d16/d22/d24/da0/dab/d40/d69/f9e [0,4194304] 0
2026-03-09T20:47:49.597 INFO:tasks.workunit.client.1.vm10.stdout:6/579: truncate d3/d30/d7f/f18 2902354 0
2026-03-09T20:47:49.598 INFO:tasks.workunit.client.1.vm10.stdout:6/580: stat d3/d30/c38 0
2026-03-09T20:47:49.598 INFO:tasks.workunit.client.0.vm07.stdout:3/612: dread - d1/d5/d9/d2f/d34/da5/fa9 zero size
2026-03-09T20:47:49.614 INFO:tasks.workunit.client.1.vm10.stdout:5/528: write d2/d27/f2d [816171,65236] 0
2026-03-09T20:47:49.616 INFO:tasks.workunit.client.0.vm07.stdout:7/678: dwrite d3/d58/d82/fa3 [0,4194304] 0
2026-03-09T20:47:49.617 INFO:tasks.workunit.client.1.vm10.stdout:2/581: dwrite d5/d18/d27/d89/db6/d41/f6e [0,4194304] 0
2026-03-09T20:47:49.621 INFO:tasks.workunit.client.1.vm10.stdout:2/582: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/fb2 [0,4194304] 0
2026-03-09T20:47:49.655 INFO:tasks.workunit.client.0.vm07.stdout:1/644: truncate d3/d14/d54/fa2 687847 0
2026-03-09T20:47:49.655 INFO:tasks.workunit.client.0.vm07.stdout:1/645: fdatasync d3/f28 0
2026-03-09T20:47:49.657 INFO:tasks.workunit.client.0.vm07.stdout:1/646: rename d3 to d3/d23/d67/d8a/dd8 22
2026-03-09T20:47:49.657 INFO:tasks.workunit.client.1.vm10.stdout:1/570: mknod d2/da/d25/d3e/d42/cb4 0
2026-03-09T20:47:49.660 INFO:tasks.workunit.client.0.vm07.stdout:4/555: symlink d2/d55/d5d/l98 0
2026-03-09T20:47:49.668 INFO:tasks.workunit.client.0.vm07.stdout:0/649: symlink d1/d2/dc/d80/ld0 0
2026-03-09T20:47:49.668 INFO:tasks.workunit.client.1.vm10.stdout:9/607: dwrite d2/d3/f6c [0,4194304] 0
2026-03-09T20:47:49.669 INFO:tasks.workunit.client.1.vm10.stdout:3/543: mkdir dc/db4 0
2026-03-09T20:47:49.678 INFO:tasks.workunit.client.1.vm10.stdout:8/607: rename d0/d22/d2f/d38/d64/faf to d0/d22/d25/d6c/fb8 0
2026-03-09T20:47:49.689 INFO:tasks.workunit.client.0.vm07.stdout:5/685: mkdir d5/d33/db2/de8/dee 0
2026-03-09T20:47:49.693 INFO:tasks.workunit.client.1.vm10.stdout:6/581: fdatasync d3/d30/d7f/d36/d5c/f78 0
2026-03-09T20:47:49.694 INFO:tasks.workunit.client.0.vm07.stdout:3/613: mknod d1/d5/d9/d2f/d3d/d64/d95/cc7 0
2026-03-09T20:47:49.694 INFO:tasks.workunit.client.0.vm07.stdout:7/679: creat d3/d58/d77/fe1 x:0 0 0
2026-03-09T20:47:49.694 INFO:tasks.workunit.client.0.vm07.stdout:7/680: stat d3/da/db/d32/d3e/dac/d43/l76 0
2026-03-09T20:47:49.694 INFO:tasks.workunit.client.0.vm07.stdout:7/681: chown d3/c8 6 1
2026-03-09T20:47:49.696 INFO:tasks.workunit.client.1.vm10.stdout:2/583: unlink d5/d18/d27/f29 0
2026-03-09T20:47:49.697 INFO:tasks.workunit.client.1.vm10.stdout:2/584: dread - d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f4c zero size
2026-03-09T20:47:49.698 INFO:tasks.workunit.client.1.vm10.stdout:7/580: mkdir db/d46/dab/db5 0
2026-03-09T20:47:49.705 INFO:tasks.workunit.client.1.vm10.stdout:9/608: mkdir d2/d28/d47/d50/dd1 0
2026-03-09T20:47:49.708 INFO:tasks.workunit.client.1.vm10.stdout:3/544: mknod dc/d14/d27/cb5 0
2026-03-09T20:47:49.708 INFO:tasks.workunit.client.1.vm10.stdout:3/545: fsync dc/f11 0
2026-03-09T20:47:49.708 INFO:tasks.workunit.client.1.vm10.stdout:1/571: sync
2026-03-09T20:47:49.708 INFO:tasks.workunit.client.1.vm10.stdout:3/546: fsync dc/d14/d26/d29/d2a/d76/f97 0
2026-03-09T20:47:49.715 INFO:tasks.workunit.client.0.vm07.stdout:8/579: fdatasync d1/dc/d16/d26/f36 0
2026-03-09T20:47:49.720 INFO:tasks.workunit.client.1.vm10.stdout:4/530: write d1/d2/d5c/d64/d6b/d81/dac/d1c/d69/f6a [982662,86970] 0
2026-03-09T20:47:49.720 INFO:tasks.workunit.client.0.vm07.stdout:2/615: write d2/f17 [2496794,31545] 0
2026-03-09T20:47:49.720 INFO:tasks.workunit.client.1.vm10.stdout:4/531: write d1/d47/f4f [161328,80381] 0
2026-03-09T20:47:49.721 INFO:tasks.workunit.client.0.vm07.stdout:4/556: dread d2/f28 [0,4194304] 0
2026-03-09T20:47:49.722 INFO:tasks.workunit.client.0.vm07.stdout:4/557: write d2/f7 [55068,93174] 0
2026-03-09T20:47:49.727 INFO:tasks.workunit.client.1.vm10.stdout:1/572: dread d2/da/f34 [0,4194304] 0
2026-03-09T20:47:49.736 INFO:tasks.workunit.client.1.vm10.stdout:8/608: write d0/d22/d25/d2e/d41/f80 [1684360,8522] 0
2026-03-09T20:47:49.737 INFO:tasks.workunit.client.0.vm07.stdout:1/647: dwrite d3/d97/da1/dc5/fc3 [0,4194304] 0
2026-03-09T20:47:49.746 INFO:tasks.workunit.client.1.vm10.stdout:0/552: fdatasync d2/d9/da/d35/d30/f56 0
2026-03-09T20:47:49.747 INFO:tasks.workunit.client.1.vm10.stdout:0/553: write d2/d9/da/fa7 [1273880,7784] 0
2026-03-09T20:47:49.750 INFO:tasks.workunit.client.1.vm10.stdout:8/609: dread d0/d22/d2c/f3f [0,4194304] 0
2026-03-09T20:47:49.754 INFO:tasks.workunit.client.0.vm07.stdout:3/614: truncate d1/d5/d9/d2f/d3d/f75 1023043 0
2026-03-09T20:47:49.756 INFO:tasks.workunit.client.0.vm07.stdout:3/615: dread - d1/d5/d9/d2f/d34/da5/fa9 zero size
2026-03-09T20:47:49.759 INFO:tasks.workunit.client.0.vm07.stdout:9/597: link d4/d8/dc/c50 d4/d8/d59/cd6 0
2026-03-09T20:47:49.764 INFO:tasks.workunit.client.0.vm07.stdout:9/598: dwrite d4/d11/d2a/f3b [0,4194304] 0
2026-03-09T20:47:49.796 INFO:tasks.workunit.client.0.vm07.stdout:2/616: dread d2/db/d28/d57/f68 [0,4194304] 0
2026-03-09T20:47:49.804 INFO:tasks.workunit.client.0.vm07.stdout:6/646: link d8/d16/f92 d8/d16/d22/d9b/fc6 0
2026-03-09T20:47:49.804 INFO:tasks.workunit.client.0.vm07.stdout:6/647: stat d8/d16/d22/c82 0
2026-03-09T20:47:49.806 INFO:tasks.workunit.client.0.vm07.stdout:3/616: rmdir d1/d5/d9/d2f/d99 39
2026-03-09T20:47:49.811 INFO:tasks.workunit.client.0.vm07.stdout:1/648: dwrite d3/d23/d67/f69 [0,4194304] 0
2026-03-09T20:47:49.819 INFO:tasks.workunit.client.0.vm07.stdout:7/682: mkdir d3/de2 0
2026-03-09T20:47:49.822 INFO:tasks.workunit.client.0.vm07.stdout:9/599: symlink d4/d8/d19/d5f/da5/db8/ld7 0
2026-03-09T20:47:49.823 INFO:tasks.workunit.client.0.vm07.stdout:9/600: chown d4/d16/d29/d24/d37/d44/d62/d74/fa6 1322 1
2026-03-09T20:47:49.825 INFO:tasks.workunit.client.0.vm07.stdout:0/650: creat d1/d2/d33/d35/fd1 x:0 0 0
2026-03-09T20:47:49.826 INFO:tasks.workunit.client.0.vm07.stdout:0/651: stat d1/d2/dc/c7f 0
2026-03-09T20:47:49.828 INFO:tasks.workunit.client.0.vm07.stdout:9/601: read d4/d11/d2a/f39 [1924991,101836] 0
2026-03-09T20:47:49.830 INFO:tasks.workunit.client.0.vm07.stdout:8/580: creat d1/dc/d16/d31/db4/fbc x:0 0 0
2026-03-09T20:47:49.833 INFO:tasks.workunit.client.0.vm07.stdout:4/558: fsync d2/d55/d5d/d3f/f51 0
2026-03-09T20:47:49.833 INFO:tasks.workunit.client.0.vm07.stdout:4/559: readlink d2/d1f/l29 0
2026-03-09T20:47:49.840 INFO:tasks.workunit.client.0.vm07.stdout:5/686: creat d5/df/d13/fef x:0 0 0
2026-03-09T20:47:49.867 INFO:tasks.workunit.client.0.vm07.stdout:1/649: rmdir d3/d9c 39
2026-03-09T20:47:49.870 INFO:tasks.workunit.client.0.vm07.stdout:7/683: mkdir d3/d58/d77/de3 0
2026-03-09T20:47:49.874 INFO:tasks.workunit.client.0.vm07.stdout:7/684: dwrite d3/da/db/f1e [0,4194304] 0
2026-03-09T20:47:49.879 INFO:tasks.workunit.client.0.vm07.stdout:9/602: mknod d4/d16/d29/d24/d37/d44/cd8 0
2026-03-09T20:47:49.879 INFO:tasks.workunit.client.0.vm07.stdout:6/648: dwrite d8/d16/f90 [0,4194304] 0
2026-03-09T20:47:49.887 INFO:tasks.workunit.client.1.vm10.stdout:5/529: mkdir d2/d39/dbf/dcc 0
2026-03-09T20:47:49.897 INFO:tasks.workunit.client.0.vm07.stdout:2/617: creat d2/d46/db0/db3/fc0 x:0 0 0
2026-03-09T20:47:49.901 INFO:tasks.workunit.client.0.vm07.stdout:5/687: rmdir d5/d33/d39/d8d/dd7 39
2026-03-09T20:47:49.902 INFO:tasks.workunit.client.0.vm07.stdout:5/688: read d5/df/d13/d4f/f9b [819905,34431] 0
2026-03-09T20:47:49.903 INFO:tasks.workunit.client.1.vm10.stdout:2/585: rmdir d5/d18/d27/d89 39
2026-03-09T20:47:49.910 INFO:tasks.workunit.client.0.vm07.stdout:3/617: mknod d1/d5/d9/d2f/d3d/d64/d43/cc8 0
2026-03-09T20:47:49.918 INFO:tasks.workunit.client.0.vm07.stdout:1/650: creat d3/d14/d54/d9b/fd9 x:0 0 0
2026-03-09T20:47:49.925 INFO:tasks.workunit.client.1.vm10.stdout:9/609: dread d2/d28/f79 [0,4194304] 0
2026-03-09T20:47:49.934 INFO:tasks.workunit.client.1.vm10.stdout:3/547: dread dc/d14/d20/d21/f36 [0,4194304] 0
2026-03-09T20:47:49.935 INFO:tasks.workunit.client.1.vm10.stdout:3/548: read dc/d14/d26/d29/f51 [205856,103848] 0
2026-03-09T20:47:49.937 INFO:tasks.workunit.client.0.vm07.stdout:9/603: creat d4/d8/dc/d4e/fd9 x:0 0 0
2026-03-09T20:47:49.938 INFO:tasks.workunit.client.1.vm10.stdout:6/582: dwrite d3/da/d11/d26/d5b/f74 [0,4194304] 0
2026-03-09T20:47:49.940 INFO:tasks.workunit.client.1.vm10.stdout:4/532: read d1/d2/d3/d54/f7f [451255,81336] 0
2026-03-09T20:47:49.940 INFO:tasks.workunit.client.1.vm10.stdout:1/573: mkdir d2/da/d25/d46/d80/da0/d92/db5 0
2026-03-09T20:47:49.941 INFO:tasks.workunit.client.1.vm10.stdout:4/533: readlink d1/d2/d5c/d64/d6b/d81/dac/d1b/l5b 0
2026-03-09T20:47:49.959 INFO:tasks.workunit.client.1.vm10.stdout:7/581: dwrite db/d21/d23/f22 [0,4194304] 0
2026-03-09T20:47:49.960 INFO:tasks.workunit.client.1.vm10.stdout:0/554: creat d2/d9/da/d48/dac/fc4 x:0 0 0
2026-03-09T20:47:49.972 INFO:tasks.workunit.client.1.vm10.stdout:8/610: mkdir d0/d22/d25/d2e/d41/d85/db9 0
2026-03-09T20:47:49.975 INFO:tasks.workunit.client.0.vm07.stdout:3/618: creat d1/d5/d9/d2f/d3d/d71/db5/fc9 x:0 0 0
2026-03-09T20:47:49.976 INFO:tasks.workunit.client.0.vm07.stdout:1/651: mknod d3/d97/da1/dab/cda 0
2026-03-09T20:47:49.980 INFO:tasks.workunit.client.1.vm10.stdout:5/530: creat d2/d39/dbf/d63/fcd x:0 0 0
2026-03-09T20:47:49.984 INFO:tasks.workunit.client.1.vm10.stdout:2/586: rename d5/d18/d27/d38/l52 to d5/d18/d9f/dc2/lc3 0
2026-03-09T20:47:49.984 INFO:tasks.workunit.client.0.vm07.stdout:0/652: rename d1/d2/d33/d35/l3a to d1/d2/d33/d35/ld2 0
2026-03-09T20:47:49.984 INFO:tasks.workunit.client.0.vm07.stdout:8/581: rename d1/d5d/d6f/d80/l88 to d1/dc/lbd 0
2026-03-09T20:47:49.986 INFO:tasks.workunit.client.0.vm07.stdout:6/649: creat d8/d16/d4b/d88/d99/fc7 x:0 0 0
2026-03-09T20:47:49.987 INFO:tasks.workunit.client.1.vm10.stdout:9/610: mknod d2/d28/d47/d6a/cd2 0
2026-03-09T20:47:49.988 INFO:tasks.workunit.client.1.vm10.stdout:9/611: chown d2/d28/d47/c70 0 1
2026-03-09T20:47:49.989 INFO:tasks.workunit.client.0.vm07.stdout:2/618: write d2/d46/d6e/f95 [820995,130283] 0
2026-03-09T20:47:49.992 INFO:tasks.workunit.client.0.vm07.stdout:4/560: dwrite d2/df/d59/f81 [0,4194304] 0
2026-03-09T20:47:50.003 INFO:tasks.workunit.client.0.vm07.stdout:3/619: dread - d1/d5/d9/d11/f9b zero size
2026-03-09T20:47:50.007 INFO:tasks.workunit.client.1.vm10.stdout:4/534: symlink d1/d2/d5c/lad 0
2026-03-09T20:47:50.008 INFO:tasks.workunit.client.0.vm07.stdout:7/685: rename d3/da/d53/db7/fdd to d3/d58/fe4 0
2026-03-09T20:47:50.012 INFO:tasks.workunit.client.0.vm07.stdout:8/582: dread d1/d5d/d6f/d2f/d4d/d55/f8e [0,4194304] 0
2026-03-09T20:47:50.022 INFO:tasks.workunit.client.1.vm10.stdout:0/555: rmdir d2/d9/d2a 39
2026-03-09T20:47:50.022 INFO:tasks.workunit.client.1.vm10.stdout:5/531: truncate d2/d39/d4b/f51 258457 0
2026-03-09T20:47:50.022 INFO:tasks.workunit.client.1.vm10.stdout:2/587: dread - d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f71 zero size
2026-03-09T20:47:50.022 INFO:tasks.workunit.client.1.vm10.stdout:2/588: chown d5/d18/d27/d38/d61/f81 73669276 1
2026-03-09T20:47:50.022 INFO:tasks.workunit.client.0.vm07.stdout:6/650: mkdir d8/d26/d7d/dc8 0
2026-03-09T20:47:50.022 INFO:tasks.workunit.client.0.vm07.stdout:6/651: readlink d8/d16/d22/d24/l9d 0
2026-03-09T20:47:50.022 INFO:tasks.workunit.client.0.vm07.stdout:2/619: mknod d2/db/d1c/cc1 0
2026-03-09T20:47:50.022 INFO:tasks.workunit.client.0.vm07.stdout:5/689: creat d5/ff0 x:0 0 0
2026-03-09T20:47:50.022 INFO:tasks.workunit.client.0.vm07.stdout:1/652: sync
2026-03-09T20:47:50.030 INFO:tasks.workunit.client.0.vm07.stdout:3/620: dread d1/d5/d9/d11/f73 [0,4194304] 0
2026-03-09T20:47:50.034 INFO:tasks.workunit.client.1.vm10.stdout:3/549: rename dc/d14/d26/d29/d40/da8/d69/d75/f7d to dc/d14/d26/d29/d40/d8c/d9c/fb6 0
2026-03-09T20:47:50.036 INFO:tasks.workunit.client.1.vm10.stdout:5/532: sync
2026-03-09T20:47:50.036 INFO:tasks.workunit.client.1.vm10.stdout:9/612: truncate d2/d28/f51 7749445 0
2026-03-09T20:47:50.040 INFO:tasks.workunit.client.0.vm07.stdout:0/653: rename d1/d2/dc/db1/lb6 to d1/d1f/d9f/ld3 0
2026-03-09T20:47:50.048 INFO:tasks.workunit.client.1.vm10.stdout:7/582: creat db/d46/d89/fb6 x:0 0 0
2026-03-09T20:47:50.049 INFO:tasks.workunit.client.0.vm07.stdout:2/620: fsync d2/db/d1c/f2e 0
2026-03-09T20:47:50.050 INFO:tasks.workunit.client.0.vm07.stdout:4/561: dread d2/d55/d5d/d3f/d4a/d4b/f7a [0,4194304] 0
2026-03-09T20:47:50.053 INFO:tasks.workunit.client.0.vm07.stdout:4/562: dread d2/f19 [0,4194304] 0
2026-03-09T20:47:50.055 INFO:tasks.workunit.client.1.vm10.stdout:0/556: creat d2/d4a/d58/d82/d71/d8e/d25/d34/fc5 x:0 0 0
2026-03-09T20:47:50.056 INFO:tasks.workunit.client.0.vm07.stdout:5/690: creat d5/d33/d39/ff1 x:0 0 0
2026-03-09T20:47:50.056 INFO:tasks.workunit.client.0.vm07.stdout:5/691: chown d5/df/d13/d3e/d5e/c92 56095928 1
2026-03-09T20:47:50.057 INFO:tasks.workunit.client.1.vm10.stdout:6/583: write d3/da/f58 [499546,75162] 0
2026-03-09T20:47:50.058 INFO:tasks.workunit.client.0.vm07.stdout:6/652: rmdir d8/d26/d7d 39
2026-03-09T20:47:50.066 INFO:tasks.workunit.client.1.vm10.stdout:8/611: mknod d0/d22/d25/d2e/d41/d85/db9/cba 0
2026-03-09T20:47:50.067 INFO:tasks.workunit.client.0.vm07.stdout:6/653: sync
2026-03-09T20:47:50.069 INFO:tasks.workunit.client.1.vm10.stdout:2/589: unlink d5/d18/d27/d5f/f98 0
2026-03-09T20:47:50.072 INFO:tasks.workunit.client.1.vm10.stdout:3/550: mknod dc/d14/d22/d4a/cb7 0
2026-03-09T20:47:50.074 INFO:tasks.workunit.client.1.vm10.stdout:2/590: dread d5/d5b/f6c [0,4194304] 0
2026-03-09T20:47:50.076 INFO:tasks.workunit.client.0.vm07.stdout:9/604: getdents d4/d8/d19/d89 0
2026-03-09T20:47:50.077 INFO:tasks.workunit.client.1.vm10.stdout:5/533: symlink d2/d39/dbf/d63/d95/lce 0
2026-03-09T20:47:50.078 INFO:tasks.workunit.client.0.vm07.stdout:3/621: write d1/d5/d9/d2f/d34/f40 [4777392,72288] 0
2026-03-09T20:47:50.079 INFO:tasks.workunit.client.0.vm07.stdout:3/622: chown d1/d5/d9/d2f/d3d/d64/d95/fba 15791 1
2026-03-09T20:47:50.080 INFO:tasks.workunit.client.0.vm07.stdout:3/623: write d1/d5/d9/d2f/d3d/d71/fc3 [574360,52817] 0
2026-03-09T20:47:50.084 INFO:tasks.workunit.client.0.vm07.stdout:3/624: dwrite d1/d5/d9/d2f/d3d/d64/f1a [0,4194304] 0
2026-03-09T20:47:50.086 INFO:tasks.workunit.client.0.vm07.stdout:3/625: chown d1/d5/d9/d11/d60 2518105 1
2026-03-09T20:47:50.101 INFO:tasks.workunit.client.1.vm10.stdout:1/574: creat d2/da/fb6 x:0 0 0
2026-03-09T20:47:50.102 INFO:tasks.workunit.client.0.vm07.stdout:2/621: chown d2/f3e 14602514 1
2026-03-09T20:47:50.105 INFO:tasks.workunit.client.1.vm10.stdout:1/575: dwrite d2/da/d25/d46/f61 [0,4194304] 0
2026-03-09T20:47:50.117 INFO:tasks.workunit.client.1.vm10.stdout:0/557: mknod d2/d4a/d58/d82/d60/cc6 0
2026-03-09T20:47:50.118 INFO:tasks.workunit.client.1.vm10.stdout:0/558: readlink d2/d4a/d58/d82/d71/d8e/d25/l7d 0
2026-03-09T20:47:50.122 INFO:tasks.workunit.client.1.vm10.stdout:8/612: rename d0/d22/d2c/l4c to d0/d22/d25/d40/d86/lbb 0
2026-03-09T20:47:50.127 INFO:tasks.workunit.client.1.vm10.stdout:3/551: unlink dc/d14/d20/d2e/d56/c28 0
2026-03-09T20:47:50.130 INFO:tasks.workunit.client.1.vm10.stdout:9/613: unlink d2/d3/de/c8c 0
2026-03-09T20:47:50.132 INFO:tasks.workunit.client.0.vm07.stdout:7/686: creat d3/da/db/d32/d3e/d5c/fe5 x:0 0 0
2026-03-09T20:47:50.133 INFO:tasks.workunit.client.0.vm07.stdout:7/687: stat d3/d58/d82/fa3 0
2026-03-09T20:47:50.136 INFO:tasks.workunit.client.0.vm07.stdout:5/692: write d5/d69/f82 [2510963,9474] 0
2026-03-09T20:47:50.148 INFO:tasks.workunit.client.0.vm07.stdout:3/626: dwrite d1/d5/d9/d2f/d3d/d64/d43/f90 [0,4194304] 0
2026-03-09T20:47:50.158 INFO:tasks.workunit.client.1.vm10.stdout:1/576: read - d2/da/d25/d3e/d42/f7d zero size
2026-03-09T20:47:50.159 INFO:tasks.workunit.client.1.vm10.stdout:1/577: write d2/da/fa1 [752882,72397] 0
2026-03-09T20:47:50.159 INFO:tasks.workunit.client.1.vm10.stdout:1/578: chown d2/da/d25/d46/d80/da0/d92/db5 13893876 1
2026-03-09T20:47:50.160 INFO:tasks.workunit.client.1.vm10.stdout:1/579: chown d2/da/d25/d46/d80/da0 3995 1
2026-03-09T20:47:50.160 INFO:tasks.workunit.client.1.vm10.stdout:7/583: symlink db/d28/lb7 0
2026-03-09T20:47:50.164 INFO:tasks.workunit.client.1.vm10.stdout:7/584: dwrite db/d28/d2b/d36/d3b/d88/f71 [0,4194304] 0
2026-03-09T20:47:50.164 INFO:tasks.workunit.client.1.vm10.stdout:0/559: mknod d2/d9/d69/cc7 0
2026-03-09T20:47:50.166 INFO:tasks.workunit.client.1.vm10.stdout:6/584: fdatasync d3/d30/d7f/f18 0
2026-03-09T20:47:50.181 INFO:tasks.workunit.client.1.vm10.stdout:9/614: symlink d2/d33/ld3 0
2026-03-09T20:47:50.187 INFO:tasks.workunit.client.1.vm10.stdout:8/613: dread d0/d22/d2f/d38/fa5 [0,4194304] 0
2026-03-09T20:47:50.189 INFO:tasks.workunit.client.1.vm10.stdout:4/535: getdents d1/d2/d3/d70/d99 0
2026-03-09T20:47:50.191 INFO:tasks.workunit.client.1.vm10.stdout:8/614: dwrite d0/f94 [0,4194304] 0
2026-03-09T20:47:50.196 INFO:tasks.workunit.client.1.vm10.stdout:9/615: dread d2/d28/d47/d50/f59 [0,4194304] 0
2026-03-09T20:47:50.198 INFO:tasks.workunit.client.1.vm10.stdout:6/585: sync
2026-03-09T20:47:50.199 INFO:tasks.workunit.client.1.vm10.stdout:8/615: dread d0/d22/d25/f74 [0,4194304] 0
2026-03-09T20:47:50.199 INFO:tasks.workunit.client.1.vm10.stdout:8/616: chown d0/d22/d25/d8f 1 1
2026-03-09T20:47:50.200 INFO:tasks.workunit.client.1.vm10.stdout:6/586: sync
2026-03-09T20:47:50.206 INFO:tasks.workunit.client.1.vm10.stdout:3/552: write dc/f5a [2901334,63958] 0
2026-03-09T20:47:50.208 INFO:tasks.workunit.client.1.vm10.stdout:5/534: dwrite d2/d1b/d54/d78/f47 [4194304,4194304] 0
2026-03-09T20:47:50.210 INFO:tasks.workunit.client.1.vm10.stdout:7/585: mknod db/d28/d2b/d36/cb8 0
2026-03-09T20:47:50.222 INFO:tasks.workunit.client.1.vm10.stdout:3/553: sync
2026-03-09T20:47:50.228 INFO:tasks.workunit.client.1.vm10.stdout:4/536: mknod d1/d67/cae 0
2026-03-09T20:47:50.232 INFO:tasks.workunit.client.1.vm10.stdout:9/616: creat d2/d3/d6d/d88/fd4 x:0 0 0
2026-03-09T20:47:50.234 INFO:tasks.workunit.client.1.vm10.stdout:8/617: creat d0/d22/d2c/fbc x:0 0 0
2026-03-09T20:47:50.235 INFO:tasks.workunit.client.1.vm10.stdout:6/587: truncate d3/d30/d7f/d36/d5c/fa5 658472 0
2026-03-09T20:47:50.238 INFO:tasks.workunit.client.1.vm10.stdout:0/560: dread d2/d4a/d58/d82/f5c [0,4194304] 0
2026-03-09T20:47:50.243 INFO:tasks.workunit.client.0.vm07.stdout:2/622: truncate d2/db/d49/f81 32436 0
2026-03-09T20:47:50.247 INFO:tasks.workunit.client.1.vm10.stdout:7/586: mknod db/d28/d30/cb9 0
2026-03-09T20:47:50.250 INFO:tasks.workunit.client.0.vm07.stdout:1/653: rmdir d3/d97/da1/dc5/d60/d9f/db9 0
2026-03-09T20:47:50.252 INFO:tasks.workunit.client.1.vm10.stdout:2/591: link l3 d5/d5b/lc4 0
2026-03-09T20:47:50.253 INFO:tasks.workunit.client.1.vm10.stdout:2/592: readlink d5/d18/d27/d38/d61/l73 0
2026-03-09T20:47:50.253 INFO:tasks.workunit.client.0.vm07.stdout:8/583: rename d1/d5d/d6f/d2f/f8b to d1/dc/d16/fbe 0
2026-03-09T20:47:50.256 INFO:tasks.workunit.client.1.vm10.stdout:3/554: creat dc/d14/d26/d8f/fb8 x:0 0 0
2026-03-09T20:47:50.257 INFO:tasks.workunit.client.1.vm10.stdout:3/555: chown dc/d14/d20/l67 144 1
2026-03-09T20:47:50.259 INFO:tasks.workunit.client.0.vm07.stdout:5/693: mknod d5/df/d13/d6c/db1/cf2 0
2026-03-09T20:47:50.261 INFO:tasks.workunit.client.1.vm10.stdout:9/617: symlink d2/d3/d85/ld5 0
2026-03-09T20:47:50.262 INFO:tasks.workunit.client.1.vm10.stdout:9/618: write d2/d33/fb3 [619422,13837] 0
2026-03-09T20:47:50.268 INFO:tasks.workunit.client.0.vm07.stdout:3/627: rmdir d1/d5/d9/d2f/d3d/d71 39
2026-03-09T20:47:50.269 INFO:tasks.workunit.client.0.vm07.stdout:3/628: read d1/d5/d9/d11/f21 [3430877,81105] 0
2026-03-09T20:47:50.275 INFO:tasks.workunit.client.0.vm07.stdout:2/623: chown d2/f33 809864246 1
2026-03-09T20:47:50.277 INFO:tasks.workunit.client.1.vm10.stdout:1/580: link d2/da/d25/d46/d51/l85 d2/da/d25/d46/d51/d5d/d6e/lb7 0
2026-03-09T20:47:50.281 INFO:tasks.workunit.client.0.vm07.stdout:6/654: creat d8/d16/d22/d24/da0/dab/fc9 x:0 0 0
2026-03-09T20:47:50.284 INFO:tasks.workunit.client.0.vm07.stdout:6/655: dread d8/d16/d22/d24/da0/dab/d40/d69/f9e [0,4194304] 0
2026-03-09T20:47:50.289 INFO:tasks.workunit.client.0.vm07.stdout:1/654: dwrite d3/d23/d55/f7b [0,4194304] 0
2026-03-09T20:47:50.289 INFO:tasks.workunit.client.1.vm10.stdout:6/588: dread d3/d30/d7f/d36/d5c/f78 [0,4194304] 0
2026-03-09T20:47:50.290 INFO:tasks.workunit.client.1.vm10.stdout:5/535: dwrite d2/d39/d4b/f51 [0,4194304] 0
2026-03-09T20:47:50.293 INFO:tasks.workunit.client.1.vm10.stdout:2/593: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/fc5 x:0 0 0
2026-03-09T20:47:50.298 INFO:tasks.workunit.client.0.vm07.stdout:0/654: rename d1/d2/d33/d35/l39 to d1/d2/dc/d17/da6/db9/ld4 0
2026-03-09T20:47:50.313 INFO:tasks.workunit.client.0.vm07.stdout:5/694: rmdir d5 39
2026-03-09T20:47:50.316 INFO:tasks.workunit.client.0.vm07.stdout:3/629: mknod d1/d5/d9/d11/d1f/cca 0
2026-03-09T20:47:50.317 INFO:tasks.workunit.client.1.vm10.stdout:9/619: read d2/d28/f63 [3382969,36840] 0
2026-03-09T20:47:50.320 INFO:tasks.workunit.client.0.vm07.stdout:3/630: dwrite d1/f36 [8388608,4194304] 0
2026-03-09T20:47:50.329 INFO:tasks.workunit.client.1.vm10.stdout:8/618: link d0/f11 d0/d22/d25/d6c/fbd 0
2026-03-09T20:47:50.334 INFO:tasks.workunit.client.0.vm07.stdout:2/624: unlink d2/db/d28/c2a 0
2026-03-09T20:47:50.336 INFO:tasks.workunit.client.1.vm10.stdout:0/561: rename d2/d9/da/d11/l52 to d2/d9/da/d48/dac/lc8 0
2026-03-09T20:47:50.337 INFO:tasks.workunit.client.1.vm10.stdout:8/619: rename d0/d22/d25 to d0/d22/d25/d2e/d41/d85/dbe 22
2026-03-09T20:47:50.338 INFO:tasks.workunit.client.1.vm10.stdout:0/562: stat d2/d9/da/d11/d92/dc1 0
2026-03-09T20:47:50.338 INFO:tasks.workunit.client.0.vm07.stdout:4/563: getdents d2/d55/d5d/d3f/d4a/d4b/d52/d5c 0
2026-03-09T20:47:50.338 INFO:tasks.workunit.client.1.vm10.stdout:0/563: fsync d2/d9/f61 0
2026-03-09T20:47:50.339 INFO:tasks.workunit.client.1.vm10.stdout:8/620: chown d0/d22/d25/d2e/d41/d47/d78/l77 53320468 1
2026-03-09T20:47:50.339 INFO:tasks.workunit.client.1.vm10.stdout:0/564: stat d2 0
2026-03-09T20:47:50.342 INFO:tasks.workunit.client.0.vm07.stdout:4/564: dwrite d2/d55/f71 [0,4194304] 0
2026-03-09T20:47:50.349 INFO:tasks.workunit.client.1.vm10.stdout:1/581: symlink d2/d89/lb8 0
2026-03-09T20:47:50.349 INFO:tasks.workunit.client.1.vm10.stdout:1/582: chown d2/da/d25/d3e/f58 43430559 1
2026-03-09T20:47:50.354 INFO:tasks.workunit.client.1.vm10.stdout:7/587: chown db/d28/d2b/d36/d63 8 1
2026-03-09T20:47:50.356 INFO:tasks.workunit.client.0.vm07.stdout:6/656: symlink d8/d16/d22/d24/da0/dab/d40/d69/lca 0
2026-03-09T20:47:50.359 INFO:tasks.workunit.client.1.vm10.stdout:4/537: write d1/d2/d5c/d64/d6b/d81/dac/f29 [1780138,76268] 0
2026-03-09T20:47:50.361 INFO:tasks.workunit.client.1.vm10.stdout:6/589: mkdir d3/da/d11/d89/db9 0
2026-03-09T20:47:50.363 INFO:tasks.workunit.client.0.vm07.stdout:8/584: truncate d1/dc/fd 2812040 0
2026-03-09T20:47:50.371 INFO:tasks.workunit.client.0.vm07.stdout:9/605: rename d4/d8/dc/d4e/c6d to d4/d11/d23/d32/cda 0
2026-03-09T20:47:50.374 INFO:tasks.workunit.client.1.vm10.stdout:6/590: dread d3/da/d11/f1d [0,4194304] 0
2026-03-09T20:47:50.375 INFO:tasks.workunit.client.1.vm10.stdout:5/536: write d2/d27/d37/d46/d5d/d77/f93 [2640097,5139] 0
2026-03-09T20:47:50.380 INFO:tasks.workunit.client.1.vm10.stdout:5/537: sync
2026-03-09T20:47:50.383 INFO:tasks.workunit.client.1.vm10.stdout:9/620: fdatasync d2/d12/f20 0
2026-03-09T20:47:50.389 INFO:tasks.workunit.client.1.vm10.stdout:8/621: rmdir d0/d22/d25/d2e/d41/d47/d78 39
2026-03-09T20:47:50.389 INFO:tasks.workunit.client.0.vm07.stdout:3/631: read d1/d5/d9/d11/f21 [3036871,8531] 0
2026-03-09T20:47:50.390 INFO:tasks.workunit.client.1.vm10.stdout:8/622: dread d0/f11 [4194304,4194304] 0
2026-03-09T20:47:50.391 INFO:tasks.workunit.client.1.vm10.stdout:1/583: dread - d2/da/d25/d46/d51/d5d/d6e/f93 zero size
2026-03-09T20:47:50.393 INFO:tasks.workunit.client.1.vm10.stdout:7/588: read db/f7c [207122,100978] 0
2026-03-09T20:47:50.395 INFO:tasks.workunit.client.1.vm10.stdout:4/538: rename d1/d2/d5c/d64/d6b/d81/dac/c13 to d1/d2/d3/d70/d78/caf 0
2026-03-09T20:47:50.398 INFO:tasks.workunit.client.0.vm07.stdout:4/565: creat d2/d55/d5d/d3f/d4a/f99 x:0 0 0
2026-03-09T20:47:50.398 INFO:tasks.workunit.client.0.vm07.stdout:4/566: chown d2/f28 42667 1
2026-03-09T20:47:50.404 INFO:tasks.workunit.client.0.vm07.stdout:1/655: fsync d3/d14/d54/f32 0
2026-03-09T20:47:50.409 INFO:tasks.workunit.client.1.vm10.stdout:3/556: creat dc/fb9 x:0 0 0
2026-03-09T20:47:50.413 INFO:tasks.workunit.client.0.vm07.stdout:7/688: link d3/d58/d77/l91 d3/da/db/d32/d3e/le6 0
2026-03-09T20:47:50.418 INFO:tasks.workunit.client.0.vm07.stdout:0/655: dwrite d1/d2/d33/fb5 [0,4194304] 0
2026-03-09T20:47:50.430 INFO:tasks.workunit.client.0.vm07.stdout:5/695: rmdir d5/d33/d39/d8d/db5 39
2026-03-09T20:47:50.432 INFO:tasks.workunit.client.1.vm10.stdout:5/538: dread d2/f35 [0,4194304] 0
2026-03-09T20:47:50.440 INFO:tasks.workunit.client.0.vm07.stdout:3/632: creat d1/d5/d9/daf/d9f/fcb x:0 0 0
2026-03-09T20:47:50.443 INFO:tasks.workunit.client.1.vm10.stdout:1/584: truncate d2/da/d25/d3e/f58 113499 0
2026-03-09T20:47:50.445 INFO:tasks.workunit.client.0.vm07.stdout:2/625: mknod d2/db/d49/cc2 0
2026-03-09T20:47:50.449 INFO:tasks.workunit.client.1.vm10.stdout:0/565: write d2/d9/f20 [4296319,85943] 0
2026-03-09T20:47:50.454 INFO:tasks.workunit.client.0.vm07.stdout:4/567: symlink d2/df/d59/l9a 0
2026-03-09T20:47:50.460 INFO:tasks.workunit.client.0.vm07.stdout:6/657: creat d8/d16/d22/d24/da0/dab/dc1/fcb x:0 0 0
2026-03-09T20:47:50.460 INFO:tasks.workunit.client.1.vm10.stdout:4/539: creat d1/d2/d5c/fb0 x:0 0 0
2026-03-09T20:47:50.464 INFO:tasks.workunit.client.1.vm10.stdout:2/594: creat d5/d18/fc6 x:0 0 0
2026-03-09T20:47:50.467 INFO:tasks.workunit.client.1.vm10.stdout:6/591: symlink d3/d30/d7f/d36/d6d/d8c/lba 0
2026-03-09T20:47:50.468 INFO:tasks.workunit.client.0.vm07.stdout:1/656: mknod d3/d23/d67/cdb 0
2026-03-09T20:47:50.482 INFO:tasks.workunit.client.0.vm07.stdout:0/656: fdatasync d1/d2/dc/d80/f87 0
2026-03-09T20:47:50.484 INFO:tasks.workunit.client.1.vm10.stdout:9/621: dwrite d2/f46 [0,4194304] 0
2026-03-09T20:47:50.495 INFO:tasks.workunit.client.1.vm10.stdout:1/585: mkdir d2/da/d25/d46/d51/d7e/d9e/da2/db9 0
2026-03-09T20:47:50.495 INFO:tasks.workunit.client.1.vm10.stdout:1/586: stat d2/da/d25/d46/d51/d5d/da6 0
2026-03-09T20:47:50.496 INFO:tasks.workunit.client.1.vm10.stdout:1/587: stat d2/da/d25/d46/d51/d7e/d9e/fa5 0
2026-03-09T20:47:50.499 INFO:tasks.workunit.client.0.vm07.stdout:5/696: write d5/df/d13/d3e/d47/fd2 [481395,51674] 0
2026-03-09T20:47:50.499 INFO:tasks.workunit.client.1.vm10.stdout:5/539: dwrite d2/d39/d4b/f4e [0,4194304] 0
2026-03-09T20:47:50.500 INFO:tasks.workunit.client.0.vm07.stdout:4/568: truncate d2/df/f6b 368355 0
2026-03-09T20:47:50.501 INFO:tasks.workunit.client.0.vm07.stdout:4/569: write d2/d55/d5d/d3f/d4a/f99 [845912,78172] 0
2026-03-09T20:47:50.505 INFO:tasks.workunit.client.0.vm07.stdout:4/570: dread d2/df/d59/f81 [0,4194304] 0
2026-03-09T20:47:50.527 INFO:tasks.workunit.client.0.vm07.stdout:9/606: creat d4/d16/fdb x:0 0 0
2026-03-09T20:47:50.529 INFO:tasks.workunit.client.0.vm07.stdout:7/689: truncate d3/da/db/d32/d3e/dac/f1a 2212682 0
2026-03-09T20:47:50.530 INFO:tasks.workunit.client.0.vm07.stdout:7/690: dread - d3/da/d53/db7/dde/dc5/fdb zero size
2026-03-09T20:47:50.534 INFO:tasks.workunit.client.0.vm07.stdout:3/633: dread d1/f78 [0,4194304] 0
2026-03-09T20:47:50.536 INFO:tasks.workunit.client.0.vm07.stdout:6/658: dread d8/d16/f18 [0,4194304] 0
2026-03-09T20:47:50.541 INFO:tasks.workunit.client.1.vm10.stdout:0/566: dwrite d2/d9/d69/d80/f8d [0,4194304] 0
2026-03-09T20:47:50.543 INFO:tasks.workunit.client.1.vm10.stdout:8/623: creat d0/d22/d25/fbf x:0 0 0
2026-03-09T20:47:50.544 INFO:tasks.workunit.client.1.vm10.stdout:8/624: rename d0/d22/d25/d2e to d0/d22/d25/d2e/d41/d85/dc0 22
2026-03-09T20:47:50.544 INFO:tasks.workunit.client.1.vm10.stdout:8/625: dread - d0/d22/f71 zero size
2026-03-09T20:47:50.545 INFO:tasks.workunit.client.0.vm07.stdout:2/626: rename d2/f2c to d2/d46/d6e/dbe/d96/fc3 0
2026-03-09T20:47:50.552 INFO:tasks.workunit.client.1.vm10.stdout:5/540: mkdir d2/d58/dcf 0
2026-03-09T20:47:50.552 INFO:tasks.workunit.client.1.vm10.stdout:5/541: chown d2/d39/dbf 51 1
2026-03-09T20:47:50.563 INFO:tasks.workunit.client.1.vm10.stdout:7/589: dwrite db/d28/d2b/d36/d40/f44 [0,4194304] 0
2026-03-09T20:47:50.564 INFO:tasks.workunit.client.0.vm07.stdout:8/585: getdents d1/d3b 0
2026-03-09T20:47:50.577 INFO:tasks.workunit.client.0.vm07.stdout:7/691: unlink d3/da/db/d32/d3e/dac/f3a 0
2026-03-09T20:47:50.578 INFO:tasks.workunit.client.0.vm07.stdout:7/692: write d3/f4f [7738313,46429] 0
2026-03-09T20:47:50.579 INFO:tasks.workunit.client.1.vm10.stdout:4/540: dwrite d1/d2/d5c/d64/d6b/d81/dac/d39/f4b [4194304,4194304] 0
2026-03-09T20:47:50.591 INFO:tasks.workunit.client.1.vm10.stdout:3/557: link dc/d14/d20/d21/f96 dc/d14/d90/fba 0
2026-03-09T20:47:50.597 INFO:tasks.workunit.client.0.vm07.stdout:6/659: mkdir d8/d16/d22/d24/da0/dab/dc1/dcc 0
2026-03-09T20:47:50.600 INFO:tasks.workunit.client.1.vm10.stdout:0/567: creat d2/d4a/d58/d82/d71/d5d/fc9 x:0 0 0
2026-03-09T20:47:50.601 INFO:tasks.workunit.client.1.vm10.stdout:0/568: chown d2/d4a/d58/d82/d60/cc6 395149897 1
2026-03-09T20:47:50.602 INFO:tasks.workunit.client.0.vm07.stdout:2/627: symlink d2/db/d28/d90/da4/lc4 0
2026-03-09T20:47:50.606 INFO:tasks.workunit.client.0.vm07.stdout:5/697: mknod d5/d33/db2/de8/dee/cf3 0
2026-03-09T20:47:50.609 INFO:tasks.workunit.client.0.vm07.stdout:0/657: write d1/d82/f86 [3458430,78642] 0
2026-03-09T20:47:50.614 INFO:tasks.workunit.client.1.vm10.stdout:5/542: creat d2/d27/d75/d81/fd0 x:0 0 0
2026-03-09T20:47:50.616 INFO:tasks.workunit.client.1.vm10.stdout:2/595: link d5/f7 d5/d18/d27/d89/db6/d41/fc7 0
2026-03-09T20:47:50.617 INFO:tasks.workunit.client.0.vm07.stdout:8/586: symlink d1/d5d/d6f/d2f/d4d/d55/lbf 0
2026-03-09T20:47:50.623 INFO:tasks.workunit.client.1.vm10.stdout:4/541: dread d1/d2/f2d [0,4194304] 0
2026-03-09T20:47:50.625 INFO:tasks.workunit.client.0.vm07.stdout:1/657: link d3/d97/da1/dc5/d60/f53 d3/d14/d54/fdc 0
2026-03-09T20:47:50.625 INFO:tasks.workunit.client.0.vm07.stdout:1/658: write d3/d14/d94/f95 [3384323,86013] 0
2026-03-09T20:47:50.626 INFO:tasks.workunit.client.0.vm07.stdout:1/659: chown d3/d14/d54/d9b/lb6 22791 1
2026-03-09T20:47:50.629 INFO:tasks.workunit.client.0.vm07.stdout:7/693: rmdir d3/da/d53/db7/dde 39
2026-03-09T20:47:50.630 INFO:tasks.workunit.client.1.vm10.stdout:6/592: creat d3/da/d11/d31/fbb x:0 0 0
2026-03-09T20:47:50.631 INFO:tasks.workunit.client.0.vm07.stdout:3/634: mkdir d1/d5/d9/d2f/d3d/d71/dcc 0
2026-03-09T20:47:50.637 INFO:tasks.workunit.client.1.vm10.stdout:8/626: mknod d0/d22/d25/d2e/d41/d47/d78/cc1 0
2026-03-09T20:47:50.638 INFO:tasks.workunit.client.0.vm07.stdout:9/607: rename d4/d8/d19/d5f/d73/ca8 to d4/d16/d29/d24/d37/d44/cdc 0
2026-03-09T20:47:50.640 INFO:tasks.workunit.client.1.vm10.stdout:8/627: dread d0/d22/d25/d6c/fb8 [0,4194304] 0
2026-03-09T20:47:50.641 INFO:tasks.workunit.client.1.vm10.stdout:0/569: write d2/d9/da/d11/f42 [3120976,39861] 0
2026-03-09T20:47:50.642 INFO:tasks.workunit.client.1.vm10.stdout:0/570: read d2/d9/da/f81 [2914341,60519] 0
2026-03-09T20:47:50.643 INFO:tasks.workunit.client.1.vm10.stdout:0/571: write d2/d9/da/d11/f42 [75830,26359] 0
2026-03-09T20:47:50.646 INFO:tasks.workunit.client.1.vm10.stdout:1/588: creat d2/da/d25/d3e/fba x:0 0 0
2026-03-09T20:47:50.647 INFO:tasks.workunit.client.1.vm10.stdout:1/589: stat d2/da/d25/d46/d51/d5d/d6e/d70/f83 0
2026-03-09T20:47:50.648 INFO:tasks.workunit.client.0.vm07.stdout:2/628: symlink d2/d46/d6e/lc5 0
2026-03-09T20:47:50.649 INFO:tasks.workunit.client.1.vm10.stdout:5/543: mknod d2/d39/d89/cd1 0
2026-03-09T20:47:50.650 INFO:tasks.workunit.client.0.vm07.stdout:5/698: creat d5/df/d13/d4f/ff4 x:0 0 0
2026-03-09T20:47:50.651 INFO:tasks.workunit.client.0.vm07.stdout:5/699: write d5/df/d13/d30/fe3 [311039,72317] 0
2026-03-09T20:47:50.651 INFO:tasks.workunit.client.0.vm07.stdout:5/700: fsync d5/d50/f52 0
2026-03-09T20:47:50.654 INFO:tasks.workunit.client.0.vm07.stdout:0/658: mknod d1/d1f/dc2/cd5 0
2026-03-09T20:47:50.656 INFO:tasks.workunit.client.1.vm10.stdout:2/596: mkdir d5/d18/d27/d38/d61/dc8 0
2026-03-09T20:47:50.658 INFO:tasks.workunit.client.1.vm10.stdout:4/542: unlink d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/d4a/f98 0
2026-03-09T20:47:50.672 INFO:tasks.workunit.client.0.vm07.stdout:7/694: rmdir d3/da4 39
2026-03-09T20:47:50.672 INFO:tasks.workunit.client.1.vm10.stdout:9/622: getdents d2/d28 0
2026-03-09T20:47:50.672 INFO:tasks.workunit.client.1.vm10.stdout:9/623: dwrite d2/d28/f32 [4194304,4194304] 0
2026-03-09T20:47:50.672 INFO:tasks.workunit.client.1.vm10.stdout:9/624: write d2/d28/d47/f58 [2734713,59403] 0
2026-03-09T20:47:50.675 INFO:tasks.workunit.client.0.vm07.stdout:1/660: sync
2026-03-09T20:47:50.675 INFO:tasks.workunit.client.0.vm07.stdout:1/661: stat d3/d23/d67/l8b 0
2026-03-09T20:47:50.676 INFO:tasks.workunit.client.0.vm07.stdout:1/662: chown d3/d14/d94/f95 191 1
2026-03-09T20:47:50.676 INFO:tasks.workunit.client.0.vm07.stdout:1/663: readlink d3/d97/da1/dc5/d60/lac 0
2026-03-09T20:47:50.677 INFO:tasks.workunit.client.0.vm07.stdout:1/664: write d3/fc [2851450,15450] 0
2026-03-09T20:47:50.678 INFO:tasks.workunit.client.1.vm10.stdout:4/543: dread d1/d2/d5c/d64/d6b/d81/dac/f29 [0,4194304] 0
2026-03-09T20:47:50.681 INFO:tasks.workunit.client.0.vm07.stdout:6/660: mkdir d8/d16/dcd 0
2026-03-09T20:47:50.687 INFO:tasks.workunit.client.1.vm10.stdout:1/590: mknod d2/da/d25/d46/cbb 0
2026-03-09T20:47:50.688 INFO:tasks.workunit.client.0.vm07.stdout:5/701: truncate d5/d50/f61 3125413 0
2026-03-09T20:47:50.688 INFO:tasks.workunit.client.1.vm10.stdout:5/544: mkdir d2/d39/d4b/d7a/dd2 0
2026-03-09T20:47:50.689 INFO:tasks.workunit.client.1.vm10.stdout:1/591: fsync d2/da/d25/d3e/f41 0
2026-03-09T20:47:50.689 INFO:tasks.workunit.client.0.vm07.stdout:0/659: dread - d1/f9c zero size
2026-03-09T20:47:50.690 INFO:tasks.workunit.client.1.vm10.stdout:2/597: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/fc9 x:0 0 0
2026-03-09T20:47:50.693 INFO:tasks.workunit.client.0.vm07.stdout:8/587: dwrite d1/dc/d16/dad/d87/f97 [0,4194304] 0
2026-03-09T20:47:50.693 INFO:tasks.workunit.client.1.vm10.stdout:3/558: creat dc/fbb x:0 0 0
2026-03-09T20:47:50.694 INFO:tasks.workunit.client.1.vm10.stdout:3/559: write dc/d14/d26/f6f [1002164,27712] 0
2026-03-09T20:47:50.694 INFO:tasks.workunit.client.0.vm07.stdout:8/588: chown d1/ca5 166191017 1
2026-03-09T20:47:50.697 INFO:tasks.workunit.client.0.vm07.stdout:1/665: mkdir d3/d97/da1/ddd 0
2026-03-09T20:47:50.698 INFO:tasks.workunit.client.1.vm10.stdout:9/625: chown d2/d3/l9 18 1
2026-03-09T20:47:50.698 INFO:tasks.workunit.client.1.vm10.stdout:9/626: stat d2/d28/d47/d50/l57 0
2026-03-09T20:47:50.699 INFO:tasks.workunit.client.1.vm10.stdout:0/572: truncate d2/d4a/f5a 24615 0
2026-03-09T20:47:50.699 INFO:tasks.workunit.client.0.vm07.stdout:2/629: truncate d2/d11/f38 1267012 0
2026-03-09T20:47:50.700 INFO:tasks.workunit.client.0.vm07.stdout:5/702: unlink d5/d33/l68 0
2026-03-09T20:47:50.705 INFO:tasks.workunit.client.0.vm07.stdout:0/660: unlink d1/d2/d33/cab 0
2026-03-09T20:47:50.706 INFO:tasks.workunit.client.0.vm07.stdout:7/695: rename d3/da/db/d32/d3e/dac/d1f/c39 to d3/da/db/d32/d3e/dac/d43/d62/ce7 0
2026-03-09T20:47:50.707 INFO:tasks.workunit.client.0.vm07.stdout:7/696: write d3/f3f [4596141,120315] 0
2026-03-09T20:47:50.710 INFO:tasks.workunit.client.0.vm07.stdout:8/589: rmdir d1/dc/d16 39
2026-03-09T20:47:50.711 INFO:tasks.workunit.client.0.vm07.stdout:1/666: rmdir d3/d66/d86 39
2026-03-09T20:47:50.712 INFO:tasks.workunit.client.0.vm07.stdout:1/667: readlink d3/d23/l64 0
2026-03-09T20:47:50.712 INFO:tasks.workunit.client.0.vm07.stdout:6/661: truncate d8/f46 1502078 0
2026-03-09T20:47:50.713 INFO:tasks.workunit.client.0.vm07.stdout:5/703: write d5/df/d13/d3e/d5e/fd5 [517992,77739] 0
2026-03-09T20:47:50.719 INFO:tasks.workunit.client.0.vm07.stdout:6/662: fdatasync d8/f12 0
2026-03-09T20:47:50.723 INFO:tasks.workunit.client.0.vm07.stdout:1/668: getdents d3/d14/d54/d6e/dc0 0
2026-03-09T20:47:50.726 INFO:tasks.workunit.client.1.vm10.stdout:8/628: write d0/f97 [1027103,82655] 0
2026-03-09T20:47:50.726 INFO:tasks.workunit.client.0.vm07.stdout:7/697: dread d3/da/db/d32/d3e/dac/f92 [0,4194304] 0
2026-03-09T20:47:50.726 INFO:tasks.workunit.client.0.vm07.stdout:3/635: write d1/d5/d9/d11/d60/f89 [5234512,16161] 0
2026-03-09T20:47:50.728 INFO:tasks.workunit.client.0.vm07.stdout:9/608: write d4/d8/dc/d15/f30 [3706074,125716] 0
2026-03-09T20:47:50.732 INFO:tasks.workunit.client.0.vm07.stdout:8/590: sync
2026-03-09T20:47:50.738 INFO:tasks.workunit.client.0.vm07.stdout:5/704: unlink d5/d19/f2c 0
2026-03-09T20:47:50.741 INFO:tasks.workunit.client.1.vm10.stdout:8/629: dread d0/d22/d2c/f32 [0,4194304] 0
2026-03-09T20:47:50.742 INFO:tasks.workunit.client.1.vm10.stdout:8/630: chown d0/d22/d2c/c3c 0 1
2026-03-09T20:47:50.749 INFO:tasks.workunit.client.0.vm07.stdout:9/609: dread d4/d8/d19/f42 [0,4194304] 0
2026-03-09T20:47:50.758 INFO:tasks.workunit.client.0.vm07.stdout:4/571: dwrite d2/df/f6b [0,4194304] 0
2026-03-09T20:47:50.760 INFO:tasks.workunit.client.0.vm07.stdout:4/572: truncate d2/d55/d5d/d3f/d4a/f99 1848685 0
2026-03-09T20:47:50.761 INFO:tasks.workunit.client.0.vm07.stdout:4/573: dread - d2/df/f97 zero size
2026-03-09T20:47:50.777 INFO:tasks.workunit.client.0.vm07.stdout:2/630: rename d2/f7b to d2/db/d49/fc6 0
2026-03-09T20:47:50.779 INFO:tasks.workunit.client.1.vm10.stdout:7/590: getdents db/d46 0
2026-03-09T20:47:50.780 INFO:tasks.workunit.client.0.vm07.stdout:7/698: fdatasync d3/da/db/d32/d3e/dac/fb9 0
2026-03-09T20:47:50.780 INFO:tasks.workunit.client.0.vm07.stdout:2/631: dwrite d2/db/d1c/f9d [0,4194304] 0
2026-03-09T20:47:50.783 INFO:tasks.workunit.client.0.vm07.stdout:3/636: stat d1/d5/d9/d11/l50 0
2026-03-09T20:47:50.789 INFO:tasks.workunit.client.0.vm07.stdout:8/591: creat d1/d5d/d6f/fc0 x:0 0 0
2026-03-09T20:47:50.789 INFO:tasks.workunit.client.0.vm07.stdout:8/592: chown d1/d5d/d6f/d2f/d53/f5f 147 1
2026-03-09T20:47:50.790 INFO:tasks.workunit.client.0.vm07.stdout:6/663: symlink d8/d16/dcd/lce 0
2026-03-09T20:47:50.793 INFO:tasks.workunit.client.0.vm07.stdout:9/610: mkdir d4/d8/d19/d89/da7/ddd 0
2026-03-09T20:47:50.795 INFO:tasks.workunit.client.1.vm10.stdout:9/627: symlink d2/d28/d47/d50/ld6 0
2026-03-09T20:47:50.803 INFO:tasks.workunit.client.0.vm07.stdout:0/661: getdents d1/d1f/d9f 0
2026-03-09T20:47:50.803 INFO:tasks.workunit.client.0.vm07.stdout:0/662: dread d1/d2/dc/d17/f3c [0,4194304] 0
2026-03-09T20:47:50.803 INFO:tasks.workunit.client.0.vm07.stdout:4/574: rename d2/f2b to d2/df/d59/d8a/f9b 0
2026-03-09T20:47:50.803 INFO:tasks.workunit.client.0.vm07.stdout:1/669: symlink d3/d97/da1/dd7/lde 0
2026-03-09T20:47:50.803 INFO:tasks.workunit.client.1.vm10.stdout:0/573: mkdir d2/d4a/d58/d82/d71/dca 0
2026-03-09T20:47:50.803 INFO:tasks.workunit.client.1.vm10.stdout:0/574: chown d2/d4a/d79/l8b 1135 1
2026-03-09T20:47:50.803 INFO:tasks.workunit.client.1.vm10.stdout:8/631: fsync d0/d22/d2c/f3f 0
2026-03-09T20:47:50.803 INFO:tasks.workunit.client.1.vm10.stdout:6/593: getdents d3/d30/d7f/d4a 0
2026-03-09T20:47:50.804 INFO:tasks.workunit.client.0.vm07.stdout:2/632: mkdir d2/db/d28/d5c/dc7 0
2026-03-09T20:47:50.807 INFO:tasks.workunit.client.1.vm10.stdout:4/544: creat d1/d2/d5c/d64/d6b/d81/dac/fb1 x:0 0 0
2026-03-09T20:47:50.808 INFO:tasks.workunit.client.1.vm10.stdout:4/545: chown d1/d2/d3/c8c 29811887 1
2026-03-09T20:47:50.809 INFO:tasks.workunit.client.1.vm10.stdout:4/546: dread - d1/d2/d5c/d64/d6b/d81/dac/d39/f97 zero size
2026-03-09T20:47:50.810 INFO:tasks.workunit.client.1.vm10.stdout:7/591: sync
2026-03-09T20:47:50.811 INFO:tasks.workunit.client.1.vm10.stdout:3/560: dread f6 [0,4194304] 0
2026-03-09T20:47:50.811 INFO:tasks.workunit.client.1.vm10.stdout:4/547: write d1/d2/d5c/d64/d6b/d81/dac/d1b/f8b [2197753,81329] 0
2026-03-09T20:47:50.811 INFO:tasks.workunit.client.1.vm10.stdout:7/592: sync 2026-03-09T20:47:50.812 INFO:tasks.workunit.client.1.vm10.stdout:7/593: dread - db/d46/d89/fb6 zero size 2026-03-09T20:47:50.812 INFO:tasks.workunit.client.0.vm07.stdout:8/593: mkdir d1/d5d/d6f/d2f/d4d/d95/dc1 0 2026-03-09T20:47:50.816 INFO:tasks.workunit.client.1.vm10.stdout:9/628: symlink d2/db8/ld7 0 2026-03-09T20:47:50.817 INFO:tasks.workunit.client.1.vm10.stdout:0/575: mkdir d2/d4a/d58/d82/d71/d5d/dcb 0 2026-03-09T20:47:50.817 INFO:tasks.workunit.client.1.vm10.stdout:5/545: link d2/d39/dbf/d69/d96/ca0 d2/d1b/d54/d7b/cd3 0 2026-03-09T20:47:50.819 INFO:tasks.workunit.client.0.vm07.stdout:4/575: creat d2/d1f/f9c x:0 0 0 2026-03-09T20:47:50.821 INFO:tasks.workunit.client.0.vm07.stdout:6/664: dread d8/d16/db4/d85/f53 [0,4194304] 0 2026-03-09T20:47:50.829 INFO:tasks.workunit.client.1.vm10.stdout:1/592: dwrite d2/da/d25/d3e/d42/f86 [0,4194304] 0 2026-03-09T20:47:50.829 INFO:tasks.workunit.client.1.vm10.stdout:1/593: readlink d2/da/d25/l37 0 2026-03-09T20:47:50.830 INFO:tasks.workunit.client.0.vm07.stdout:5/705: dwrite d5/d33/fb6 [0,4194304] 0 2026-03-09T20:47:50.831 INFO:tasks.workunit.client.1.vm10.stdout:2/598: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d94/f9b [123424,71762] 0 2026-03-09T20:47:50.832 INFO:tasks.workunit.client.0.vm07.stdout:2/633: truncate d2/d11/f50 4653435 0 2026-03-09T20:47:50.832 INFO:tasks.workunit.client.1.vm10.stdout:2/599: chown d5/d18/d27/d89/cbe 4081773 1 2026-03-09T20:47:50.836 INFO:tasks.workunit.client.1.vm10.stdout:6/594: dread d3/d30/d7f/f18 [0,4194304] 0 2026-03-09T20:47:50.840 INFO:tasks.workunit.client.1.vm10.stdout:3/561: creat dc/d14/d26/d29/d40/d8c/fbc x:0 0 0 2026-03-09T20:47:50.840 INFO:tasks.workunit.client.1.vm10.stdout:3/562: chown dc/fb9 3421 1 2026-03-09T20:47:50.841 INFO:tasks.workunit.client.0.vm07.stdout:8/594: chown d1/dc/d16/f70 5228432 1 2026-03-09T20:47:50.841 INFO:tasks.workunit.client.1.vm10.stdout:9/629: unlink d2/d12/l21 0 
2026-03-09T20:47:50.842 INFO:tasks.workunit.client.0.vm07.stdout:9/611: link d4/d11/d23/f2f d4/d8/dc/d15/fde 0 2026-03-09T20:47:50.842 INFO:tasks.workunit.client.1.vm10.stdout:9/630: write d2/d28/d47/d6a/fc3 [59108,120189] 0 2026-03-09T20:47:50.843 INFO:tasks.workunit.client.0.vm07.stdout:9/612: stat d4/d16/d29/d24/d37/d44/d62/d8e/dd4 0 2026-03-09T20:47:50.850 INFO:tasks.workunit.client.1.vm10.stdout:8/632: creat d0/d22/d2f/d38/d64/d7f/fc2 x:0 0 0 2026-03-09T20:47:50.851 INFO:tasks.workunit.client.1.vm10.stdout:5/546: mknod d2/d39/cd4 0 2026-03-09T20:47:50.852 INFO:tasks.workunit.client.1.vm10.stdout:1/594: fsync d2/f3c 0 2026-03-09T20:47:50.853 INFO:tasks.workunit.client.0.vm07.stdout:4/576: mkdir d2/df/d59/d8a/d9d 0 2026-03-09T20:47:50.857 INFO:tasks.workunit.client.1.vm10.stdout:7/594: rename db/d28/d2b/d36/l17 to db/d21/d60/d78/lba 0 2026-03-09T20:47:50.857 INFO:tasks.workunit.client.1.vm10.stdout:7/595: chown db/d21/d60/d87/f98 225177166 1 2026-03-09T20:47:50.860 INFO:tasks.workunit.client.0.vm07.stdout:8/595: creat d1/d5d/d6f/d80/fc2 x:0 0 0 2026-03-09T20:47:50.860 INFO:tasks.workunit.client.1.vm10.stdout:0/576: symlink d2/d9/da/d11/d92/dc1/lcc 0 2026-03-09T20:47:50.861 INFO:tasks.workunit.client.0.vm07.stdout:9/613: symlink d4/d16/d29/d24/d37/d44/d62/d8e/ldf 0 2026-03-09T20:47:50.861 INFO:tasks.workunit.client.1.vm10.stdout:8/633: rmdir d0/d22/d25/d2e/d41/d47/d78 39 2026-03-09T20:47:50.861 INFO:tasks.workunit.client.0.vm07.stdout:9/614: dread - d4/d8/dc/dbb/db6/fc6 zero size 2026-03-09T20:47:50.862 INFO:tasks.workunit.client.1.vm10.stdout:8/634: stat d0/d22/d25/d2e/d41/d47/d63/f8c 0 2026-03-09T20:47:50.862 INFO:tasks.workunit.client.1.vm10.stdout:8/635: stat d0/d22/d25/fbf 0 2026-03-09T20:47:50.863 INFO:tasks.workunit.client.1.vm10.stdout:8/636: fsync d0/d22/d25/d6c/fb1 0 2026-03-09T20:47:50.863 INFO:tasks.workunit.client.0.vm07.stdout:0/663: creat d1/d2/dc/fd6 x:0 0 0 2026-03-09T20:47:50.866 INFO:tasks.workunit.client.1.vm10.stdout:9/631: dread 
d2/d28/d47/d50/f75 [0,4194304] 0 2026-03-09T20:47:50.866 INFO:tasks.workunit.client.1.vm10.stdout:1/595: fsync d2/da/d25/d3e/d42/f8d 0 2026-03-09T20:47:50.868 INFO:tasks.workunit.client.0.vm07.stdout:2/634: mkdir d2/dc8 0 2026-03-09T20:47:50.868 INFO:tasks.workunit.client.1.vm10.stdout:2/600: unlink d5/fd 0 2026-03-09T20:47:50.869 INFO:tasks.workunit.client.0.vm07.stdout:9/615: creat d4/d16/d29/d24/d37/d44/d62/d8e/fe0 x:0 0 0 2026-03-09T20:47:50.872 INFO:tasks.workunit.client.0.vm07.stdout:7/699: write d3/da/db/d32/d3e/dac/d1f/faa [251642,61687] 0 2026-03-09T20:47:50.873 INFO:tasks.workunit.client.0.vm07.stdout:0/664: mknod d1/d1f/d30/cd7 0 2026-03-09T20:47:50.873 INFO:tasks.workunit.client.0.vm07.stdout:0/665: write d1/d82/f86 [1736102,86389] 0 2026-03-09T20:47:50.878 INFO:tasks.workunit.client.0.vm07.stdout:2/635: creat d2/db/d1c/d4a/d88/fc9 x:0 0 0 2026-03-09T20:47:50.886 INFO:tasks.workunit.client.1.vm10.stdout:9/632: fdatasync d2/d3/de/d8f/f9d 0 2026-03-09T20:47:50.887 INFO:tasks.workunit.client.1.vm10.stdout:2/601: dread - d5/d18/d27/d38/d61/f99 zero size 2026-03-09T20:47:50.887 INFO:tasks.workunit.client.1.vm10.stdout:8/637: sync 2026-03-09T20:47:50.887 INFO:tasks.workunit.client.0.vm07.stdout:2/636: chown d2/db/d1c/d8d 1 1 2026-03-09T20:47:50.887 INFO:tasks.workunit.client.0.vm07.stdout:7/700: stat d3/da/db/d32/d3e/cad 0 2026-03-09T20:47:50.887 INFO:tasks.workunit.client.0.vm07.stdout:9/616: symlink d4/d8/le1 0 2026-03-09T20:47:50.887 INFO:tasks.workunit.client.0.vm07.stdout:7/701: creat d3/da/db/fe8 x:0 0 0 2026-03-09T20:47:50.887 INFO:tasks.workunit.client.0.vm07.stdout:0/666: symlink d1/d2/dc/ld8 0 2026-03-09T20:47:50.888 INFO:tasks.workunit.client.0.vm07.stdout:8/596: sync 2026-03-09T20:47:50.888 INFO:tasks.workunit.client.0.vm07.stdout:8/597: read - d1/d3b/f9a zero size 2026-03-09T20:47:50.889 INFO:tasks.workunit.client.0.vm07.stdout:8/598: readlink d1/d5d/d6f/d2f/la3 0 2026-03-09T20:47:50.890 INFO:tasks.workunit.client.0.vm07.stdout:9/617: fdatasync 
d4/d16/f41 0 2026-03-09T20:47:50.892 INFO:tasks.workunit.client.0.vm07.stdout:7/702: rename d3/da/db/l20 to d3/d58/d77/de3/le9 0 2026-03-09T20:47:50.893 INFO:tasks.workunit.client.0.vm07.stdout:7/703: chown d3/d58/dc1/fc8 21582183 1 2026-03-09T20:47:50.895 INFO:tasks.workunit.client.0.vm07.stdout:2/637: mknod d2/da7/db4/cca 0 2026-03-09T20:47:50.897 INFO:tasks.workunit.client.0.vm07.stdout:9/618: truncate d4/d8/dc/dbb/f96 93140 0 2026-03-09T20:47:50.897 INFO:tasks.workunit.client.1.vm10.stdout:8/638: creat d0/d22/d2f/d38/d64/fc3 x:0 0 0 2026-03-09T20:47:50.897 INFO:tasks.workunit.client.1.vm10.stdout:5/547: link d2/d27/la7 d2/d39/d4b/d7a/dd2/ld5 0 2026-03-09T20:47:50.898 INFO:tasks.workunit.client.0.vm07.stdout:7/704: rmdir d3/da/db/d32/d3e/dac/d43/d62 39 2026-03-09T20:47:50.899 INFO:tasks.workunit.client.0.vm07.stdout:0/667: unlink d1/d2/d33/d35/lad 0 2026-03-09T20:47:50.900 INFO:tasks.workunit.client.1.vm10.stdout:7/596: getdents db/d21/d26/d72 0 2026-03-09T20:47:50.901 INFO:tasks.workunit.client.1.vm10.stdout:7/597: write db/d21/f9c [1085986,82261] 0 2026-03-09T20:47:50.902 INFO:tasks.workunit.client.0.vm07.stdout:0/668: dread d1/d2/dc/d80/fbe [0,4194304] 0 2026-03-09T20:47:50.903 INFO:tasks.workunit.client.0.vm07.stdout:8/599: symlink d1/d5d/d6f/d2f/d4d/d95/dc1/lc3 0 2026-03-09T20:47:50.907 INFO:tasks.workunit.client.0.vm07.stdout:9/619: creat d4/d16/d29/d24/d37/d44/d62/d8e/fe2 x:0 0 0 2026-03-09T20:47:50.908 INFO:tasks.workunit.client.0.vm07.stdout:3/637: write d1/d5/d9/d11/f73 [4558501,65071] 0 2026-03-09T20:47:50.913 INFO:tasks.workunit.client.0.vm07.stdout:2/638: dread d2/db/f41 [0,4194304] 0 2026-03-09T20:47:50.914 INFO:tasks.workunit.client.0.vm07.stdout:3/638: truncate d1/d5/d9/daf/d9f/fcb 60329 0 2026-03-09T20:47:50.915 INFO:tasks.workunit.client.0.vm07.stdout:1/670: write d3/f34 [777130,116350] 0 2026-03-09T20:47:50.915 INFO:tasks.workunit.client.1.vm10.stdout:4/548: write d1/f26 [2917964,99744] 0 2026-03-09T20:47:50.916 
INFO:tasks.workunit.client.1.vm10.stdout:4/549: stat d1/d2/d3/d70/d78/la3 0 2026-03-09T20:47:50.928 INFO:tasks.workunit.client.0.vm07.stdout:6/665: dwrite d8/d16/db4/d85/f2f [0,4194304] 0 2026-03-09T20:47:50.928 INFO:tasks.workunit.client.1.vm10.stdout:6/595: dwrite d3/da/d11/d26/d5b/f48 [0,4194304] 0 2026-03-09T20:47:50.935 INFO:tasks.workunit.client.0.vm07.stdout:5/706: dwrite d5/d33/d75/fa8 [0,4194304] 0 2026-03-09T20:47:50.948 INFO:tasks.workunit.client.0.vm07.stdout:4/577: truncate d2/f69 378991 0 2026-03-09T20:47:50.948 INFO:tasks.workunit.client.1.vm10.stdout:1/596: getdents d2 0 2026-03-09T20:47:50.950 INFO:tasks.workunit.client.1.vm10.stdout:4/550: dread d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/f5a [0,4194304] 0 2026-03-09T20:47:50.951 INFO:tasks.workunit.client.1.vm10.stdout:3/563: truncate dc/d14/d20/d21/d3b/f4f 6544472 0 2026-03-09T20:47:50.951 INFO:tasks.workunit.client.0.vm07.stdout:8/600: truncate d1/f13 2565353 0 2026-03-09T20:47:50.954 INFO:tasks.workunit.client.1.vm10.stdout:0/577: dwrite d2/d4a/d58/f7a [0,4194304] 0 2026-03-09T20:47:50.958 INFO:tasks.workunit.client.1.vm10.stdout:2/602: link d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/c97 d5/d18/d27/d5f/cca 0 2026-03-09T20:47:50.959 INFO:tasks.workunit.client.1.vm10.stdout:8/639: creat d0/d22/d25/d8f/fc4 x:0 0 0 2026-03-09T20:47:50.962 INFO:tasks.workunit.client.1.vm10.stdout:6/596: creat d3/da/d11/d31/d4c/d60/fbc x:0 0 0 2026-03-09T20:47:50.973 INFO:tasks.workunit.client.0.vm07.stdout:4/578: rename d2/f21 to d2/d55/d5d/d3f/d4a/d4b/d52/f9e 0 2026-03-09T20:47:50.973 INFO:tasks.workunit.client.1.vm10.stdout:9/633: write d2/d3/de/f42 [2804435,101750] 0 2026-03-09T20:47:50.973 INFO:tasks.workunit.client.1.vm10.stdout:6/597: dread d3/da/d11/d89/fb0 [0,4194304] 0 2026-03-09T20:47:50.979 INFO:tasks.workunit.client.0.vm07.stdout:9/620: mknod d4/d16/d29/d24/d37/d8d/dcc/ce3 0 2026-03-09T20:47:50.982 INFO:tasks.workunit.client.0.vm07.stdout:1/671: dread d3/d97/da1/dc5/d90/f93 [0,4194304] 0 
2026-03-09T20:47:50.983 INFO:tasks.workunit.client.1.vm10.stdout:2/603: mknod d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/ccb 0 2026-03-09T20:47:50.984 INFO:tasks.workunit.client.1.vm10.stdout:8/640: fsync d0/f14 0 2026-03-09T20:47:50.984 INFO:tasks.workunit.client.0.vm07.stdout:3/639: mkdir d1/d5/dcd 0 2026-03-09T20:47:50.989 INFO:tasks.workunit.client.0.vm07.stdout:6/666: symlink d8/d16/lcf 0 2026-03-09T20:47:50.991 INFO:tasks.workunit.client.1.vm10.stdout:5/548: creat d2/fd6 x:0 0 0 2026-03-09T20:47:50.996 INFO:tasks.workunit.client.0.vm07.stdout:2/639: dread d2/d11/d56/f5a [0,4194304] 0 2026-03-09T20:47:50.996 INFO:tasks.workunit.client.0.vm07.stdout:5/707: mknod d5/df/d13/d6c/db1/dcc/cf5 0 2026-03-09T20:47:51.003 INFO:tasks.workunit.client.1.vm10.stdout:5/549: dread d2/d27/f2d [0,4194304] 0 2026-03-09T20:47:51.003 INFO:tasks.workunit.client.1.vm10.stdout:5/550: read - d2/d39/dbf/d63/fbe zero size 2026-03-09T20:47:51.004 INFO:tasks.workunit.client.0.vm07.stdout:2/640: dread d2/db/d28/f58 [0,4194304] 0 2026-03-09T20:47:51.005 INFO:tasks.workunit.client.0.vm07.stdout:2/641: chown d2/db/d49/f6b 7 1 2026-03-09T20:47:51.007 INFO:tasks.workunit.client.1.vm10.stdout:1/597: truncate d2/da/d25/d3e/f58 697051 0 2026-03-09T20:47:51.010 INFO:tasks.workunit.client.0.vm07.stdout:7/705: creat d3/da/db/d32/d3e/d5c/fea x:0 0 0 2026-03-09T20:47:51.010 INFO:tasks.workunit.client.0.vm07.stdout:7/706: chown d3/d58/d82/la7 0 1 2026-03-09T20:47:51.013 INFO:tasks.workunit.client.1.vm10.stdout:6/598: rmdir d3/d30/d7f/d24 39 2026-03-09T20:47:51.016 INFO:tasks.workunit.client.0.vm07.stdout:4/579: unlink d2/d55/d5d/d3f/d4a/d85/f8e 0 2026-03-09T20:47:51.018 INFO:tasks.workunit.client.0.vm07.stdout:8/601: mknod d1/dc/d6a/cc4 0 2026-03-09T20:47:51.018 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:51.018 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: 
from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:51.018 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:51.018 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:47:51.018 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:51.018 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:51.018 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mgr fail", "who": "vm10.byqahe"}]: dispatch 2026-03-09T20:47:51.018 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mgr fail", "who": "vm10.byqahe"}]: dispatch 2026-03-09T20:47:51.018 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: Activating manager daemon vm07.xjrvch 2026-03-09T20:47:51.018 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd='[{"prefix": "mgr fail", "who": "vm10.byqahe"}]': finished 2026-03-09T20:47:51.019 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: osdmap e45: 6 total, 6 up, 6 in 2026-03-09T20:47:51.019 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:50 vm10.local ceph-mon[57011]: 
mgrmap e26: vm07.xjrvch(active, starting, since 0.00736287s) 2026-03-09T20:47:51.019 INFO:tasks.workunit.client.1.vm10.stdout:6/599: sync 2026-03-09T20:47:51.020 INFO:tasks.workunit.client.1.vm10.stdout:3/564: symlink dc/d9e/lbd 0 2026-03-09T20:47:51.021 INFO:tasks.workunit.client.0.vm07.stdout:9/621: mkdir d4/d8/d59/de4 0 2026-03-09T20:47:51.022 INFO:tasks.workunit.client.0.vm07.stdout:9/622: chown d4/d16/d29/d24/d37/d44/d62/d8e 145068 1 2026-03-09T20:47:51.024 INFO:tasks.workunit.client.1.vm10.stdout:6/600: dread d3/da/d11/d31/f82 [0,4194304] 0 2026-03-09T20:47:51.025 INFO:tasks.workunit.client.1.vm10.stdout:0/578: fsync d2/d4a/d58/d82/d71/d5d/f70 0 2026-03-09T20:47:51.031 INFO:tasks.workunit.client.0.vm07.stdout:3/640: truncate d1/d5/d9/d2f/d34/f4b 1079359 0 2026-03-09T20:47:51.033 INFO:tasks.workunit.client.1.vm10.stdout:8/641: creat d0/d22/d2f/fc5 x:0 0 0 2026-03-09T20:47:51.035 INFO:tasks.workunit.client.1.vm10.stdout:2/604: mkdir d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc 0 2026-03-09T20:47:51.038 INFO:tasks.workunit.client.0.vm07.stdout:5/708: rename d5/df/d13/d4f/c53 to d5/d33/d39/d8d/dab/cf6 0 2026-03-09T20:47:51.041 INFO:tasks.workunit.client.0.vm07.stdout:2/642: dread - d2/db/d28/d90/f99 zero size 2026-03-09T20:47:51.048 INFO:tasks.workunit.client.0.vm07.stdout:0/669: write d1/d2/d4b/f61 [1044329,47649] 0 2026-03-09T20:47:51.049 INFO:tasks.workunit.client.1.vm10.stdout:7/598: dwrite db/d21/d26/f2f [0,4194304] 0 2026-03-09T20:47:51.049 INFO:tasks.workunit.client.0.vm07.stdout:7/707: creat d3/da/db/d32/d3e/d5c/dc2/feb x:0 0 0 2026-03-09T20:47:51.051 INFO:tasks.workunit.client.1.vm10.stdout:5/551: rename d2/d58/d6c/f8c to d2/d39/dbf/d63/d95/fd7 0 2026-03-09T20:47:51.052 INFO:tasks.workunit.client.1.vm10.stdout:5/552: write d2/d27/d37/f38 [5235135,19172] 0 2026-03-09T20:47:51.054 INFO:tasks.workunit.client.0.vm07.stdout:4/580: creat d2/d55/d5d/d3f/f9f x:0 0 0 2026-03-09T20:47:51.054 INFO:tasks.workunit.client.0.vm07.stdout:4/581: chown 
d2/d55/d5d/d3f/f51 22939017 1 2026-03-09T20:47:51.055 INFO:tasks.workunit.client.1.vm10.stdout:1/598: read d2/da/d25/d3e/d42/f62 [529439,88108] 0 2026-03-09T20:47:51.055 INFO:tasks.workunit.client.1.vm10.stdout:1/599: stat d2 0 2026-03-09T20:47:51.061 INFO:tasks.workunit.client.1.vm10.stdout:9/634: creat d2/d33/dcf/fd8 x:0 0 0 2026-03-09T20:47:51.065 INFO:tasks.workunit.client.1.vm10.stdout:4/551: write d1/d2/d5c/d64/d6b/d81/dac/d1c/f91 [1126384,113083] 0 2026-03-09T20:47:51.068 INFO:tasks.workunit.client.0.vm07.stdout:1/672: write d3/d14/d54/fdc [129825,113739] 0 2026-03-09T20:47:51.073 INFO:tasks.workunit.client.1.vm10.stdout:6/601: creat d3/da/d11/d26/d5b/fbd x:0 0 0 2026-03-09T20:47:51.073 INFO:tasks.workunit.client.1.vm10.stdout:6/602: dread - d3/d30/d7f/d51/f94 zero size 2026-03-09T20:47:51.073 INFO:tasks.workunit.client.0.vm07.stdout:6/667: symlink d8/db3/ld0 0 2026-03-09T20:47:51.075 INFO:tasks.workunit.client.0.vm07.stdout:5/709: mkdir d5/d19/d73/dbc/df7 0 2026-03-09T20:47:51.075 INFO:tasks.workunit.client.0.vm07.stdout:9/623: read d4/d16/d29/d24/d37/d44/d62/d74/fa6 [3356264,95197] 0 2026-03-09T20:47:51.076 INFO:tasks.workunit.client.0.vm07.stdout:9/624: stat d4/d8/d19/d89/da7 0 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: from='mgr.24439 192.168.123.110:0/2226652552' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mgr fail", "who": "vm10.byqahe"}]: dispatch 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd=[{"prefix": "mgr fail", "who": "vm10.byqahe"}]: dispatch 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: Activating manager daemon vm07.xjrvch 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: from='mgr.24439 ' entity='mgr.vm10.byqahe' cmd='[{"prefix": "mgr fail", "who": "vm10.byqahe"}]': finished 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: osdmap e45: 6 total, 6 up, 6 in 2026-03-09T20:47:51.079 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:50 vm07.local ceph-mon[49120]: mgrmap e26: vm07.xjrvch(active, starting, since 0.00736287s) 2026-03-09T20:47:51.080 INFO:tasks.workunit.client.1.vm10.stdout:8/642: mkdir d0/d22/d25/d2e/d41/d85/db9/dc6 0 2026-03-09T20:47:51.086 INFO:tasks.workunit.client.0.vm07.stdout:0/670: mkdir d1/dc0/dcc/dd9 0 2026-03-09T20:47:51.086 INFO:tasks.workunit.client.0.vm07.stdout:0/671: fdatasync d1/d2/ff 0 2026-03-09T20:47:51.090 INFO:tasks.workunit.client.0.vm07.stdout:7/708: truncate d3/da/db/d32/d3e/dac/d43/f68 4076602 0 2026-03-09T20:47:51.098 
INFO:tasks.workunit.client.0.vm07.stdout:4/582: truncate d2/d55/d5d/d3f/d4a/f5e 211785 0 2026-03-09T20:47:51.101 INFO:tasks.workunit.client.1.vm10.stdout:7/599: mkdir db/d28/d4c/d6e/dbb 0 2026-03-09T20:47:51.106 INFO:tasks.workunit.client.0.vm07.stdout:8/602: symlink d1/d5d/d6f/d2f/d4d/lc5 0 2026-03-09T20:47:51.106 INFO:tasks.workunit.client.1.vm10.stdout:3/565: write dc/f87 [1294238,96966] 0 2026-03-09T20:47:51.108 INFO:tasks.workunit.client.0.vm07.stdout:3/641: rename d1/d5/d9/d2f/d99/la0 to d1/d5/dcd/lce 0 2026-03-09T20:47:51.109 INFO:tasks.workunit.client.0.vm07.stdout:3/642: read - d1/d5/d9/d2f/d3d/d71/db5/fc9 zero size 2026-03-09T20:47:51.109 INFO:tasks.workunit.client.0.vm07.stdout:6/668: creat d8/d5d/d97/fd1 x:0 0 0 2026-03-09T20:47:51.110 INFO:tasks.workunit.client.0.vm07.stdout:3/643: chown d1/d5/d9/d11/d60/f89 1 1 2026-03-09T20:47:51.110 INFO:tasks.workunit.client.0.vm07.stdout:5/710: chown d5/d50/f61 6803913 1 2026-03-09T20:47:51.111 INFO:tasks.workunit.client.0.vm07.stdout:5/711: chown d5/df/d13/d4f 32951524 1 2026-03-09T20:47:51.112 INFO:tasks.workunit.client.1.vm10.stdout:5/553: unlink d2/d39/d4b/c59 0 2026-03-09T20:47:51.113 INFO:tasks.workunit.client.1.vm10.stdout:5/554: write d2/d39/dbf/d66/fcb [884632,87251] 0 2026-03-09T20:47:51.113 INFO:tasks.workunit.client.0.vm07.stdout:9/625: rmdir d4/d8/d19/d5f/da5/db8 39 2026-03-09T20:47:51.113 INFO:tasks.workunit.client.0.vm07.stdout:9/626: readlink d4/d8/d19/l99 0 2026-03-09T20:47:51.115 INFO:tasks.workunit.client.1.vm10.stdout:4/552: rmdir d1/d2 39 2026-03-09T20:47:51.116 INFO:tasks.workunit.client.0.vm07.stdout:0/672: symlink d1/d2/dc/d80/lda 0 2026-03-09T20:47:51.119 INFO:tasks.workunit.client.1.vm10.stdout:6/603: fsync d3/d30/d33/f35 0 2026-03-09T20:47:51.120 INFO:tasks.workunit.client.1.vm10.stdout:4/553: dwrite d1/d47/f4f [0,4194304] 0 2026-03-09T20:47:51.125 INFO:tasks.workunit.client.0.vm07.stdout:4/583: creat d2/d55/d5d/d3f/d4a/d85/fa0 x:0 0 0 2026-03-09T20:47:51.125 
INFO:tasks.workunit.client.0.vm07.stdout:4/584: chown d2/f4c 333577568 1 2026-03-09T20:47:51.126 INFO:tasks.workunit.client.0.vm07.stdout:4/585: stat d2/d1f/f25 0 2026-03-09T20:47:51.130 INFO:tasks.workunit.client.0.vm07.stdout:4/586: dwrite d2/df/d17/f2a [0,4194304] 0 2026-03-09T20:47:51.131 INFO:tasks.workunit.client.1.vm10.stdout:8/643: chown d0/d22/d2f/d9d/laa 312921 1 2026-03-09T20:47:51.132 INFO:tasks.workunit.client.0.vm07.stdout:4/587: read d2/d1f/f53 [2152065,62108] 0 2026-03-09T20:47:51.135 INFO:tasks.workunit.client.1.vm10.stdout:1/600: dread d2/da/f1e [0,4194304] 0 2026-03-09T20:47:51.151 INFO:tasks.workunit.client.0.vm07.stdout:7/709: dread d3/da/db/d79/f98 [0,4194304] 0 2026-03-09T20:47:51.160 INFO:tasks.workunit.client.1.vm10.stdout:7/600: creat db/d21/fbc x:0 0 0 2026-03-09T20:47:51.168 INFO:tasks.workunit.client.1.vm10.stdout:1/601: dread d2/f1c [0,4194304] 0 2026-03-09T20:47:51.169 INFO:tasks.workunit.client.1.vm10.stdout:1/602: write d2/da/d25/d3e/d55/f9a [1512611,47196] 0 2026-03-09T20:47:51.172 INFO:tasks.workunit.client.1.vm10.stdout:0/579: dwrite d2/d4a/d58/d82/d71/d8e/d25/d34/f77 [0,4194304] 0 2026-03-09T20:47:51.190 INFO:tasks.workunit.client.1.vm10.stdout:9/635: dwrite d2/d3/de/f7c [0,4194304] 0 2026-03-09T20:47:51.192 INFO:tasks.workunit.client.1.vm10.stdout:2/605: dwrite d5/d18/d1b/f23 [0,4194304] 0 2026-03-09T20:47:51.195 INFO:tasks.workunit.client.1.vm10.stdout:8/644: unlink d0/d22/d25/d8f/fc4 0 2026-03-09T20:47:51.195 INFO:tasks.workunit.client.1.vm10.stdout:8/645: chown d0/d22/d25/f34 0 1 2026-03-09T20:47:51.201 INFO:tasks.workunit.client.1.vm10.stdout:8/646: dwrite d0/f17 [4194304,4194304] 0 2026-03-09T20:47:51.204 INFO:tasks.workunit.client.1.vm10.stdout:1/603: unlink d2/f8 0 2026-03-09T20:47:51.211 INFO:tasks.workunit.client.1.vm10.stdout:0/580: rename d2/d9/d4b to d2/d4a/d58/d82/d71/d8e/d25/db7/dcd 0 2026-03-09T20:47:51.238 INFO:tasks.workunit.client.1.vm10.stdout:6/604: fdatasync d3/d30/d7f/d24/f27 0 2026-03-09T20:47:51.240 
INFO:tasks.workunit.client.0.vm07.stdout:8/603: symlink d1/d8f/lc6 0 2026-03-09T20:47:51.241 INFO:tasks.workunit.client.1.vm10.stdout:2/606: truncate d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/f53 205725 0 2026-03-09T20:47:51.244 INFO:tasks.workunit.client.0.vm07.stdout:6/669: unlink d8/d16/d4b/l76 0 2026-03-09T20:47:51.250 INFO:tasks.workunit.client.0.vm07.stdout:3/644: dwrite d1/d5/d9/d2f/d99/fa6 [0,4194304] 0 2026-03-09T20:47:51.265 INFO:tasks.workunit.client.1.vm10.stdout:7/601: mkdir db/d28/d2b/d36/d3b/d88/dbd 0 2026-03-09T20:47:51.270 INFO:tasks.workunit.client.1.vm10.stdout:5/555: truncate d2/d39/d4b/f4e 3609904 0 2026-03-09T20:47:51.271 INFO:tasks.workunit.client.0.vm07.stdout:9/627: mkdir d4/de5 0 2026-03-09T20:47:51.272 INFO:tasks.workunit.client.0.vm07.stdout:9/628: chown d4/d8/le1 158821 1 2026-03-09T20:47:51.272 INFO:tasks.workunit.client.0.vm07.stdout:9/629: chown d4/d8/db9 4447408 1 2026-03-09T20:47:51.274 INFO:tasks.workunit.client.0.vm07.stdout:2/643: creat d2/db/d28/fcb x:0 0 0 2026-03-09T20:47:51.284 INFO:tasks.workunit.client.1.vm10.stdout:4/554: creat d1/d2/d5c/d64/d6b/d79/d92/fb2 x:0 0 0 2026-03-09T20:47:51.286 INFO:tasks.workunit.client.0.vm07.stdout:7/710: rmdir d3/da/db/d32/d3e/dac/d1f/d2b/d52 39 2026-03-09T20:47:51.289 INFO:tasks.workunit.client.1.vm10.stdout:6/605: mkdir d3/d30/d7f/d36/d6d/dbe 0 2026-03-09T20:47:51.292 INFO:tasks.workunit.client.1.vm10.stdout:5/556: sync 2026-03-09T20:47:51.293 INFO:tasks.workunit.client.0.vm07.stdout:1/673: link d3/f5c d3/d23/d67/fdf 0 2026-03-09T20:47:51.300 INFO:tasks.workunit.client.1.vm10.stdout:8/647: dwrite d0/d22/d25/d6c/f82 [0,4194304] 0 2026-03-09T20:47:51.301 INFO:tasks.workunit.client.1.vm10.stdout:7/602: rename db/d28/lb7 to db/d21/d60/d87/lbe 0 2026-03-09T20:47:51.301 INFO:tasks.workunit.client.0.vm07.stdout:5/712: creat d5/d19/d73/dbc/dc7/ff8 x:0 0 0 2026-03-09T20:47:51.308 INFO:tasks.workunit.client.1.vm10.stdout:5/557: dread d2/d39/dbf/f61 [0,4194304] 0 2026-03-09T20:47:51.309 
INFO:tasks.workunit.client.1.vm10.stdout:5/558: read d2/d1b/d54/d78/f47 [4268336,88549] 0 2026-03-09T20:47:51.313 INFO:tasks.workunit.client.0.vm07.stdout:3/645: fdatasync d1/d5/d9/d2f/d3d/d71/fb0 0 2026-03-09T20:47:51.313 INFO:tasks.workunit.client.0.vm07.stdout:3/646: dread - d1/d5/d9/d2f/d3d/d64/d43/fbf zero size 2026-03-09T20:47:51.314 INFO:tasks.workunit.client.0.vm07.stdout:0/673: write d1/f57 [1714152,37828] 0 2026-03-09T20:47:51.316 INFO:tasks.workunit.client.0.vm07.stdout:4/588: dwrite d2/d1f/f53 [0,4194304] 0 2026-03-09T20:47:51.327 INFO:tasks.workunit.client.1.vm10.stdout:0/581: creat d2/d9/db8/db4/fce x:0 0 0 2026-03-09T20:47:51.330 INFO:tasks.workunit.client.0.vm07.stdout:2/644: rmdir d2/db/d49 39 2026-03-09T20:47:51.330 INFO:tasks.workunit.client.0.vm07.stdout:2/645: readlink d2/db/l43 0 2026-03-09T20:47:51.342 INFO:tasks.workunit.client.1.vm10.stdout:2/607: write d5/d18/f67 [3464049,67181] 0 2026-03-09T20:47:51.345 INFO:tasks.workunit.client.1.vm10.stdout:3/566: link dc/d14/d26/d29/d40/da8/c5f dc/d14/d26/d29/cbe 0 2026-03-09T20:47:51.349 INFO:tasks.workunit.client.1.vm10.stdout:8/648: rmdir d0/d22/d2f/d38/d64/d7f 39 2026-03-09T20:47:51.349 INFO:tasks.workunit.client.1.vm10.stdout:8/649: fsync d0/d22/f76 0 2026-03-09T20:47:51.359 INFO:tasks.workunit.client.0.vm07.stdout:3/647: read d1/d5/d9/d2f/d3d/d71/fc4 [3562608,63550] 0 2026-03-09T20:47:51.364 INFO:tasks.workunit.client.1.vm10.stdout:7/603: rename db/d21/d60 to db/d46/d89/dbf 0 2026-03-09T20:47:51.364 INFO:tasks.workunit.client.1.vm10.stdout:7/604: chown db/d21/d26 794 1 2026-03-09T20:47:51.365 INFO:tasks.workunit.client.1.vm10.stdout:5/559: creat d2/d39/dbf/d63/fd8 x:0 0 0 2026-03-09T20:47:51.365 INFO:tasks.workunit.client.0.vm07.stdout:9/630: fdatasync d4/d11/f8a 0 2026-03-09T20:47:51.366 INFO:tasks.workunit.client.1.vm10.stdout:0/582: dread - d2/d4a/d58/d82/d71/d8e/d25/db7/dcd/d63/fad zero size 2026-03-09T20:47:51.368 INFO:tasks.workunit.client.0.vm07.stdout:2/646: fdatasync d2/db/f48 0 
2026-03-09T20:47:51.370 INFO:tasks.workunit.client.0.vm07.stdout:2/647: dread d2/db/d28/d57/f68 [0,4194304] 0 2026-03-09T20:47:51.373 INFO:tasks.workunit.client.0.vm07.stdout:8/604: write d1/dc/f4c [8902149,88934] 0 2026-03-09T20:47:51.373 INFO:tasks.workunit.client.0.vm07.stdout:5/713: write d5/df/d13/d6c/f99 [2866562,26594] 0 2026-03-09T20:47:51.375 INFO:tasks.workunit.client.1.vm10.stdout:6/606: dwrite d3/d30/d7f/d24/d39/f6c [0,4194304] 0 2026-03-09T20:47:51.378 INFO:tasks.workunit.client.0.vm07.stdout:5/714: dwrite d5/d19/d73/fea [0,4194304] 0 2026-03-09T20:47:51.397 INFO:tasks.workunit.client.0.vm07.stdout:0/674: dwrite d1/d2/d33/fb5 [0,4194304] 0 2026-03-09T20:47:51.401 INFO:tasks.workunit.client.0.vm07.stdout:0/675: dwrite d1/d2/d33/d35/fd1 [0,4194304] 0 2026-03-09T20:47:51.409 INFO:tasks.workunit.client.0.vm07.stdout:7/711: creat d3/da/d53/db7/dde/dc5/fec x:0 0 0 2026-03-09T20:47:51.410 INFO:tasks.workunit.client.0.vm07.stdout:7/712: truncate d3/da/db/d32/d3e/fd9 337196 0 2026-03-09T20:47:51.411 INFO:tasks.workunit.client.1.vm10.stdout:4/555: dwrite d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f4c [4194304,4194304] 0 2026-03-09T20:47:51.414 INFO:tasks.workunit.client.1.vm10.stdout:4/556: chown d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/l8d 141638 1 2026-03-09T20:47:51.416 INFO:tasks.workunit.client.1.vm10.stdout:9/636: getdents d2/d3/d6d/db7 0 2026-03-09T20:47:51.421 INFO:tasks.workunit.client.0.vm07.stdout:1/674: mkdir d3/d23/d55/de0 0 2026-03-09T20:47:51.429 INFO:tasks.workunit.client.0.vm07.stdout:6/670: creat d8/d26/fd2 x:0 0 0 2026-03-09T20:47:51.433 INFO:tasks.workunit.client.1.vm10.stdout:3/567: creat dc/d14/d22/fbf x:0 0 0 2026-03-09T20:47:51.434 INFO:tasks.workunit.client.1.vm10.stdout:3/568: truncate dc/d14/d26/d29/d2a/f66 5160738 0 2026-03-09T20:47:51.435 INFO:tasks.workunit.client.0.vm07.stdout:3/648: truncate d1/d5/d9/d2f/d3d/d71/fc4 1733058 0 2026-03-09T20:47:51.438 INFO:tasks.workunit.client.0.vm07.stdout:4/589: mknod d2/d55/d5d/d93/ca1 0 2026-03-09T20:47:51.439 
INFO:tasks.workunit.client.1.vm10.stdout:1/604: fsync d2/da/d25/d3e/d42/f7d 0 2026-03-09T20:47:51.446 INFO:tasks.workunit.client.0.vm07.stdout:9/631: rmdir d4/d16/d29 39 2026-03-09T20:47:51.446 INFO:tasks.workunit.client.0.vm07.stdout:9/632: stat d4/d8/dc/dbb 0 2026-03-09T20:47:51.448 INFO:tasks.workunit.client.0.vm07.stdout:2/648: dread - d2/d46/d72/fa1 zero size 2026-03-09T20:47:51.461 INFO:tasks.workunit.client.0.vm07.stdout:5/715: creat d5/d19/d73/dbc/ff9 x:0 0 0 2026-03-09T20:47:51.516 INFO:tasks.workunit.client.0.vm07.stdout:2/649: sync 2026-03-09T20:47:51.526 INFO:tasks.workunit.client.1.vm10.stdout:4/557: unlink d1/d2/d5c/d64/d6b/d81/dac/fb1 0 2026-03-09T20:47:51.526 INFO:tasks.workunit.client.1.vm10.stdout:9/637: dread d2/d3/f2e [0,4194304] 0 2026-03-09T20:47:51.530 INFO:tasks.workunit.client.0.vm07.stdout:1/675: unlink d3/d97/da1/dc5/d60/c50 0 2026-03-09T20:47:51.532 INFO:tasks.workunit.client.1.vm10.stdout:3/569: creat dc/d14/d20/d21/d3b/fc0 x:0 0 0 2026-03-09T20:47:51.536 INFO:tasks.workunit.client.1.vm10.stdout:8/650: fsync d0/d22/d2c/f96 0 2026-03-09T20:47:51.549 INFO:tasks.workunit.client.0.vm07.stdout:8/605: truncate d1/dc/fe 8333785 0 2026-03-09T20:47:51.550 INFO:tasks.workunit.client.0.vm07.stdout:8/606: chown d1/d5d/d6f/d2f/d53/l5c 13 1 2026-03-09T20:47:51.557 INFO:tasks.workunit.client.1.vm10.stdout:4/558: mkdir d1/d47/db3 0 2026-03-09T20:47:51.558 INFO:tasks.workunit.client.0.vm07.stdout:0/676: mkdir d1/d2/d33/d35/ddb 0 2026-03-09T20:47:51.559 INFO:tasks.workunit.client.1.vm10.stdout:9/638: symlink d2/d33/db1/ld9 0 2026-03-09T20:47:51.559 INFO:tasks.workunit.client.0.vm07.stdout:7/713: symlink d3/da/db/d32/d3e/led 0 2026-03-09T20:47:51.563 INFO:tasks.workunit.client.1.vm10.stdout:8/651: mknod d0/d22/d25/d6c/cc7 0 2026-03-09T20:47:51.572 INFO:tasks.workunit.client.1.vm10.stdout:1/605: chown d2/da/fa1 5660503 1 2026-03-09T20:47:51.577 INFO:tasks.workunit.client.0.vm07.stdout:9/633: unlink d4/d8/dc/c40 0 2026-03-09T20:47:51.579 
INFO:tasks.workunit.client.1.vm10.stdout:6/607: link d3/da/d11/d31/d4c/d60/l7a d3/d30/d33/lbf 0 2026-03-09T20:47:51.580 INFO:tasks.workunit.client.0.vm07.stdout:5/716: truncate d5/df/d13/d6c/fc9 909902 0 2026-03-09T20:47:51.583 INFO:tasks.workunit.client.1.vm10.stdout:9/639: dread - d2/db8/f4d zero size 2026-03-09T20:47:51.584 INFO:tasks.workunit.client.1.vm10.stdout:9/640: write d2/d28/f32 [9338907,118123] 0 2026-03-09T20:47:51.585 INFO:tasks.workunit.client.0.vm07.stdout:0/677: truncate d1/f11 5921780 0 2026-03-09T20:47:51.587 INFO:tasks.workunit.client.1.vm10.stdout:3/570: mkdir dc/d14/d20/d21/daf/dc1 0 2026-03-09T20:47:51.590 INFO:tasks.workunit.client.1.vm10.stdout:8/652: mknod d0/d22/d2f/d38/cc8 0 2026-03-09T20:47:51.592 INFO:tasks.workunit.client.1.vm10.stdout:5/560: getdents d2/d58/d6c 0 2026-03-09T20:47:51.593 INFO:tasks.workunit.client.1.vm10.stdout:5/561: chown d2/f71 1753692650 1 2026-03-09T20:47:51.596 INFO:tasks.workunit.client.0.vm07.stdout:0/678: fdatasync d1/d1f/d20/f21 0 2026-03-09T20:47:51.599 INFO:tasks.workunit.client.0.vm07.stdout:1/676: dread d3/d14/f17 [0,4194304] 0 2026-03-09T20:47:51.603 INFO:tasks.workunit.client.0.vm07.stdout:8/607: creat d1/dc/d16/dad/fc7 x:0 0 0 2026-03-09T20:47:51.606 INFO:tasks.workunit.client.1.vm10.stdout:1/606: truncate d2/da/f26 5631736 0 2026-03-09T20:47:51.608 INFO:tasks.workunit.client.1.vm10.stdout:5/562: mkdir d2/d39/d4b/d7a/dd9 0 2026-03-09T20:47:51.610 INFO:tasks.workunit.client.0.vm07.stdout:1/677: mkdir d3/d23/d52/de1 0 2026-03-09T20:47:51.610 INFO:tasks.workunit.client.0.vm07.stdout:1/678: chown d3/dc6 0 1 2026-03-09T20:47:51.611 INFO:tasks.workunit.client.0.vm07.stdout:1/679: read d3/d97/da1/dab/fb0 [371845,114747] 0 2026-03-09T20:47:51.612 INFO:tasks.workunit.client.1.vm10.stdout:9/641: creat d2/d3/de/d8f/dbc/fda x:0 0 0 2026-03-09T20:47:51.613 INFO:tasks.workunit.client.1.vm10.stdout:9/642: chown d2/d3/de/d35/fca 5154943 1 2026-03-09T20:47:51.613 INFO:tasks.workunit.client.0.vm07.stdout:9/634: link 
d4/d16/d29/f6e d4/d11/d23/d32/fe6 0 2026-03-09T20:47:51.613 INFO:tasks.workunit.client.0.vm07.stdout:9/635: stat d4/d16/d29/d9c/lce 0 2026-03-09T20:47:51.616 INFO:tasks.workunit.client.0.vm07.stdout:8/608: unlink d1/c1b 0 2026-03-09T20:47:51.618 INFO:tasks.workunit.client.1.vm10.stdout:2/608: symlink d5/d18/d1b/lcd 0 2026-03-09T20:47:51.618 INFO:tasks.workunit.client.1.vm10.stdout:2/609: stat f1 0 2026-03-09T20:47:51.619 INFO:tasks.workunit.client.0.vm07.stdout:1/680: fdatasync d3/d23/d52/f73 0 2026-03-09T20:47:51.625 INFO:tasks.workunit.client.1.vm10.stdout:6/608: dread d3/d30/d7f/d36/d5c/d8d/fac [0,4194304] 0 2026-03-09T20:47:51.625 INFO:tasks.workunit.client.1.vm10.stdout:6/609: chown d3/da/d11/d31/f82 7056 1 2026-03-09T20:47:51.627 INFO:tasks.workunit.client.0.vm07.stdout:1/681: mkdir d3/d97/da1/dab/de2 0 2026-03-09T20:47:51.629 INFO:tasks.workunit.client.1.vm10.stdout:3/571: link dc/d14/d26/d29/d40/da8/lb0 dc/d14/d26/d29/d2a/d76/lc2 0 2026-03-09T20:47:51.631 INFO:tasks.workunit.client.1.vm10.stdout:2/610: creat d5/d18/d27/db8/fce x:0 0 0 2026-03-09T20:47:51.632 INFO:tasks.workunit.client.1.vm10.stdout:2/611: dread - d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f4c zero size 2026-03-09T20:47:51.633 INFO:tasks.workunit.client.1.vm10.stdout:2/612: chown d5/d18/d27/d89/db6/d41/d77/db3 22 1 2026-03-09T20:47:51.635 INFO:tasks.workunit.client.1.vm10.stdout:5/563: dread d2/d27/d37/f38 [0,4194304] 0 2026-03-09T20:47:51.635 INFO:tasks.workunit.client.0.vm07.stdout:1/682: symlink d3/d14/d54/d9b/le3 0 2026-03-09T20:47:51.636 INFO:tasks.workunit.client.1.vm10.stdout:9/643: unlink d2/db8/la4 0 2026-03-09T20:47:51.636 INFO:tasks.workunit.client.0.vm07.stdout:1/683: stat d3/d14/lc8 0 2026-03-09T20:47:51.637 INFO:tasks.workunit.client.1.vm10.stdout:6/610: unlink d3/da/d11/d31/f81 0 2026-03-09T20:47:51.640 INFO:tasks.workunit.client.1.vm10.stdout:5/564: creat d2/d39/d4b/fda x:0 0 0 2026-03-09T20:47:51.643 INFO:tasks.workunit.client.1.vm10.stdout:6/611: unlink 
d3/da/d11/d31/d47/c72 0 2026-03-09T20:47:51.647 INFO:tasks.workunit.client.0.vm07.stdout:1/684: rmdir d3/d23/d52/de1 0 2026-03-09T20:47:51.649 INFO:tasks.workunit.client.1.vm10.stdout:3/572: mkdir dc/d14/d26/d29/d40/da8/dc3 0 2026-03-09T20:47:51.650 INFO:tasks.workunit.client.0.vm07.stdout:1/685: symlink d3/d14/d54/d9b/le4 0 2026-03-09T20:47:51.651 INFO:tasks.workunit.client.1.vm10.stdout:9/644: mkdir d2/d3/db4/ddb 0 2026-03-09T20:47:51.654 INFO:tasks.workunit.client.0.vm07.stdout:1/686: creat d3/d97/da1/ddd/fe5 x:0 0 0 2026-03-09T20:47:51.665 INFO:tasks.workunit.client.1.vm10.stdout:9/645: dread d2/d3/de/d35/f38 [0,4194304] 0 2026-03-09T20:47:51.665 INFO:tasks.workunit.client.0.vm07.stdout:6/671: dwrite d8/d16/d61/f68 [0,4194304] 0 2026-03-09T20:47:51.666 INFO:tasks.workunit.client.0.vm07.stdout:6/672: chown d8/d16/d22/d24/da0/dab/d40/d69/f78 159487192 1 2026-03-09T20:47:51.669 INFO:tasks.workunit.client.0.vm07.stdout:6/673: creat d8/d16/da3/db8/fd3 x:0 0 0 2026-03-09T20:47:51.733 INFO:tasks.workunit.client.1.vm10.stdout:7/605: dwrite db/d28/d30/f73 [0,4194304] 0 2026-03-09T20:47:51.783 INFO:tasks.workunit.client.1.vm10.stdout:0/583: write d2/d4a/d58/d82/d71/d8e/d25/db7/dcd/d63/f9c [710430,34822] 0 2026-03-09T20:47:51.783 INFO:tasks.workunit.client.0.vm07.stdout:2/650: write d2/db/d1c/f42 [501080,49629] 0 2026-03-09T20:47:51.794 INFO:tasks.workunit.client.0.vm07.stdout:2/651: unlink d2/db/d1c/d8d/fb9 0 2026-03-09T20:47:51.821 INFO:tasks.workunit.client.0.vm07.stdout:3/649: truncate d1/d5/d9/f3c 2615170 0 2026-03-09T20:47:51.821 INFO:tasks.workunit.client.0.vm07.stdout:3/650: dread - d1/fb7 zero size 2026-03-09T20:47:51.822 INFO:tasks.workunit.client.0.vm07.stdout:3/651: fdatasync d1/d5/d9/d2f/d3d/d64/f55 0 2026-03-09T20:47:51.826 INFO:tasks.workunit.client.0.vm07.stdout:3/652: mkdir d1/dcf 0 2026-03-09T20:47:51.828 INFO:tasks.workunit.client.0.vm07.stdout:4/590: write d2/d55/d5d/d3f/d4a/f5e [782863,124036] 0 2026-03-09T20:47:51.831 
INFO:tasks.workunit.client.0.vm07.stdout:9/636: symlink d4/d8/d19/le7 0 2026-03-09T20:47:51.834 INFO:tasks.workunit.client.0.vm07.stdout:5/717: dread d5/df/d13/d6c/fc9 [0,4194304] 0 2026-03-09T20:47:51.835 INFO:tasks.workunit.client.0.vm07.stdout:9/637: truncate d4/d8/d19/d89/f93 1806022 0 2026-03-09T20:47:51.836 INFO:tasks.workunit.client.0.vm07.stdout:5/718: creat d5/df/d13/d3e/d5e/ffa x:0 0 0 2026-03-09T20:47:51.838 INFO:tasks.workunit.client.0.vm07.stdout:5/719: mkdir d5/d33/db2/dfb 0 2026-03-09T20:47:51.840 INFO:tasks.workunit.client.0.vm07.stdout:5/720: mknod d5/d33/d39/d8d/cfc 0 2026-03-09T20:47:51.841 INFO:tasks.workunit.client.0.vm07.stdout:5/721: dread - d5/df/fd6 zero size 2026-03-09T20:47:51.841 INFO:tasks.workunit.client.0.vm07.stdout:5/722: readlink d5/d69/l88 0 2026-03-09T20:47:51.843 INFO:tasks.workunit.client.1.vm10.stdout:4/559: dwrite d1/d2/d5c/f48 [0,4194304] 0 2026-03-09T20:47:51.844 INFO:tasks.workunit.client.1.vm10.stdout:4/560: chown d1/d2/d3 242902439 1 2026-03-09T20:47:51.846 INFO:tasks.workunit.client.1.vm10.stdout:4/561: creat d1/d47/fb4 x:0 0 0 2026-03-09T20:47:51.849 INFO:tasks.workunit.client.0.vm07.stdout:7/714: dwrite d3/da/d53/db7/dde/f84 [0,4194304] 0 2026-03-09T20:47:51.856 INFO:tasks.workunit.client.1.vm10.stdout:4/562: dread d1/d2/d5c/d64/d6b/d81/dac/d1c/d69/f6a [0,4194304] 0 2026-03-09T20:47:51.857 INFO:tasks.workunit.client.0.vm07.stdout:7/715: chown d3/da/db/d32/d3e/dac/d1f/d2b/d52/f74 28 1 2026-03-09T20:47:51.861 INFO:tasks.workunit.client.1.vm10.stdout:4/563: fdatasync d1/d2/d5c/f9c 0 2026-03-09T20:47:51.862 INFO:tasks.workunit.client.0.vm07.stdout:7/716: mknod d3/da/db/d32/d3e/dac/cee 0 2026-03-09T20:47:51.888 INFO:tasks.workunit.client.1.vm10.stdout:1/607: write d2/da/d25/d3e/d42/f62 [352700,74589] 0 2026-03-09T20:47:51.890 INFO:tasks.workunit.client.1.vm10.stdout:1/608: stat d2/da/f88 0 2026-03-09T20:47:51.892 INFO:tasks.workunit.client.0.vm07.stdout:8/609: truncate d1/f33 1794531 0 2026-03-09T20:47:51.894 
INFO:tasks.workunit.client.0.vm07.stdout:8/610: unlink d1/dc/ca6 0 2026-03-09T20:47:51.901 INFO:tasks.workunit.client.1.vm10.stdout:2/613: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/fa7 [599139,123709] 0 2026-03-09T20:47:51.903 INFO:tasks.workunit.client.1.vm10.stdout:2/614: rmdir d5/d18/d9f/dc2 39 2026-03-09T20:47:51.904 INFO:tasks.workunit.client.0.vm07.stdout:0/679: rename d1/d2/d33/l85 to d1/dc0/dcc/ldc 0 2026-03-09T20:47:51.907 INFO:tasks.workunit.client.1.vm10.stdout:2/615: dread d5/d18/f24 [0,4194304] 0 2026-03-09T20:47:51.908 INFO:tasks.workunit.client.0.vm07.stdout:4/591: rename d2/d55/c66 to d2/d55/d5d/d93/ca2 0 2026-03-09T20:47:51.908 INFO:tasks.workunit.client.0.vm07.stdout:0/680: symlink d1/d1f/dc2/ldd 0 2026-03-09T20:47:51.911 INFO:tasks.workunit.client.0.vm07.stdout:4/592: stat d2/d55/d5d/d93 0 2026-03-09T20:47:51.912 INFO:tasks.workunit.client.0.vm07.stdout:4/593: creat d2/d55/d5d/d3f/fa3 x:0 0 0 2026-03-09T20:47:51.913 INFO:tasks.workunit.client.0.vm07.stdout:4/594: fsync d2/df/d17/f2a 0 2026-03-09T20:47:51.915 INFO:tasks.workunit.client.0.vm07.stdout:4/595: getdents d2/d55/d8b 0 2026-03-09T20:47:51.919 INFO:tasks.workunit.client.0.vm07.stdout:4/596: mknod d2/d55/d5d/d3f/d4a/d4b/d52/d5c/d90/ca4 0 2026-03-09T20:47:51.920 INFO:tasks.workunit.client.0.vm07.stdout:4/597: creat d2/d1f/fa5 x:0 0 0 2026-03-09T20:47:51.922 INFO:tasks.workunit.client.0.vm07.stdout:4/598: creat d2/d55/d5d/d86/fa6 x:0 0 0 2026-03-09T20:47:51.929 INFO:tasks.workunit.client.1.vm10.stdout:6/612: dwrite d3/da/d11/d89/fb0 [0,4194304] 0 2026-03-09T20:47:51.951 INFO:tasks.workunit.client.1.vm10.stdout:3/573: dwrite dc/d14/d90/fba [0,4194304] 0 2026-03-09T20:47:51.951 INFO:tasks.workunit.client.1.vm10.stdout:3/574: truncate dc/d14/d26/d29/d40/d8c/fbc 621602 0 2026-03-09T20:47:51.951 INFO:tasks.workunit.client.0.vm07.stdout:1/687: dwrite d3/d23/f2c [0,4194304] 0 2026-03-09T20:47:51.951 INFO:tasks.workunit.client.0.vm07.stdout:1/688: write d3/d14/d54/fdc [917920,94976] 0 
2026-03-09T20:47:51.951 INFO:tasks.workunit.client.0.vm07.stdout:1/689: chown d3/f24 924 1 2026-03-09T20:47:51.951 INFO:tasks.workunit.client.0.vm07.stdout:6/674: write d8/d16/d61/f68 [5097097,123808] 0 2026-03-09T20:47:51.951 INFO:tasks.workunit.client.0.vm07.stdout:1/690: chown d3/d14/l2a 559332 1 2026-03-09T20:47:51.951 INFO:tasks.workunit.client.0.vm07.stdout:6/675: creat d8/db3/fd4 x:0 0 0 2026-03-09T20:47:51.954 INFO:tasks.workunit.client.0.vm07.stdout:1/691: truncate d3/d14/d54/f32 1293259 0 2026-03-09T20:47:51.954 INFO:tasks.workunit.client.0.vm07.stdout:1/692: chown d3/d23/d67/f69 7667 1 2026-03-09T20:47:51.956 INFO:tasks.workunit.client.1.vm10.stdout:9/646: write d2/d28/d47/d6a/f7f [209806,60151] 0 2026-03-09T20:47:51.962 INFO:tasks.workunit.client.0.vm07.stdout:1/693: rename d3/d14/d54/d3e/f4a to d3/d97/da1/dc5/d60/d9f/dd0/fe6 0 2026-03-09T20:47:51.963 INFO:tasks.workunit.client.0.vm07.stdout:6/676: dread d8/d16/db4/d85/f5a [4194304,4194304] 0 2026-03-09T20:47:51.966 INFO:tasks.workunit.client.1.vm10.stdout:9/647: unlink d2/d28/d47/l71 0 2026-03-09T20:47:51.967 INFO:tasks.workunit.client.1.vm10.stdout:6/613: link d3/d30/d7f/d51/ca0 d3/da/d11/d31/d4c/d60/cc0 0 2026-03-09T20:47:51.968 INFO:tasks.workunit.client.0.vm07.stdout:6/677: rename d8/d16/da3/db8 to d8/d16/d4b/d88/dc3/dd5 0 2026-03-09T20:47:51.971 INFO:tasks.workunit.client.0.vm07.stdout:6/678: dread d8/d16/db4/d85/f2f [0,4194304] 0 2026-03-09T20:47:51.974 INFO:tasks.workunit.client.0.vm07.stdout:3/653: dread d1/d5/d9/d11/d1f/f4a [0,4194304] 0 2026-03-09T20:47:51.979 INFO:tasks.workunit.client.1.vm10.stdout:1/609: dread d2/f17 [0,4194304] 0 2026-03-09T20:47:51.980 INFO:tasks.workunit.client.0.vm07.stdout:3/654: dwrite d1/d5/d9/d11/d60/f89 [0,4194304] 0 2026-03-09T20:47:51.980 INFO:tasks.workunit.client.0.vm07.stdout:0/681: sync 2026-03-09T20:47:51.980 INFO:tasks.workunit.client.1.vm10.stdout:7/606: dwrite db/d28/d4c/f8c [0,4194304] 0 2026-03-09T20:47:51.985 
INFO:tasks.workunit.client.0.vm07.stdout:1/694: sync 2026-03-09T20:47:51.994 INFO:tasks.workunit.client.1.vm10.stdout:1/610: chown d2/da/d25/d46/d80/da0/d92/la9 14643929 1 2026-03-09T20:47:51.995 INFO:tasks.workunit.client.1.vm10.stdout:8/653: rename d0/f13 to d0/d22/d25/d2e/d41/d85/db9/dc6/fc9 0 2026-03-09T20:47:51.998 INFO:tasks.workunit.client.1.vm10.stdout:8/654: dwrite d0/d22/d2f/d38/d64/fc3 [0,4194304] 0 2026-03-09T20:47:52.005 INFO:tasks.workunit.client.1.vm10.stdout:0/584: write d2/d9/da/d35/f3a [5603493,72884] 0 2026-03-09T20:47:52.007 INFO:tasks.workunit.client.0.vm07.stdout:2/652: write d2/d11/f44 [2512329,105573] 0 2026-03-09T20:47:52.007 INFO:tasks.workunit.client.0.vm07.stdout:2/653: fdatasync d2/db/faf 0 2026-03-09T20:47:52.011 INFO:tasks.workunit.client.0.vm07.stdout:2/654: dwrite d2/d46/db0/db3/fc0 [0,4194304] 0 2026-03-09T20:47:52.024 INFO:tasks.workunit.client.1.vm10.stdout:9/648: fdatasync d2/d3/f1c 0 2026-03-09T20:47:52.031 INFO:tasks.workunit.client.1.vm10.stdout:5/565: rename d2/d27/f34 to d2/d1b/d54/d78/fdb 0 2026-03-09T20:47:52.031 INFO:tasks.workunit.client.1.vm10.stdout:5/566: chown d2/f71 23437469 1 2026-03-09T20:47:52.034 INFO:tasks.workunit.client.0.vm07.stdout:9/638: dwrite d4/d8/dc/dbb/fad [0,4194304] 0 2026-03-09T20:47:52.046 INFO:tasks.workunit.client.1.vm10.stdout:1/611: dread d2/da/d25/f28 [0,4194304] 0 2026-03-09T20:47:52.047 INFO:tasks.workunit.client.0.vm07.stdout:5/723: write d5/df/d13/f2a [1210483,55340] 0 2026-03-09T20:47:52.052 INFO:tasks.workunit.client.1.vm10.stdout:8/655: mknod d0/d22/d25/d6c/d9b/cca 0 2026-03-09T20:47:52.056 INFO:tasks.workunit.client.0.vm07.stdout:2/655: mkdir d2/d46/d6e/dbe/d96/dcc 0 2026-03-09T20:47:52.060 INFO:tasks.workunit.client.1.vm10.stdout:4/564: dwrite d1/d2/d5c/d64/d61/f85 [0,4194304] 0 2026-03-09T20:47:52.060 INFO:tasks.workunit.client.0.vm07.stdout:7/717: write d3/da/d53/db7/dde/fa9 [1401530,42626] 0 2026-03-09T20:47:52.060 INFO:tasks.workunit.client.0.vm07.stdout:7/718: write 
d3/d58/d77/fe1 [238398,43924] 0 2026-03-09T20:47:52.061 INFO:tasks.workunit.client.1.vm10.stdout:4/565: dread d1/d2/d5c/d64/d6b/d81/dac/d1c/d69/f6a [0,4194304] 0 2026-03-09T20:47:52.079 INFO:tasks.workunit.client.0.vm07.stdout:2/656: dread d2/d46/f7e [0,4194304] 0 2026-03-09T20:47:52.088 INFO:tasks.workunit.client.1.vm10.stdout:2/616: write d5/d18/f63 [1488896,70574] 0 2026-03-09T20:47:52.088 INFO:tasks.workunit.client.0.vm07.stdout:8/611: dwrite d1/dc/d16/d26/d94/faf [0,4194304] 0 2026-03-09T20:47:52.089 INFO:tasks.workunit.client.1.vm10.stdout:1/612: unlink d2/f17 0 2026-03-09T20:47:52.100 INFO:tasks.workunit.client.0.vm07.stdout:0/682: rename d1/d2/dc/d17/f6c to d1/d2/dc/fde 0 2026-03-09T20:47:52.117 INFO:tasks.workunit.client.0.vm07.stdout:4/599: truncate d2/f9 2256106 0 2026-03-09T20:47:52.117 INFO:tasks.workunit.client.1.vm10.stdout:3/575: write dc/d14/d20/d2e/f32 [4572814,73986] 0 2026-03-09T20:47:52.117 INFO:tasks.workunit.client.1.vm10.stdout:5/567: getdents d2/d39/d4b/d7a/dd9 0 2026-03-09T20:47:52.118 INFO:tasks.workunit.client.1.vm10.stdout:1/613: stat d2/d89/lb0 0 2026-03-09T20:47:52.118 INFO:tasks.workunit.client.1.vm10.stdout:0/585: creat d2/d4a/fcf x:0 0 0 2026-03-09T20:47:52.118 INFO:tasks.workunit.client.1.vm10.stdout:7/607: getdents db/d1f 0 2026-03-09T20:47:52.120 INFO:tasks.workunit.client.1.vm10.stdout:0/586: dwrite d2/d9/da/f81 [4194304,4194304] 0 2026-03-09T20:47:52.120 INFO:tasks.workunit.client.1.vm10.stdout:9/649: link d2/d33/db1/ld9 d2/ldc 0 2026-03-09T20:47:52.120 INFO:tasks.workunit.client.1.vm10.stdout:7/608: mknod db/d28/cc0 0 2026-03-09T20:47:52.123 INFO:tasks.workunit.client.0.vm07.stdout:5/724: rename d5/df/d13/d6c/fb3 to d5/d33/d75/ffd 0 2026-03-09T20:47:52.130 INFO:tasks.workunit.client.0.vm07.stdout:0/683: symlink d1/d2/dc/db1/ldf 0 2026-03-09T20:47:52.131 INFO:tasks.workunit.client.1.vm10.stdout:1/614: mkdir d2/da/dbc 0 2026-03-09T20:47:52.131 INFO:tasks.workunit.client.0.vm07.stdout:0/684: fsync d1/d2/dc/d80/f87 0 
2026-03-09T20:47:52.132 INFO:tasks.workunit.client.0.vm07.stdout:0/685: chown d1/d1f/dc3/dca 222395 1 2026-03-09T20:47:52.133 INFO:tasks.workunit.client.1.vm10.stdout:9/650: mkdir d2/d3/d6d/d88/ddd 0 2026-03-09T20:47:52.135 INFO:tasks.workunit.client.0.vm07.stdout:2/657: unlink d2/f63 0 2026-03-09T20:47:52.136 INFO:tasks.workunit.client.1.vm10.stdout:0/587: mknod d2/d9/da/cd0 0 2026-03-09T20:47:52.139 INFO:tasks.workunit.client.0.vm07.stdout:4/600: sync 2026-03-09T20:47:52.140 INFO:tasks.workunit.client.1.vm10.stdout:4/566: sync 2026-03-09T20:47:52.141 INFO:tasks.workunit.client.1.vm10.stdout:3/576: sync 2026-03-09T20:47:52.147 INFO:tasks.workunit.client.1.vm10.stdout:1/615: rename d2/da/d25/d3e/d42/f8d to d2/da/d25/d46/d80/fbd 0 2026-03-09T20:47:52.158 INFO:tasks.workunit.client.0.vm07.stdout:0/686: creat d1/d2/dc/d17/fe0 x:0 0 0 2026-03-09T20:47:52.164 INFO:tasks.workunit.client.1.vm10.stdout:3/577: creat dc/d14/d26/d29/d2a/d76/fc4 x:0 0 0 2026-03-09T20:47:52.164 INFO:tasks.workunit.client.0.vm07.stdout:0/687: truncate d1/d1f/fcd 613270 0 2026-03-09T20:47:52.164 INFO:tasks.workunit.client.0.vm07.stdout:7/719: creat d3/da/db/d32/d3e/dac/fef x:0 0 0 2026-03-09T20:47:52.167 INFO:tasks.workunit.client.0.vm07.stdout:8/612: mknod d1/d3b/cc8 0 2026-03-09T20:47:52.167 INFO:tasks.workunit.client.0.vm07.stdout:2/658: creat d2/db/d28/d57/fcd x:0 0 0 2026-03-09T20:47:52.167 INFO:tasks.workunit.client.0.vm07.stdout:2/659: fdatasync d2/db/d1c/f9d 0 2026-03-09T20:47:52.174 INFO:tasks.workunit.client.1.vm10.stdout:0/588: rename d2/d4a/d58/d82/d71/d8e/d25 to d2/d9/da/d11/dd1 0 2026-03-09T20:47:52.174 INFO:tasks.workunit.client.1.vm10.stdout:0/589: readlink d2/d9/da/l9d 0 2026-03-09T20:47:52.176 INFO:tasks.workunit.client.0.vm07.stdout:4/601: creat d2/d55/d5d/d3f/fa7 x:0 0 0 2026-03-09T20:47:52.179 INFO:tasks.workunit.client.1.vm10.stdout:1/616: mkdir d2/da/d25/d46/dbe 0 2026-03-09T20:47:52.183 INFO:tasks.workunit.client.0.vm07.stdout:0/688: rename d1/d2/d33/d35/l8b to 
d1/d2/d98/le1 0 2026-03-09T20:47:52.183 INFO:tasks.workunit.client.0.vm07.stdout:0/689: chown d1/d2/c2e 15 1 2026-03-09T20:47:52.184 INFO:tasks.workunit.client.0.vm07.stdout:0/690: dwrite d1/d2/dc/d80/f87 [0,4194304] 0 2026-03-09T20:47:52.194 INFO:tasks.workunit.client.1.vm10.stdout:4/567: symlink d1/d2/d3/d54/daa/lb5 0 2026-03-09T20:47:52.196 INFO:tasks.workunit.client.1.vm10.stdout:6/614: write d3/da/d11/f8b [140683,59918] 0 2026-03-09T20:47:52.198 INFO:tasks.workunit.client.0.vm07.stdout:6/679: dwrite d8/d16/db4/d85/fad [0,4194304] 0 2026-03-09T20:47:52.199 INFO:tasks.workunit.client.0.vm07.stdout:6/680: stat d8/d16/d22/d24/da0/dab/d40/fa7 0 2026-03-09T20:47:52.201 INFO:tasks.workunit.client.1.vm10.stdout:2/617: dread d5/fa [4194304,4194304] 0 2026-03-09T20:47:52.202 INFO:tasks.workunit.client.1.vm10.stdout:2/618: stat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/l4a 0 2026-03-09T20:47:52.202 INFO:tasks.workunit.client.1.vm10.stdout:6/615: sync 2026-03-09T20:47:52.212 INFO:tasks.workunit.client.0.vm07.stdout:3/655: write d1/d5/d9/d2f/d3d/d71/fc4 [2136620,123471] 0 2026-03-09T20:47:52.212 INFO:tasks.workunit.client.0.vm07.stdout:1/695: write d3/d9c/fd2 [539642,26290] 0 2026-03-09T20:47:52.213 INFO:tasks.workunit.client.0.vm07.stdout:1/696: chown d3/d97 2065431915 1 2026-03-09T20:47:52.220 INFO:tasks.workunit.client.0.vm07.stdout:9/639: dwrite d4/d16/d29/d24/d37/f51 [4194304,4194304] 0 2026-03-09T20:47:52.221 INFO:tasks.workunit.client.1.vm10.stdout:8/656: dwrite d0/d22/f71 [0,4194304] 0 2026-03-09T20:47:52.228 INFO:tasks.workunit.client.1.vm10.stdout:8/657: sync 2026-03-09T20:47:52.236 INFO:tasks.workunit.client.0.vm07.stdout:7/720: truncate d3/da/db/d32/d3e/dac/d1f/f5d 682809 0 2026-03-09T20:47:52.241 INFO:tasks.workunit.client.1.vm10.stdout:3/578: rename dc/d14/d20/d2e/d56/l12 to dc/d14/lc5 0 2026-03-09T20:47:52.249 INFO:tasks.workunit.client.1.vm10.stdout:7/609: getdents db/d46/d89/dbf/d78 0 2026-03-09T20:47:52.250 
INFO:tasks.workunit.client.1.vm10.stdout:6/616: mknod d3/d30/d6a/cc1 0 2026-03-09T20:47:52.251 INFO:tasks.workunit.client.0.vm07.stdout:5/725: rmdir d5/d19/d73/dbc/df7 0 2026-03-09T20:47:52.253 INFO:tasks.workunit.client.1.vm10.stdout:2/619: truncate d5/d18/d1b/d22/f6d 2081722 0 2026-03-09T20:47:52.256 INFO:tasks.workunit.client.1.vm10.stdout:5/568: dwrite d2/d27/d37/fb5 [0,4194304] 0 2026-03-09T20:47:52.263 INFO:tasks.workunit.client.1.vm10.stdout:2/620: dwrite d5/d18/d27/db8/fbc [0,4194304] 0 2026-03-09T20:47:52.266 INFO:tasks.workunit.client.1.vm10.stdout:8/658: fdatasync d0/d22/d2f/d38/f43 0 2026-03-09T20:47:52.270 INFO:tasks.workunit.client.1.vm10.stdout:2/621: dwrite d5/d18/d27/f74 [0,4194304] 0 2026-03-09T20:47:52.274 INFO:tasks.workunit.client.1.vm10.stdout:4/568: rename d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/l8d to d1/d2/d5c/d64/d6b/lb6 0 2026-03-09T20:47:52.282 INFO:tasks.workunit.client.1.vm10.stdout:3/579: fsync dc/d14/d26/d29/d40/f71 0 2026-03-09T20:47:52.318 INFO:tasks.workunit.client.0.vm07.stdout:3/656: truncate d1/d5/d9/d2f/d34/f8f 696037 0 2026-03-09T20:47:52.318 INFO:tasks.workunit.client.0.vm07.stdout:3/657: chown d1/d5/dcd 36 1 2026-03-09T20:47:52.318 INFO:tasks.workunit.client.0.vm07.stdout:7/721: unlink d3/da/d53/db7/dde/dc5/cc9 0 2026-03-09T20:47:52.318 INFO:tasks.workunit.client.0.vm07.stdout:3/658: fdatasync d1/d5/d9/fa1 0 2026-03-09T20:47:52.318 INFO:tasks.workunit.client.0.vm07.stdout:8/613: readlink d1/d3b/l79 0 2026-03-09T20:47:52.319 INFO:tasks.workunit.client.1.vm10.stdout:9/651: write d2/d28/d47/d50/f75 [470031,95219] 0 2026-03-09T20:47:52.319 INFO:tasks.workunit.client.1.vm10.stdout:1/617: write d2/da/fb1 [564078,102426] 0 2026-03-09T20:47:52.319 INFO:tasks.workunit.client.1.vm10.stdout:5/569: mknod d2/d27/d37/dc8/cdc 0 2026-03-09T20:47:52.319 INFO:tasks.workunit.client.1.vm10.stdout:2/622: dread - d5/d18/d27/d89/db6/f7e zero size 2026-03-09T20:47:52.319 INFO:tasks.workunit.client.1.vm10.stdout:4/569: fdatasync d1/d47/f96 0 
2026-03-09T20:47:52.319 INFO:tasks.workunit.client.1.vm10.stdout:0/590: link d2/d4a/d58/d82/d93/cb5 d2/d9/da/d35/cd2 0 2026-03-09T20:47:52.319 INFO:tasks.workunit.client.1.vm10.stdout:0/591: dread - d2/d9/da/d11/dd1/db7/dcd/d63/fad zero size 2026-03-09T20:47:52.319 INFO:tasks.workunit.client.1.vm10.stdout:9/652: chown d2/d3/d85/l97 2 1 2026-03-09T20:47:52.319 INFO:tasks.workunit.client.1.vm10.stdout:9/653: chown d2/d3/d6d/fce 1828 1 2026-03-09T20:47:52.319 INFO:tasks.workunit.client.0.vm07.stdout:9/640: getdents d4/d8/d59/de4 0 2026-03-09T20:47:52.321 INFO:tasks.workunit.client.1.vm10.stdout:1/618: symlink d2/da/d25/d46/d51/d7e/d9e/lbf 0 2026-03-09T20:47:52.323 INFO:tasks.workunit.client.0.vm07.stdout:7/722: fdatasync d3/da/db/d32/d3e/d5c/f64 0 2026-03-09T20:47:52.326 INFO:tasks.workunit.client.1.vm10.stdout:7/610: creat db/d28/d2b/d36/d63/d84/fc1 x:0 0 0 2026-03-09T20:47:52.329 INFO:tasks.workunit.client.1.vm10.stdout:5/570: readlink d2/d27/la7 0 2026-03-09T20:47:52.332 INFO:tasks.workunit.client.0.vm07.stdout:0/691: creat d1/d2/dc/fe2 x:0 0 0 2026-03-09T20:47:52.337 INFO:tasks.workunit.client.0.vm07.stdout:0/692: dread d1/d2/d4b/f61 [0,4194304] 0 2026-03-09T20:47:52.343 INFO:tasks.workunit.client.0.vm07.stdout:0/693: readlink d1/d2/dc/d80/lda 0 2026-03-09T20:47:52.343 INFO:tasks.workunit.client.0.vm07.stdout:0/694: read - d1/d1f/d53/d72/fac zero size 2026-03-09T20:47:52.344 INFO:tasks.workunit.client.0.vm07.stdout:2/660: read d2/db/d49/fb2 [792104,115271] 0 2026-03-09T20:47:52.348 INFO:tasks.workunit.client.0.vm07.stdout:6/681: creat d8/d16/db4/fd6 x:0 0 0 2026-03-09T20:47:52.362 INFO:tasks.workunit.client.0.vm07.stdout:8/614: mknod d1/d5d/d6f/d2f/d4d/d63/d91/cc9 0 2026-03-09T20:47:52.363 INFO:tasks.workunit.client.0.vm07.stdout:8/615: chown d1/dc/d16/d31/fa0 654 1 2026-03-09T20:47:52.376 INFO:tasks.workunit.client.0.vm07.stdout:9/641: dread d4/d16/d29/d24/f77 [0,4194304] 0 2026-03-09T20:47:52.378 INFO:tasks.workunit.client.1.vm10.stdout:0/592: rename 
d2/d9/da/d11/dd1/db7/dcd/d63/lb3 to d2/d9/da/d11/dd1/db7/dcd/ld3 0 2026-03-09T20:47:52.381 INFO:tasks.workunit.client.0.vm07.stdout:7/723: dwrite d3/d58/fe4 [0,4194304] 0 2026-03-09T20:47:52.385 INFO:tasks.workunit.client.0.vm07.stdout:7/724: dwrite d3/da/db/d32/d3e/dac/fef [0,4194304] 0 2026-03-09T20:47:52.400 INFO:tasks.workunit.client.0.vm07.stdout:4/602: getdents d2/d55 0 2026-03-09T20:47:52.402 INFO:tasks.workunit.client.0.vm07.stdout:7/725: read d3/da/d53/db7/dde/fa9 [2204060,33162] 0 2026-03-09T20:47:52.409 INFO:tasks.workunit.client.0.vm07.stdout:1/697: dwrite d3/d23/fc9 [0,4194304] 0 2026-03-09T20:47:52.419 INFO:tasks.workunit.client.0.vm07.stdout:5/726: dwrite d5/df/d13/f41 [4194304,4194304] 0 2026-03-09T20:47:52.422 INFO:tasks.workunit.client.0.vm07.stdout:3/659: write d1/d5/d9/d2f/d3d/d64/f30 [352573,12709] 0 2026-03-09T20:47:52.425 INFO:tasks.workunit.client.1.vm10.stdout:4/570: dwrite d1/f94 [0,4194304] 0 2026-03-09T20:47:52.427 INFO:tasks.workunit.client.1.vm10.stdout:8/659: link d0/d22/d2f/d38/c39 d0/d22/d25/d89/ccb 0 2026-03-09T20:47:52.428 INFO:tasks.workunit.client.1.vm10.stdout:2/623: mkdir d5/d18/d27/d38/dcf 0 2026-03-09T20:47:52.431 INFO:tasks.workunit.client.0.vm07.stdout:8/616: dwrite d1/dc/d16/dad/d87/d93/fb3 [0,4194304] 0 2026-03-09T20:47:52.434 INFO:tasks.workunit.client.1.vm10.stdout:9/654: mknod d2/d3/de/d35/cde 0 2026-03-09T20:47:52.436 INFO:tasks.workunit.client.1.vm10.stdout:0/593: creat d2/d9/da/d48/fd4 x:0 0 0 2026-03-09T20:47:52.440 INFO:tasks.workunit.client.1.vm10.stdout:3/580: write dc/d14/d26/faa [3554254,3972] 0 2026-03-09T20:47:52.445 INFO:tasks.workunit.client.1.vm10.stdout:6/617: getdents d3/d30/d7f/d36/d5c 0 2026-03-09T20:47:52.449 INFO:tasks.workunit.client.1.vm10.stdout:5/571: mkdir d2/d39/dbf/da9/ddd 0 2026-03-09T20:47:52.464 INFO:tasks.workunit.client.0.vm07.stdout:4/603: dwrite d2/d55/d5d/d3f/d4a/d4b/d52/f9e [0,4194304] 0 2026-03-09T20:47:52.464 INFO:tasks.workunit.client.0.vm07.stdout:4/604: write 
d2/d55/d5d/d3f/d4a/d85/f8c [2681011,55888] 0 2026-03-09T20:47:52.464 INFO:tasks.workunit.client.0.vm07.stdout:1/698: mknod d3/d14/d54/d6e/ce7 0 2026-03-09T20:47:52.464 INFO:tasks.workunit.client.1.vm10.stdout:8/660: creat d0/d92/fcc x:0 0 0 2026-03-09T20:47:52.464 INFO:tasks.workunit.client.1.vm10.stdout:8/661: truncate d0/d22/d25/d2e/d41/d47/fb7 747565 0 2026-03-09T20:47:52.464 INFO:tasks.workunit.client.1.vm10.stdout:9/655: mknod d2/d12/cdf 0 2026-03-09T20:47:52.472 INFO:tasks.workunit.client.1.vm10.stdout:0/594: mkdir d2/d4a/d58/dd5 0 2026-03-09T20:47:52.478 INFO:tasks.workunit.client.1.vm10.stdout:6/618: creat d3/d30/d7f/d36/fc2 x:0 0 0 2026-03-09T20:47:52.481 INFO:tasks.workunit.client.0.vm07.stdout:2/661: rename d2/l69 to d2/d46/d6e/dbe/d96/dcc/lce 0 2026-03-09T20:47:52.483 INFO:tasks.workunit.client.1.vm10.stdout:5/572: mkdir d2/d39/d4b/dde 0 2026-03-09T20:47:52.483 INFO:tasks.workunit.client.1.vm10.stdout:5/573: stat d2/d58/fb9 0 2026-03-09T20:47:52.489 INFO:tasks.workunit.client.1.vm10.stdout:6/619: dread d3/da/d11/d26/f2a [0,4194304] 0 2026-03-09T20:47:52.497 INFO:tasks.workunit.client.1.vm10.stdout:3/581: dread dc/d14/d20/d2e/d56/f68 [0,4194304] 0 2026-03-09T20:47:52.500 INFO:tasks.workunit.client.0.vm07.stdout:8/617: fsync d1/dc/d16/d26/f48 0 2026-03-09T20:47:52.509 INFO:tasks.workunit.client.1.vm10.stdout:9/656: creat d2/d28/d47/d67/fe0 x:0 0 0 2026-03-09T20:47:52.509 INFO:tasks.workunit.client.1.vm10.stdout:6/620: fdatasync d3/d30/f91 0 2026-03-09T20:47:52.512 INFO:tasks.workunit.client.1.vm10.stdout:8/662: getdents d0/d22/d25/d2e/d41/d47/d63/dad 0 2026-03-09T20:47:52.512 INFO:tasks.workunit.client.1.vm10.stdout:8/663: write d0/d22/f76 [1307818,71788] 0 2026-03-09T20:47:52.514 INFO:tasks.workunit.client.1.vm10.stdout:9/657: mknod d2/d28/d47/d6a/ce1 0 2026-03-09T20:47:52.515 INFO:tasks.workunit.client.1.vm10.stdout:9/658: fdatasync d2/d3/d6d/fce 0 2026-03-09T20:47:52.522 INFO:tasks.workunit.client.1.vm10.stdout:6/621: mkdir d3/da/d11/d31/d4c/dc3 0 
2026-03-09T20:47:52.529 INFO:tasks.workunit.client.1.vm10.stdout:9/659: creat d2/d33/dcf/fe2 x:0 0 0 2026-03-09T20:47:52.542 INFO:tasks.workunit.client.0.vm07.stdout:6/682: rename d8/f46 to d8/d16/dbb/fd7 0 2026-03-09T20:47:52.542 INFO:tasks.workunit.client.1.vm10.stdout:9/660: dread - d2/d3/d6d/db7/fbb zero size 2026-03-09T20:47:52.542 INFO:tasks.workunit.client.1.vm10.stdout:0/595: rename d2/d9/da/f81 to d2/d9/da/fd6 0 2026-03-09T20:47:52.542 INFO:tasks.workunit.client.1.vm10.stdout:6/622: rename d3/d30/d6a/cc1 to d3/d30/d7f/d36/d6d/cc4 0 2026-03-09T20:47:52.542 INFO:tasks.workunit.client.1.vm10.stdout:6/623: truncate d3/da/f58 1052232 0 2026-03-09T20:47:52.551 INFO:tasks.workunit.client.0.vm07.stdout:2/662: truncate d2/db/d49/f81 264304 0 2026-03-09T20:47:52.551 INFO:tasks.workunit.client.1.vm10.stdout:3/582: creat dc/d14/d26/d29/d40/da8/fc6 x:0 0 0 2026-03-09T20:47:52.551 INFO:tasks.workunit.client.1.vm10.stdout:8/664: unlink d0/d22/d25/c6e 0 2026-03-09T20:47:52.551 INFO:tasks.workunit.client.0.vm07.stdout:2/663: chown d2/d11/f5d 3091 1 2026-03-09T20:47:52.553 INFO:tasks.workunit.client.1.vm10.stdout:5/574: read d2/d27/d37/d46/f94 [141398,43331] 0 2026-03-09T20:47:52.553 INFO:tasks.workunit.client.1.vm10.stdout:5/575: stat d2/d39/dbf 0 2026-03-09T20:47:52.555 INFO:tasks.workunit.client.0.vm07.stdout:9/642: getdents d4/d16/d29/d9c 0 2026-03-09T20:47:52.564 INFO:tasks.workunit.client.1.vm10.stdout:9/661: dread d2/d12/f20 [0,4194304] 0 2026-03-09T20:47:52.572 INFO:tasks.workunit.client.1.vm10.stdout:7/611: write db/d28/f5d [948385,25268] 0 2026-03-09T20:47:52.575 INFO:tasks.workunit.client.1.vm10.stdout:1/619: write d2/f4c [52082,116953] 0 2026-03-09T20:47:52.580 INFO:tasks.workunit.client.1.vm10.stdout:2/624: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/faf [113183,116440] 0 2026-03-09T20:47:52.580 INFO:tasks.workunit.client.0.vm07.stdout:1/699: rename d3/d14/d54/d6e to d3/d97/da1/dc5/d90/de8 0 2026-03-09T20:47:52.582 
INFO:tasks.workunit.client.1.vm10.stdout:4/571: dwrite d1/d2/f7 [0,4194304] 0 2026-03-09T20:47:52.584 INFO:tasks.workunit.client.0.vm07.stdout:0/695: write d1/f11 [5349509,45624] 0 2026-03-09T20:47:52.586 INFO:tasks.workunit.client.0.vm07.stdout:5/727: dwrite d5/df/d13/d30/d56/f84 [0,4194304] 0 2026-03-09T20:47:52.601 INFO:tasks.workunit.client.1.vm10.stdout:3/583: dread - dc/d14/d26/f65 zero size 2026-03-09T20:47:52.606 INFO:tasks.workunit.client.0.vm07.stdout:7/726: truncate d3/d58/d82/fa3 3094138 0 2026-03-09T20:47:52.612 INFO:tasks.workunit.client.1.vm10.stdout:8/665: creat d0/d22/d25/d2e/d41/d85/fcd x:0 0 0 2026-03-09T20:47:52.612 INFO:tasks.workunit.client.1.vm10.stdout:6/624: dread d3/f40 [0,4194304] 0 2026-03-09T20:47:52.615 INFO:tasks.workunit.client.0.vm07.stdout:3/660: write d1/d5/d9/d2f/d34/f8f [1474346,48188] 0 2026-03-09T20:47:52.624 INFO:tasks.workunit.client.0.vm07.stdout:8/618: getdents d1/dc/d16 0 2026-03-09T20:47:52.624 INFO:tasks.workunit.client.1.vm10.stdout:3/584: dread dc/d14/d20/d2e/d56/f82 [0,4194304] 0 2026-03-09T20:47:52.632 INFO:tasks.workunit.client.1.vm10.stdout:5/576: dread d2/d39/d4b/f60 [0,4194304] 0 2026-03-09T20:47:52.637 INFO:tasks.workunit.client.1.vm10.stdout:7/612: symlink db/d21/d23/lc2 0 2026-03-09T20:47:52.641 INFO:tasks.workunit.client.1.vm10.stdout:0/596: dwrite d2/d9/da/d11/d92/fb0 [0,4194304] 0 2026-03-09T20:47:52.641 INFO:tasks.workunit.client.0.vm07.stdout:5/728: creat d5/df/d62/ffe x:0 0 0 2026-03-09T20:47:52.641 INFO:tasks.workunit.client.0.vm07.stdout:5/729: dread - d5/df/fd6 zero size 2026-03-09T20:47:52.647 INFO:tasks.workunit.client.1.vm10.stdout:4/572: rename d1/d2/d3/l19 to d1/d2/d3/d70/d99/lb7 0 2026-03-09T20:47:52.648 INFO:tasks.workunit.client.1.vm10.stdout:4/573: read d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/f5a [169183,42753] 0 2026-03-09T20:47:52.649 INFO:tasks.workunit.client.1.vm10.stdout:8/666: symlink d0/d22/d2c/lce 0 2026-03-09T20:47:52.650 INFO:tasks.workunit.client.1.vm10.stdout:6/625: symlink 
d3/d30/d7f/d24/lc5 0 2026-03-09T20:47:52.650 INFO:tasks.workunit.client.1.vm10.stdout:3/585: chown dc/d14/cb3 723 1 2026-03-09T20:47:52.653 INFO:tasks.workunit.client.1.vm10.stdout:6/626: dwrite d3/d30/d7f/d36/fc2 [0,4194304] 0 2026-03-09T20:47:52.654 INFO:tasks.workunit.client.1.vm10.stdout:6/627: chown d3/da/d11/d31/d4c/da9/cb8 2693 1 2026-03-09T20:47:52.655 INFO:tasks.workunit.client.0.vm07.stdout:0/696: mknod d1/dc0/dcc/dd9/ce3 0 2026-03-09T20:47:52.656 INFO:tasks.workunit.client.0.vm07.stdout:0/697: chown d1/d2/dc/d17/l76 835 1 2026-03-09T20:47:52.657 INFO:tasks.workunit.client.1.vm10.stdout:5/577: truncate d2/f3e 4744705 0 2026-03-09T20:47:52.657 INFO:tasks.workunit.client.1.vm10.stdout:7/613: mknod db/d21/d26/d72/cc3 0 2026-03-09T20:47:52.657 INFO:tasks.workunit.client.0.vm07.stdout:5/730: mkdir d5/d19/d73/d97/dff 0 2026-03-09T20:47:52.667 INFO:tasks.workunit.client.0.vm07.stdout:2/664: dwrite d2/db/f7c [4194304,4194304] 0 2026-03-09T20:47:52.678 INFO:tasks.workunit.client.0.vm07.stdout:4/605: write d2/df/f2e [1544824,70148] 0 2026-03-09T20:47:52.680 INFO:tasks.workunit.client.0.vm07.stdout:4/606: chown d2/d55/d5d/d93/ca1 377 1 2026-03-09T20:47:52.681 INFO:tasks.workunit.client.0.vm07.stdout:9/643: dwrite d4/d8/d59/f66 [0,4194304] 0 2026-03-09T20:47:52.683 INFO:tasks.workunit.client.0.vm07.stdout:9/644: dread - d4/d8/dc/d4e/f8f zero size 2026-03-09T20:47:52.699 INFO:tasks.workunit.client.0.vm07.stdout:8/619: getdents d1/db0 0 2026-03-09T20:47:52.699 INFO:tasks.workunit.client.0.vm07.stdout:0/698: rmdir d1/d1f/d53 39 2026-03-09T20:47:52.699 INFO:tasks.workunit.client.0.vm07.stdout:5/731: creat d5/df/d13/d30/f100 x:0 0 0 2026-03-09T20:47:52.699 INFO:tasks.workunit.client.1.vm10.stdout:9/662: dwrite d2/d28/fa5 [4194304,4194304] 0 2026-03-09T20:47:52.699 INFO:tasks.workunit.client.1.vm10.stdout:1/620: mknod d2/da/d25/d46/d51/d5d/da6/cc0 0 2026-03-09T20:47:52.699 INFO:tasks.workunit.client.1.vm10.stdout:9/663: chown d2/d12/f69 18 1 2026-03-09T20:47:52.699 
INFO:tasks.workunit.client.1.vm10.stdout:4/574: creat d1/d2/d5c/d64/d6b/d81/da9/fb8 x:0 0 0 2026-03-09T20:47:52.706 INFO:tasks.workunit.client.1.vm10.stdout:3/586: chown dc/d14/d26/c9f 61 1 2026-03-09T20:47:52.709 INFO:tasks.workunit.client.0.vm07.stdout:2/665: truncate d2/db/d28/d90/f99 105844 0 2026-03-09T20:47:52.723 INFO:tasks.workunit.client.0.vm07.stdout:6/683: rename d8/c1e to d8/d16/d22/d24/da0/dab/d40/cd8 0 2026-03-09T20:47:52.725 INFO:tasks.workunit.client.0.vm07.stdout:0/699: read - d1/d2/d98/fa5 zero size 2026-03-09T20:47:52.730 INFO:tasks.workunit.client.0.vm07.stdout:7/727: link d3/da/db/d79/f98 d3/da/db/d32/d3e/dac/d43/d62/db1/ff0 0 2026-03-09T20:47:52.731 INFO:tasks.workunit.client.0.vm07.stdout:7/728: write d3/da/db/d32/d3e/d5c/dc2/feb [978849,88541] 0 2026-03-09T20:47:52.739 INFO:tasks.workunit.client.0.vm07.stdout:3/661: rename d1/d5/d9/d2f/d3d/d64 to d1/d5/d9/d11/d6d/dd0 0 2026-03-09T20:47:52.777 INFO:tasks.workunit.client.0.vm07.stdout:7/729: rmdir d3/da/db/d32/d3e/dac/d43/d62/db1 39 2026-03-09T20:47:52.777 INFO:tasks.workunit.client.0.vm07.stdout:3/662: creat d1/d5/d9/d11/d6d/dd0/d59/fd1 x:0 0 0 2026-03-09T20:47:52.777 INFO:tasks.workunit.client.0.vm07.stdout:4/607: getdents d2/d55/d5d 0 2026-03-09T20:47:52.777 INFO:tasks.workunit.client.0.vm07.stdout:8/620: getdents d1 0 2026-03-09T20:47:52.777 INFO:tasks.workunit.client.0.vm07.stdout:3/663: truncate d1/d5/d9/f1b 1744911 0 2026-03-09T20:47:52.778 INFO:tasks.workunit.client.0.vm07.stdout:2/666: link d2/d11/la6 d2/db/d28/d5c/lcf 0 2026-03-09T20:47:52.781 INFO:tasks.workunit.client.0.vm07.stdout:8/621: symlink d1/d8f/lca 0 2026-03-09T20:47:52.783 INFO:tasks.workunit.client.0.vm07.stdout:3/664: creat d1/d5/d9/d2f/d34/d46/d5d/fd2 x:0 0 0 2026-03-09T20:47:52.787 INFO:tasks.workunit.client.0.vm07.stdout:6/684: getdents d8/d16/dbb 0 2026-03-09T20:47:52.789 INFO:tasks.workunit.client.0.vm07.stdout:1/700: write d3/d97/da1/fbb [90792,56044] 0 2026-03-09T20:47:52.789 
INFO:tasks.workunit.client.1.vm10.stdout:2/625: write d5/f9e [271971,8495] 0 2026-03-09T20:47:52.794 INFO:tasks.workunit.client.0.vm07.stdout:2/667: mknod d2/d46/db0/cd0 0 2026-03-09T20:47:52.802 INFO:tasks.workunit.client.0.vm07.stdout:3/665: creat d1/d5/d9/d2f/d66/fd3 x:0 0 0 2026-03-09T20:47:52.802 INFO:tasks.workunit.client.0.vm07.stdout:6/685: mknod d8/d16/dbb/cd9 0 2026-03-09T20:47:52.804 INFO:tasks.workunit.client.1.vm10.stdout:0/597: rmdir d2/d4a/d58/d82/d71/d5d/dcb 0 2026-03-09T20:47:52.808 INFO:tasks.workunit.client.0.vm07.stdout:4/608: link d2/d1f/f2c d2/df/d59/d8a/d9d/fa8 0 2026-03-09T20:47:52.812 INFO:tasks.workunit.client.1.vm10.stdout:1/621: creat d2/da/d25/d46/d80/da0/d92/db5/fc1 x:0 0 0 2026-03-09T20:47:52.814 INFO:tasks.workunit.client.0.vm07.stdout:2/668: dread d2/f4 [0,4194304] 0 2026-03-09T20:47:52.817 INFO:tasks.workunit.client.0.vm07.stdout:8/622: symlink d1/db0/lcb 0 2026-03-09T20:47:52.819 INFO:tasks.workunit.client.1.vm10.stdout:2/626: symlink d5/d18/d27/d38/d61/ld0 0 2026-03-09T20:47:52.825 INFO:tasks.workunit.client.0.vm07.stdout:1/701: unlink d3/d23/d52/fb2 0 2026-03-09T20:47:52.835 INFO:tasks.workunit.client.0.vm07.stdout:2/669: rename d2/db/d28/d5c/f8c to d2/db/d1c/d4a/d88/fd1 0 2026-03-09T20:47:52.836 INFO:tasks.workunit.client.0.vm07.stdout:5/732: sync 2026-03-09T20:47:52.837 INFO:tasks.workunit.client.0.vm07.stdout:3/666: sync 2026-03-09T20:47:52.850 INFO:tasks.workunit.client.0.vm07.stdout:9/645: dwrite d4/d8/dc/dbb/f91 [0,4194304] 0 2026-03-09T20:47:52.851 INFO:tasks.workunit.client.1.vm10.stdout:8/667: dwrite d0/d22/d2c/f57 [0,4194304] 0 2026-03-09T20:47:52.851 INFO:tasks.workunit.client.1.vm10.stdout:8/668: chown d0/d22 0 1 2026-03-09T20:47:52.864 INFO:tasks.workunit.client.1.vm10.stdout:9/664: write d2/d3/d6d/db7/fbb [132070,115602] 0 2026-03-09T20:47:52.867 INFO:tasks.workunit.client.0.vm07.stdout:0/700: write d1/d2/dc/d17/da6/fae [1183690,13301] 0 2026-03-09T20:47:52.872 INFO:tasks.workunit.client.0.vm07.stdout:8/623: 
fdatasync d1/dc/d16/dad/fa1 0 2026-03-09T20:47:52.890 INFO:tasks.workunit.client.0.vm07.stdout:6/686: mknod d8/d26/d7d/cda 0 2026-03-09T20:47:52.890 INFO:tasks.workunit.client.0.vm07.stdout:4/609: unlink d2/df/d17/d83/c95 0 2026-03-09T20:47:52.890 INFO:tasks.workunit.client.0.vm07.stdout:4/610: dread d2/d55/f71 [0,4194304] 0 2026-03-09T20:47:52.890 INFO:tasks.workunit.client.0.vm07.stdout:4/611: write d2/d55/d5d/d86/fa6 [386666,82426] 0 2026-03-09T20:47:52.890 INFO:tasks.workunit.client.0.vm07.stdout:4/612: read d2/f7 [918284,98518] 0 2026-03-09T20:47:52.890 INFO:tasks.workunit.client.0.vm07.stdout:4/613: chown d2/d55/d5d/d3f/d4a/f5e 4025 1 2026-03-09T20:47:52.891 INFO:tasks.workunit.client.1.vm10.stdout:8/669: sync 2026-03-09T20:47:52.893 INFO:tasks.workunit.client.0.vm07.stdout:4/614: dread d2/d55/d5d/d3f/d4a/f5e [0,4194304] 0 2026-03-09T20:47:52.899 INFO:tasks.workunit.client.0.vm07.stdout:3/667: symlink d1/d5/d9/d2f/d34/d46/d5d/ld4 0 2026-03-09T20:47:52.900 INFO:tasks.workunit.client.1.vm10.stdout:6/628: link d3/d30/d33/f35 d3/da/d11/fc6 0 2026-03-09T20:47:52.903 INFO:tasks.workunit.client.1.vm10.stdout:9/665: mknod d2/d3/d6d/d88/ce3 0 2026-03-09T20:47:52.906 INFO:tasks.workunit.client.1.vm10.stdout:7/614: getdents db/d21 0 2026-03-09T20:47:52.907 INFO:tasks.workunit.client.1.vm10.stdout:4/575: dwrite d1/d2/d5c/d64/d6b/d81/f8a [0,4194304] 0 2026-03-09T20:47:52.915 INFO:tasks.workunit.client.1.vm10.stdout:5/578: write d2/f3e [2516271,88616] 0 2026-03-09T20:47:52.918 INFO:tasks.workunit.client.1.vm10.stdout:3/587: write dc/d14/d26/d29/d40/d8c/d9c/fb6 [690002,33008] 0 2026-03-09T20:47:52.921 INFO:tasks.workunit.client.1.vm10.stdout:1/622: creat d2/da/d25/d46/d51/d5d/d6e/d70/db3/fc2 x:0 0 0 2026-03-09T20:47:52.922 INFO:tasks.workunit.client.1.vm10.stdout:1/623: readlink d2/da/d25/d46/d51/d5d/d6e/lb7 0 2026-03-09T20:47:52.925 INFO:tasks.workunit.client.1.vm10.stdout:1/624: chown d2/da/d25/d3e/d55/f9a 32399136 1 2026-03-09T20:47:52.925 
INFO:tasks.workunit.client.0.vm07.stdout:1/702: write d3/f5 [1363751,10628] 0 2026-03-09T20:47:52.925 INFO:tasks.workunit.client.0.vm07.stdout:1/703: readlink d3/d14/lc8 0 2026-03-09T20:47:52.926 INFO:tasks.workunit.client.0.vm07.stdout:1/704: dread d3/d97/da1/dc5/d60/f8e [0,4194304] 0 2026-03-09T20:47:52.926 INFO:tasks.workunit.client.0.vm07.stdout:1/705: chown d3/f6f 82517 1 2026-03-09T20:47:52.928 INFO:tasks.workunit.client.1.vm10.stdout:8/670: mknod d0/d22/d25/d40/d86/ccf 0 2026-03-09T20:47:52.929 INFO:tasks.workunit.client.1.vm10.stdout:8/671: write d0/d22/d25/d2e/d41/f80 [2824652,91498] 0 2026-03-09T20:47:52.936 INFO:tasks.workunit.client.1.vm10.stdout:6/629: truncate d3/d30/d7f/f28 484476 0 2026-03-09T20:47:52.943 INFO:tasks.workunit.client.1.vm10.stdout:7/615: mkdir db/d28/d2b/d36/d63/d6d/dc4 0 2026-03-09T20:47:52.943 INFO:tasks.workunit.client.1.vm10.stdout:6/630: dread f1 [0,4194304] 0 2026-03-09T20:47:52.943 INFO:tasks.workunit.client.0.vm07.stdout:9/646: write d4/d16/d29/f64 [830650,51604] 0 2026-03-09T20:47:52.944 INFO:tasks.workunit.client.1.vm10.stdout:6/631: write d3/da/d11/d31/d4c/d60/fbc [322109,26153] 0 2026-03-09T20:47:52.945 INFO:tasks.workunit.client.1.vm10.stdout:0/598: truncate d2/d9/da/d11/dd1/d34/f77 647367 0 2026-03-09T20:47:52.946 INFO:tasks.workunit.client.1.vm10.stdout:7/616: sync 2026-03-09T20:47:52.953 INFO:tasks.workunit.client.1.vm10.stdout:7/617: dread db/d21/d26/f52 [0,4194304] 0 2026-03-09T20:47:52.957 INFO:tasks.workunit.client.1.vm10.stdout:3/588: fsync dc/d14/d20/d2e/d56/f82 0 2026-03-09T20:47:52.957 INFO:tasks.workunit.client.1.vm10.stdout:3/589: readlink l8 0 2026-03-09T20:47:52.957 INFO:tasks.workunit.client.0.vm07.stdout:6/687: write d8/d50/f55 [142065,87240] 0 2026-03-09T20:47:52.962 INFO:tasks.workunit.client.1.vm10.stdout:1/625: creat d2/da/d25/d46/d8c/fc3 x:0 0 0 2026-03-09T20:47:52.966 INFO:tasks.workunit.client.0.vm07.stdout:2/670: rename d2/d11/f38 to d2/da7/fd2 0 2026-03-09T20:47:52.966 
INFO:tasks.workunit.client.1.vm10.stdout:2/627: dwrite d5/d18/d27/d89/db6/d41/fc7 [4194304,4194304] 0 2026-03-09T20:47:52.979 INFO:tasks.workunit.client.1.vm10.stdout:8/672: mkdir d0/d22/d2f/dd0 0 2026-03-09T20:47:52.980 INFO:tasks.workunit.client.0.vm07.stdout:5/733: truncate d5/df/d13/f38 1014842 0 2026-03-09T20:47:52.980 INFO:tasks.workunit.client.1.vm10.stdout:8/673: chown d0/d22/d25/d2e/d41/l9f 0 1 2026-03-09T20:47:52.981 INFO:tasks.workunit.client.1.vm10.stdout:4/576: write d1/d2/d5c/d64/d6b/d81/dac/d39/f6e [578430,19085] 0 2026-03-09T20:47:52.983 INFO:tasks.workunit.client.0.vm07.stdout:3/668: rmdir d1/d5/d9/d11/d6d/d80 39 2026-03-09T20:47:52.989 INFO:tasks.workunit.client.1.vm10.stdout:9/666: rename d2/d3/de/f34 to d2/d3/de/d8f/fe4 0 2026-03-09T20:47:52.989 INFO:tasks.workunit.client.1.vm10.stdout:6/632: chown d3/d30/d7f/d36/d6d/la6 885 1 2026-03-09T20:47:52.997 INFO:tasks.workunit.client.1.vm10.stdout:9/667: dread d2/d3/de/d35/f78 [0,4194304] 0 2026-03-09T20:47:52.997 INFO:tasks.workunit.client.1.vm10.stdout:9/668: chown d2/f30 1 1 2026-03-09T20:47:52.998 INFO:tasks.workunit.client.1.vm10.stdout:9/669: chown d2/d28/c3a 27493744 1 2026-03-09T20:47:52.998 INFO:tasks.workunit.client.1.vm10.stdout:9/670: chown d2/d33/d37 504280 1 2026-03-09T20:47:52.998 INFO:tasks.workunit.client.0.vm07.stdout:1/706: mknod d3/d97/da1/dc5/d60/d9f/ce9 0 2026-03-09T20:47:52.999 INFO:tasks.workunit.client.0.vm07.stdout:1/707: chown d3/d9c/fd2 0 1 2026-03-09T20:47:52.999 INFO:tasks.workunit.client.1.vm10.stdout:9/671: chown d2/d28/d47/d67/f93 5 1 2026-03-09T20:47:53.002 INFO:tasks.workunit.client.1.vm10.stdout:9/672: sync 2026-03-09T20:47:53.005 INFO:tasks.workunit.client.1.vm10.stdout:7/618: read db/d21/d23/f1a [77847,36987] 0 2026-03-09T20:47:53.008 INFO:tasks.workunit.client.0.vm07.stdout:7/730: truncate d3/d58/d82/fa3 547871 0 2026-03-09T20:47:53.009 INFO:tasks.workunit.client.0.vm07.stdout:7/731: write d3/da/db/d32/d3e/d5c/fea [471233,12519] 0 2026-03-09T20:47:53.011 
INFO:tasks.workunit.client.0.vm07.stdout:0/701: dwrite d1/d2/d33/d35/f45 [0,4194304] 0 2026-03-09T20:47:53.014 INFO:tasks.workunit.client.1.vm10.stdout:3/590: stat dc/d14/d27/f3c 0 2026-03-09T20:47:53.023 INFO:tasks.workunit.client.0.vm07.stdout:8/624: write d1/d3b/f9a [602944,50429] 0 2026-03-09T20:47:53.032 INFO:tasks.workunit.client.0.vm07.stdout:6/688: dread d8/f12 [0,4194304] 0 2026-03-09T20:47:53.034 INFO:tasks.workunit.client.1.vm10.stdout:4/577: read d1/d2/f2a [626175,35657] 0 2026-03-09T20:47:53.035 INFO:tasks.workunit.client.1.vm10.stdout:4/578: sync 2026-03-09T20:47:53.036 INFO:tasks.workunit.client.0.vm07.stdout:2/671: truncate d2/db/d49/f6b 2133044 0 2026-03-09T20:47:53.036 INFO:tasks.workunit.client.1.vm10.stdout:4/579: write d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f4c [819015,84516] 0 2026-03-09T20:47:53.042 INFO:tasks.workunit.client.0.vm07.stdout:9/647: write d4/d11/f8a [1106,41093] 0 2026-03-09T20:47:53.044 INFO:tasks.workunit.client.0.vm07.stdout:4/615: dwrite d2/df/d59/d8a/d9d/fa8 [0,4194304] 0 2026-03-09T20:47:53.052 INFO:tasks.workunit.client.1.vm10.stdout:5/579: rename d2/d27/d37/d46/d5d/l62 to d2/d58/ldf 0 2026-03-09T20:47:53.054 INFO:tasks.workunit.client.0.vm07.stdout:5/734: truncate d5/d33/d3b/f63 4699027 0 2026-03-09T20:47:53.058 INFO:tasks.workunit.client.0.vm07.stdout:3/669: mknod d1/d5/d9/d2f/d34/d46/cd5 0 2026-03-09T20:47:53.063 INFO:tasks.workunit.client.1.vm10.stdout:9/673: mknod d2/d28/d47/d6a/ce5 0 2026-03-09T20:47:53.074 INFO:tasks.workunit.client.1.vm10.stdout:7/619: fdatasync db/d28/d2b/d36/d63/d6d/f80 0 2026-03-09T20:47:53.092 INFO:tasks.workunit.client.0.vm07.stdout:7/732: mkdir d3/da/db/d32/d3e/d5c/dc2/df1 0 2026-03-09T20:47:53.093 INFO:tasks.workunit.client.1.vm10.stdout:3/591: creat dc/d14/d20/d21/fc7 x:0 0 0 2026-03-09T20:47:53.094 INFO:tasks.workunit.client.0.vm07.stdout:8/625: truncate d1/d5d/d6f/d2f/d4d/f73 963896 0 2026-03-09T20:47:53.097 INFO:tasks.workunit.client.1.vm10.stdout:2/628: creat d5/d18/d27/d38/d61/dc8/fd1 x:0 
0 0 2026-03-09T20:47:53.105 INFO:tasks.workunit.client.1.vm10.stdout:8/674: truncate d0/f7e 1440258 0 2026-03-09T20:47:53.105 INFO:tasks.workunit.client.0.vm07.stdout:9/648: creat d4/d8/d19/d5f/d73/dbc/fe8 x:0 0 0 2026-03-09T20:47:53.105 INFO:tasks.workunit.client.0.vm07.stdout:9/649: read d4/d16/d29/d24/f77 [1201459,119052] 0 2026-03-09T20:47:53.113 INFO:tasks.workunit.client.1.vm10.stdout:6/633: rename d3/da/d11/d26/la7 to d3/d30/d7f/d36/d5c/d8d/lc7 0 2026-03-09T20:47:53.114 INFO:tasks.workunit.client.0.vm07.stdout:5/735: mkdir d5/df/d13/d4f/d101 0 2026-03-09T20:47:53.125 INFO:tasks.workunit.client.1.vm10.stdout:9/674: creat d2/d28/d47/d67/fe6 x:0 0 0 2026-03-09T20:47:53.126 INFO:tasks.workunit.client.1.vm10.stdout:4/580: write d1/d2/d5c/d64/f51 [395056,35674] 0 2026-03-09T20:47:53.126 INFO:tasks.workunit.client.1.vm10.stdout:5/580: write d2/fd [268966,26939] 0 2026-03-09T20:47:53.127 INFO:tasks.workunit.client.0.vm07.stdout:1/708: unlink d3/d97/da1/dc5/d60/d9f/dd0/fe6 0 2026-03-09T20:47:53.131 INFO:tasks.workunit.client.0.vm07.stdout:1/709: dwrite d3/d23/d67/f92 [0,4194304] 0 2026-03-09T20:47:53.141 INFO:tasks.workunit.client.1.vm10.stdout:1/626: link d2/da/d25/c52 d2/da/d25/d3e/d55/cc4 0 2026-03-09T20:47:53.143 INFO:tasks.workunit.client.1.vm10.stdout:2/629: fsync d5/d18/d1b/d22/f4f 0 2026-03-09T20:47:53.151 INFO:tasks.workunit.client.0.vm07.stdout:8/626: mknod d1/db0/ccc 0 2026-03-09T20:47:53.152 INFO:tasks.workunit.client.1.vm10.stdout:6/634: mknod d3/da/d11/d31/d4c/cc8 0 2026-03-09T20:47:53.152 INFO:tasks.workunit.client.1.vm10.stdout:6/635: chown d3/d30/d7f/d24/d39/f6c 801642 1 2026-03-09T20:47:53.152 INFO:tasks.workunit.client.1.vm10.stdout:0/599: getdents d2/d4a 0 2026-03-09T20:47:53.152 INFO:tasks.workunit.client.1.vm10.stdout:0/600: chown d2/d9/da/d35/d30/fa9 1417895 1 2026-03-09T20:47:53.153 INFO:tasks.workunit.client.1.vm10.stdout:9/675: read - d2/d3/de/f84 zero size 2026-03-09T20:47:53.158 INFO:tasks.workunit.client.1.vm10.stdout:4/581: mkdir 
d1/d47/db9 0 2026-03-09T20:47:53.159 INFO:tasks.workunit.client.1.vm10.stdout:4/582: write d1/d47/fb4 [897130,130232] 0 2026-03-09T20:47:53.164 INFO:tasks.workunit.client.0.vm07.stdout:6/689: dwrite d8/d16/d22/d24/da0/dab/fa9 [0,4194304] 0 2026-03-09T20:47:53.167 INFO:tasks.workunit.client.0.vm07.stdout:6/690: chown d8/d16/d22/d24/da0/dab/d40/d69/f78 2920 1 2026-03-09T20:47:53.184 INFO:tasks.workunit.client.1.vm10.stdout:3/592: creat dc/d14/d26/d29/d40/da8/dc3/fc8 x:0 0 0 2026-03-09T20:47:53.186 INFO:tasks.workunit.client.1.vm10.stdout:1/627: readlink d2/da/l71 0 2026-03-09T20:47:53.191 INFO:tasks.workunit.client.1.vm10.stdout:8/675: mkdir d0/dd1 0 2026-03-09T20:47:53.193 INFO:tasks.workunit.client.1.vm10.stdout:6/636: creat d3/d30/d7f/d36/d5c/daa/fc9 x:0 0 0 2026-03-09T20:47:53.205 INFO:tasks.workunit.client.1.vm10.stdout:4/583: creat d1/d2/d5c/d64/d6b/d79/d92/fba x:0 0 0 2026-03-09T20:47:53.207 INFO:tasks.workunit.client.1.vm10.stdout:7/620: truncate db/d28/d2b/d36/d3b/d88/f71 751807 0 2026-03-09T20:47:53.207 INFO:tasks.workunit.client.1.vm10.stdout:5/581: write d2/d39/dbf/d69/d96/fb1 [758781,87891] 0 2026-03-09T20:47:53.208 INFO:tasks.workunit.client.1.vm10.stdout:7/621: chown db/f7c 0 1 2026-03-09T20:47:53.211 INFO:tasks.workunit.client.0.vm07.stdout:2/672: dwrite d2/db/d1c/fab [0,4194304] 0 2026-03-09T20:47:53.212 INFO:tasks.workunit.client.1.vm10.stdout:0/601: dwrite d2/d9/da/d35/f68 [4194304,4194304] 0 2026-03-09T20:47:53.212 INFO:tasks.workunit.client.0.vm07.stdout:2/673: write d2/d11/f44 [1782955,110671] 0 2026-03-09T20:47:53.213 INFO:tasks.workunit.client.0.vm07.stdout:2/674: write d2/db/f7c [6292176,20139] 0 2026-03-09T20:47:53.228 INFO:tasks.workunit.client.0.vm07.stdout:4/616: symlink d2/df/la9 0 2026-03-09T20:47:53.229 INFO:tasks.workunit.client.1.vm10.stdout:3/593: creat dc/d14/d90/fc9 x:0 0 0 2026-03-09T20:47:53.229 INFO:tasks.workunit.client.1.vm10.stdout:3/594: stat dc/d14/d26/d8f/l95 0 2026-03-09T20:47:53.229 
INFO:tasks.workunit.client.0.vm07.stdout:5/736: unlink d5/d33/d3b/la0 0 2026-03-09T20:47:53.231 INFO:tasks.workunit.client.0.vm07.stdout:5/737: read d5/df/d13/d6c/f99 [1978607,66315] 0 2026-03-09T20:47:53.236 INFO:tasks.workunit.client.1.vm10.stdout:1/628: mknod d2/da/d25/d46/d51/d5d/da6/cc5 0 2026-03-09T20:47:53.236 INFO:tasks.workunit.client.1.vm10.stdout:1/629: chown d2/da/d25/d3e/d42 27 1 2026-03-09T20:47:53.244 INFO:tasks.workunit.client.0.vm07.stdout:1/710: creat d3/d14/d54/fea x:0 0 0 2026-03-09T20:47:53.246 INFO:tasks.workunit.client.0.vm07.stdout:7/733: mkdir d3/da4/df2 0 2026-03-09T20:47:53.248 INFO:tasks.workunit.client.1.vm10.stdout:6/637: dread d3/f2f [0,4194304] 0 2026-03-09T20:47:53.254 INFO:tasks.workunit.client.0.vm07.stdout:9/650: write d4/f5 [3539205,56233] 0 2026-03-09T20:47:53.255 INFO:tasks.workunit.client.0.vm07.stdout:9/651: chown d4/d8/d19/d89 1008471 1 2026-03-09T20:47:53.262 INFO:tasks.workunit.client.0.vm07.stdout:3/670: dwrite d1/d5/d9/f1b [0,4194304] 0 2026-03-09T20:47:53.269 INFO:tasks.workunit.client.1.vm10.stdout:8/676: write d0/d22/d2f/f31 [1600757,40358] 0 2026-03-09T20:47:53.272 INFO:tasks.workunit.client.1.vm10.stdout:4/584: symlink d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/lbb 0 2026-03-09T20:47:53.272 INFO:tasks.workunit.client.1.vm10.stdout:7/622: truncate f5 623987 0 2026-03-09T20:47:53.274 INFO:tasks.workunit.client.1.vm10.stdout:5/582: mkdir d2/d39/d4b/de0 0 2026-03-09T20:47:53.276 INFO:tasks.workunit.client.0.vm07.stdout:6/691: rename d8/d26/fd2 to d8/d16/d22/d9b/fdb 0 2026-03-09T20:47:53.286 INFO:tasks.workunit.client.1.vm10.stdout:2/630: link d5/f15 d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/fd2 0 2026-03-09T20:47:53.293 INFO:tasks.workunit.client.1.vm10.stdout:6/638: read d3/d30/d7f/d36/d5c/f5f [73868,88360] 0 2026-03-09T20:47:53.295 INFO:tasks.workunit.client.1.vm10.stdout:9/676: creat d2/d28/d47/fe7 x:0 0 0 2026-03-09T20:47:53.296 INFO:tasks.workunit.client.1.vm10.stdout:9/677: chown d2/d33/c5e 93280 1 
2026-03-09T20:47:53.298 INFO:tasks.workunit.client.1.vm10.stdout:8/677: creat d0/d22/d25/d2e/fd2 x:0 0 0 2026-03-09T20:47:53.299 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: Active manager daemon vm07.xjrvch restarted 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: Activating manager daemon vm07.xjrvch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.xjrvch/crt"}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: osdmap e46: 6 total, 6 up, 6 in 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: mgrmap e27: vm07.xjrvch(active, starting, since 0.0151248s) 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:47:53.300 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm07.xjrvch", "id": "vm07.xjrvch"}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:47:53.300 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:47:53.300 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:53 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.xjrvch/key"}]: dispatch 2026-03-09T20:47:53.302 INFO:tasks.workunit.client.0.vm07.stdout:1/711: mknod d3/d97/da1/dab/ceb 0 2026-03-09T20:47:53.311 INFO:tasks.workunit.client.0.vm07.stdout:0/702: link d1/d1f/d53/d72/f6b d1/dc0/fe4 0 2026-03-09T20:47:53.311 INFO:tasks.workunit.client.1.vm10.stdout:7/623: chown db/d28/d2b/d36/f35 3702458 1 2026-03-09T20:47:53.313 INFO:tasks.workunit.client.1.vm10.stdout:5/583: rename d2/d39/d4b/d7a/dd2 to d2/d39/d4b/d7a/de1 0 2026-03-09T20:47:53.315 INFO:tasks.workunit.client.1.vm10.stdout:7/624: dwrite db/d28/d30/f73 [0,4194304] 0 2026-03-09T20:47:53.316 INFO:tasks.workunit.client.1.vm10.stdout:7/625: chown db/d28/d30/fa4 151416513 1 2026-03-09T20:47:53.317 INFO:tasks.workunit.client.1.vm10.stdout:7/626: chown db/l96 2709 1 2026-03-09T20:47:53.338 INFO:tasks.workunit.client.1.vm10.stdout:3/595: fdatasync dc/f5a 0 2026-03-09T20:47:53.342 INFO:tasks.workunit.client.1.vm10.stdout:1/630: fsync d2/da/d25/d46/f74 0 2026-03-09T20:47:53.354 INFO:tasks.workunit.client.1.vm10.stdout:4/585: dwrite d1/d2/d5c/d64/d6b/d81/dac/d1c/f3f [4194304,4194304] 
0 2026-03-09T20:47:53.356 INFO:tasks.workunit.client.0.vm07.stdout:9/652: dwrite f2 [0,4194304] 0 2026-03-09T20:47:53.365 INFO:tasks.workunit.client.1.vm10.stdout:2/631: dread d5/fb [0,4194304] 0 2026-03-09T20:47:53.380 INFO:tasks.workunit.client.0.vm07.stdout:2/675: symlink d2/db/d1c/d4a/db6/ld3 0 2026-03-09T20:47:53.381 INFO:tasks.workunit.client.0.vm07.stdout:2/676: chown d2/db/d28/f34 18806 1 2026-03-09T20:47:53.385 INFO:tasks.workunit.client.0.vm07.stdout:1/712: symlink d3/d23/d67/lec 0 2026-03-09T20:47:53.386 INFO:tasks.workunit.client.0.vm07.stdout:7/734: truncate d3/f67 316986 0 2026-03-09T20:47:53.386 INFO:tasks.workunit.client.0.vm07.stdout:0/703: fsync d1/d2/d33/d35/f59 0 2026-03-09T20:47:53.396 INFO:tasks.workunit.client.1.vm10.stdout:0/602: creat d2/d4a/d58/d82/d71/d8e/fd7 x:0 0 0 2026-03-09T20:47:53.399 INFO:tasks.workunit.client.0.vm07.stdout:8/627: dwrite d1/d5d/d6f/d80/faa [0,4194304] 0 2026-03-09T20:47:53.400 INFO:tasks.workunit.client.0.vm07.stdout:8/628: chown d1/d3b/l79 1 1 2026-03-09T20:47:53.404 INFO:tasks.workunit.client.0.vm07.stdout:3/671: mkdir d1/d5/d9/d2f/d3d/dd6 0 2026-03-09T20:47:53.405 INFO:tasks.workunit.client.0.vm07.stdout:3/672: write d1/d5/d9/d2f/d3d/d71/db5/fc9 [1024116,101001] 0 2026-03-09T20:47:53.421 INFO:tasks.workunit.client.0.vm07.stdout:9/653: creat d4/d8/d19/d5f/d73/dbc/fe9 x:0 0 0 2026-03-09T20:47:53.424 INFO:tasks.workunit.client.0.vm07.stdout:6/692: dwrite d8/d16/db4/f91 [0,4194304] 0 2026-03-09T20:47:53.427 INFO:tasks.workunit.client.0.vm07.stdout:5/738: truncate d5/df/d13/d30/d56/f84 7194151 0 2026-03-09T20:47:53.433 INFO:tasks.workunit.client.1.vm10.stdout:4/586: rmdir d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/d4a 39 2026-03-09T20:47:53.440 INFO:tasks.workunit.client.1.vm10.stdout:2/632: mkdir d5/d18/d27/d89/db6/dd3 0 2026-03-09T20:47:53.443 INFO:tasks.workunit.client.0.vm07.stdout:2/677: dread d2/d46/d6e/f7a [0,4194304] 0 2026-03-09T20:47:53.448 INFO:tasks.workunit.client.1.vm10.stdout:7/627: write db/d28/d4c/fa1 
[913865,9222] 0 2026-03-09T20:47:53.448 INFO:tasks.workunit.client.1.vm10.stdout:7/628: readlink l4 0 2026-03-09T20:47:53.456 INFO:tasks.workunit.client.1.vm10.stdout:5/584: getdents d2/d39/d4b/dde 0 2026-03-09T20:47:53.456 INFO:tasks.workunit.client.1.vm10.stdout:0/603: readlink d2/d9/d2a/l64 0 2026-03-09T20:47:53.461 INFO:tasks.workunit.client.0.vm07.stdout:6/693: symlink d8/db3/ldc 0 2026-03-09T20:47:53.462 INFO:tasks.workunit.client.0.vm07.stdout:6/694: chown d8/d16/d22/d24/da0/dab/d40/d69/l56 59199 1 2026-03-09T20:47:53.464 INFO:tasks.workunit.client.0.vm07.stdout:5/739: rename d5/df/d13/d3e/cd8 to d5/df/d13/d6c/c102 0 2026-03-09T20:47:53.465 INFO:tasks.workunit.client.0.vm07.stdout:5/740: write d5/df/d13/d3e/d5e/fd5 [1516127,40848] 0 2026-03-09T20:47:53.470 INFO:tasks.workunit.client.0.vm07.stdout:4/617: getdents d2/d1f 0 2026-03-09T20:47:53.472 INFO:tasks.workunit.client.1.vm10.stdout:1/631: mknod d2/cc6 0 2026-03-09T20:47:53.476 INFO:tasks.workunit.client.0.vm07.stdout:1/713: fdatasync d3/d14/d54/f32 0 2026-03-09T20:47:53.478 INFO:tasks.workunit.client.0.vm07.stdout:1/714: dread d3/d14/f17 [0,4194304] 0 2026-03-09T20:47:53.480 INFO:tasks.workunit.client.0.vm07.stdout:1/715: dwrite d3/f28 [0,4194304] 0 2026-03-09T20:47:53.482 INFO:tasks.workunit.client.1.vm10.stdout:5/585: sync 2026-03-09T20:47:53.486 INFO:tasks.workunit.client.0.vm07.stdout:3/673: symlink d1/d5/d9/d11/ld7 0 2026-03-09T20:47:53.496 INFO:tasks.workunit.client.1.vm10.stdout:8/678: creat d0/d22/d25/d40/fd3 x:0 0 0 2026-03-09T20:47:53.501 INFO:tasks.workunit.client.1.vm10.stdout:0/604: rename d2/d4a/fae to d2/d4a/d58/d82/d60/fd8 0 2026-03-09T20:47:53.505 INFO:tasks.workunit.client.1.vm10.stdout:6/639: write d3/d30/d33/f35 [3619937,32848] 0 2026-03-09T20:47:53.506 INFO:tasks.workunit.client.1.vm10.stdout:6/640: chown d3/d30/d7f/d24/f99 1 1 2026-03-09T20:47:53.506 INFO:tasks.workunit.client.1.vm10.stdout:6/641: chown d3/da/d11/d31/d4c/d60/fb1 1 1 2026-03-09T20:47:53.507 
INFO:tasks.workunit.client.0.vm07.stdout:8/629: dwrite d1/d5d/d6f/d2f/d4d/d63/f84 [0,4194304] 0 2026-03-09T20:47:53.509 INFO:tasks.workunit.client.1.vm10.stdout:3/596: write dc/f88 [3239724,130546] 0 2026-03-09T20:47:53.515 INFO:tasks.workunit.client.0.vm07.stdout:8/630: dwrite d1/dc/d16/dad/d87/f97 [4194304,4194304] 0 2026-03-09T20:47:53.540 INFO:tasks.workunit.client.1.vm10.stdout:1/632: mkdir d2/da/d25/d46/d80/da0/d92/db5/dc7 0 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: Active manager daemon vm07.xjrvch restarted 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: Activating manager daemon vm07.xjrvch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.xjrvch/crt"}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: osdmap e46: 6 total, 6 up, 6 in 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: mgrmap e27: vm07.xjrvch(active, starting, since 0.0151248s) 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 
cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm07.xjrvch", "id": "vm07.xjrvch"}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": 
"osd metadata", "id": 1}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:47:53.541 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:53 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.xjrvch/key"}]: dispatch 2026-03-09T20:47:53.541 INFO:tasks.workunit.client.0.vm07.stdout:6/695: fsync d8/d26/d7d/f8b 0 2026-03-09T20:47:53.541 INFO:tasks.workunit.client.0.vm07.stdout:5/741: rename d5/df/d13/d30/d56/cd0 to d5/d19/d73/d9c/c103 0 2026-03-09T20:47:53.541 INFO:tasks.workunit.client.0.vm07.stdout:4/618: unlink d2/d55/d5d/d3f/d4a/d4b/d52/c4d 0 2026-03-09T20:47:53.543 INFO:tasks.workunit.client.1.vm10.stdout:9/678: getdents d2/d3/d6d 0 2026-03-09T20:47:53.543 INFO:tasks.workunit.client.0.vm07.stdout:7/735: creat d3/da/db/d32/d3e/dac/ff3 x:0 0 0 2026-03-09T20:47:53.544 INFO:tasks.workunit.client.1.vm10.stdout:2/633: write d5/d18/d1b/f26 [5124637,97879] 0 2026-03-09T20:47:53.544 INFO:tasks.workunit.client.1.vm10.stdout:9/679: fdatasync d2/d3/de/d8f/fbf 0 
2026-03-09T20:47:53.545 INFO:tasks.workunit.client.1.vm10.stdout:9/680: chown d2/d28/d47/c95 490768304 1 2026-03-09T20:47:53.548 INFO:tasks.workunit.client.1.vm10.stdout:8/679: creat d0/d22/d25/d2e/d41/d85/db9/fd4 x:0 0 0 2026-03-09T20:47:53.550 INFO:tasks.workunit.client.0.vm07.stdout:7/736: dwrite d3/d58/d77/fe1 [0,4194304] 0 2026-03-09T20:47:53.550 INFO:tasks.workunit.client.0.vm07.stdout:5/742: dread d5/df/d13/d3e/d47/fd2 [0,4194304] 0 2026-03-09T20:47:53.552 INFO:tasks.workunit.client.1.vm10.stdout:8/680: dread d0/f19 [0,4194304] 0 2026-03-09T20:47:53.553 INFO:tasks.workunit.client.1.vm10.stdout:0/605: readlink d2/d9/da/d48/l4c 0 2026-03-09T20:47:53.557 INFO:tasks.workunit.client.1.vm10.stdout:9/681: sync 2026-03-09T20:47:53.578 INFO:tasks.workunit.client.0.vm07.stdout:8/631: mknod d1/d5d/d6f/d80/ccd 0 2026-03-09T20:47:53.578 INFO:tasks.workunit.client.0.vm07.stdout:3/674: sync 2026-03-09T20:47:53.578 INFO:tasks.workunit.client.0.vm07.stdout:5/743: creat d5/d50/f104 x:0 0 0 2026-03-09T20:47:53.578 INFO:tasks.workunit.client.0.vm07.stdout:7/737: dread - d3/da/db/d79/fd1 zero size 2026-03-09T20:47:53.578 INFO:tasks.workunit.client.0.vm07.stdout:0/704: link d1/d2/d33/f4e d1/fe5 0 2026-03-09T20:47:53.578 INFO:tasks.workunit.client.0.vm07.stdout:9/654: getdents d4 0 2026-03-09T20:47:53.578 INFO:tasks.workunit.client.0.vm07.stdout:8/632: chown d1/d5d/c69 440446231 1 2026-03-09T20:47:53.578 INFO:tasks.workunit.client.1.vm10.stdout:9/682: readlink d2/d3/d85/ld5 0 2026-03-09T20:47:53.578 INFO:tasks.workunit.client.1.vm10.stdout:3/597: dread dc/d14/d26/faa [0,4194304] 0 2026-03-09T20:47:53.579 INFO:tasks.workunit.client.1.vm10.stdout:7/629: link db/d28/d2b/d36/d63/la3 db/d1f/lc5 0 2026-03-09T20:47:53.579 INFO:tasks.workunit.client.1.vm10.stdout:8/681: creat d0/d22/d25/d2e/d41/d47/d63/fd5 x:0 0 0 2026-03-09T20:47:53.579 INFO:tasks.workunit.client.1.vm10.stdout:0/606: unlink d2/d9/da/d35/d30/f7f 0 2026-03-09T20:47:53.579 INFO:tasks.workunit.client.1.vm10.stdout:4/587: 
getdents d1/d2/d5c/d64/d6b/d81/dac 0 2026-03-09T20:47:53.579 INFO:tasks.workunit.client.1.vm10.stdout:9/683: mkdir d2/d3/d6d/de8 0 2026-03-09T20:47:53.579 INFO:tasks.workunit.client.1.vm10.stdout:2/634: mkdir d5/d18/d1b/dd4 0 2026-03-09T20:47:53.579 INFO:tasks.workunit.client.1.vm10.stdout:4/588: dwrite d1/d2/d5c/d64/d6b/d81/dac/d39/f6e [0,4194304] 0 2026-03-09T20:47:53.579 INFO:tasks.workunit.client.1.vm10.stdout:4/589: chown d1/d2/d3/d70/d99 35345 1 2026-03-09T20:47:53.579 INFO:tasks.workunit.client.1.vm10.stdout:4/590: sync 2026-03-09T20:47:53.581 INFO:tasks.workunit.client.1.vm10.stdout:4/591: dread d1/d2/f2a [0,4194304] 0 2026-03-09T20:47:53.587 INFO:tasks.workunit.client.1.vm10.stdout:8/682: creat d0/d22/d25/d2e/d41/d85/db9/dc6/fd6 x:0 0 0 2026-03-09T20:47:53.596 INFO:tasks.workunit.client.0.vm07.stdout:1/716: creat d3/d66/fed x:0 0 0 2026-03-09T20:47:53.599 INFO:tasks.workunit.client.1.vm10.stdout:5/586: link d2/d39/dbf/fb6 d2/d27/d37/d46/d5d/fe2 0 2026-03-09T20:47:53.603 INFO:tasks.workunit.client.0.vm07.stdout:2/678: write d2/db/d49/f6b [627803,82758] 0 2026-03-09T20:47:53.604 INFO:tasks.workunit.client.0.vm07.stdout:7/738: read d3/d58/dc1/fcc [174373,50385] 0 2026-03-09T20:47:53.606 INFO:tasks.workunit.client.1.vm10.stdout:3/598: creat dc/db4/fca x:0 0 0 2026-03-09T20:47:53.607 INFO:tasks.workunit.client.1.vm10.stdout:6/642: write d3/d30/d7f/d51/f7c [1281868,80504] 0 2026-03-09T20:47:53.608 INFO:tasks.workunit.client.0.vm07.stdout:6/696: write d8/d26/f3d [220364,79612] 0 2026-03-09T20:47:53.609 INFO:tasks.workunit.client.0.vm07.stdout:4/619: write d2/d55/d5d/d3f/d4a/d4b/f7a [1844761,118042] 0 2026-03-09T20:47:53.613 INFO:tasks.workunit.client.1.vm10.stdout:2/635: symlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/ld5 0 2026-03-09T20:47:53.621 INFO:tasks.workunit.client.0.vm07.stdout:2/679: dread d2/db/d28/d57/f65 [0,4194304] 0 2026-03-09T20:47:53.622 INFO:tasks.workunit.client.1.vm10.stdout:2/636: stat d5/d18/d27/d38/d61 0 2026-03-09T20:47:53.622 
INFO:tasks.workunit.client.1.vm10.stdout:4/592: mknod d1/d2/d5c/d64/d6b/d81/dac/d39/cbc 0 2026-03-09T20:47:53.622 INFO:tasks.workunit.client.1.vm10.stdout:0/607: mknod d2/d9/da/cd9 0 2026-03-09T20:47:53.622 INFO:tasks.workunit.client.1.vm10.stdout:1/633: getdents d2/da/d25/d3e/d55 0 2026-03-09T20:47:53.625 INFO:tasks.workunit.client.1.vm10.stdout:9/684: read d2/d12/d5a/f82 [964087,112905] 0 2026-03-09T20:47:53.626 INFO:tasks.workunit.client.1.vm10.stdout:5/587: mknod d2/d58/d6c/ce3 0 2026-03-09T20:47:53.635 INFO:tasks.workunit.client.1.vm10.stdout:6/643: truncate d3/d30/d7f/f18 768629 0 2026-03-09T20:47:53.636 INFO:tasks.workunit.client.1.vm10.stdout:7/630: dwrite db/d46/d89/dbf/f7e [0,4194304] 0 2026-03-09T20:47:53.637 INFO:tasks.workunit.client.1.vm10.stdout:8/683: write d0/d22/d2c/f96 [50260,923] 0 2026-03-09T20:47:53.638 INFO:tasks.workunit.client.0.vm07.stdout:3/675: dwrite d1/d5/d9/f33 [0,4194304] 0 2026-03-09T20:47:53.647 INFO:tasks.workunit.client.0.vm07.stdout:3/676: dwrite d1/d5/d9/fe [4194304,4194304] 0 2026-03-09T20:47:53.652 INFO:tasks.workunit.client.0.vm07.stdout:9/655: mknod d4/d8/d19/d5f/da5/cea 0 2026-03-09T20:47:53.653 INFO:tasks.workunit.client.1.vm10.stdout:4/593: mkdir d1/d2/d5c/d64/d6b/d81/dac/d1c/d69/dbd 0 2026-03-09T20:47:53.653 INFO:tasks.workunit.client.0.vm07.stdout:1/717: dread d3/d23/fa8 [0,4194304] 0 2026-03-09T20:47:53.656 INFO:tasks.workunit.client.1.vm10.stdout:3/599: dwrite dc/d14/d26/faa [0,4194304] 0 2026-03-09T20:47:53.659 INFO:tasks.workunit.client.1.vm10.stdout:3/600: chown dc/d14/d22/d4a/cb7 2249 1 2026-03-09T20:47:53.670 INFO:tasks.workunit.client.0.vm07.stdout:2/680: creat d2/d46/db0/fd4 x:0 0 0 2026-03-09T20:47:53.681 INFO:tasks.workunit.client.0.vm07.stdout:5/744: creat d5/d33/f105 x:0 0 0 2026-03-09T20:47:53.686 INFO:tasks.workunit.client.0.vm07.stdout:5/745: dwrite d5/df/f34 [0,4194304] 0 2026-03-09T20:47:53.689 INFO:tasks.workunit.client.1.vm10.stdout:9/685: creat d2/d12/d5a/fe9 x:0 0 0 2026-03-09T20:47:53.691 
INFO:tasks.workunit.client.1.vm10.stdout:9/686: dread d2/d3/de/d8f/fbf [0,4194304] 0 2026-03-09T20:47:53.708 INFO:tasks.workunit.client.1.vm10.stdout:1/634: dread d2/da/d25/d46/d51/d5d/d6e/d70/f79 [0,4194304] 0 2026-03-09T20:47:53.709 INFO:tasks.workunit.client.1.vm10.stdout:7/631: fsync db/d28/d2b/d36/d63/d6d/f80 0 2026-03-09T20:47:53.711 INFO:tasks.workunit.client.0.vm07.stdout:6/697: fsync d8/d16/f92 0 2026-03-09T20:47:53.715 INFO:tasks.workunit.client.0.vm07.stdout:3/677: mkdir d1/d5/d9/d2f/d99/dd8 0 2026-03-09T20:47:53.719 INFO:tasks.workunit.client.1.vm10.stdout:2/637: mknod d5/d18/cd6 0 2026-03-09T20:47:53.728 INFO:tasks.workunit.client.1.vm10.stdout:4/594: mkdir d1/d2/d5c/d64/d6b/d81/dac/d1b/dbe 0 2026-03-09T20:47:53.733 INFO:tasks.workunit.client.1.vm10.stdout:7/632: sync 2026-03-09T20:47:53.735 INFO:tasks.workunit.client.0.vm07.stdout:8/633: truncate d1/d5d/d6f/d2f/f34 2432435 0 2026-03-09T20:47:53.745 INFO:tasks.workunit.client.0.vm07.stdout:1/718: symlink d3/d14/d54/d9b/lee 0 2026-03-09T20:47:53.745 INFO:tasks.workunit.client.1.vm10.stdout:9/687: unlink d2/d28/d47/d50/dab/fc1 0 2026-03-09T20:47:53.745 INFO:tasks.workunit.client.1.vm10.stdout:9/688: chown d2/d3/de 0 1 2026-03-09T20:47:53.750 INFO:tasks.workunit.client.0.vm07.stdout:0/705: getdents d1/d2/dc/d17/da6 0 2026-03-09T20:47:53.751 INFO:tasks.workunit.client.0.vm07.stdout:7/739: write d3/da/db/d79/f98 [1487335,117498] 0 2026-03-09T20:47:53.751 INFO:tasks.workunit.client.1.vm10.stdout:5/588: write d2/d27/d75/f9a [411336,14632] 0 2026-03-09T20:47:53.752 INFO:tasks.workunit.client.0.vm07.stdout:4/620: truncate d2/d55/d5d/d86/fa6 194113 0 2026-03-09T20:47:53.755 INFO:tasks.workunit.client.1.vm10.stdout:3/601: write dc/d14/d26/d29/f70 [919741,79913] 0 2026-03-09T20:47:53.761 INFO:tasks.workunit.client.0.vm07.stdout:3/678: truncate d1/d5/d9/d2f/d3d/f74 659083 0 2026-03-09T20:47:53.761 INFO:tasks.workunit.client.0.vm07.stdout:3/679: readlink d1/d5/d9/d11/l50 0 2026-03-09T20:47:53.768 
INFO:tasks.workunit.client.0.vm07.stdout:1/719: truncate d3/d23/f58 870250 0 2026-03-09T20:47:53.769 INFO:tasks.workunit.client.0.vm07.stdout:3/680: dread d1/d5/d9/f15 [0,4194304] 0 2026-03-09T20:47:53.776 INFO:tasks.workunit.client.0.vm07.stdout:5/746: rename d5/d19/c80 to d5/d19/d73/d97/c106 0 2026-03-09T20:47:53.778 INFO:tasks.workunit.client.0.vm07.stdout:0/706: fdatasync d1/d2/dc/f10 0 2026-03-09T20:47:53.789 INFO:tasks.workunit.client.0.vm07.stdout:2/681: write d2/db/d1c/d4a/d88/f7f [1705335,108744] 0 2026-03-09T20:47:53.798 INFO:tasks.workunit.client.0.vm07.stdout:0/707: dread d1/d1f/d53/f65 [0,4194304] 0 2026-03-09T20:47:53.803 INFO:tasks.workunit.client.0.vm07.stdout:9/656: link d4/d16/l3a d4/d8/d19/d5f/d73/leb 0 2026-03-09T20:47:53.803 INFO:tasks.workunit.client.1.vm10.stdout:6/644: creat d3/da/d11/d31/d4c/dc3/fca x:0 0 0 2026-03-09T20:47:53.809 INFO:tasks.workunit.client.1.vm10.stdout:1/635: chown d2/f81 111824 1 2026-03-09T20:47:53.811 INFO:tasks.workunit.client.0.vm07.stdout:1/720: rename d3/d23/c6d to d3/d23/cef 0 2026-03-09T20:47:53.813 INFO:tasks.workunit.client.0.vm07.stdout:8/634: write d1/dc/d6a/f62 [1345595,64775] 0 2026-03-09T20:47:53.813 INFO:tasks.workunit.client.0.vm07.stdout:8/635: dread - d1/dc/d16/f70 zero size 2026-03-09T20:47:53.814 INFO:tasks.workunit.client.0.vm07.stdout:8/636: chown d1/fb5 539700 1 2026-03-09T20:47:53.814 INFO:tasks.workunit.client.0.vm07.stdout:8/637: readlink d1/dc/d16/d26/l49 0 2026-03-09T20:47:53.824 INFO:tasks.workunit.client.1.vm10.stdout:4/595: fsync d1/d2/d3/f18 0 2026-03-09T20:47:53.830 INFO:tasks.workunit.client.0.vm07.stdout:5/747: write d5/df/d13/d6c/f99 [4026090,53895] 0 2026-03-09T20:47:53.830 INFO:tasks.workunit.client.0.vm07.stdout:5/748: chown d5/df/d13/d6c/ca7 10719184 1 2026-03-09T20:47:53.831 INFO:tasks.workunit.client.1.vm10.stdout:2/638: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/fba [0,4194304] 0 2026-03-09T20:47:53.834 INFO:tasks.workunit.client.0.vm07.stdout:5/749: dwrite 
d5/df/d13/d4f/ff4 [0,4194304] 0 2026-03-09T20:47:53.845 INFO:tasks.workunit.client.1.vm10.stdout:7/633: unlink db/d21/d23/f14 0 2026-03-09T20:47:53.857 INFO:tasks.workunit.client.1.vm10.stdout:5/589: fsync d2/d27/d37/fa3 0 2026-03-09T20:47:53.863 INFO:tasks.workunit.client.1.vm10.stdout:3/602: dread dc/f5a [4194304,4194304] 0 2026-03-09T20:47:53.864 INFO:tasks.workunit.client.1.vm10.stdout:6/645: truncate d3/d79/fb7 758830 0 2026-03-09T20:47:53.866 INFO:tasks.workunit.client.1.vm10.stdout:3/603: dwrite dc/d14/d26/d29/d40/d8c/d9c/fb6 [0,4194304] 0 2026-03-09T20:47:53.873 INFO:tasks.workunit.client.0.vm07.stdout:4/621: mknod d2/df/d59/d8a/d9d/caa 0 2026-03-09T20:47:53.875 INFO:tasks.workunit.client.0.vm07.stdout:0/708: dwrite d1/fb3 [0,4194304] 0 2026-03-09T20:47:53.879 INFO:tasks.workunit.client.0.vm07.stdout:2/682: dwrite d2/db/d28/d90/fa3 [0,4194304] 0 2026-03-09T20:47:53.888 INFO:tasks.workunit.client.0.vm07.stdout:6/698: creat d8/fdd x:0 0 0 2026-03-09T20:47:53.900 INFO:tasks.workunit.client.0.vm07.stdout:9/657: truncate d4/d8/d19/d5f/d73/fb7 154826 0 2026-03-09T20:47:53.907 INFO:tasks.workunit.client.0.vm07.stdout:3/681: symlink d1/dcf/ld9 0 2026-03-09T20:47:53.913 INFO:tasks.workunit.client.0.vm07.stdout:8/638: rmdir d1/d5d/d6f/d2f/d4d/d95/dc1 39 2026-03-09T20:47:53.919 INFO:tasks.workunit.client.1.vm10.stdout:7/634: symlink db/d46/lc6 0 2026-03-09T20:47:53.919 INFO:tasks.workunit.client.1.vm10.stdout:0/608: getdents d2/d9/da/d11/dd1/db7 0 2026-03-09T20:47:53.921 INFO:tasks.workunit.client.1.vm10.stdout:9/689: truncate d2/d3/de/d8f/fe4 1812462 0 2026-03-09T20:47:53.923 INFO:tasks.workunit.client.1.vm10.stdout:5/590: symlink d2/d39/dbf/d63/d95/le4 0 2026-03-09T20:47:53.929 INFO:tasks.workunit.client.1.vm10.stdout:8/684: write d0/d22/d25/d2e/d41/d47/d78/f9a [1551871,96315] 0 2026-03-09T20:47:53.942 INFO:tasks.workunit.client.0.vm07.stdout:7/740: creat d3/da/db/d32/d3e/dac/ff4 x:0 0 0 2026-03-09T20:47:53.942 INFO:tasks.workunit.client.0.vm07.stdout:7/741: chown 
d3/c6b 3792768 1 2026-03-09T20:47:53.945 INFO:tasks.workunit.client.1.vm10.stdout:1/636: getdents d2/da/d25/d46/dbe 0 2026-03-09T20:47:53.946 INFO:tasks.workunit.client.1.vm10.stdout:6/646: write d3/d30/d7f/d4a/f4b [498163,87579] 0 2026-03-09T20:47:53.952 INFO:tasks.workunit.client.0.vm07.stdout:2/683: creat d2/db/d28/d90/fd5 x:0 0 0 2026-03-09T20:47:53.955 INFO:tasks.workunit.client.1.vm10.stdout:4/596: creat d1/d2/d5c/d64/d6b/d81/dac/d1b/dbe/fbf x:0 0 0 2026-03-09T20:47:53.956 INFO:tasks.workunit.client.1.vm10.stdout:4/597: chown d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b 10666908 1 2026-03-09T20:47:53.959 INFO:tasks.workunit.client.1.vm10.stdout:7/635: mknod db/d21/cc7 0 2026-03-09T20:47:53.964 INFO:tasks.workunit.client.1.vm10.stdout:7/636: sync 2026-03-09T20:47:53.969 INFO:tasks.workunit.client.1.vm10.stdout:2/639: write d5/d18/d1b/d22/f6d [2597826,39597] 0 2026-03-09T20:47:53.970 INFO:tasks.workunit.client.1.vm10.stdout:2/640: stat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f60 0 2026-03-09T20:47:53.970 INFO:tasks.workunit.client.0.vm07.stdout:6/699: write d8/d26/f4d [125398,65745] 0 2026-03-09T20:47:53.975 INFO:tasks.workunit.client.1.vm10.stdout:0/609: mknod d2/d9/da/d35/cda 0 2026-03-09T20:47:53.978 INFO:tasks.workunit.client.0.vm07.stdout:3/682: mkdir d1/d5/d9/d2f/d34/da5/dda 0 2026-03-09T20:47:53.990 INFO:tasks.workunit.client.1.vm10.stdout:5/591: rmdir d2/d39/d4b/d7a 39 2026-03-09T20:47:53.990 INFO:tasks.workunit.client.1.vm10.stdout:5/592: chown d2/d27/d75/d81 45 1 2026-03-09T20:47:53.994 INFO:tasks.workunit.client.0.vm07.stdout:8/639: dread d1/d5d/d6f/f61 [0,4194304] 0 2026-03-09T20:47:53.995 INFO:tasks.workunit.client.0.vm07.stdout:8/640: write d1/d5d/d6f/fc0 [157517,78690] 0 2026-03-09T20:47:54.012 INFO:tasks.workunit.client.0.vm07.stdout:4/622: dwrite d2/f69 [0,4194304] 0 2026-03-09T20:47:54.013 INFO:tasks.workunit.client.0.vm07.stdout:4/623: chown d2/df/c4f 175594 1 2026-03-09T20:47:54.025 INFO:tasks.workunit.client.0.vm07.stdout:7/742: fdatasync 
d3/da/db/d79/fd0 0 2026-03-09T20:47:54.032 INFO:tasks.workunit.client.0.vm07.stdout:2/684: mkdir d2/db/d28/d90/dd6 0 2026-03-09T20:47:54.035 INFO:tasks.workunit.client.1.vm10.stdout:3/604: mkdir dc/d14/d26/dcb 0 2026-03-09T20:47:54.037 INFO:tasks.workunit.client.0.vm07.stdout:9/658: symlink d4/d8/d19/d5f/da5/lec 0 2026-03-09T20:47:54.042 INFO:tasks.workunit.client.1.vm10.stdout:6/647: fsync d3/da/d11/d31/d4c/d60/f63 0 2026-03-09T20:47:54.046 INFO:tasks.workunit.client.0.vm07.stdout:0/709: dwrite d1/d1f/dc2/fce [0,4194304] 0 2026-03-09T20:47:54.059 INFO:tasks.workunit.client.1.vm10.stdout:4/598: creat d1/d2/d5c/d64/d6b/d79/fc0 x:0 0 0 2026-03-09T20:47:54.063 INFO:tasks.workunit.client.0.vm07.stdout:6/700: mknod d8/d5d/cde 0 2026-03-09T20:47:54.063 INFO:tasks.workunit.client.0.vm07.stdout:3/683: rmdir d1/dcf 39 2026-03-09T20:47:54.064 INFO:tasks.workunit.client.0.vm07.stdout:3/684: write d1/d5/d9/d2f/d66/fd3 [787624,25283] 0 2026-03-09T20:47:54.066 INFO:tasks.workunit.client.1.vm10.stdout:7/637: rename db/f69 to db/d28/d4c/d6e/fc8 0 2026-03-09T20:47:54.067 INFO:tasks.workunit.client.1.vm10.stdout:4/599: rename d1/d2 to d1/d2/d5c/d64/d6b/d81/dac/dc1 22 2026-03-09T20:47:54.077 INFO:tasks.workunit.client.0.vm07.stdout:5/750: link d5/df/d13/f2a d5/d33/f107 0 2026-03-09T20:47:54.077 INFO:tasks.workunit.client.0.vm07.stdout:5/751: write d5/f51 [5396469,85740] 0 2026-03-09T20:47:54.078 INFO:tasks.workunit.client.0.vm07.stdout:5/752: stat d5/d33/d39/d8d/dab/f60 0 2026-03-09T20:47:54.082 INFO:tasks.workunit.client.1.vm10.stdout:0/610: symlink d2/ldb 0 2026-03-09T20:47:54.085 INFO:tasks.workunit.client.1.vm10.stdout:9/690: link d2/d3/de/d8f/dbc/fda d2/d12/d5a/fea 0 2026-03-09T20:47:54.086 INFO:tasks.workunit.client.1.vm10.stdout:9/691: write d2/f46 [419304,127681] 0 2026-03-09T20:47:54.095 INFO:tasks.workunit.client.1.vm10.stdout:5/593: write d2/d27/d75/f88 [1112600,128911] 0 2026-03-09T20:47:54.096 INFO:tasks.workunit.client.1.vm10.stdout:5/594: read - d2/d39/dbf/d66/fc7 zero 
size 2026-03-09T20:47:54.096 INFO:tasks.workunit.client.1.vm10.stdout:5/595: dread - d2/d39/d4b/fda zero size 2026-03-09T20:47:54.096 INFO:tasks.workunit.client.1.vm10.stdout:8/685: mknod d0/d95/cd7 0 2026-03-09T20:47:54.105 INFO:tasks.workunit.client.1.vm10.stdout:3/605: dwrite dc/d14/d26/d29/d2a/f5e [0,4194304] 0 2026-03-09T20:47:54.108 INFO:tasks.workunit.client.1.vm10.stdout:3/606: sync 2026-03-09T20:47:54.110 INFO:tasks.workunit.client.0.vm07.stdout:4/624: dwrite d2/df/d17/f80 [0,4194304] 0 2026-03-09T20:47:54.127 INFO:tasks.workunit.client.0.vm07.stdout:2/685: fsync d2/d46/f7e 0 2026-03-09T20:47:54.127 INFO:tasks.workunit.client.0.vm07.stdout:2/686: readlink d2/d11/l30 0 2026-03-09T20:47:54.140 INFO:tasks.workunit.client.1.vm10.stdout:7/638: mknod db/d46/d89/dbf/cc9 0 2026-03-09T20:47:54.153 INFO:tasks.workunit.client.1.vm10.stdout:7/639: dread db/d1f/f62 [0,4194304] 0 2026-03-09T20:47:54.162 INFO:tasks.workunit.client.1.vm10.stdout:1/637: write d2/f1c [4828268,110931] 0 2026-03-09T20:47:54.164 INFO:tasks.workunit.client.1.vm10.stdout:2/641: mknod d5/d18/cd7 0 2026-03-09T20:47:54.164 INFO:tasks.workunit.client.0.vm07.stdout:9/659: dwrite d4/d16/d29/d24/f8c [4194304,4194304] 0 2026-03-09T20:47:54.167 INFO:tasks.workunit.client.1.vm10.stdout:4/600: dread d1/d2/d3/f18 [0,4194304] 0 2026-03-09T20:47:54.167 INFO:tasks.workunit.client.0.vm07.stdout:9/660: read d4/d11/f88 [98134,7111] 0 2026-03-09T20:47:54.168 INFO:tasks.workunit.client.1.vm10.stdout:4/601: chown d1/d2/d5c/d64/d6b/d81/dac/d1c/f23 38661 1 2026-03-09T20:47:54.187 INFO:tasks.workunit.client.0.vm07.stdout:1/721: rename d3/f9 to d3/d23/d55/ff0 0 2026-03-09T20:47:54.201 INFO:tasks.workunit.client.0.vm07.stdout:9/661: fsync d4/d16/d29/d24/f8c 0 2026-03-09T20:47:54.202 INFO:tasks.workunit.client.0.vm07.stdout:5/753: symlink d5/d50/l108 0 2026-03-09T20:47:54.212 INFO:tasks.workunit.client.1.vm10.stdout:9/692: rename d2/d28/d47/f58 to d2/d28/d47/d67/feb 0 2026-03-09T20:47:54.213 
INFO:tasks.workunit.client.1.vm10.stdout:9/693: read d2/d12/f26 [3374547,35764] 0 2026-03-09T20:47:54.214 INFO:tasks.workunit.client.1.vm10.stdout:9/694: stat d2/f46 0 2026-03-09T20:47:54.231 INFO:tasks.workunit.client.0.vm07.stdout:8/641: write d1/d3b/f3e [658934,101362] 0 2026-03-09T20:47:54.232 INFO:tasks.workunit.client.0.vm07.stdout:8/642: chown d1/dc/d16/fbe 0 1 2026-03-09T20:47:54.252 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:54 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:47:54.252 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:54 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:47:54.252 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:54 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:47:54.252 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:54 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:47:54.252 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:54 vm07.local ceph-mon[49120]: Manager daemon vm07.xjrvch is now available 2026-03-09T20:47:54.252 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:54 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:54.252 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:54 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:54.252 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:54 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:47:54.252 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:54 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/trash_purge_schedule"}]: dispatch 2026-03-09T20:47:54.252 INFO:tasks.workunit.client.1.vm10.stdout:5/596: dwrite d2/f64 [4194304,4194304] 0 2026-03-09T20:47:54.272 INFO:tasks.workunit.client.0.vm07.stdout:7/743: fdatasync d3/da/f3b 0 2026-03-09T20:47:54.281 INFO:tasks.workunit.client.1.vm10.stdout:3/607: readlink dc/d14/d20/d21/d3b/l7a 0 2026-03-09T20:47:54.282 INFO:tasks.workunit.client.1.vm10.stdout:3/608: fdatasync dc/d14/d26/f6f 0 2026-03-09T20:47:54.284 INFO:tasks.workunit.client.1.vm10.stdout:3/609: chown dc/d14/d20/d21/la9 12246 1 2026-03-09T20:47:54.296 INFO:tasks.workunit.client.1.vm10.stdout:4/602: symlink d1/d2/d5c/d64/d6b/d81/dac/lc2 0 2026-03-09T20:47:54.334 INFO:tasks.workunit.client.1.vm10.stdout:5/597: dread - d2/d58/fb9 zero size 2026-03-09T20:47:54.335 INFO:tasks.workunit.client.1.vm10.stdout:9/695: dwrite d2/d3/d85/f8b [0,4194304] 0 2026-03-09T20:47:54.336 INFO:tasks.workunit.client.1.vm10.stdout:9/696: stat d2/d12/d5a/da7 0 2026-03-09T20:47:54.345 INFO:tasks.workunit.client.1.vm10.stdout:6/648: creat d3/d30/d7f/fcb x:0 0 0 2026-03-09T20:47:54.349 INFO:tasks.workunit.client.1.vm10.stdout:1/638: symlink d2/da/lc8 0 2026-03-09T20:47:54.372 INFO:tasks.workunit.client.0.vm07.stdout:4/625: mkdir d2/d55/dab 0 2026-03-09T20:47:54.391 INFO:tasks.workunit.client.1.vm10.stdout:3/610: write dc/d14/d26/d29/f51 [893335,76432] 0 2026-03-09T20:47:54.393 INFO:tasks.workunit.client.1.vm10.stdout:7/640: rename db/d28/d2b/d36/d3b/f3d to db/d46/d89/fca 
0 2026-03-09T20:47:54.397 INFO:tasks.workunit.client.1.vm10.stdout:7/641: stat db/d28/d4c/d6e 0 2026-03-09T20:47:54.397 INFO:tasks.workunit.client.0.vm07.stdout:0/710: getdents d1/d2/d98/daf 0 2026-03-09T20:47:54.398 INFO:tasks.workunit.client.1.vm10.stdout:0/611: write d2/d4a/d58/d82/d71/d5d/f8c [473634,26088] 0 2026-03-09T20:47:54.405 INFO:tasks.workunit.client.1.vm10.stdout:8/686: creat d0/d22/d25/d2e/d41/d47/fd8 x:0 0 0 2026-03-09T20:47:54.418 INFO:tasks.workunit.client.0.vm07.stdout:3/685: rename d1/d5/d9/daf/dad to d1/d5/d9/d11/d6d/dd0/d95/ddb 0 2026-03-09T20:47:54.419 INFO:tasks.workunit.client.1.vm10.stdout:9/697: mknod d2/d28/d47/d6a/cec 0 2026-03-09T20:47:54.426 INFO:tasks.workunit.client.0.vm07.stdout:1/722: symlink d3/d97/da1/dab/lf1 0 2026-03-09T20:47:54.426 INFO:tasks.workunit.client.0.vm07.stdout:5/754: unlink d5/f51 0 2026-03-09T20:47:54.430 INFO:tasks.workunit.client.1.vm10.stdout:1/639: mkdir d2/da/d25/d3e/d55/dc9 0 2026-03-09T20:47:54.431 INFO:tasks.workunit.client.1.vm10.stdout:2/642: link d5/f7 d5/d18/d9f/fd8 0 2026-03-09T20:47:54.435 INFO:tasks.workunit.client.1.vm10.stdout:6/649: write d3/d30/d7f/d36/f4f [5062459,123051] 0 2026-03-09T20:47:54.451 INFO:tasks.workunit.client.0.vm07.stdout:4/626: write d2/d55/d5d/d3f/d4a/d4b/f74 [663441,55992] 0 2026-03-09T20:47:54.452 INFO:tasks.workunit.client.1.vm10.stdout:6/650: dread d3/da/d11/fc6 [4194304,4194304] 0 2026-03-09T20:47:54.461 INFO:tasks.workunit.client.0.vm07.stdout:7/744: dwrite d3/da/db/d32/d3e/dac/fdc [0,4194304] 0 2026-03-09T20:47:54.463 INFO:tasks.workunit.client.0.vm07.stdout:7/745: chown d3/d58 0 1 2026-03-09T20:47:54.469 INFO:tasks.workunit.client.0.vm07.stdout:2/687: truncate d2/db/f7c 7864364 0 2026-03-09T20:47:54.471 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:54 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:47:54.471 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:54 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:47:54.471 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:54 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:47:54.471 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:54 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:47:54.471 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:54 vm10.local ceph-mon[57011]: Manager daemon vm07.xjrvch is now available 2026-03-09T20:47:54.471 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:54 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:54.471 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:54 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:54.471 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:54 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:47:54.471 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:54 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/trash_purge_schedule"}]: dispatch 2026-03-09T20:47:54.471 INFO:tasks.workunit.client.1.vm10.stdout:5/598: rename d2/d39/d4b/d7a/f92 to d2/d39/fe5 0 
2026-03-09T20:47:54.472 INFO:tasks.workunit.client.0.vm07.stdout:0/711: truncate d1/d2/d33/d35/f64 224700 0 2026-03-09T20:47:54.472 INFO:tasks.workunit.client.0.vm07.stdout:2/688: write d2/db/f41 [529140,99439] 0 2026-03-09T20:47:54.508 INFO:tasks.workunit.client.1.vm10.stdout:6/651: dread d3/d30/d7f/d4a/f4b [0,4194304] 0 2026-03-09T20:47:54.509 INFO:tasks.workunit.client.0.vm07.stdout:9/662: rename d4/d16/d29/c9b to d4/d8/d59/ced 0 2026-03-09T20:47:54.509 INFO:tasks.workunit.client.0.vm07.stdout:3/686: symlink d1/d5/d9/d11/d6d/dd0/d59/ldc 0 2026-03-09T20:47:54.514 INFO:tasks.workunit.client.0.vm07.stdout:5/755: fsync d5/df/d13/f2a 0 2026-03-09T20:47:54.520 INFO:tasks.workunit.client.0.vm07.stdout:3/687: dread d1/d5/d9/f1b [0,4194304] 0 2026-03-09T20:47:54.521 INFO:tasks.workunit.client.0.vm07.stdout:3/688: chown d1/d5/d9/daf/d9f 13367325 1 2026-03-09T20:47:54.531 INFO:tasks.workunit.client.0.vm07.stdout:8/643: creat d1/dc/dba/fce x:0 0 0 2026-03-09T20:47:54.535 INFO:tasks.workunit.client.1.vm10.stdout:0/612: fsync d2/d9/da/d11/dd1/d34/f77 0 2026-03-09T20:47:54.541 INFO:tasks.workunit.client.1.vm10.stdout:8/687: mkdir d0/d22/d2f/d38/d64/db5/dd9 0 2026-03-09T20:47:54.544 INFO:tasks.workunit.client.1.vm10.stdout:8/688: fdatasync d0/d22/d25/d2e/d41/d47/f4b 0 2026-03-09T20:47:54.545 INFO:tasks.workunit.client.1.vm10.stdout:8/689: chown d0/d22/d2f/d38 30 1 2026-03-09T20:47:54.548 INFO:tasks.workunit.client.1.vm10.stdout:8/690: write d0/d22/d25/d2e/d41/d47/f4b [8226313,22964] 0 2026-03-09T20:47:54.568 INFO:tasks.workunit.client.0.vm07.stdout:7/746: mkdir d3/da/d53/df5 0 2026-03-09T20:47:54.569 INFO:tasks.workunit.client.1.vm10.stdout:9/698: dwrite d2/d3/d6d/db7/fc9 [0,4194304] 0 2026-03-09T20:47:54.582 INFO:tasks.workunit.client.1.vm10.stdout:2/643: creat d5/d18/d27/d89/db6/dd3/fd9 x:0 0 0 2026-03-09T20:47:54.591 INFO:tasks.workunit.client.1.vm10.stdout:3/611: truncate dc/d14/d90/fba 2258215 0 2026-03-09T20:47:54.596 INFO:tasks.workunit.client.1.vm10.stdout:4/603: dwrite 
d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/f72 [0,4194304] 0
2026-03-09T20:47:54.607 INFO:tasks.workunit.client.0.vm07.stdout:2/689: creat d2/d46/d6e/dbe/d96/fd7 x:0 0 0
2026-03-09T20:47:54.610 INFO:tasks.workunit.client.0.vm07.stdout:6/701: getdents d8/db3 0
2026-03-09T20:47:54.610 INFO:tasks.workunit.client.0.vm07.stdout:6/702: readlink d8/d26/d7d/lae 0
2026-03-09T20:47:54.619 INFO:tasks.workunit.client.0.vm07.stdout:9/663: mkdir d4/d8/d19/d5f/d73/dee 0
2026-03-09T20:47:54.631 INFO:tasks.workunit.client.1.vm10.stdout:7/642: rename db/d28/d2b/d36/d3b/d88/f71 to db/d28/d2b/d36/d3f/fcb 0
2026-03-09T20:47:54.631 INFO:tasks.workunit.client.1.vm10.stdout:5/599: fsync d2/d39/dbf/d69/d96/fc6 0
2026-03-09T20:47:54.634 INFO:tasks.workunit.client.0.vm07.stdout:5/756: chown d5/d33/d75/ffd 475597635 1
2026-03-09T20:47:54.641 INFO:tasks.workunit.client.0.vm07.stdout:5/757: chown d5/d19/c7a 454 1
2026-03-09T20:47:54.644 INFO:tasks.workunit.client.1.vm10.stdout:7/643: sync
2026-03-09T20:47:54.644 INFO:tasks.workunit.client.1.vm10.stdout:3/612: dread dc/d14/d26/d29/d2a/f66 [0,4194304] 0
2026-03-09T20:47:54.645 INFO:tasks.workunit.client.0.vm07.stdout:3/689: mknod d1/d5/dcd/cdd 0
2026-03-09T20:47:54.653 INFO:tasks.workunit.client.1.vm10.stdout:9/699: mkdir d2/d28/da2/ded 0
2026-03-09T20:47:54.654 INFO:tasks.workunit.client.0.vm07.stdout:4/627: truncate d2/df/d59/d8a/f9b 2838190 0
2026-03-09T20:47:54.656 INFO:tasks.workunit.client.1.vm10.stdout:2/644: fdatasync d5/d18/f24 0
2026-03-09T20:47:54.657 INFO:tasks.workunit.client.0.vm07.stdout:7/747: truncate d3/d58/dc1/fc8 976880 0
2026-03-09T20:47:54.661 INFO:tasks.workunit.client.0.vm07.stdout:0/712: unlink d1/d1f/d30/la3 0
2026-03-09T20:47:54.663 INFO:tasks.workunit.client.1.vm10.stdout:6/652: mknod d3/da/d11/d31/ccc 0
2026-03-09T20:47:54.673 INFO:tasks.workunit.client.0.vm07.stdout:2/690: unlink d2/db/d1c/fab 0
2026-03-09T20:47:54.680 INFO:tasks.workunit.client.1.vm10.stdout:6/653: dread d3/d30/d7f/d36/d5c/f5f [0,4194304] 0
2026-03-09T20:47:54.682 INFO:tasks.workunit.client.1.vm10.stdout:6/654: sync
2026-03-09T20:47:54.682 INFO:tasks.workunit.client.0.vm07.stdout:6/703: creat d8/d16/d22/d24/da0/fdf x:0 0 0
2026-03-09T20:47:54.685 INFO:tasks.workunit.client.1.vm10.stdout:4/604: read d1/d67/f9a [1948528,54764] 0
2026-03-09T20:47:54.686 INFO:tasks.workunit.client.1.vm10.stdout:0/613: write d2/d4a/d58/d82/d71/d5d/f76 [4964024,130332] 0
2026-03-09T20:47:54.686 INFO:tasks.workunit.client.0.vm07.stdout:9/664: symlink d4/d16/d29/d24/d37/lef 0
2026-03-09T20:47:54.696 INFO:tasks.workunit.client.0.vm07.stdout:1/723: creat d3/d14/ff2 x:0 0 0
2026-03-09T20:47:54.697 INFO:tasks.workunit.client.1.vm10.stdout:5/600: mkdir d2/d1b/d54/d78/de6 0
2026-03-09T20:47:54.703 INFO:tasks.workunit.client.1.vm10.stdout:8/691: fdatasync d0/d22/d25/d8f/fa6 0
2026-03-09T20:47:54.706 INFO:tasks.workunit.client.0.vm07.stdout:8/644: dwrite d1/d5d/d6f/d2f/d4d/d55/f8e [0,4194304] 0
2026-03-09T20:47:54.716 INFO:tasks.workunit.client.1.vm10.stdout:1/640: getdents d2/da/d25 0
2026-03-09T20:47:54.738 INFO:tasks.workunit.client.0.vm07.stdout:4/628: write d2/d55/d5d/f6f [1289193,6883] 0
2026-03-09T20:47:54.740 INFO:tasks.workunit.client.0.vm07.stdout:6/704: creat d8/d5d/fe0 x:0 0 0
2026-03-09T20:47:54.747 INFO:tasks.workunit.client.1.vm10.stdout:6/655: write d3/da/d11/d31/f82 [978202,123647] 0
2026-03-09T20:47:54.747 INFO:tasks.workunit.client.0.vm07.stdout:9/665: unlink d4/d8/d19/d5f/da5/lec 0
2026-03-09T20:47:54.749 INFO:tasks.workunit.client.1.vm10.stdout:6/656: sync
2026-03-09T20:47:54.752 INFO:tasks.workunit.client.1.vm10.stdout:0/614: creat d2/d9/d2a/fdc x:0 0 0
2026-03-09T20:47:54.752 INFO:tasks.workunit.client.0.vm07.stdout:1/724: dread - d3/d23/d67/fdf zero size
2026-03-09T20:47:54.770 INFO:tasks.workunit.client.0.vm07.stdout:1/725: dread d3/d14/d94/f95 [0,4194304] 0
2026-03-09T20:47:54.778 INFO:tasks.workunit.client.0.vm07.stdout:3/690: creat d1/d5/d9/d2f/d66/dc0/fde x:0 0 0
2026-03-09T20:47:54.785 INFO:tasks.workunit.client.0.vm07.stdout:8/645: creat d1/d5d/d6f/d2f/d4d/d55/fcf x:0 0 0
2026-03-09T20:47:54.787 INFO:tasks.workunit.client.0.vm07.stdout:3/691: dread d1/d5/d9/d2f/d34/f68 [4194304,4194304] 0
2026-03-09T20:47:54.804 INFO:tasks.workunit.client.0.vm07.stdout:7/748: mkdir d3/da/d53/df5/df6 0
2026-03-09T20:47:54.812 INFO:tasks.workunit.client.0.vm07.stdout:4/629: fdatasync d2/df/d59/f81 0
2026-03-09T20:47:54.837 INFO:tasks.workunit.client.0.vm07.stdout:9/666: write d4/d11/f1a [1718628,27597] 0
2026-03-09T20:47:54.839 INFO:tasks.workunit.client.0.vm07.stdout:5/758: rename d5/d33/d39/c65 to d5/d19/d73/c109 0
2026-03-09T20:47:54.843 INFO:tasks.workunit.client.0.vm07.stdout:1/726: symlink d3/d14/d94/lf3 0
2026-03-09T20:47:54.846 INFO:tasks.workunit.client.0.vm07.stdout:8/646: symlink d1/ld0 0
2026-03-09T20:47:54.862 INFO:tasks.workunit.client.1.vm10.stdout:5/601: creat d2/d39/dbf/d84/fe7 x:0 0 0
2026-03-09T20:47:54.863 INFO:tasks.workunit.client.0.vm07.stdout:7/749: dread d3/d58/d82/fa3 [0,4194304] 0
2026-03-09T20:47:54.869 INFO:tasks.workunit.client.1.vm10.stdout:8/692: unlink d0/d22/d25/d40/d86/d91/cae 0
2026-03-09T20:47:54.877 INFO:tasks.workunit.client.0.vm07.stdout:4/630: creat d2/d55/d5d/d3f/d4a/d4b/d52/d5c/d90/fac x:0 0 0
2026-03-09T20:47:54.890 INFO:tasks.workunit.client.0.vm07.stdout:6/705: truncate d8/d16/db4/f66 248802 0
2026-03-09T20:47:54.891 INFO:tasks.workunit.client.1.vm10.stdout:8/693: read d0/d22/d2f/d38/fa5 [226459,104333] 0
2026-03-09T20:47:54.893 INFO:tasks.workunit.client.1.vm10.stdout:7/644: write db/d28/d2b/d36/f1c [4821779,25414] 0
2026-03-09T20:47:54.900 INFO:tasks.workunit.client.0.vm07.stdout:9/667: truncate d4/d8/d19/d5f/d73/fcd 922609 0
2026-03-09T20:47:54.900 INFO:tasks.workunit.client.1.vm10.stdout:7/645: dwrite db/d46/d89/dbf/f7e [0,4194304] 0
2026-03-09T20:47:54.901 INFO:tasks.workunit.client.0.vm07.stdout:9/668: truncate d4/d8/dc/d4e/fd9 445754 0
2026-03-09T20:47:54.902 INFO:tasks.workunit.client.1.vm10.stdout:9/700: symlink d2/d28/da2/ded/lee 0
2026-03-09T20:47:54.937 INFO:tasks.workunit.client.0.vm07.stdout:1/727: dread d3/d66/f8c [0,4194304] 0
2026-03-09T20:47:54.944 INFO:tasks.workunit.client.0.vm07.stdout:7/750: mknod d3/d58/d77/de3/cf7 0
2026-03-09T20:47:54.945 INFO:tasks.workunit.client.0.vm07.stdout:7/751: chown d3/da/db/fe8 107988773 1
2026-03-09T20:47:54.952 INFO:tasks.workunit.client.0.vm07.stdout:0/713: link d1/d1f/d30/cd7 d1/ce6 0
2026-03-09T20:47:54.968 INFO:tasks.workunit.client.1.vm10.stdout:5/602: unlink d2/d39/d89/lbd 0
2026-03-09T20:47:54.970 INFO:tasks.workunit.client.1.vm10.stdout:4/605: write d1/d2/d5c/d64/d6b/d81/dac/f29 [2671730,61880] 0
2026-03-09T20:47:54.971 INFO:tasks.workunit.client.0.vm07.stdout:5/759: write d5/d69/fc5 [1366681,44048] 0
2026-03-09T20:47:54.977 INFO:tasks.workunit.client.0.vm07.stdout:3/692: write d1/d5/d9/d2f/d34/f4b [1685932,112243] 0
2026-03-09T20:47:54.978 INFO:tasks.workunit.client.0.vm07.stdout:4/631: creat d2/d55/d5d/d3f/d4a/fad x:0 0 0
2026-03-09T20:47:54.978 INFO:tasks.workunit.client.0.vm07.stdout:6/706: creat d8/d16/d22/db1/fe1 x:0 0 0
2026-03-09T20:47:54.982 INFO:tasks.workunit.client.0.vm07.stdout:9/669: dread d4/d8/dc/ff [0,4194304] 0
2026-03-09T20:47:54.982 INFO:tasks.workunit.client.1.vm10.stdout:8/694: symlink d0/d22/d25/d2e/d41/d85/db9/dc6/lda 0
2026-03-09T20:47:54.983 INFO:tasks.workunit.client.0.vm07.stdout:9/670: chown d4/d8/dc/c50 1084 1
2026-03-09T20:47:54.983 INFO:tasks.workunit.client.0.vm07.stdout:9/671: chown d4/d8/dc/ff 0 1
2026-03-09T20:47:54.988 INFO:tasks.workunit.client.1.vm10.stdout:7/646: fsync db/d28/d2b/d36/d3f/f6f 0
2026-03-09T20:47:54.997 INFO:tasks.workunit.client.0.vm07.stdout:1/728: rename d3/d66/fed to d3/d23/d67/d8a/ff4 0
2026-03-09T20:47:55.000 INFO:tasks.workunit.client.1.vm10.stdout:9/701: creat d2/d33/d37/fef x:0 0 0
2026-03-09T20:47:55.010 INFO:tasks.workunit.client.0.vm07.stdout:2/691: link d2/d46/d6e/dbe/l8f d2/db/d49/ld8 0
2026-03-09T20:47:55.026 INFO:tasks.workunit.client.0.vm07.stdout:3/693: creat d1/d5/d9/daf/fdf x:0 0 0
2026-03-09T20:47:55.043 INFO:tasks.workunit.client.0.vm07.stdout:4/632: write d2/d55/d5d/d3f/d4a/f5e [322088,121009] 0
2026-03-09T20:47:55.043 INFO:tasks.workunit.client.1.vm10.stdout:6/657: truncate d3/da/d11/d31/d4c/d60/fb1 3904970 0
2026-03-09T20:47:55.044 INFO:tasks.workunit.client.1.vm10.stdout:4/606: write d1/d67/f82 [736997,73246] 0
2026-03-09T20:47:55.048 INFO:tasks.workunit.client.1.vm10.stdout:8/695: dwrite d0/d22/d2c/f36 [0,4194304] 0
2026-03-09T20:47:55.052 INFO:tasks.workunit.client.1.vm10.stdout:3/613: getdents dc/d14/d22 0
2026-03-09T20:47:55.053 INFO:tasks.workunit.client.1.vm10.stdout:5/603: dwrite d2/d27/d37/d46/d5d/d77/f93 [0,4194304] 0
2026-03-09T20:47:55.060 INFO:tasks.workunit.client.1.vm10.stdout:4/607: dread d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/f72 [0,4194304] 0
2026-03-09T20:47:55.073 INFO:tasks.workunit.client.1.vm10.stdout:1/641: rename d2/da/d25/d46/d51/d7e/d9e to d2/da/d25/d3e/dca 0
2026-03-09T20:47:55.074 INFO:tasks.workunit.client.0.vm07.stdout:9/672: mknod d4/d8/d19/d5f/d73/dbc/cf0 0
2026-03-09T20:47:55.075 INFO:tasks.workunit.client.1.vm10.stdout:7/647: dwrite db/d28/d2b/d36/d63/d6d/f82 [0,4194304] 0
2026-03-09T20:47:55.076 INFO:tasks.workunit.client.1.vm10.stdout:7/648: chown db/d21/d23 774024280 1
2026-03-09T20:47:55.077 INFO:tasks.workunit.client.0.vm07.stdout:8/647: write d1/f33 [1227815,22684] 0
2026-03-09T20:47:55.087 INFO:tasks.workunit.client.0.vm07.stdout:7/752: symlink d3/da/db/d32/d3e/d5c/lf8 0
2026-03-09T20:47:55.091 INFO:tasks.workunit.client.1.vm10.stdout:2/645: getdents d5/d18/d27/d89/db6/d41/d77/db3/db5/d32 0
2026-03-09T20:47:55.096 INFO:tasks.workunit.client.0.vm07.stdout:0/714: mkdir d1/d2/de7 0
2026-03-09T20:47:55.096 INFO:tasks.workunit.client.0.vm07.stdout:2/692: readlink d2/l59 0
2026-03-09T20:47:55.096 INFO:tasks.workunit.client.0.vm07.stdout:2/693: readlink d2/db/l77 0
2026-03-09T20:47:55.097 INFO:tasks.workunit.client.0.vm07.stdout:0/715: write d1/d2/ff [788168,122084] 0
2026-03-09T20:47:55.097 INFO:tasks.workunit.client.0.vm07.stdout:2/694: chown d2/d46/l53 631428563 1
2026-03-09T20:47:55.106 INFO:tasks.workunit.client.1.vm10.stdout:1/642: dread d2/da/f88 [0,4194304] 0
2026-03-09T20:47:55.109 INFO:tasks.workunit.client.1.vm10.stdout:8/696: creat d0/d22/d2f/d38/d64/db5/fdb x:0 0 0
2026-03-09T20:47:55.113 INFO:tasks.workunit.client.1.vm10.stdout:2/646: dread d5/d18/d1b/f23 [0,4194304] 0
2026-03-09T20:47:55.123 INFO:tasks.workunit.client.0.vm07.stdout:6/707: mknod d8/d16/d22/d24/da0/dab/d40/ce2 0
2026-03-09T20:47:55.123 INFO:tasks.workunit.client.0.vm07.stdout:4/633: mknod d2/d55/d5d/d3f/cae 0
2026-03-09T20:47:55.131 INFO:tasks.workunit.client.0.vm07.stdout:1/729: creat d3/d97/da1/dc5/d90/de8/dba/ff5 x:0 0 0
2026-03-09T20:47:55.139 INFO:tasks.workunit.client.1.vm10.stdout:1/643: read d2/da/d25/d3e/d42/f57 [3302627,111175] 0
2026-03-09T20:47:55.144 INFO:tasks.workunit.client.1.vm10.stdout:2/647: mkdir d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/d93/da5/dda 0
2026-03-09T20:47:55.156 INFO:tasks.workunit.client.1.vm10.stdout:0/615: rename d2/d4a/fbe to d2/d4a/d58/d82/d71/d5d/fdd 0
2026-03-09T20:47:55.173 INFO:tasks.workunit.client.0.vm07.stdout:3/694: mkdir d1/d5/d9/d2f/d99/dd8/de0 0
2026-03-09T20:47:55.173 INFO:tasks.workunit.client.0.vm07.stdout:5/760: creat d5/df/d13/d3e/de1/f10a x:0 0 0
2026-03-09T20:47:55.173 INFO:tasks.workunit.client.1.vm10.stdout:7/649: symlink db/lcc 0
2026-03-09T20:47:55.173 INFO:tasks.workunit.client.1.vm10.stdout:1/644: stat d2/da/d25/d46/d51/d5d/d6e/f76 0
2026-03-09T20:47:55.173 INFO:tasks.workunit.client.1.vm10.stdout:3/614: getdents dc/d14 0
2026-03-09T20:47:55.174 INFO:tasks.workunit.client.1.vm10.stdout:0/616: dread d2/d9/f12 [0,4194304] 0
2026-03-09T20:47:55.176 INFO:tasks.workunit.client.0.vm07.stdout:9/673: creat d4/d16/d78/dc4/ff1 x:0 0 0
2026-03-09T20:47:55.177 INFO:tasks.workunit.client.1.vm10.stdout:3/615: dread dc/d14/d26/d29/d2a/f5e [0,4194304] 0
2026-03-09T20:47:55.179 INFO:tasks.workunit.client.1.vm10.stdout:5/604: getdents d2/d39/dbf/d63/d95 0
2026-03-09T20:47:55.180 INFO:tasks.workunit.client.1.vm10.stdout:3/616: dread dc/f87 [0,4194304] 0
2026-03-09T20:47:55.185 INFO:tasks.workunit.client.0.vm07.stdout:8/648: sync
2026-03-09T20:47:55.199 INFO:tasks.workunit.client.1.vm10.stdout:4/608: write d1/d47/f96 [545527,48879] 0
2026-03-09T20:47:55.201 INFO:tasks.workunit.client.1.vm10.stdout:9/702: dwrite d2/d12/d5a/fba [0,4194304] 0
2026-03-09T20:47:55.210 INFO:tasks.workunit.client.1.vm10.stdout:8/697: write d0/d22/d25/d6c/fb1 [1585497,17164] 0
2026-03-09T20:47:55.229 INFO:tasks.workunit.client.0.vm07.stdout:7/753: symlink d3/da/db/d32/d3e/dac/d43/d62/lf9 0
2026-03-09T20:47:55.230 INFO:tasks.workunit.client.0.vm07.stdout:6/708: write d8/d16/d22/d24/da0/dab/f81 [3245913,119197] 0
2026-03-09T20:47:55.230 INFO:tasks.workunit.client.0.vm07.stdout:2/695: mkdir d2/db/d49/d7d/d85/dd9 0
2026-03-09T20:47:55.244 INFO:tasks.workunit.client.1.vm10.stdout:0/617: write d2/d4a/fcf [636029,34504] 0
2026-03-09T20:47:55.246 INFO:tasks.workunit.client.1.vm10.stdout:7/650: link db/d28/d4c/f97 db/d28/d4c/fcd 0
2026-03-09T20:47:55.251 INFO:tasks.workunit.client.0.vm07.stdout:1/730: read d3/d23/f37 [5727371,108452] 0
2026-03-09T20:47:55.253 INFO:tasks.workunit.client.0.vm07.stdout:1/731: chown d3/d23/d52/f79 0 1
2026-03-09T20:47:55.260 INFO:tasks.workunit.client.1.vm10.stdout:6/658: dwrite d3/d30/d7f/f18 [0,4194304] 0
2026-03-09T20:47:55.262 INFO:tasks.workunit.client.1.vm10.stdout:6/659: chown d3/da/d11/d31 1099118 1
2026-03-09T20:47:55.265 INFO:tasks.workunit.client.1.vm10.stdout:1/645: dwrite d2/da/d25/d3e/d55/faf [0,4194304] 0
2026-03-09T20:47:55.267 INFO:tasks.workunit.client.0.vm07.stdout:0/716: dwrite d1/d2/dc/f6d [0,4194304] 0
2026-03-09T20:47:55.273 INFO:tasks.workunit.client.1.vm10.stdout:3/617: write dc/d14/d20/d21/d3b/f6d [678084,93116] 0
2026-03-09T20:47:55.275 INFO:tasks.workunit.client.0.vm07.stdout:6/709: mkdir d8/d5d/d97/dc4/de3 0
2026-03-09T20:47:55.278 INFO:tasks.workunit.client.1.vm10.stdout:4/609: write d1/d2/d3/f18 [3627414,16893] 0
2026-03-09T20:47:55.280 INFO:tasks.workunit.client.0.vm07.stdout:7/754: truncate d3/da/db/d32/d3e/dac/d1f/f63 1254660 0
2026-03-09T20:47:55.281 INFO:tasks.workunit.client.1.vm10.stdout:8/698: creat d0/d95/fdc x:0 0 0
2026-03-09T20:47:55.282 INFO:tasks.workunit.client.0.vm07.stdout:3/695: creat d1/d5/d9/d2f/d99/dd8/de0/fe1 x:0 0 0
2026-03-09T20:47:55.284 INFO:tasks.workunit.client.1.vm10.stdout:9/703: read d2/d12/f62 [1205394,36509] 0
2026-03-09T20:47:55.285 INFO:tasks.workunit.client.1.vm10.stdout:2/648: getdents d5/d18/d27/d89/db6/d41 0
2026-03-09T20:47:55.301 INFO:tasks.workunit.client.0.vm07.stdout:4/634: creat d2/df/faf x:0 0 0
2026-03-09T20:47:55.301 INFO:tasks.workunit.client.0.vm07.stdout:4/635: chown d2/d55/f71 132964 1
2026-03-09T20:47:55.313 INFO:tasks.workunit.client.0.vm07.stdout:0/717: read d1/d2/dc/f56 [2190493,64383] 0
2026-03-09T20:47:55.320 INFO:tasks.workunit.client.1.vm10.stdout:0/618: dread d2/d9/da/fd [4194304,4194304] 0
2026-03-09T20:47:55.320 INFO:tasks.workunit.client.1.vm10.stdout:1/646: fdatasync d2/da/d25/d3e/f94 0
2026-03-09T20:47:55.320 INFO:tasks.workunit.client.1.vm10.stdout:6/660: fsync d3/da/d11/f17 0
2026-03-09T20:47:55.320 INFO:tasks.workunit.client.0.vm07.stdout:8/649: creat d1/dc/d16/d26/fd1 x:0 0 0
2026-03-09T20:47:55.320 INFO:tasks.workunit.client.0.vm07.stdout:1/732: mknod d3/d97/da1/dc5/d90/dd3/cf6 0
2026-03-09T20:47:55.320 INFO:tasks.workunit.client.0.vm07.stdout:1/733: read - d3/d23/d67/d8a/ff4 zero size
2026-03-09T20:47:55.321 INFO:tasks.workunit.client.1.vm10.stdout:9/704: read d2/d12/d5a/da7/faf [149591,43821] 0
2026-03-09T20:47:55.328 INFO:tasks.workunit.client.1.vm10.stdout:8/699: dread d0/d22/d25/d6c/f82 [0,4194304] 0
2026-03-09T20:47:55.330 INFO:tasks.workunit.client.1.vm10.stdout:8/700: write d0/d22/d25/d2e/d41/d47/fd8 [272466,106997] 0
2026-03-09T20:47:55.331 INFO:tasks.workunit.client.1.vm10.stdout:7/651: creat db/d46/dab/db5/fce x:0 0 0
2026-03-09T20:47:55.334 INFO:tasks.workunit.client.1.vm10.stdout:0/619: mknod d2/d4a/d58/cde 0
2026-03-09T20:47:55.335 INFO:tasks.workunit.client.0.vm07.stdout:3/696: unlink d1/d5/d9/d2f/d3d/d71/lc6 0
2026-03-09T20:47:55.342 INFO:tasks.workunit.client.1.vm10.stdout:1/647: symlink d2/da/d25/lcb 0
2026-03-09T20:47:55.344 INFO:tasks.workunit.client.0.vm07.stdout:4/636: link d2/c1a d2/df/d17/cb0 0
2026-03-09T20:47:55.344 INFO:tasks.workunit.client.0.vm07.stdout:8/650: symlink d1/d5d/d6f/d2f/d4d/d95/dc1/ld2 0
2026-03-09T20:47:55.345 INFO:tasks.workunit.client.1.vm10.stdout:0/620: sync
2026-03-09T20:47:55.355 INFO:tasks.workunit.client.0.vm07.stdout:9/674: write d4/d8/fd [367461,117278] 0
2026-03-09T20:47:55.355 INFO:tasks.workunit.client.1.vm10.stdout:5/605: write d2/d1b/d54/d78/fdb [6311802,48787] 0
2026-03-09T20:47:55.358 INFO:tasks.workunit.client.0.vm07.stdout:5/761: dwrite d5/df/d13/d6c/fc9 [0,4194304] 0
2026-03-09T20:47:55.359 INFO:tasks.workunit.client.0.vm07.stdout:5/762: chown d5/l2e 1412808 1
2026-03-09T20:47:55.367 INFO:tasks.workunit.client.0.vm07.stdout:5/763: read d5/d33/fb6 [1946278,41708] 0
2026-03-09T20:47:55.374 INFO:tasks.workunit.client.1.vm10.stdout:1/648: dread - d2/da/fb6 zero size
2026-03-09T20:47:55.376 INFO:tasks.workunit.client.0.vm07.stdout:2/696: dwrite d2/db/d49/fc6 [0,4194304] 0
2026-03-09T20:47:55.382 INFO:tasks.workunit.client.0.vm07.stdout:6/710: write d8/f5f [1185160,80163] 0
2026-03-09T20:47:55.382 INFO:tasks.workunit.client.0.vm07.stdout:6/711: readlink d8/d16/d22/d24/l9d 0
2026-03-09T20:47:55.387 INFO:tasks.workunit.client.0.vm07.stdout:8/651: mkdir d1/dc/d16/dad/d87/dd3 0
2026-03-09T20:47:55.390 INFO:tasks.workunit.client.1.vm10.stdout:9/705: rename d2/d3/de/c18 to d2/d28/d47/d6a/cf0 0
2026-03-09T20:47:55.394 INFO:tasks.workunit.client.1.vm10.stdout:2/649: write d5/fa [3928264,18146] 0
2026-03-09T20:47:55.396 INFO:tasks.workunit.client.1.vm10.stdout:3/618: getdents dc/d14/d26/d29/d40/da8/d69/d75 0
2026-03-09T20:47:55.399 INFO:tasks.workunit.client.1.vm10.stdout:3/619: chown dc/d14/d26/d29/d40/da8/dc3/fc8 532891511 1
2026-03-09T20:47:55.402 INFO:tasks.workunit.client.1.vm10.stdout:3/620: dread - dc/d14/d22/fbf zero size
2026-03-09T20:47:55.405 INFO:tasks.workunit.client.1.vm10.stdout:3/621: chown dc/d14/d26/d29/d40/f71 1967792 1
2026-03-09T20:47:55.407 INFO:tasks.workunit.client.0.vm07.stdout:9/675: dread d4/d11/d2a/f39 [0,4194304] 0
2026-03-09T20:47:55.408 INFO:tasks.workunit.client.1.vm10.stdout:3/622: sync
2026-03-09T20:47:55.409 INFO:tasks.workunit.client.0.vm07.stdout:7/755: dwrite d3/da/db/f27 [0,4194304] 0
2026-03-09T20:47:55.411 INFO:tasks.workunit.client.0.vm07.stdout:7/756: write d3/da/db/d32/d3e/dac/fdc [2964404,108654] 0
2026-03-09T20:47:55.411 INFO:tasks.workunit.client.1.vm10.stdout:0/621: symlink d2/d4a/d58/d82/d93/ldf 0
2026-03-09T20:47:55.413 INFO:tasks.workunit.client.1.vm10.stdout:4/610: truncate d1/d2/d5c/d64/d6b/d81/dac/d1c/f3f 5103618 0
2026-03-09T20:47:55.413 INFO:tasks.workunit.client.0.vm07.stdout:5/764: fdatasync d5/df/d13/d3e/d47/fd2 0
2026-03-09T20:47:55.428 INFO:tasks.workunit.client.1.vm10.stdout:5/606: creat d2/d27/d37/dc8/fe8 x:0 0 0
2026-03-09T20:47:55.428 INFO:tasks.workunit.client.1.vm10.stdout:5/607: fsync d2/fd 0
2026-03-09T20:47:55.435 INFO:tasks.workunit.client.0.vm07.stdout:1/734: dwrite d3/d23/d55/f77 [0,4194304] 0
2026-03-09T20:47:55.440 INFO:tasks.workunit.client.1.vm10.stdout:6/661: creat d3/d30/d7f/fcd x:0 0 0
2026-03-09T20:47:55.440 INFO:tasks.workunit.client.0.vm07.stdout:0/718: write d1/fe5 [412651,21823] 0
2026-03-09T20:47:55.440 INFO:tasks.workunit.client.0.vm07.stdout:3/697: write d1/d5/d9/d11/d1f/f7f [2874257,98053] 0
2026-03-09T20:47:55.442 INFO:tasks.workunit.client.0.vm07.stdout:3/698: read d1/d5/d9/d2f/d34/f8f [701089,41197] 0
2026-03-09T20:47:55.442 INFO:tasks.workunit.client.0.vm07.stdout:3/699: readlink d1/d5/d9/d2f/d34/l9d 0
2026-03-09T20:47:55.445 INFO:tasks.workunit.client.0.vm07.stdout:6/712: dread - d8/d16/d4b/d88/f7f zero size
2026-03-09T20:47:55.445 INFO:tasks.workunit.client.0.vm07.stdout:6/713: fdatasync d8/fdd 0
2026-03-09T20:47:55.446 INFO:tasks.workunit.client.0.vm07.stdout:6/714: fdatasync d8/d16/d22/d24/da0/dab/f81 0
2026-03-09T20:47:55.446 INFO:tasks.workunit.client.0.vm07.stdout:6/715: readlink d8/d16/d61/la4 0
2026-03-09T20:47:55.453 INFO:tasks.workunit.client.1.vm10.stdout:9/706: mknod d2/da6/cf1 0
2026-03-09T20:47:55.455 INFO:tasks.workunit.client.1.vm10.stdout:7/652: write db/d28/f41 [2789937,106323] 0
2026-03-09T20:47:55.455 INFO:tasks.workunit.client.0.vm07.stdout:8/652: read d1/f25 [471333,126434] 0
2026-03-09T20:47:55.459 INFO:tasks.workunit.client.1.vm10.stdout:8/701: rmdir d0/d22/d2f/d38/d64/db5/dd9 0
2026-03-09T20:47:55.466 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:55 vm10.local ceph-mon[57011]: mgrmap e28: vm07.xjrvch(active, since 1.10647s)
2026-03-09T20:47:55.466 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:55 vm10.local ceph-mon[57011]: pgmap v3: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail
2026-03-09T20:47:55.467 INFO:tasks.workunit.client.1.vm10.stdout:2/650: fdatasync d5/d18/d27/d89/f9a 0
2026-03-09T20:47:55.468 INFO:tasks.workunit.client.0.vm07.stdout:9/676: symlink d4/d16/d29/d9c/lf2 0
2026-03-09T20:47:55.482 INFO:tasks.workunit.client.0.vm07.stdout:7/757: dread d3/f8f [0,4194304] 0
2026-03-09T20:47:55.483 INFO:tasks.workunit.client.0.vm07.stdout:7/758: write d3/da/db/d32/d3e/dac/ff4 [977603,75121] 0
2026-03-09T20:47:55.489 INFO:tasks.workunit.client.1.vm10.stdout:5/608: mkdir d2/d39/dbf/d69/de9 0
2026-03-09T20:47:55.489 INFO:tasks.workunit.client.1.vm10.stdout:1/649: mknod d2/da/d25/d3e/dca/da2/db9/ccc 0
2026-03-09T20:47:55.490 INFO:tasks.workunit.client.0.vm07.stdout:4/637: dwrite d2/df/f49 [0,4194304] 0
2026-03-09T20:47:55.499 INFO:tasks.workunit.client.0.vm07.stdout:2/697: write d2/d46/d72/fa1 [1000036,122963] 0
2026-03-09T20:47:55.500 INFO:tasks.workunit.client.1.vm10.stdout:8/702: dread d0/f11 [0,4194304] 0
2026-03-09T20:47:55.500 INFO:tasks.workunit.client.0.vm07.stdout:2/698: write d2/db/d28/d57/fcd [225213,33822] 0
2026-03-09T20:47:55.502 INFO:tasks.workunit.client.1.vm10.stdout:4/611: dwrite d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/f5a [4194304,4194304] 0
2026-03-09T20:47:55.503 INFO:tasks.workunit.client.1.vm10.stdout:4/612: stat d1/d2/d5c/d64/d6b 0
2026-03-09T20:47:55.505 INFO:tasks.workunit.client.1.vm10.stdout:0/622: dwrite d2/d4a/d58/d82/d93/fbc [0,4194304] 0
2026-03-09T20:47:55.510 INFO:tasks.workunit.client.1.vm10.stdout:9/707: symlink d2/d12/lf2 0
2026-03-09T20:47:55.520 INFO:tasks.workunit.client.0.vm07.stdout:0/719: mkdir d1/d2/d98/de8 0
2026-03-09T20:47:55.524 INFO:tasks.workunit.client.1.vm10.stdout:3/623: mknod dc/d14/d26/d29/d40/da2/ccc 0
2026-03-09T20:47:55.530 INFO:tasks.workunit.client.1.vm10.stdout:2/651: unlink d5/fa 0
2026-03-09T20:47:55.533 INFO:tasks.workunit.client.1.vm10.stdout:9/708: dread d2/d3/d6d/db7/fc9 [0,4194304] 0
2026-03-09T20:47:55.537 INFO:tasks.workunit.client.1.vm10.stdout:6/662: creat d3/d30/d7f/d36/d5c/dad/fce x:0 0 0
2026-03-09T20:47:55.540 INFO:tasks.workunit.client.1.vm10.stdout:1/650: dread d2/da/d25/d46/f74 [0,4194304] 0
2026-03-09T20:47:55.540 INFO:tasks.workunit.client.1.vm10.stdout:1/651: chown d2/da/l71 18 1
2026-03-09T20:47:55.540 INFO:tasks.workunit.client.1.vm10.stdout:8/703: creat d0/d22/d25/d2e/d41/d85/db9/fdd x:0 0 0
2026-03-09T20:47:55.541 INFO:tasks.workunit.client.1.vm10.stdout:8/704: readlink d0/d22/d25/d6c/l9e 0
2026-03-09T20:47:55.545 INFO:tasks.workunit.client.0.vm07.stdout:5/765: rename d5/df/d62 to d5/df/d13/d4f/d101/d10b 0
2026-03-09T20:47:55.549 INFO:tasks.workunit.client.0.vm07.stdout:5/766: dread d5/df/d13/d4f/ff4 [0,4194304] 0
2026-03-09T20:47:55.550 INFO:tasks.workunit.client.1.vm10.stdout:0/623: dread d2/d9/da/d11/f15 [0,4194304] 0
2026-03-09T20:47:55.552 INFO:tasks.workunit.client.1.vm10.stdout:1/652: dread d2/da/d25/d3e/d42/f62 [0,4194304] 0
2026-03-09T20:47:55.554 INFO:tasks.workunit.client.0.vm07.stdout:7/759: truncate d3/d58/d82/fa3 1520426 0
2026-03-09T20:47:55.555 INFO:tasks.workunit.client.1.vm10.stdout:3/624: symlink dc/d14/d26/d29/d40/lcd 0
2026-03-09T20:47:55.556 INFO:tasks.workunit.client.0.vm07.stdout:4/638: mknod d2/d55/d5d/d3f/d4a/d85/cb1 0
2026-03-09T20:47:55.557 INFO:tasks.workunit.client.0.vm07.stdout:2/699: mkdir d2/d46/d6e/dda 0
2026-03-09T20:47:55.557 INFO:tasks.workunit.client.0.vm07.stdout:2/700: chown d2/db/l94 36267 1
2026-03-09T20:47:55.565 INFO:tasks.workunit.client.1.vm10.stdout:6/663: fsync d3/d30/d7f/d4a/f9a 0
2026-03-09T20:47:55.570 INFO:tasks.workunit.client.0.vm07.stdout:1/735: write d3/d23/f49 [5211047,38514] 0
2026-03-09T20:47:55.570 INFO:tasks.workunit.client.1.vm10.stdout:7/653: write db/d28/f91 [888325,39091] 0
2026-03-09T20:47:55.571 INFO:tasks.workunit.client.1.vm10.stdout:7/654: chown db/d21/d26/f52 3842424 1
2026-03-09T20:47:55.571 INFO:tasks.workunit.client.1.vm10.stdout:7/655: chown db/d46/d89/dbf/d78 24647120 1
2026-03-09T20:47:55.578 INFO:tasks.workunit.client.0.vm07.stdout:6/716: write d8/d16/d4b/d88/f70 [1193330,18455] 0
2026-03-09T20:47:55.580 INFO:tasks.workunit.client.0.vm07.stdout:3/700: dwrite d1/d5/d9/d2f/d86/fbb [0,4194304] 0
2026-03-09T20:47:55.581 INFO:tasks.workunit.client.1.vm10.stdout:5/609: dwrite d2/d27/d37/d46/f94 [0,4194304] 0
2026-03-09T20:47:55.589 INFO:tasks.workunit.client.0.vm07.stdout:0/720: dwrite f0 [0,4194304] 0
2026-03-09T20:47:55.590 INFO:tasks.workunit.client.0.vm07.stdout:8/653: mkdir d1/d5d/d6f/d2f/d4d/dd4 0
2026-03-09T20:47:55.591 INFO:tasks.workunit.client.0.vm07.stdout:0/721: write d1/f57 [933986,119920] 0
2026-03-09T20:47:55.591 INFO:tasks.workunit.client.0.vm07.stdout:0/722: chown d1/d2/l27 99736375 1
2026-03-09T20:47:55.598 INFO:tasks.workunit.client.1.vm10.stdout:0/624: mkdir d2/d9/da/d11/dd1/db7/dcd/de0 0
2026-03-09T20:47:55.599 INFO:tasks.workunit.client.1.vm10.stdout:2/652: dwrite d5/d18/f1f [0,4194304] 0
2026-03-09T20:47:55.600 INFO:tasks.workunit.client.0.vm07.stdout:5/767: readlink d5/df/d13/d30/l6b 0
2026-03-09T20:47:55.600 INFO:tasks.workunit.client.1.vm10.stdout:2/653: chown d5/d18/d27 57412994 1
2026-03-09T20:47:55.605 INFO:tasks.workunit.client.0.vm07.stdout:7/760: rename d3/da/db/d32/d3e/f65 to d3/da/db/d32/d3e/dac/d1f/d50/ffa 0
2026-03-09T20:47:55.608 INFO:tasks.workunit.client.0.vm07.stdout:4/639: mknod d2/d55/d5d/cb2 0
2026-03-09T20:47:55.614 INFO:tasks.workunit.client.0.vm07.stdout:1/736: symlink d3/d97/da1/dc5/d90/dd3/lf7 0
2026-03-09T20:47:55.614 INFO:tasks.workunit.client.1.vm10.stdout:6/664: mkdir d3/da/d11/d26/dcf 0
2026-03-09T20:47:55.626 INFO:tasks.workunit.client.0.vm07.stdout:4/640: dread d2/df/d59/f81 [0,4194304] 0
2026-03-09T20:47:55.626 INFO:tasks.workunit.client.0.vm07.stdout:4/641: dread - d2/d1f/f9c zero size
2026-03-09T20:47:55.629 INFO:tasks.workunit.client.0.vm07.stdout:6/717: fdatasync d8/d26/f87 0
2026-03-09T20:47:55.632 INFO:tasks.workunit.client.1.vm10.stdout:8/705: mknod d0/d22/d25/d2e/d41/d47/d63/dad/cde 0
2026-03-09T20:47:55.632 INFO:tasks.workunit.client.1.vm10.stdout:2/654: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f60 [0,4194304] 0
2026-03-09T20:47:55.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:55 vm07.local ceph-mon[49120]: mgrmap e28: vm07.xjrvch(active, since 1.10647s)
2026-03-09T20:47:55.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:55 vm07.local ceph-mon[49120]: pgmap v3: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail
2026-03-09T20:47:55.640 INFO:tasks.workunit.client.1.vm10.stdout:0/625: mkdir d2/d9/d69/d80/de1 0
2026-03-09T20:47:55.641 INFO:tasks.workunit.client.1.vm10.stdout:4/613: creat d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/d4a/d9b/fc3 x:0 0 0
2026-03-09T20:47:55.641 INFO:tasks.workunit.client.1.vm10.stdout:0/626: fsync d2/d9/d2a/fdc 0
2026-03-09T20:47:55.645 INFO:tasks.workunit.client.1.vm10.stdout:4/614: dread d1/d2/d3/f18 [0,4194304] 0
2026-03-09T20:47:55.646 INFO:tasks.workunit.client.0.vm07.stdout:5/768: fsync d5/fa6 0
2026-03-09T20:47:55.653 INFO:tasks.workunit.client.0.vm07.stdout:1/737: fsync d3/d14/f6a 0
2026-03-09T20:47:55.654 INFO:tasks.workunit.client.0.vm07.stdout:1/738: chown d3/d97/da1/fbb 280334562 1
2026-03-09T20:47:55.654 INFO:tasks.workunit.client.0.vm07.stdout:1/739: fsync d3/d23/d67/f92 0
2026-03-09T20:47:55.658 INFO:tasks.workunit.client.1.vm10.stdout:4/615: dwrite d1/d2/d5c/d64/d6b/d81/f8a [0,4194304] 0
2026-03-09T20:47:55.661 INFO:tasks.workunit.client.1.vm10.stdout:6/665: creat d3/da/d11/d31/d47/d87/fd0 x:0 0 0
2026-03-09T20:47:55.662 INFO:tasks.workunit.client.1.vm10.stdout:6/666: readlink d3/da/d11/d31/d4c/l98 0
2026-03-09T20:47:55.664 INFO:tasks.workunit.client.0.vm07.stdout:6/718: rename d8/d16/db4 to d8/d16/d22/d9b/de4 0
2026-03-09T20:47:55.670 INFO:tasks.workunit.client.0.vm07.stdout:9/677: write d4/d8/f34 [942984,37574] 0
2026-03-09T20:47:55.678 INFO:tasks.workunit.client.0.vm07.stdout:7/761: dread d3/d58/dc1/fcc [0,4194304] 0
2026-03-09T20:47:55.679 INFO:tasks.workunit.client.1.vm10.stdout:4/616: dread d1/d2/d5c/d64/d61/f85 [0,4194304] 0
2026-03-09T20:47:55.692 INFO:tasks.workunit.client.0.vm07.stdout:5/769: chown d5/df/d13/d4f/d101/d10b/l9a 336814 1
2026-03-09T20:47:55.693 INFO:tasks.workunit.client.1.vm10.stdout:0/627: mkdir d2/d9/d69/de2 0
2026-03-09T20:47:55.693 INFO:tasks.workunit.client.1.vm10.stdout:5/610: creat d2/d39/d4b/d7a/dd9/fea x:0 0 0
2026-03-09T20:47:55.700 INFO:tasks.workunit.client.1.vm10.stdout:3/625: link dc/d14/d20/d21/la9 dc/d14/d26/d37/lce 0
2026-03-09T20:47:55.702 INFO:tasks.workunit.client.1.vm10.stdout:9/709: getdents d2/d3/de/d8f/dbc 0
2026-03-09T20:47:55.704 INFO:tasks.workunit.client.1.vm10.stdout:4/617: dwrite d1/d2/d5c/d64/d6b/d81/da9/fb8 [0,4194304] 0
2026-03-09T20:47:55.706 INFO:tasks.workunit.client.0.vm07.stdout:9/678: chown d4/d16/d29/d24/d37/d44/d62/d74/fa2 0 1
2026-03-09T20:47:55.716 INFO:tasks.workunit.client.1.vm10.stdout:0/628: dread d2/d9/da/fa7 [0,4194304] 0
2026-03-09T20:47:55.739 INFO:tasks.workunit.client.1.vm10.stdout:4/618: dread d1/d2/f43 [0,4194304] 0
2026-03-09T20:47:55.739 INFO:tasks.workunit.client.1.vm10.stdout:4/619: truncate d1/d2/f7 8399290 0
2026-03-09T20:47:55.742 INFO:tasks.workunit.client.1.vm10.stdout:4/620: chown d1/d2/d3/d54/daa 56 1
2026-03-09T20:47:55.742 INFO:tasks.workunit.client.1.vm10.stdout:1/653: write d2/da/f1e [195214,15727] 0
2026-03-09T20:47:55.742 INFO:tasks.workunit.client.0.vm07.stdout:3/701: truncate d1/d5/d9/d11/d6d/dd0/f55 6645614 0
2026-03-09T20:47:55.751 INFO:tasks.workunit.client.0.vm07.stdout:8/654: dwrite d1/d5d/d6f/d2f/f51 [0,4194304] 0
2026-03-09T20:47:55.758 INFO:tasks.workunit.client.0.vm07.stdout:2/701: dwrite d2/db/d28/d5c/f91 [0,4194304] 0
2026-03-09T20:47:55.759 INFO:tasks.workunit.client.0.vm07.stdout:3/702: write d1/d5/d9/d11/d6d/dd0/d59/fd1 [962078,22887] 0
2026-03-09T20:47:55.763 INFO:tasks.workunit.client.1.vm10.stdout:2/655: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/f69 [0,4194304] 0
2026-03-09T20:47:55.781 INFO:tasks.workunit.client.0.vm07.stdout:6/719: dwrite d8/d26/d7d/f8b [0,4194304] 0
2026-03-09T20:47:55.787 INFO:tasks.workunit.client.0.vm07.stdout:5/770: fsync d5/df/d13/d6c/f77 0
2026-03-09T20:47:55.796 INFO:tasks.workunit.client.0.vm07.stdout:9/679: rmdir d4/d8/dc/d15 39
2026-03-09T20:47:55.797 INFO:tasks.workunit.client.0.vm07.stdout:3/703: rmdir d1/d5/d9/d2f/d34 39
2026-03-09T20:47:55.804 INFO:tasks.workunit.client.0.vm07.stdout:8/655: creat d1/d5d/d6f/d2f/d4d/d63/fd5 x:0 0 0
2026-03-09T20:47:55.815 INFO:tasks.workunit.client.0.vm07.stdout:0/723: getdents d1/d1f/d53 0
2026-03-09T20:47:55.816 INFO:tasks.workunit.client.0.vm07.stdout:4/642: readlink d2/d55/d5d/d3f/d4a/d4b/d52/d5c/d90/l92 0
2026-03-09T20:47:55.818 INFO:tasks.workunit.client.0.vm07.stdout:9/680: mkdir d4/d16/d29/d24/d37/d8d/df3 0
2026-03-09T20:47:55.819 INFO:tasks.workunit.client.0.vm07.stdout:4/643: write d2/d55/d5d/d3f/d4a/f84 [676471,81300] 0
2026-03-09T20:47:55.819 INFO:tasks.workunit.client.0.vm07.stdout:4/644: readlink d2/df/l4e 0
2026-03-09T20:47:55.827 INFO:tasks.workunit.client.0.vm07.stdout:0/724: truncate d1/f3d 1730607 0
2026-03-09T20:47:55.844 INFO:tasks.workunit.client.0.vm07.stdout:2/702: rename d2/d46 to d2/d11/ddb 0
2026-03-09T20:47:55.844 INFO:tasks.workunit.client.0.vm07.stdout:9/681: creat d4/d16/d29/ff4 x:0 0 0
2026-03-09T20:47:55.867 INFO:tasks.workunit.client.0.vm07.stdout:7/762: write d3/f18 [3836735,84594] 0
2026-03-09T20:47:55.872 INFO:tasks.workunit.client.0.vm07.stdout:1/740: dwrite d3/d97/da1/dc5/d90/f96 [0,4194304] 0
2026-03-09T20:47:55.877 INFO:tasks.workunit.client.0.vm07.stdout:1/741: readlink d3/d97/da1/dc5/d90/dd3/lf7 0
2026-03-09T20:47:55.877 INFO:tasks.workunit.client.0.vm07.stdout:1/742: write d3/fc [3099855,110252] 0
2026-03-09T20:47:55.897 INFO:tasks.workunit.client.0.vm07.stdout:5/771: dwrite d5/d19/d73/d9c/fcb [0,4194304] 0
2026-03-09T20:47:55.906 INFO:tasks.workunit.client.0.vm07.stdout:8/656: write d1/d5d/d6f/f30 [3609024,53593] 0
2026-03-09T20:47:55.911 INFO:tasks.workunit.client.0.vm07.stdout:8/657: write d1/dc/dba/fce [960455,129062] 0
2026-03-09T20:47:55.949 INFO:tasks.workunit.client.0.vm07.stdout:2/703: unlink d2/db/d1c/faa 0
2026-03-09T20:47:55.949 INFO:tasks.workunit.client.0.vm07.stdout:2/704: chown d2/db/d28/d90/fd5 457012219 1
2026-03-09T20:47:55.955 INFO:tasks.workunit.client.0.vm07.stdout:2/705: dread d2/d11/ddb/d6e/f7a [0,4194304] 0
2026-03-09T20:47:55.959 INFO:tasks.workunit.client.0.vm07.stdout:6/720: getdents d8/d26 0
2026-03-09T20:47:55.959 INFO:tasks.workunit.client.0.vm07.stdout:4/645: truncate d2/f69 3332139 0
2026-03-09T20:47:55.960 INFO:tasks.workunit.client.1.vm10.stdout:5/611: truncate d2/d1b/f2f 83284 0
2026-03-09T20:47:55.993 INFO:tasks.workunit.client.0.vm07.stdout:1/743: fsync d3/d23/d52/f79 0
2026-03-09T20:47:56.001 INFO:tasks.workunit.client.1.vm10.stdout:7/656: getdents db/d46 0
2026-03-09T20:47:56.002 INFO:tasks.workunit.client.1.vm10.stdout:7/657: chown db/d28/d2b/d36/d63/d8b/la7 32 1
2026-03-09T20:47:56.010 INFO:tasks.workunit.client.1.vm10.stdout:3/626: dwrite dc/d14/d26/d29/d2a/f57 [0,4194304] 0
2026-03-09T20:47:56.026 INFO:tasks.workunit.client.1.vm10.stdout:9/710: rename d2/d3/de/d8f/fd0 to d2/d3/d6d/ff3 0
2026-03-09T20:47:56.026 INFO:tasks.workunit.client.1.vm10.stdout:9/711: stat d2/ccb 0
2026-03-09T20:47:56.027 INFO:tasks.workunit.client.1.vm10.stdout:6/667: mkdir d3/da/d11/d89/db9/dd1 0
2026-03-09T20:47:56.029 INFO:tasks.workunit.client.1.vm10.stdout:8/706: link d0/d22/d2c/f3f d0/dd1/fdf 0
2026-03-09T20:47:56.034 INFO:tasks.workunit.client.0.vm07.stdout:3/704: rename d1/d5/d9/f3c to d1/d5/d9/d2f/d3d/d71/dcc/fe2 0
2026-03-09T20:47:56.034 INFO:tasks.workunit.client.0.vm07.stdout:8/658: read d1/fb5 [70523,87180] 0
2026-03-09T20:47:56.034 INFO:tasks.workunit.client.0.vm07.stdout:3/705: readlink d1/l3e 0
2026-03-09T20:47:56.041 INFO:tasks.workunit.client.1.vm10.stdout:4/621: symlink d1/d2/d3/d70/d78/lc4 0
2026-03-09T20:47:56.048 INFO:tasks.workunit.client.0.vm07.stdout:9/682: fsync d4/d8/dc/d15/f18 0
2026-03-09T20:47:56.052 INFO:tasks.workunit.client.0.vm07.stdout:5/772: write d5/df/d13/d30/fac [756861,45162] 0
2026-03-09T20:47:56.065 INFO:tasks.workunit.client.1.vm10.stdout:2/656: mkdir d5/d18/d27/d38/d61/dc8/ddb 0
2026-03-09T20:47:56.069 INFO:tasks.workunit.client.1.vm10.stdout:0/629: dwrite d2/d9/da/d35/f7e [0,4194304] 0
2026-03-09T20:47:56.082 INFO:tasks.workunit.client.1.vm10.stdout:5/612: write d2/d27/d37/d46/fb7 [704898,115169] 0
2026-03-09T20:47:56.082 INFO:tasks.workunit.client.0.vm07.stdout:2/706: truncate d2/f4 1395462 0
2026-03-09T20:47:56.083 INFO:tasks.workunit.client.0.vm07.stdout:7/763: mkdir d3/da4/df2/dfb 0
2026-03-09T20:47:56.086 INFO:tasks.workunit.client.0.vm07.stdout:4/646: rmdir d2/d55/d5d/d3f/d4a/d7d 39
2026-03-09T20:47:56.095 INFO:tasks.workunit.client.1.vm10.stdout:3/627: fdatasync dc/d14/d20/d2e/f38 0
2026-03-09T20:47:56.100 INFO:tasks.workunit.client.0.vm07.stdout:6/721: dread d8/d16/d22/d9b/de4/f73 [0,4194304] 0
2026-03-09T20:47:56.116 INFO:tasks.workunit.client.0.vm07.stdout:6/722: dwrite d8/fdd [0,4194304] 0
2026-03-09T20:47:56.118 INFO:tasks.workunit.client.0.vm07.stdout:4/647: dwrite d2/d55/d5d/d3f/d4a/fad [0,4194304] 0
2026-03-09T20:47:56.128 INFO:tasks.workunit.client.1.vm10.stdout:6/668: rename d3/da/d11/d31/d4c to d3/da/d11/d89/db9/dd1/dd2 0
2026-03-09T20:47:56.129 INFO:tasks.workunit.client.1.vm10.stdout:1/654: symlink d2/lcd 0
2026-03-09T20:47:56.137 INFO:tasks.workunit.client.1.vm10.stdout:1/655: dread d2/da/d25/d3e/d55/f9a [0,4194304] 0
2026-03-09T20:47:56.141 INFO:tasks.workunit.client.1.vm10.stdout:4/622: symlink d1/d67/lc5 0
2026-03-09T20:47:56.151 INFO:tasks.workunit.client.1.vm10.stdout:2/657: mkdir d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/ddc 0
2026-03-09T20:47:56.153 INFO:tasks.workunit.client.0.vm07.stdout:2/707: read d2/db/d49/fc6 [1727095,27123] 0
2026-03-09T20:47:56.158 INFO:tasks.workunit.client.0.vm07.stdout:5/773: sync
2026-03-09T20:47:56.158 INFO:tasks.workunit.client.0.vm07.stdout:2/708: stat d2/db/c23 0
2026-03-09T20:47:56.158 INFO:tasks.workunit.client.0.vm07.stdout:4/648: sync
2026-03-09T20:47:56.158 INFO:tasks.workunit.client.0.vm07.stdout:7/764: creat d3/da/d53/db7/dde/dc5/ffc x:0 0 0
2026-03-09T20:47:56.159 INFO:tasks.workunit.client.0.vm07.stdout:7/765: chown d3/da/d53/db7/dde/dc5/fdb 182594 1
2026-03-09T20:47:56.170 INFO:tasks.workunit.client.0.vm07.stdout:9/683: dread d4/d8/d19/d89/f93 [0,4194304] 0
2026-03-09T20:47:56.178 INFO:tasks.workunit.client.0.vm07.stdout:1/744: creat d3/d97/da1/dab/de2/ff8 x:0 0 0
2026-03-09T20:47:56.203 INFO:tasks.workunit.client.1.vm10.stdout:9/712: dwrite d2/d28/d47/d67/f99 [0,4194304] 0
2026-03-09T20:47:56.206 INFO:tasks.workunit.client.1.vm10.stdout:5/613: dwrite d2/d58/fb9 [0,4194304] 0
2026-03-09T20:47:56.209 INFO:tasks.workunit.client.1.vm10.stdout:6/669: chown d3/d30/d7f/d24/ca2 109472222 1
2026-03-09T20:47:56.215 INFO:tasks.workunit.client.1.vm10.stdout:0/630: dwrite d2/d9/da/d11/dd1/d34/fc5 [0,4194304] 0
2026-03-09T20:47:56.216 INFO:tasks.workunit.client.1.vm10.stdout:1/656: dread - d2/da/d25/d46/d51/d5d/d6e/f9c zero size
2026-03-09T20:47:56.231 INFO:tasks.workunit.client.1.vm10.stdout:7/658: creat db/d28/d2b/d36/d3b/fcf x:0 0 0
2026-03-09T20:47:56.233 INFO:tasks.workunit.client.1.vm10.stdout:7/659: write db/d21/d26/f52 [3822921,116225] 0
2026-03-09T20:47:56.245 INFO:tasks.workunit.client.1.vm10.stdout:5/614: truncate d2/d1b/f5c 819786 0
2026-03-09T20:47:56.253 INFO:tasks.workunit.client.1.vm10.stdout:6/670: creat d3/d30/d7f/d24/fd3 x:0 0 0
2026-03-09T20:47:56.253 INFO:tasks.workunit.client.1.vm10.stdout:1/657: rename d2/da/d25/lcb to d2/da/d25/d46/d51/d7e/lce 0
2026-03-09T20:47:56.259 INFO:tasks.workunit.client.1.vm10.stdout:2/658: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/fdd x:0 0 0
2026-03-09T20:47:56.264 INFO:tasks.workunit.client.1.vm10.stdout:6/671: dread d3/d30/d7f/d36/d5c/f5f [0,4194304] 0
2026-03-09T20:47:56.265 INFO:tasks.workunit.client.1.vm10.stdout:7/660: unlink db/d28/f5d 0
2026-03-09T20:47:56.267 INFO:tasks.workunit.client.1.vm10.stdout:8/707: link d0/d22/d2f/d38/c39 d0/d22/d25/d2e/d41/d47/ce0 0
2026-03-09T20:47:56.267 INFO:tasks.workunit.client.1.vm10.stdout:1/658: dread - d2/da/d25/d3e/dca/fa5 zero size
2026-03-09T20:47:56.269 INFO:tasks.workunit.client.1.vm10.stdout:7/661: truncate db/d28/d2b/d36/f35 4897899 0
2026-03-09T20:47:56.270 INFO:tasks.workunit.client.1.vm10.stdout:6/672: fdatasync d3/d30/d7f/d36/f4f 0
2026-03-09T20:47:56.271 INFO:tasks.workunit.client.1.vm10.stdout:1/659: chown d2/da/d25/d3e/d55/cab 62 1
2026-03-09T20:47:56.272 INFO:tasks.workunit.client.1.vm10.stdout:3/628: getdents dc/d14/d26/d29/d2a 0
2026-03-09T20:47:56.274 INFO:tasks.workunit.client.1.vm10.stdout:1/660: write d2/da/d25/d46/d80/da0/d92/db5/fc1 [861148,25292] 0
2026-03-09T20:47:56.276 INFO:tasks.workunit.client.1.vm10.stdout:6/673: chown d3/da/d11/d31/c3b 25255424 1
2026-03-09T20:47:56.283 INFO:tasks.workunit.client.1.vm10.stdout:9/713: dread d2/d28/f51 [0,4194304] 0
2026-03-09T20:47:56.299 INFO:tasks.workunit.client.1.vm10.stdout:4/623: getdents d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/d4a/d9b 0
2026-03-09T20:47:56.302 INFO:tasks.workunit.client.0.vm07.stdout:6/723: mkdir d8/d26/de5 0
2026-03-09T20:47:56.312 INFO:tasks.workunit.client.0.vm07.stdout:0/725: link d1/f3d d1/d1f/d30/fe9 0
2026-03-09T20:47:56.313 INFO:tasks.workunit.client.0.vm07.stdout:0/726: stat d1/dc0/fe4 0
2026-03-09T20:47:56.317 INFO:tasks.workunit.client.0.vm07.stdout:3/706: fdatasync d1/d5/d9/d2f/d3d/d71/dcc/fe2 0
2026-03-09T20:47:56.321 INFO:tasks.workunit.client.1.vm10.stdout:0/631: dwrite d2/f8a [0,4194304] 0
2026-03-09T20:47:56.341 INFO:tasks.workunit.client.1.vm10.stdout:5/615: truncate d2/d27/d37/d46/d5d/fe2 313090 0
2026-03-09T20:47:56.358 INFO:tasks.workunit.client.0.vm07.stdout:5/774: mkdir d5/d19/d73/d9c/d10c 0
2026-03-09T20:47:56.368 INFO:tasks.workunit.client.0.vm07.stdout:7/766: unlink d3/da/d53/c75 0
2026-03-09T20:47:56.369 INFO:tasks.workunit.client.1.vm10.stdout:3/629: dread dc/f11 [0,4194304] 0
2026-03-09T20:47:56.375 INFO:tasks.workunit.client.0.vm07.stdout:4/649: dread d2/d55/d5d/d3f/d4a/f99 [0,4194304] 0
2026-03-09T20:47:56.378 INFO:tasks.workunit.client.1.vm10.stdout:4/624: symlink d1/d2/d5c/d64/d6b/d81/da9/lc6 0
2026-03-09T20:47:56.386 INFO:tasks.workunit.client.0.vm07.stdout:9/684: fdatasync d4/d11/f88 0
2026-03-09T20:47:56.392 INFO:tasks.workunit.client.1.vm10.stdout:8/708: dwrite d0/d22/d25/d40/d86/d91/fa8 [4194304,4194304] 0
2026-03-09T20:47:56.397 INFO:tasks.workunit.client.0.vm07.stdout:1/745: rename d3/d14/f33 to d3/d97/da1/dc5/d90/dd3/ff9 0
2026-03-09T20:47:56.403 INFO:tasks.workunit.client.1.vm10.stdout:2/659: mknod d5/d18/d27/d38/d61/dc8/ddb/cde 0
2026-03-09T20:47:56.408
INFO:tasks.workunit.client.1.vm10.stdout:7/662: write db/d28/f31 [4114692,69325] 0 2026-03-09T20:47:56.409 INFO:tasks.workunit.client.0.vm07.stdout:8/659: truncate d1/d5d/d6f/d80/faa 2301579 0 2026-03-09T20:47:56.409 INFO:tasks.workunit.client.0.vm07.stdout:6/724: fdatasync d8/d16/d22/d24/da0/dab/d40/d69/f39 0 2026-03-09T20:47:56.411 INFO:tasks.workunit.client.1.vm10.stdout:0/632: creat d2/d4a/d58/d82/d93/fe3 x:0 0 0 2026-03-09T20:47:56.416 INFO:tasks.workunit.client.0.vm07.stdout:0/727: creat d1/d2/d33/fea x:0 0 0 2026-03-09T20:47:56.422 INFO:tasks.workunit.client.0.vm07.stdout:0/728: dread d1/dc0/fe4 [0,4194304] 0 2026-03-09T20:47:56.429 INFO:tasks.workunit.client.0.vm07.stdout:3/707: mkdir d1/d5/d9/daf/de3 0 2026-03-09T20:47:56.430 INFO:tasks.workunit.client.0.vm07.stdout:0/729: dwrite d1/d2/dc/fe2 [0,4194304] 0 2026-03-09T20:47:56.448 INFO:tasks.workunit.client.1.vm10.stdout:1/661: creat d2/da/d25/d46/d80/da0/d92/db5/dc7/fcf x:0 0 0 2026-03-09T20:47:56.458 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:56 vm10.local ceph-mon[57011]: pgmap v4: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail 2026-03-09T20:47:56.458 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:56 vm10.local ceph-mon[57011]: mgrmap e29: vm07.xjrvch(active, since 2s) 2026-03-09T20:47:56.458 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:56 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:56.458 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:56 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:56.462 INFO:tasks.workunit.client.1.vm10.stdout:6/674: symlink d3/d9c/ld4 0 2026-03-09T20:47:56.463 INFO:tasks.workunit.client.1.vm10.stdout:6/675: write d3/d30/d7f/d51/f7c [4991237,72360] 0 2026-03-09T20:47:56.468 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:56 vm07.local ceph-mon[49120]: pgmap v4: 65 pgs: 65 
active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail 2026-03-09T20:47:56.468 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:56 vm07.local ceph-mon[49120]: mgrmap e29: vm07.xjrvch(active, since 2s) 2026-03-09T20:47:56.468 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:56 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:56.468 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:56 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:56.468 INFO:tasks.workunit.client.1.vm10.stdout:3/630: mknod dc/d14/d26/d29/d2a/d55/ccf 0 2026-03-09T20:47:56.469 INFO:tasks.workunit.client.0.vm07.stdout:2/709: mkdir d2/db/d28/d5c/dc7/ddc 0 2026-03-09T20:47:56.629 INFO:tasks.workunit.client.0.vm07.stdout:7/767: creat d3/da/db/d32/d3e/dac/d1f/d50/ffd x:0 0 0 2026-03-09T20:47:56.699 INFO:tasks.workunit.client.1.vm10.stdout:4/625: rename d1/d2/d5c/d64/d6b/d81/dac/d1c/f23 to d1/d2/d5c/d64/d6b/d81/da9/fc7 0 2026-03-09T20:47:56.720 INFO:tasks.workunit.client.0.vm07.stdout:5/775: dwrite d5/d33/d39/fe5 [0,4194304] 0 2026-03-09T20:47:56.731 INFO:tasks.workunit.client.0.vm07.stdout:9/685: dread d4/d11/d2a/f5d [0,4194304] 0 2026-03-09T20:47:56.736 INFO:tasks.workunit.client.1.vm10.stdout:7/663: read db/d28/d2b/d36/d3f/fcb [229965,12370] 0 2026-03-09T20:47:56.736 INFO:tasks.workunit.client.1.vm10.stdout:7/664: readlink db/d28/d30/l7f 0 2026-03-09T20:47:56.740 INFO:tasks.workunit.client.0.vm07.stdout:6/725: rename d8/d16/d61/f68 to d8/d5d/fe6 0 2026-03-09T20:47:56.743 INFO:tasks.workunit.client.1.vm10.stdout:0/633: symlink d2/d9/da/d11/dd1/d34/le4 0 2026-03-09T20:47:56.746 INFO:tasks.workunit.client.1.vm10.stdout:0/634: dread d2/d9/da/d35/f7e [0,4194304] 0 2026-03-09T20:47:56.759 INFO:tasks.workunit.client.1.vm10.stdout:1/662: read d2/f3c [1868217,90862] 0 2026-03-09T20:47:56.778 INFO:tasks.workunit.client.0.vm07.stdout:3/708: dread 
d1/d5/d9/d2f/d3d/d71/dcc/fe2 [0,4194304] 0 2026-03-09T20:47:56.781 INFO:tasks.workunit.client.0.vm07.stdout:0/730: creat d1/d1f/dc3/feb x:0 0 0 2026-03-09T20:47:56.781 INFO:tasks.workunit.client.0.vm07.stdout:0/731: stat d1/fe5 0 2026-03-09T20:47:56.806 INFO:tasks.workunit.client.0.vm07.stdout:7/768: unlink d3/da/db/d32/d3e/dac/l17 0 2026-03-09T20:47:56.808 INFO:tasks.workunit.client.0.vm07.stdout:4/650: unlink d2/df/d59/d8a/f9b 0 2026-03-09T20:47:56.809 INFO:tasks.workunit.client.0.vm07.stdout:4/651: chown d2/d55/d5d/d3f/d4a/d85/f8c 6122 1 2026-03-09T20:47:56.824 INFO:tasks.workunit.client.0.vm07.stdout:5/776: fdatasync d5/d33/d39/ff1 0 2026-03-09T20:47:56.841 INFO:tasks.workunit.client.0.vm07.stdout:6/726: unlink d8/d16/d22/d24/da0/dab/d40/d69/l56 0 2026-03-09T20:47:56.852 INFO:tasks.workunit.client.0.vm07.stdout:2/710: fsync d2/d11/f50 0 2026-03-09T20:47:56.872 INFO:tasks.workunit.client.0.vm07.stdout:2/711: chown d2/d11/ddb/d6e 1462349361 1 2026-03-09T20:47:56.872 INFO:tasks.workunit.client.0.vm07.stdout:2/712: write d2/db/d28/d90/fa3 [1094996,17745] 0 2026-03-09T20:47:56.872 INFO:tasks.workunit.client.0.vm07.stdout:7/769: truncate d3/d58/d82/fd8 1044580 0 2026-03-09T20:47:56.872 INFO:tasks.workunit.client.0.vm07.stdout:0/732: symlink d1/d1f/d53/d72/d9a/lec 0 2026-03-09T20:47:56.874 INFO:tasks.workunit.client.0.vm07.stdout:5/777: sync 2026-03-09T20:47:56.878 INFO:tasks.workunit.client.1.vm10.stdout:6/676: dread d3/da/fd [4194304,4194304] 0 2026-03-09T20:47:56.881 INFO:tasks.workunit.client.0.vm07.stdout:4/652: mkdir d2/d55/d5d/d3f/db3 0 2026-03-09T20:47:56.905 INFO:tasks.workunit.client.0.vm07.stdout:2/713: fdatasync d2/db/d28/d5c/fae 0 2026-03-09T20:47:56.914 INFO:tasks.workunit.client.0.vm07.stdout:7/770: creat d3/da/db/d32/d3e/d5c/dc2/ffe x:0 0 0 2026-03-09T20:47:56.920 INFO:tasks.workunit.client.0.vm07.stdout:9/686: creat d4/d8/d19/d5f/da5/ff5 x:0 0 0 2026-03-09T20:47:56.920 INFO:tasks.workunit.client.0.vm07.stdout:9/687: stat d4/d11/f9d 0 
2026-03-09T20:47:56.928 INFO:tasks.workunit.client.0.vm07.stdout:8/660: getdents d1/d5d/d6f/d2f/d4d 0 2026-03-09T20:47:56.929 INFO:tasks.workunit.client.1.vm10.stdout:9/714: truncate d2/d3/de/f7c 3128297 0 2026-03-09T20:47:56.934 INFO:tasks.workunit.client.0.vm07.stdout:0/733: chown d1/d2/dc/d17/l37 3778447 1 2026-03-09T20:47:56.938 INFO:tasks.workunit.client.0.vm07.stdout:0/734: dwrite d1/d2/d33/fea [0,4194304] 0 2026-03-09T20:47:56.951 INFO:tasks.workunit.client.0.vm07.stdout:5/778: symlink d5/df/d13/d4f/l10d 0 2026-03-09T20:47:56.952 INFO:tasks.workunit.client.0.vm07.stdout:5/779: chown d5/df/d13/d30/d56/f72 62 1 2026-03-09T20:47:56.960 INFO:tasks.workunit.client.1.vm10.stdout:2/660: truncate d5/f7 1804460 0 2026-03-09T20:47:56.962 INFO:tasks.workunit.client.1.vm10.stdout:7/665: unlink db/d28/d2b/d36/d3b/d88/f8e 0 2026-03-09T20:47:56.968 INFO:tasks.workunit.client.0.vm07.stdout:2/714: creat d2/db/d1c/d4a/db6/fdd x:0 0 0 2026-03-09T20:47:56.971 INFO:tasks.workunit.client.0.vm07.stdout:7/771: mkdir d3/da4/df2/dff 0 2026-03-09T20:47:56.974 INFO:tasks.workunit.client.1.vm10.stdout:0/635: mknod d2/d4a/d58/d82/d71/d5d/ce5 0 2026-03-09T20:47:56.975 INFO:tasks.workunit.client.0.vm07.stdout:9/688: fsync d4/d11/d23/f2f 0 2026-03-09T20:47:56.978 INFO:tasks.workunit.client.0.vm07.stdout:5/780: sync 2026-03-09T20:47:56.980 INFO:tasks.workunit.client.1.vm10.stdout:1/663: mkdir d2/da/d25/d46/d51/d5d/d6e/dd0 0 2026-03-09T20:47:56.982 INFO:tasks.workunit.client.0.vm07.stdout:1/746: dwrite d3/d23/d52/f79 [0,4194304] 0 2026-03-09T20:47:56.984 INFO:tasks.workunit.client.0.vm07.stdout:6/727: creat d8/d16/d22/d24/da0/dab/d40/fe7 x:0 0 0 2026-03-09T20:47:56.985 INFO:tasks.workunit.client.0.vm07.stdout:6/728: chown d8/d16/d22/d9b/de4/f91 278 1 2026-03-09T20:47:56.985 INFO:tasks.workunit.client.0.vm07.stdout:6/729: readlink d8/d16/d61/l84 0 2026-03-09T20:47:56.999 INFO:tasks.workunit.client.1.vm10.stdout:9/715: symlink d2/d3/db4/lf4 0 2026-03-09T20:47:57.004 
INFO:tasks.workunit.client.0.vm07.stdout:0/735: rename d1/d1f/d20/c6f to d1/d2/d98/de8/ced 0 2026-03-09T20:47:57.012 INFO:tasks.workunit.client.0.vm07.stdout:4/653: symlink d2/d55/d5d/d93/lb4 0 2026-03-09T20:47:57.030 INFO:tasks.workunit.client.1.vm10.stdout:5/616: truncate d2/d39/dbf/f61 2530160 0 2026-03-09T20:47:57.036 INFO:tasks.workunit.client.0.vm07.stdout:7/772: rmdir d3/d58/d82 39 2026-03-09T20:47:57.040 INFO:tasks.workunit.client.1.vm10.stdout:0/636: read d2/d9/d69/d80/f8d [3313011,21461] 0 2026-03-09T20:47:57.043 INFO:tasks.workunit.client.1.vm10.stdout:0/637: dread d2/f8a [0,4194304] 0 2026-03-09T20:47:57.050 INFO:tasks.workunit.client.0.vm07.stdout:9/689: mknod d4/d8/d19/d5f/d73/dbc/cf6 0 2026-03-09T20:47:57.050 INFO:tasks.workunit.client.0.vm07.stdout:5/781: creat d5/d19/f10e x:0 0 0 2026-03-09T20:47:57.050 INFO:tasks.workunit.client.1.vm10.stdout:0/638: chown d2/d9/da/d48/l4c 371262 1 2026-03-09T20:47:57.050 INFO:tasks.workunit.client.1.vm10.stdout:0/639: chown d2/d9/d69/l96 17 1 2026-03-09T20:47:57.050 INFO:tasks.workunit.client.1.vm10.stdout:3/631: creat dc/d14/d26/d29/fd0 x:0 0 0 2026-03-09T20:47:57.051 INFO:tasks.workunit.client.1.vm10.stdout:9/716: symlink d2/da6/lf5 0 2026-03-09T20:47:57.052 INFO:tasks.workunit.client.1.vm10.stdout:9/717: stat d2/d12/dad 0 2026-03-09T20:47:57.061 INFO:tasks.workunit.client.1.vm10.stdout:1/664: dread d2/da/d25/f27 [0,4194304] 0 2026-03-09T20:47:57.061 INFO:tasks.workunit.client.1.vm10.stdout:5/617: sync 2026-03-09T20:47:57.063 INFO:tasks.workunit.client.0.vm07.stdout:6/730: creat d8/d50/fe8 x:0 0 0 2026-03-09T20:47:57.068 INFO:tasks.workunit.client.0.vm07.stdout:8/661: fsync d1/dc/f42 0 2026-03-09T20:47:57.069 INFO:tasks.workunit.client.0.vm07.stdout:8/662: chown d1/dc/d16/f70 0 1 2026-03-09T20:47:57.082 INFO:tasks.workunit.client.1.vm10.stdout:3/632: symlink dc/d14/d26/d29/d40/da2/ld1 0 2026-03-09T20:47:57.097 INFO:tasks.workunit.client.0.vm07.stdout:4/654: fdatasync d2/df/d59/f60 0 2026-03-09T20:47:57.102 
INFO:tasks.workunit.client.1.vm10.stdout:6/677: creat d3/da/d11/d31/fd5 x:0 0 0 2026-03-09T20:47:57.102 INFO:tasks.workunit.client.1.vm10.stdout:6/678: dread - d3/d30/d7f/d24/f99 zero size 2026-03-09T20:47:57.103 INFO:tasks.workunit.client.1.vm10.stdout:9/718: fsync d2/d28/da2/fcc 0 2026-03-09T20:47:57.111 INFO:tasks.workunit.client.0.vm07.stdout:3/709: symlink d1/le4 0 2026-03-09T20:47:57.111 INFO:tasks.workunit.client.0.vm07.stdout:3/710: readlink d1/l6a 0 2026-03-09T20:47:57.113 INFO:tasks.workunit.client.1.vm10.stdout:5/618: symlink d2/d27/d37/leb 0 2026-03-09T20:47:57.121 INFO:tasks.workunit.client.1.vm10.stdout:1/665: creat d2/da/d25/d46/d51/d7e/fd1 x:0 0 0 2026-03-09T20:47:57.131 INFO:tasks.workunit.client.0.vm07.stdout:5/782: unlink d5/d33/d39/l7b 0 2026-03-09T20:47:57.142 INFO:tasks.workunit.client.0.vm07.stdout:6/731: creat d8/d16/d4b/d88/d99/fe9 x:0 0 0 2026-03-09T20:47:57.150 INFO:tasks.workunit.client.0.vm07.stdout:8/663: creat d1/d5d/d6f/d2f/d4d/d55/fd6 x:0 0 0 2026-03-09T20:47:57.151 INFO:tasks.workunit.client.0.vm07.stdout:5/783: sync 2026-03-09T20:47:57.157 INFO:tasks.workunit.client.0.vm07.stdout:4/655: unlink d2/d55/d5d/d3f/d4a/d4b/f74 0 2026-03-09T20:47:57.157 INFO:tasks.workunit.client.0.vm07.stdout:4/656: chown d2/d55/d5d/d3f/d4a/d4b 69179616 1 2026-03-09T20:47:57.181 INFO:tasks.workunit.client.1.vm10.stdout:8/709: truncate d0/d22/d2f/d38/d64/d7f/fc2 414827 0 2026-03-09T20:47:57.181 INFO:tasks.workunit.client.1.vm10.stdout:8/710: readlink d0/d22/d25/d2e/d41/d47/l50 0 2026-03-09T20:47:57.185 INFO:tasks.workunit.client.1.vm10.stdout:8/711: sync 2026-03-09T20:47:57.188 INFO:tasks.workunit.client.1.vm10.stdout:8/712: dwrite d0/d22/f76 [0,4194304] 0 2026-03-09T20:47:57.189 INFO:tasks.workunit.client.1.vm10.stdout:8/713: write d0/d22/f71 [4676153,6395] 0 2026-03-09T20:47:57.198 INFO:tasks.workunit.client.1.vm10.stdout:6/679: fsync d3/da/d11/d26/d5b/f55 0 2026-03-09T20:47:57.203 INFO:tasks.workunit.client.1.vm10.stdout:0/640: creat 
d2/d9/da/d11/dd1/fe6 x:0 0 0 2026-03-09T20:47:57.212 INFO:tasks.workunit.client.1.vm10.stdout:8/714: rename d0/d22/d25/d2e/d41/d47/f4b to d0/d22/d25/d2e/fe1 0 2026-03-09T20:47:57.212 INFO:tasks.workunit.client.1.vm10.stdout:8/715: write d0/d22/d2f/d38/f43 [688724,120861] 0 2026-03-09T20:47:57.234 INFO:tasks.workunit.client.1.vm10.stdout:0/641: creat d2/d4a/d58/d82/d71/fe7 x:0 0 0 2026-03-09T20:47:57.235 INFO:tasks.workunit.client.0.vm07.stdout:7/773: creat d3/d58/d82/f100 x:0 0 0 2026-03-09T20:47:57.237 INFO:tasks.workunit.client.1.vm10.stdout:1/666: getdents d2/da/d25/d46/d80/da0/d92 0 2026-03-09T20:47:57.251 INFO:tasks.workunit.client.1.vm10.stdout:8/716: mknod d0/d22/d25/d2e/d41/d85/db9/dc6/ce2 0 2026-03-09T20:47:57.251 INFO:tasks.workunit.client.1.vm10.stdout:8/717: write d0/d22/d25/d40/fd3 [220626,84334] 0 2026-03-09T20:47:57.252 INFO:tasks.workunit.client.1.vm10.stdout:8/718: chown d0/d22/d25/d2e/d41/c7a 0 1 2026-03-09T20:47:57.252 INFO:tasks.workunit.client.1.vm10.stdout:0/642: mkdir d2/d9/da/d48/dac/de8 0 2026-03-09T20:47:57.255 INFO:tasks.workunit.client.1.vm10.stdout:0/643: sync 2026-03-09T20:47:57.259 INFO:tasks.workunit.client.1.vm10.stdout:2/661: dwrite d5/d18/d27/d89/db6/d41/f4b [0,4194304] 0 2026-03-09T20:47:57.262 INFO:tasks.workunit.client.1.vm10.stdout:8/719: mknod d0/d22/d2f/d38/ce3 0 2026-03-09T20:47:57.265 INFO:tasks.workunit.client.1.vm10.stdout:0/644: mkdir d2/d9/d2a/de9 0 2026-03-09T20:47:57.267 INFO:tasks.workunit.client.1.vm10.stdout:7/666: dwrite db/d28/d2b/d36/d40/f44 [0,4194304] 0 2026-03-09T20:47:57.268 INFO:tasks.workunit.client.0.vm07.stdout:2/715: dwrite d2/db/d1c/f3a [4194304,4194304] 0 2026-03-09T20:47:57.286 INFO:tasks.workunit.client.1.vm10.stdout:2/662: symlink d5/d18/d27/d38/d61/dc8/ldf 0 2026-03-09T20:47:57.288 INFO:tasks.workunit.client.1.vm10.stdout:8/720: fdatasync d0/d54/fa4 0 2026-03-09T20:47:57.289 INFO:tasks.workunit.client.1.vm10.stdout:8/721: chown d0/d22/d25/d2e/d41/d47/d63/c98 581426644 1 2026-03-09T20:47:57.295 
INFO:tasks.workunit.client.0.vm07.stdout:1/747: dwrite d3/d97/da1/dc5/f99 [0,4194304] 0 2026-03-09T20:47:57.297 INFO:tasks.workunit.client.1.vm10.stdout:4/626: creat d1/d2/d5c/d64/d6b/d81/fc8 x:0 0 0 2026-03-09T20:47:57.303 INFO:tasks.workunit.client.0.vm07.stdout:1/748: chown d3/d23/d55/l5f 141997 1 2026-03-09T20:47:57.303 INFO:tasks.workunit.client.1.vm10.stdout:1/667: creat d2/fd2 x:0 0 0 2026-03-09T20:47:57.312 INFO:tasks.workunit.client.1.vm10.stdout:7/667: truncate f3 96093 0 2026-03-09T20:47:57.313 INFO:tasks.workunit.client.0.vm07.stdout:9/690: mknod d4/d8/d19/d5f/d73/dee/cf7 0 2026-03-09T20:47:57.315 INFO:tasks.workunit.client.1.vm10.stdout:5/619: dwrite d2/d27/d37/fa3 [0,4194304] 0 2026-03-09T20:47:57.319 INFO:tasks.workunit.client.1.vm10.stdout:4/627: mkdir d1/d2/d3/d70/d99/dc9 0 2026-03-09T20:47:57.328 INFO:tasks.workunit.client.0.vm07.stdout:0/736: link d1/d1f/d53/d72/d9a/lec d1/dc0/lee 0 2026-03-09T20:47:57.328 INFO:tasks.workunit.client.1.vm10.stdout:3/633: dwrite dc/d14/d20/d21/f96 [0,4194304] 0 2026-03-09T20:47:57.328 INFO:tasks.workunit.client.1.vm10.stdout:4/628: dread - d1/d2/d5c/d64/d6b/d79/fc0 zero size 2026-03-09T20:47:57.328 INFO:tasks.workunit.client.1.vm10.stdout:1/668: symlink d2/da/d25/d46/d80/da0/ld3 0 2026-03-09T20:47:57.328 INFO:tasks.workunit.client.1.vm10.stdout:9/719: dwrite d2/d3/de/f80 [0,4194304] 0 2026-03-09T20:47:57.328 INFO:tasks.workunit.client.1.vm10.stdout:6/680: write d3/d30/d33/f35 [226905,58695] 0 2026-03-09T20:47:57.336 INFO:tasks.workunit.client.0.vm07.stdout:7/774: mknod d3/da4/df2/c101 0 2026-03-09T20:47:57.345 INFO:tasks.workunit.client.1.vm10.stdout:5/620: mknod d2/d27/d37/d46/d5d/d77/cec 0 2026-03-09T20:47:57.345 INFO:tasks.workunit.client.0.vm07.stdout:2/716: truncate d2/d11/d56/f5a 2096664 0 2026-03-09T20:47:57.348 INFO:tasks.workunit.client.1.vm10.stdout:8/722: symlink d0/d22/d2f/dd0/le4 0 2026-03-09T20:47:57.351 INFO:tasks.workunit.client.1.vm10.stdout:8/723: dwrite d0/d22/d2f/d38/f43 [0,4194304] 0 
2026-03-09T20:47:57.356 INFO:tasks.workunit.client.0.vm07.stdout:6/732: mknod d8/cea 0 2026-03-09T20:47:57.357 INFO:tasks.workunit.client.1.vm10.stdout:1/669: mkdir d2/da/d25/d46/d51/d5d/d6e/d70/db3/dd4 0 2026-03-09T20:47:57.360 INFO:tasks.workunit.client.0.vm07.stdout:5/784: unlink d5/d33/c48 0 2026-03-09T20:47:57.361 INFO:tasks.workunit.client.1.vm10.stdout:0/645: rmdir d2/d9/da/d11/d92/db6 0 2026-03-09T20:47:57.362 INFO:tasks.workunit.client.1.vm10.stdout:0/646: write d2/d4a/d58/d82/d71/d5d/f76 [1743593,119774] 0 2026-03-09T20:47:57.362 INFO:tasks.workunit.client.0.vm07.stdout:0/737: truncate d1/d2/d33/d35/f59 1809926 0 2026-03-09T20:47:57.366 INFO:tasks.workunit.client.0.vm07.stdout:0/738: chown d1/d2/dc/d17/l37 13359 1 2026-03-09T20:47:57.369 INFO:tasks.workunit.client.1.vm10.stdout:7/668: dread db/d28/f31 [0,4194304] 0 2026-03-09T20:47:57.380 INFO:tasks.workunit.client.1.vm10.stdout:6/681: mkdir d3/d30/d6a/dd6 0 2026-03-09T20:47:57.381 INFO:tasks.workunit.client.0.vm07.stdout:1/749: mknod d3/d66/cfa 0 2026-03-09T20:47:57.381 INFO:tasks.workunit.client.0.vm07.stdout:7/775: creat d3/da/db/d32/f102 x:0 0 0 2026-03-09T20:47:57.381 INFO:tasks.workunit.client.1.vm10.stdout:5/621: creat d2/d39/d4b/d7a/fed x:0 0 0 2026-03-09T20:47:57.382 INFO:tasks.workunit.client.1.vm10.stdout:5/622: stat d2/d39/l73 0 2026-03-09T20:47:57.384 INFO:tasks.workunit.client.0.vm07.stdout:2/717: mkdir d2/db/d49/d7d/d85/dde 0 2026-03-09T20:47:57.388 INFO:tasks.workunit.client.1.vm10.stdout:2/663: link d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/faf d5/d18/d27/d38/fe0 0 2026-03-09T20:47:57.389 INFO:tasks.workunit.client.1.vm10.stdout:2/664: read - d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/fdd zero size 2026-03-09T20:47:57.391 INFO:tasks.workunit.client.1.vm10.stdout:8/724: rmdir d0/d22/d25/d89 39 2026-03-09T20:47:57.391 INFO:tasks.workunit.client.0.vm07.stdout:5/785: dread - d5/df/d13/d3e/d5e/ffa zero size 2026-03-09T20:47:57.397 
INFO:tasks.workunit.client.0.vm07.stdout:2/718: dread d2/db/d28/d5c/f91 [0,4194304] 0 2026-03-09T20:47:57.398 INFO:tasks.workunit.client.1.vm10.stdout:0/647: symlink d2/d4a/d58/d82/d60/lea 0 2026-03-09T20:47:57.399 INFO:tasks.workunit.client.0.vm07.stdout:8/664: rename d1/l68 to d1/dc/d16/dad/ld7 0 2026-03-09T20:47:57.400 INFO:tasks.workunit.client.0.vm07.stdout:4/657: link d2/d55/d5d/d3f/c57 d2/cb5 0 2026-03-09T20:47:57.400 INFO:tasks.workunit.client.1.vm10.stdout:7/669: dread - db/d28/d2b/d36/d40/fa2 zero size 2026-03-09T20:47:57.402 INFO:tasks.workunit.client.0.vm07.stdout:5/786: mkdir d5/d19/d73/dbc/d10f 0 2026-03-09T20:47:57.408 INFO:tasks.workunit.client.1.vm10.stdout:8/725: symlink d0/d22/d25/d6c/d9b/le5 0 2026-03-09T20:47:57.410 INFO:tasks.workunit.client.0.vm07.stdout:2/719: mknod d2/da7/cdf 0 2026-03-09T20:47:57.412 INFO:tasks.workunit.client.0.vm07.stdout:2/720: chown d2/db/d49/d7d/d85/dd9 7898 1 2026-03-09T20:47:57.421 INFO:tasks.workunit.client.0.vm07.stdout:4/658: dread - d2/df/f75 zero size 2026-03-09T20:47:57.422 INFO:tasks.workunit.client.1.vm10.stdout:0/648: fsync d2/d9/da/d35/d30/f56 0 2026-03-09T20:47:57.422 INFO:tasks.workunit.client.1.vm10.stdout:6/682: getdents d3/d30/d7f/d24/d39/d9e 0 2026-03-09T20:47:57.428 INFO:tasks.workunit.client.0.vm07.stdout:5/787: mknod d5/df/d13/d30/d56/c110 0 2026-03-09T20:47:57.434 INFO:tasks.workunit.client.0.vm07.stdout:7/776: rename d3/da/d53/l8b to d3/da/db/d32/d3e/dac/d43/d62/de0/l103 0 2026-03-09T20:47:57.437 INFO:tasks.workunit.client.0.vm07.stdout:6/733: dread d8/d16/d22/d24/da0/dab/d40/f65 [4194304,4194304] 0 2026-03-09T20:47:57.439 INFO:tasks.workunit.client.0.vm07.stdout:3/711: dwrite d1/d5/d9/d2f/d99/fa6 [0,4194304] 0 2026-03-09T20:47:57.448 INFO:tasks.workunit.client.0.vm07.stdout:4/659: rmdir d2 39 2026-03-09T20:47:57.453 INFO:tasks.workunit.client.0.vm07.stdout:1/750: getdents d3/d97/da1/dd7 0 2026-03-09T20:47:57.454 INFO:tasks.workunit.client.0.vm07.stdout:9/691: write 
d4/d16/d29/d24/d37/d44/d62/d74/fa2 [1443112,7530] 0 2026-03-09T20:47:57.455 INFO:tasks.workunit.client.1.vm10.stdout:3/634: write dc/d14/d26/d8f/fb8 [735864,2522] 0 2026-03-09T20:47:57.457 INFO:tasks.workunit.client.1.vm10.stdout:0/649: rmdir d2/d9/da/d11/dd1/db7/dcd 39 2026-03-09T20:47:57.459 INFO:tasks.workunit.client.0.vm07.stdout:5/788: dread d5/df/d13/d3e/de1/fe7 [0,4194304] 0 2026-03-09T20:47:57.464 INFO:tasks.workunit.client.0.vm07.stdout:2/721: mkdir d2/db/d28/d5c/dc7/ddc/de0 0 2026-03-09T20:47:57.470 INFO:tasks.workunit.client.1.vm10.stdout:9/720: write d2/d28/d47/d6a/fc0 [398093,22017] 0 2026-03-09T20:47:57.475 INFO:tasks.workunit.client.1.vm10.stdout:8/726: dread d0/d22/d25/f2b [0,4194304] 0 2026-03-09T20:47:57.481 INFO:tasks.workunit.client.0.vm07.stdout:6/734: mknod d8/d16/d61/ceb 0 2026-03-09T20:47:57.498 INFO:tasks.workunit.client.1.vm10.stdout:0/650: unlink d2/d9/da/d11/da3/la8 0 2026-03-09T20:47:57.498 INFO:tasks.workunit.client.1.vm10.stdout:0/651: read d2/d9/f12 [574542,129191] 0 2026-03-09T20:47:57.500 INFO:tasks.workunit.client.0.vm07.stdout:4/660: readlink d2/df/l4e 0 2026-03-09T20:47:57.501 INFO:tasks.workunit.client.1.vm10.stdout:6/683: getdents d3/d30/d7f/d24/d39/d9e 0 2026-03-09T20:47:57.502 INFO:tasks.workunit.client.0.vm07.stdout:1/751: write d3/d23/d67/d8a/ff4 [671983,61390] 0 2026-03-09T20:47:57.502 INFO:tasks.workunit.client.1.vm10.stdout:6/684: chown d3/d30/d7f/d24/d39/fa3 2 1 2026-03-09T20:47:57.506 INFO:tasks.workunit.client.1.vm10.stdout:0/652: dwrite d2/d9/da/d11/dd1/d34/fc5 [4194304,4194304] 0 2026-03-09T20:47:57.506 INFO:tasks.workunit.client.0.vm07.stdout:9/692: mkdir d4/d16/d29/d24/d37/df8 0 2026-03-09T20:47:57.509 INFO:tasks.workunit.client.1.vm10.stdout:4/629: write d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f44 [1972692,38715] 0 2026-03-09T20:47:57.512 INFO:tasks.workunit.client.0.vm07.stdout:0/739: dwrite d1/d2/d33/d35/f5c [0,4194304] 0 2026-03-09T20:47:57.523 INFO:tasks.workunit.client.1.vm10.stdout:4/630: sync 
2026-03-09T20:47:57.524 INFO:tasks.workunit.client.0.vm07.stdout:2/722: mkdir d2/db/d28/d57/de1 0 2026-03-09T20:47:57.531 INFO:tasks.workunit.client.0.vm07.stdout:7/777: symlink d3/da4/df2/dfb/l104 0 2026-03-09T20:47:57.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:57 vm10.local ceph-mon[57011]: Standby manager daemon vm10.byqahe started 2026-03-09T20:47:57.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:57 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/633002729' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/crt"}]: dispatch 2026-03-09T20:47:57.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:57 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/633002729' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:47:57.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:57 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/633002729' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/key"}]: dispatch 2026-03-09T20:47:57.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:57 vm10.local ceph-mon[57011]: from='mgr.? 
192.168.123.110:0/633002729' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:47:57.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:57 vm10.local ceph-mon[57011]: [09/Mar/2026:20:47:56] ENGINE Bus STARTING 2026-03-09T20:47:57.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:57 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:57.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:57 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:57.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:57 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:47:57.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:57 vm10.local ceph-mon[57011]: [09/Mar/2026:20:47:56] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T20:47:57.537 INFO:tasks.workunit.client.1.vm10.stdout:6/685: creat d3/d30/d7f/d36/fd7 x:0 0 0 2026-03-09T20:47:57.537 INFO:tasks.workunit.client.1.vm10.stdout:5/623: write d2/d27/d37/fae [594850,34700] 0 2026-03-09T20:47:57.538 INFO:tasks.workunit.client.1.vm10.stdout:2/665: write d5/d5b/fb7 [1227132,107182] 0 2026-03-09T20:47:57.542 INFO:tasks.workunit.client.0.vm07.stdout:6/735: fsync d8/d5d/d97/fd1 0 2026-03-09T20:47:57.548 INFO:tasks.workunit.client.0.vm07.stdout:4/661: mkdir d2/d55/d5d/d3f/db6 0 2026-03-09T20:47:57.550 INFO:tasks.workunit.client.0.vm07.stdout:4/662: readlink d2/d55/d5d/d3f/d4a/d4b/d52/d5c/l6c 0 2026-03-09T20:47:57.555 INFO:tasks.workunit.client.1.vm10.stdout:1/670: truncate d2/da/d25/d3e/d55/faf 2615526 0 2026-03-09T20:47:57.556 INFO:tasks.workunit.client.1.vm10.stdout:1/671: read - d2/fd2 zero size 2026-03-09T20:47:57.557 
INFO:tasks.workunit.client.1.vm10.stdout:4/631: creat d1/d2/d5c/d64/d6b/d81/fca x:0 0 0 2026-03-09T20:47:57.561 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:57 vm07.local ceph-mon[49120]: Standby manager daemon vm10.byqahe started 2026-03-09T20:47:57.561 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:57 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.110:0/633002729' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/crt"}]: dispatch 2026-03-09T20:47:57.561 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:57 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.110:0/633002729' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:47:57.561 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:57 vm07.local ceph-mon[49120]: from='mgr.? 192.168.123.110:0/633002729' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/key"}]: dispatch 2026-03-09T20:47:57.561 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:57 vm07.local ceph-mon[49120]: from='mgr.? 
192.168.123.110:0/633002729' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:47:57.561 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:57 vm07.local ceph-mon[49120]: [09/Mar/2026:20:47:56] ENGINE Bus STARTING 2026-03-09T20:47:57.561 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:57 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:57.561 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:57 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:57.561 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:57 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:47:57.561 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:57 vm07.local ceph-mon[49120]: [09/Mar/2026:20:47:56] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T20:47:57.561 INFO:tasks.workunit.client.0.vm07.stdout:1/752: readlink d3/l6 0 2026-03-09T20:47:57.563 INFO:tasks.workunit.client.1.vm10.stdout:7/670: dwrite db/d21/f81 [0,4194304] 0 2026-03-09T20:47:57.564 INFO:tasks.workunit.client.1.vm10.stdout:7/671: chown db/d28/d2b/d36/cb8 89505464 1 2026-03-09T20:47:57.571 INFO:tasks.workunit.client.1.vm10.stdout:2/666: mknod d5/d18/d27/d38/d61/ce1 0 2026-03-09T20:47:57.571 INFO:tasks.workunit.client.1.vm10.stdout:0/653: getdents d2/d4a/d58/dd5 0 2026-03-09T20:47:57.572 INFO:tasks.workunit.client.0.vm07.stdout:5/789: dread d5/df/d13/d4f/f9b [0,4194304] 0 2026-03-09T20:47:57.573 INFO:tasks.workunit.client.0.vm07.stdout:8/665: dwrite d1/d5d/d6f/d2f/d4d/d55/fac [0,4194304] 0 2026-03-09T20:47:57.578 INFO:tasks.workunit.client.0.vm07.stdout:2/723: fdatasync d2/d11/ddb/d6e/dbe/d96/fa9 0 2026-03-09T20:47:57.586 
INFO:tasks.workunit.client.1.vm10.stdout:1/672: mkdir d2/da/d25/d3e/dca/da2/dd5 0 2026-03-09T20:47:57.595 INFO:tasks.workunit.client.1.vm10.stdout:3/635: write dc/d14/d26/f65 [965796,69131] 0 2026-03-09T20:47:57.597 INFO:tasks.workunit.client.1.vm10.stdout:9/721: write d2/d12/f20 [2007856,42334] 0 2026-03-09T20:47:57.598 INFO:tasks.workunit.client.0.vm07.stdout:7/778: symlink d3/d58/d82/l105 0 2026-03-09T20:47:57.600 INFO:tasks.workunit.client.1.vm10.stdout:5/624: mknod d2/d27/d37/d46/cee 0 2026-03-09T20:47:57.600 INFO:tasks.workunit.client.0.vm07.stdout:2/724: dwrite d2/d11/f44 [0,4194304] 0 2026-03-09T20:47:57.601 INFO:tasks.workunit.client.1.vm10.stdout:8/727: dwrite d0/d22/d25/d2e/d41/f67 [0,4194304] 0 2026-03-09T20:47:57.604 INFO:tasks.workunit.client.0.vm07.stdout:7/779: readlink d3/d58/d82/la7 0 2026-03-09T20:47:57.604 INFO:tasks.workunit.client.1.vm10.stdout:5/625: dwrite d2/d1b/d54/d78/fdb [0,4194304] 0 2026-03-09T20:47:57.612 INFO:tasks.workunit.client.1.vm10.stdout:2/667: dread d5/f15 [0,4194304] 0 2026-03-09T20:47:57.613 INFO:tasks.workunit.client.1.vm10.stdout:2/668: dread - d5/d18/d27/d89/db6/f7e zero size 2026-03-09T20:47:57.625 INFO:tasks.workunit.client.0.vm07.stdout:3/712: write d1/d5/d9/d11/f26 [1700475,30070] 0 2026-03-09T20:47:57.627 INFO:tasks.workunit.client.1.vm10.stdout:4/632: creat d1/d2/d5c/d64/d6b/d81/dac/d1c/d69/fcb x:0 0 0 2026-03-09T20:47:57.639 INFO:tasks.workunit.client.0.vm07.stdout:4/663: dread - d2/d55/d5d/d3f/d4a/d85/fa0 zero size 2026-03-09T20:47:57.643 INFO:tasks.workunit.client.1.vm10.stdout:7/672: mkdir db/d28/d2b/dd0 0 2026-03-09T20:47:57.651 INFO:tasks.workunit.client.1.vm10.stdout:0/654: symlink d2/d9/da/d11/da3/leb 0 2026-03-09T20:47:57.653 INFO:tasks.workunit.client.0.vm07.stdout:8/666: sync 2026-03-09T20:47:57.653 INFO:tasks.workunit.client.0.vm07.stdout:1/753: truncate d3/d23/d67/fc4 858276 0 2026-03-09T20:47:57.653 INFO:tasks.workunit.client.0.vm07.stdout:0/740: creat d1/d2/d33/d35/ddb/fef x:0 0 0 
2026-03-09T20:47:57.654 INFO:tasks.workunit.client.0.vm07.stdout:0/741: readlink d1/d1f/dc2/ldd 0 2026-03-09T20:47:57.654 INFO:tasks.workunit.client.0.vm07.stdout:0/742: stat d1/d1f/d20/c22 0 2026-03-09T20:47:57.665 INFO:tasks.workunit.client.1.vm10.stdout:8/728: creat d0/d95/fe6 x:0 0 0 2026-03-09T20:47:57.668 INFO:tasks.workunit.client.1.vm10.stdout:2/669: creat d5/d5b/fe2 x:0 0 0 2026-03-09T20:47:57.673 INFO:tasks.workunit.client.1.vm10.stdout:7/673: creat db/d28/d2b/d36/d63/d6d/fd1 x:0 0 0 2026-03-09T20:47:57.678 INFO:tasks.workunit.client.0.vm07.stdout:5/790: dread d5/df/d13/f5b [4194304,4194304] 0 2026-03-09T20:47:57.679 INFO:tasks.workunit.client.0.vm07.stdout:2/725: rename d2/db/d1c/f42 to d2/db/d28/d57/de1/fe2 0 2026-03-09T20:47:57.679 INFO:tasks.workunit.client.0.vm07.stdout:5/791: dread - d5/df/d13/d3e/d5e/ffa zero size 2026-03-09T20:47:57.680 INFO:tasks.workunit.client.0.vm07.stdout:9/693: getdents d4/d16/d78 0 2026-03-09T20:47:57.688 INFO:tasks.workunit.client.1.vm10.stdout:5/626: getdents d2/d39/d4b/de0 0 2026-03-09T20:47:57.690 INFO:tasks.workunit.client.0.vm07.stdout:0/743: readlink d1/d2/d33/d35/ld2 0 2026-03-09T20:47:57.693 INFO:tasks.workunit.client.0.vm07.stdout:4/664: dread d2/df/d59/f7c [4194304,4194304] 0 2026-03-09T20:47:57.695 INFO:tasks.workunit.client.0.vm07.stdout:5/792: sync 2026-03-09T20:47:57.702 INFO:tasks.workunit.client.1.vm10.stdout:6/686: getdents d3/da/d11/d89/db9/dd1 0 2026-03-09T20:47:57.702 INFO:tasks.workunit.client.1.vm10.stdout:0/655: getdents d2/d9/d2a/de9 0 2026-03-09T20:47:57.702 INFO:tasks.workunit.client.1.vm10.stdout:3/636: creat dc/d14/d26/d29/fd2 x:0 0 0 2026-03-09T20:47:57.710 INFO:tasks.workunit.client.1.vm10.stdout:5/627: read d2/d39/dbf/d84/f8b [100522,89928] 0 2026-03-09T20:47:57.713 INFO:tasks.workunit.client.0.vm07.stdout:6/736: write d8/d16/d22/d24/da0/dab/d40/d69/f62 [2648334,130460] 0 2026-03-09T20:47:57.716 INFO:tasks.workunit.client.0.vm07.stdout:2/726: dread d2/db/d28/f34 [0,4194304] 0 
2026-03-09T20:47:57.719 INFO:tasks.workunit.client.1.vm10.stdout:1/673: write d2/da/d25/d3e/d42/f7d [133906,54576] 0 2026-03-09T20:47:57.720 INFO:tasks.workunit.client.1.vm10.stdout:9/722: write d2/d3/de/f42 [4852062,104715] 0 2026-03-09T20:47:57.721 INFO:tasks.workunit.client.1.vm10.stdout:9/723: write d2/f46 [167971,66392] 0 2026-03-09T20:47:57.721 INFO:tasks.workunit.client.1.vm10.stdout:9/724: chown d2/d3/d6d/fce 0 1 2026-03-09T20:47:57.728 INFO:tasks.workunit.client.0.vm07.stdout:0/744: truncate d1/d1f/d30/f8e 358524 0 2026-03-09T20:47:57.731 INFO:tasks.workunit.client.0.vm07.stdout:5/793: read - d5/d69/fe6 zero size 2026-03-09T20:47:57.732 INFO:tasks.workunit.client.0.vm07.stdout:7/780: write d3/da/db/d32/d3e/dac/d1f/d2b/d52/f5e [5203252,120950] 0 2026-03-09T20:47:57.737 INFO:tasks.workunit.client.0.vm07.stdout:3/713: creat d1/d5/d9/d11/d6d/dd0/d43/fe5 x:0 0 0 2026-03-09T20:47:57.743 INFO:tasks.workunit.client.0.vm07.stdout:1/754: write d3/d23/d52/da7/fc7 [758120,103553] 0 2026-03-09T20:47:57.744 INFO:tasks.workunit.client.0.vm07.stdout:1/755: stat d3/f24 0 2026-03-09T20:47:57.744 INFO:tasks.workunit.client.1.vm10.stdout:8/729: dwrite d0/d22/d25/d6c/fbd [4194304,4194304] 0 2026-03-09T20:47:57.744 INFO:tasks.workunit.client.0.vm07.stdout:1/756: chown d3/d23/d67/d8a/l98 70 1 2026-03-09T20:47:57.753 INFO:tasks.workunit.client.0.vm07.stdout:5/794: truncate d5/f25 3954085 0 2026-03-09T20:47:57.765 INFO:tasks.workunit.client.0.vm07.stdout:9/694: creat d4/d16/ff9 x:0 0 0 2026-03-09T20:47:57.765 INFO:tasks.workunit.client.0.vm07.stdout:8/667: getdents d1/dc/d16/dad 0 2026-03-09T20:47:57.766 INFO:tasks.workunit.client.0.vm07.stdout:8/668: dread - d1/dc/d16/d26/fd1 zero size 2026-03-09T20:47:57.768 INFO:tasks.workunit.client.0.vm07.stdout:0/745: dread d1/f31 [0,4194304] 0 2026-03-09T20:47:57.772 INFO:tasks.workunit.client.0.vm07.stdout:2/727: creat d2/dc8/fe3 x:0 0 0 2026-03-09T20:47:57.783 INFO:tasks.workunit.client.0.vm07.stdout:0/746: dread d1/d2/ff [0,4194304] 0 
2026-03-09T20:47:57.791 INFO:tasks.workunit.client.0.vm07.stdout:4/665: creat d2/fb7 x:0 0 0 2026-03-09T20:47:57.791 INFO:tasks.workunit.client.0.vm07.stdout:5/795: symlink d5/d19/d73/dbc/dc7/l111 0 2026-03-09T20:47:57.797 INFO:tasks.workunit.client.0.vm07.stdout:3/714: symlink d1/d5/d9/d11/le6 0 2026-03-09T20:47:57.799 INFO:tasks.workunit.client.0.vm07.stdout:9/695: chown d4/d11/d2a/f65 225643678 1 2026-03-09T20:47:57.807 INFO:tasks.workunit.client.0.vm07.stdout:2/728: chown d2/f40 1 1 2026-03-09T20:47:57.809 INFO:tasks.workunit.client.0.vm07.stdout:1/757: mknod d3/dc6/cfb 0 2026-03-09T20:47:57.820 INFO:tasks.workunit.client.0.vm07.stdout:8/669: write d1/dc/d16/d26/f48 [606226,62689] 0 2026-03-09T20:47:57.820 INFO:tasks.workunit.client.0.vm07.stdout:0/747: write d1/d2/dc/d17/f23 [4004437,89021] 0 2026-03-09T20:47:57.825 INFO:tasks.workunit.client.0.vm07.stdout:5/796: mkdir d5/d19/d73/d94/d112 0 2026-03-09T20:47:57.828 INFO:tasks.workunit.client.1.vm10.stdout:4/633: creat d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/fcc x:0 0 0 2026-03-09T20:47:57.829 INFO:tasks.workunit.client.1.vm10.stdout:0/656: read d2/f9b [610928,64472] 0 2026-03-09T20:47:57.831 INFO:tasks.workunit.client.1.vm10.stdout:3/637: rename dc/d14/d26/d29/d40/f6c to dc/d14/fd3 0 2026-03-09T20:47:57.833 INFO:tasks.workunit.client.1.vm10.stdout:7/674: symlink db/d28/ld2 0 2026-03-09T20:47:57.833 INFO:tasks.workunit.client.0.vm07.stdout:9/696: creat d4/d11/d2a/ffa x:0 0 0 2026-03-09T20:47:57.833 INFO:tasks.workunit.client.0.vm07.stdout:6/737: getdents d8/d5d 0 2026-03-09T20:47:57.836 INFO:tasks.workunit.client.1.vm10.stdout:5/628: truncate d2/d1b/d54/d78/fad 1877584 0 2026-03-09T20:47:57.837 INFO:tasks.workunit.client.0.vm07.stdout:0/748: sync 2026-03-09T20:47:57.839 INFO:tasks.workunit.client.0.vm07.stdout:1/758: chown d3/d97/da1/ccf 0 1 2026-03-09T20:47:57.840 INFO:tasks.workunit.client.1.vm10.stdout:2/670: dwrite d5/d18/d1b/f70 [4194304,4194304] 0 2026-03-09T20:47:57.851 
INFO:tasks.workunit.client.1.vm10.stdout:6/687: write d3/d30/d7f/d36/d5c/d8d/fac [3533109,129789] 0 2026-03-09T20:47:57.858 INFO:tasks.workunit.client.0.vm07.stdout:7/781: rename d3/da/db/d32/d3e/dac/d1f/f5d to d3/da/db/d32/d3e/dac/f106 0 2026-03-09T20:47:57.863 INFO:tasks.workunit.client.1.vm10.stdout:8/730: rename d0/d22/d2c/l7c to d0/d22/d25/d8f/le7 0 2026-03-09T20:47:57.863 INFO:tasks.workunit.client.1.vm10.stdout:3/638: truncate dc/d14/d20/d2e/d56/f68 2855893 0 2026-03-09T20:47:57.863 INFO:tasks.workunit.client.0.vm07.stdout:2/729: symlink d2/db/d28/d90/dd6/le4 0 2026-03-09T20:47:57.863 INFO:tasks.workunit.client.0.vm07.stdout:0/749: unlink d1/d2/dc/d17/da6/cb2 0 2026-03-09T20:47:57.864 INFO:tasks.workunit.client.0.vm07.stdout:3/715: dwrite d1/f78 [0,4194304] 0 2026-03-09T20:47:57.864 INFO:tasks.workunit.client.1.vm10.stdout:7/675: read f3 [34432,41554] 0 2026-03-09T20:47:57.864 INFO:tasks.workunit.client.1.vm10.stdout:3/639: dread dc/d14/d26/d29/f70 [0,4194304] 0 2026-03-09T20:47:57.868 INFO:tasks.workunit.client.0.vm07.stdout:7/782: sync 2026-03-09T20:47:57.877 INFO:tasks.workunit.client.0.vm07.stdout:6/738: symlink d8/d16/d22/d24/da0/dab/lec 0 2026-03-09T20:47:57.881 INFO:tasks.workunit.client.1.vm10.stdout:6/688: readlink d3/d79/l9d 0 2026-03-09T20:47:57.882 INFO:tasks.workunit.client.1.vm10.stdout:0/657: mknod d2/d4a/d58/dd5/cec 0 2026-03-09T20:47:57.883 INFO:tasks.workunit.client.1.vm10.stdout:0/658: chown d2/d9/d69/c97 4 1 2026-03-09T20:47:57.888 INFO:tasks.workunit.client.0.vm07.stdout:0/750: fdatasync d1/d2/dc/d80/fb7 0 2026-03-09T20:47:57.895 INFO:tasks.workunit.client.0.vm07.stdout:8/670: dread d1/d5d/d6f/d2f/d4d/d55/f78 [0,4194304] 0 2026-03-09T20:47:57.902 INFO:tasks.workunit.client.1.vm10.stdout:7/676: stat db/d28/d2b/d36/d40/laa 0 2026-03-09T20:47:57.903 INFO:tasks.workunit.client.1.vm10.stdout:7/677: chown db/d28/l33 2042551 1 2026-03-09T20:47:57.929 INFO:tasks.workunit.client.1.vm10.stdout:1/674: write d2/da/d25/d46/d51/d5d/d6e/f9c 
[43482,49409] 0 2026-03-09T20:47:57.930 INFO:tasks.workunit.client.1.vm10.stdout:9/725: write d2/d28/d47/d67/f93 [948821,45858] 0 2026-03-09T20:47:57.930 INFO:tasks.workunit.client.0.vm07.stdout:9/697: link d4/d16/d29/d24/d37/d44/d62/d8e/cbe d4/d8/d19/cfb 0 2026-03-09T20:47:57.937 INFO:tasks.workunit.client.0.vm07.stdout:9/698: dread - d4/d16/d78/dc4/ff1 zero size 2026-03-09T20:47:57.938 INFO:tasks.workunit.client.0.vm07.stdout:4/666: dwrite d2/f69 [0,4194304] 0 2026-03-09T20:47:57.939 INFO:tasks.workunit.client.0.vm07.stdout:5/797: write d5/d33/d3b/fb4 [930967,95605] 0 2026-03-09T20:47:57.939 INFO:tasks.workunit.client.1.vm10.stdout:6/689: creat d3/d30/d7f/d36/d6d/fd8 x:0 0 0 2026-03-09T20:47:57.941 INFO:tasks.workunit.client.1.vm10.stdout:4/634: write d1/d2/d5c/d64/d61/f62 [103649,4195] 0 2026-03-09T20:47:57.941 INFO:tasks.workunit.client.1.vm10.stdout:0/659: dread - d2/d9/da/d11/dd1/d34/f90 zero size 2026-03-09T20:47:57.941 INFO:tasks.workunit.client.1.vm10.stdout:4/635: chown d1/d47/c5e 1924368 1 2026-03-09T20:47:57.942 INFO:tasks.workunit.client.1.vm10.stdout:0/660: readlink d2/d9/da/d35/d30/l6a 0 2026-03-09T20:47:57.948 INFO:tasks.workunit.client.1.vm10.stdout:4/636: dwrite d1/d2/d5c/d64/d6b/d81/dac/d1c/d69/fcb [0,4194304] 0 2026-03-09T20:47:57.958 INFO:tasks.workunit.client.1.vm10.stdout:8/731: write d0/f97 [687478,33356] 0 2026-03-09T20:47:57.963 INFO:tasks.workunit.client.1.vm10.stdout:2/671: dwrite d5/d18/d27/db8/fce [0,4194304] 0 2026-03-09T20:47:57.963 INFO:tasks.workunit.client.0.vm07.stdout:1/759: dwrite d3/d14/d54/f4b [0,4194304] 0 2026-03-09T20:47:57.963 INFO:tasks.workunit.client.0.vm07.stdout:2/730: dwrite d2/d11/f5d [0,4194304] 0 2026-03-09T20:47:57.963 INFO:tasks.workunit.client.1.vm10.stdout:8/732: dwrite d0/f7e [0,4194304] 0 2026-03-09T20:47:57.964 INFO:tasks.workunit.client.1.vm10.stdout:0/661: sync 2026-03-09T20:47:57.968 INFO:tasks.workunit.client.1.vm10.stdout:8/733: truncate d0/d22/d25/d2e/d41/d47/fd8 413511 0 2026-03-09T20:47:57.980 
INFO:tasks.workunit.client.0.vm07.stdout:7/783: truncate d3/da/db/d32/d3e/dac/d1f/d50/ffa 4614634 0 2026-03-09T20:47:57.981 INFO:tasks.workunit.client.0.vm07.stdout:7/784: stat d3/da4/df2/dfb 0 2026-03-09T20:47:57.995 INFO:tasks.workunit.client.1.vm10.stdout:7/678: creat db/d28/d2b/d36/d3b/d88/fd3 x:0 0 0 2026-03-09T20:47:57.995 INFO:tasks.workunit.client.1.vm10.stdout:7/679: readlink db/d28/d2b/l90 0 2026-03-09T20:47:57.997 INFO:tasks.workunit.client.0.vm07.stdout:8/671: rename d1/dc/d16/d26/f2a to d1/d5d/d6f/d80/fd8 0 2026-03-09T20:47:57.997 INFO:tasks.workunit.client.0.vm07.stdout:6/739: mkdir d8/d16/d22/d9b/da6/ded 0 2026-03-09T20:47:57.998 INFO:tasks.workunit.client.1.vm10.stdout:1/675: rmdir d2/da/d25/d3e/d55 39 2026-03-09T20:47:57.998 INFO:tasks.workunit.client.1.vm10.stdout:1/676: dread - d2/da/d25/d46/d8c/fc3 zero size 2026-03-09T20:47:57.998 INFO:tasks.workunit.client.0.vm07.stdout:3/716: link d1/d5/d9/daf/c87 d1/d5/d9/d2f/d3d/d71/d76/ce7 0 2026-03-09T20:47:58.001 INFO:tasks.workunit.client.1.vm10.stdout:6/690: rename f2 to d3/da/d11/d31/d47/d87/fd9 0 2026-03-09T20:47:58.002 INFO:tasks.workunit.client.1.vm10.stdout:4/637: truncate d1/d2/f2d 554750 0 2026-03-09T20:47:58.018 INFO:tasks.workunit.client.0.vm07.stdout:4/667: dread d2/df/d17/f6d [0,4194304] 0 2026-03-09T20:47:58.019 INFO:tasks.workunit.client.1.vm10.stdout:1/677: sync 2026-03-09T20:47:58.019 INFO:tasks.workunit.client.1.vm10.stdout:1/678: chown d2/da/d25/d46/d51 59925 1 2026-03-09T20:47:58.021 INFO:tasks.workunit.client.1.vm10.stdout:2/672: fsync d5/d18/d27/d38/d61/f81 0 2026-03-09T20:47:58.022 INFO:tasks.workunit.client.0.vm07.stdout:2/731: creat d2/db/d49/d7d/fe5 x:0 0 0 2026-03-09T20:47:58.025 INFO:tasks.workunit.client.1.vm10.stdout:8/734: rmdir d0/d92 39 2026-03-09T20:47:58.025 INFO:tasks.workunit.client.1.vm10.stdout:5/629: link d2/d27/d37/d46/d99/c9d d2/d39/dbf/d69/d96/cef 0 2026-03-09T20:47:58.026 INFO:tasks.workunit.client.1.vm10.stdout:4/638: creat 
d1/d2/d5c/d64/d6b/d81/dac/d1b/dbe/fcd x:0 0 0 2026-03-09T20:47:58.035 INFO:tasks.workunit.client.0.vm07.stdout:6/740: dwrite d8/d5d/fe6 [0,4194304] 0 2026-03-09T20:47:58.040 INFO:tasks.workunit.client.1.vm10.stdout:9/726: creat d2/d3/ff6 x:0 0 0 2026-03-09T20:47:58.043 INFO:tasks.workunit.client.0.vm07.stdout:7/785: rmdir d3/de2 0 2026-03-09T20:47:58.044 INFO:tasks.workunit.client.0.vm07.stdout:2/732: symlink d2/db/d49/d7d/d85/dd9/le6 0 2026-03-09T20:47:58.047 INFO:tasks.workunit.client.1.vm10.stdout:5/630: creat d2/d39/d4b/d7a/ff0 x:0 0 0 2026-03-09T20:47:58.047 INFO:tasks.workunit.client.0.vm07.stdout:3/717: link d1/le4 d1/d5/d9/d2f/d3d/dd6/le8 0 2026-03-09T20:47:58.048 INFO:tasks.workunit.client.0.vm07.stdout:6/741: creat d8/d16/d22/d24/da0/dab/dc1/fee x:0 0 0 2026-03-09T20:47:58.049 INFO:tasks.workunit.client.1.vm10.stdout:4/639: mknod d1/d2/d3/d70/d78/cce 0 2026-03-09T20:47:58.053 INFO:tasks.workunit.client.0.vm07.stdout:7/786: rename d3/da/d53/db7/cce to d3/da/d53/db7/dde/d96/c107 0 2026-03-09T20:47:58.072 INFO:tasks.workunit.client.1.vm10.stdout:6/691: rename d3/da/d11/d89/db9/dd1/dd2/c7e to d3/d30/d7f/cda 0 2026-03-09T20:47:58.073 INFO:tasks.workunit.client.1.vm10.stdout:6/692: dread - d3/da/d11/d89/db9/dd1/dd2/f69 zero size 2026-03-09T20:47:58.079 INFO:tasks.workunit.client.0.vm07.stdout:3/718: dread d1/d5/d9/d11/f58 [0,4194304] 0 2026-03-09T20:47:58.080 INFO:tasks.workunit.client.1.vm10.stdout:5/631: unlink d2/d39/dbf/d69/c79 0 2026-03-09T20:47:58.080 INFO:tasks.workunit.client.1.vm10.stdout:6/693: dread d3/d30/d7f/d36/d5c/f78 [0,4194304] 0 2026-03-09T20:47:58.080 INFO:tasks.workunit.client.1.vm10.stdout:5/632: readlink d2/d39/d4b/d7a/de1/ld5 0 2026-03-09T20:47:58.090 INFO:tasks.workunit.client.1.vm10.stdout:4/640: truncate d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f75 766991 0 2026-03-09T20:47:58.103 INFO:tasks.workunit.client.1.vm10.stdout:0/662: getdents d2/d9/da 0 2026-03-09T20:47:58.103 INFO:tasks.workunit.client.1.vm10.stdout:8/735: rename d0/d22/d2f/d38 to 
d0/d92/de8 0 2026-03-09T20:47:58.105 INFO:tasks.workunit.client.0.vm07.stdout:3/719: dread - d1/d5/d9/d11/d6d/dd0/fc1 zero size 2026-03-09T20:47:58.107 INFO:tasks.workunit.client.1.vm10.stdout:5/633: dwrite d2/fb [0,4194304] 0 2026-03-09T20:47:58.122 INFO:tasks.workunit.client.1.vm10.stdout:2/673: getdents d5/d18/d1b/d22 0 2026-03-09T20:47:58.126 INFO:tasks.workunit.client.1.vm10.stdout:4/641: fdatasync d1/d2/d5c/f48 0 2026-03-09T20:47:58.134 INFO:tasks.workunit.client.0.vm07.stdout:8/672: dread d1/fb [0,4194304] 0 2026-03-09T20:47:58.138 INFO:tasks.workunit.client.1.vm10.stdout:3/640: dwrite dc/d14/d20/d2e/d56/f15 [0,4194304] 0 2026-03-09T20:47:58.139 INFO:tasks.workunit.client.0.vm07.stdout:5/798: dread d5/df/f2b [0,4194304] 0 2026-03-09T20:47:58.139 INFO:tasks.workunit.client.1.vm10.stdout:3/641: rename dc to dc/d14/d26/d37/dd4 22 2026-03-09T20:47:58.140 INFO:tasks.workunit.client.1.vm10.stdout:3/642: chown dc/l1c 10 1 2026-03-09T20:47:58.141 INFO:tasks.workunit.client.0.vm07.stdout:5/799: readlink d5/d33/d39/d8d/dab/lae 0 2026-03-09T20:47:58.145 INFO:tasks.workunit.client.1.vm10.stdout:3/643: sync 2026-03-09T20:47:58.152 INFO:tasks.workunit.client.1.vm10.stdout:8/736: mkdir d0/d22/d25/d2e/d41/de9 0 2026-03-09T20:47:58.155 INFO:tasks.workunit.client.0.vm07.stdout:0/751: dwrite d1/d2/d33/d35/fd1 [0,4194304] 0 2026-03-09T20:47:58.171 INFO:tasks.workunit.client.0.vm07.stdout:9/699: write d4/d8/dc/d15/fde [513760,109528] 0 2026-03-09T20:47:58.176 INFO:tasks.workunit.client.0.vm07.stdout:1/760: write d3/d23/d52/f73 [4053099,51058] 0 2026-03-09T20:47:58.180 INFO:tasks.workunit.client.1.vm10.stdout:7/680: write db/d28/d2b/d36/d3f/fcb [1662263,95246] 0 2026-03-09T20:47:58.182 INFO:tasks.workunit.client.0.vm07.stdout:4/668: dwrite d2/f4c [4194304,4194304] 0 2026-03-09T20:47:58.185 INFO:tasks.workunit.client.1.vm10.stdout:1/679: write d2/da/d25/d46/f74 [386576,77649] 0 2026-03-09T20:47:58.209 INFO:tasks.workunit.client.1.vm10.stdout:4/642: creat 
d1/d2/d5c/d64/d6b/d81/dac/d1b/dbe/fcf x:0 0 0 2026-03-09T20:47:58.212 INFO:tasks.workunit.client.1.vm10.stdout:2/674: dread d5/f7 [0,4194304] 0 2026-03-09T20:47:58.219 INFO:tasks.workunit.client.1.vm10.stdout:3/644: fsync dc/d14/d26/d29/f70 0 2026-03-09T20:47:58.219 INFO:tasks.workunit.client.1.vm10.stdout:9/727: write d2/d12/d5a/f82 [3204339,42855] 0 2026-03-09T20:47:58.228 INFO:tasks.workunit.client.1.vm10.stdout:6/694: dwrite d3/da/d11/d26/d5b/f74 [0,4194304] 0 2026-03-09T20:47:58.229 INFO:tasks.workunit.client.1.vm10.stdout:6/695: readlink d3/d9c/ld4 0 2026-03-09T20:47:58.231 INFO:tasks.workunit.client.0.vm07.stdout:7/787: write d3/da/db/d32/d3e/dac/d1f/d2b/f2c [4278702,95877] 0 2026-03-09T20:47:58.231 INFO:tasks.workunit.client.0.vm07.stdout:2/733: write d2/d11/d56/f5a [1033627,130176] 0 2026-03-09T20:47:58.232 INFO:tasks.workunit.client.1.vm10.stdout:0/663: write d2/d9/da/d11/dd1/d34/f77 [164288,123725] 0 2026-03-09T20:47:58.234 INFO:tasks.workunit.client.1.vm10.stdout:1/680: creat d2/da/d25/d46/d51/fd6 x:0 0 0 2026-03-09T20:47:58.239 INFO:tasks.workunit.client.0.vm07.stdout:6/742: dwrite d8/d16/d22/f98 [0,4194304] 0 2026-03-09T20:47:58.243 INFO:tasks.workunit.client.0.vm07.stdout:6/743: chown d8/d50/fe8 1 1 2026-03-09T20:47:58.246 INFO:tasks.workunit.client.0.vm07.stdout:6/744: stat d8/d16/d22/d24/da0/daa/cb5 0 2026-03-09T20:47:58.249 INFO:tasks.workunit.client.1.vm10.stdout:2/675: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/fe3 x:0 0 0 2026-03-09T20:47:58.258 INFO:tasks.workunit.client.1.vm10.stdout:4/643: dread d1/d2/d5c/d64/d6b/d81/da9/fa4 [0,4194304] 0 2026-03-09T20:47:58.263 INFO:tasks.workunit.client.1.vm10.stdout:9/728: stat d2/d3/f5 0 2026-03-09T20:47:58.269 INFO:tasks.workunit.client.1.vm10.stdout:6/696: creat d3/da/d11/d26/fdb x:0 0 0 2026-03-09T20:47:58.277 INFO:tasks.workunit.client.1.vm10.stdout:8/737: write d0/d22/fb6 [42420,70067] 0 2026-03-09T20:47:58.286 INFO:tasks.workunit.client.1.vm10.stdout:8/738: sync 
2026-03-09T20:47:58.290 INFO:tasks.workunit.client.1.vm10.stdout:2/676: dread d5/d18/d27/d38/d61/fa4 [0,4194304] 0 2026-03-09T20:47:58.291 INFO:tasks.workunit.client.1.vm10.stdout:5/634: write d2/d1b/f41 [2555292,45599] 0 2026-03-09T20:47:58.293 INFO:tasks.workunit.client.1.vm10.stdout:5/635: stat d2/d58/d6c/ce3 0 2026-03-09T20:47:58.294 INFO:tasks.workunit.client.1.vm10.stdout:5/636: chown d2/d39/dbf/d69/l56 1149208 1 2026-03-09T20:47:58.298 INFO:tasks.workunit.client.1.vm10.stdout:0/664: write d2/d9/da/d11/f1f [100894,122992] 0 2026-03-09T20:47:58.299 INFO:tasks.workunit.client.1.vm10.stdout:0/665: write d2/d9/da/d35/f68 [171222,62976] 0 2026-03-09T20:47:58.301 INFO:tasks.workunit.client.1.vm10.stdout:6/697: truncate d3/d30/d7f/d36/d5c/daa/fae 3185 0 2026-03-09T20:47:58.303 INFO:tasks.workunit.client.1.vm10.stdout:4/644: dwrite d1/d2/f2a [0,4194304] 0 2026-03-09T20:47:58.305 INFO:tasks.workunit.client.1.vm10.stdout:1/681: fdatasync d2/da/f32 0 2026-03-09T20:47:58.305 INFO:tasks.workunit.client.1.vm10.stdout:6/698: chown d3/da/d11/d89/db9/dd1/dd2/dc3/fca 13551 1 2026-03-09T20:47:58.309 INFO:tasks.workunit.client.1.vm10.stdout:8/739: mknod d0/d22/d25/d2e/cea 0 2026-03-09T20:47:58.312 INFO:tasks.workunit.client.0.vm07.stdout:5/800: mknod d5/d33/d39/d8d/dd7/c113 0 2026-03-09T20:47:58.312 INFO:tasks.workunit.client.0.vm07.stdout:0/752: creat d1/d1f/dc2/ff0 x:0 0 0 2026-03-09T20:47:58.316 INFO:tasks.workunit.client.1.vm10.stdout:2/677: mkdir d5/d18/d27/d89/db6/d41/de4 0 2026-03-09T20:47:58.321 INFO:tasks.workunit.client.0.vm07.stdout:9/700: stat d4/d16/d29/d24/d37/d8d/c4c 0 2026-03-09T20:47:58.321 INFO:tasks.workunit.client.0.vm07.stdout:4/669: creat d2/d55/d5d/d93/fb8 x:0 0 0 2026-03-09T20:47:58.322 INFO:tasks.workunit.client.1.vm10.stdout:8/740: sync 2026-03-09T20:47:58.323 INFO:tasks.workunit.client.1.vm10.stdout:9/729: mkdir d2/d3/d85/df7 0 2026-03-09T20:47:58.327 INFO:tasks.workunit.client.1.vm10.stdout:5/637: fdatasync d2/d27/d37/f38 0 2026-03-09T20:47:58.337 
INFO:tasks.workunit.client.1.vm10.stdout:7/681: getdents db/d28/d2b/d36 0 2026-03-09T20:47:58.338 INFO:tasks.workunit.client.1.vm10.stdout:7/682: write db/d1f/f5e [3088847,47512] 0 2026-03-09T20:47:58.341 INFO:tasks.workunit.client.0.vm07.stdout:8/673: mkdir d1/d5d/d6f/d2f/d4d/dd4/dd9 0 2026-03-09T20:47:58.345 INFO:tasks.workunit.client.0.vm07.stdout:1/761: dwrite d3/d14/d54/d3e/f80 [0,4194304] 0 2026-03-09T20:47:58.347 INFO:tasks.workunit.client.1.vm10.stdout:0/666: truncate d2/d9/f61 2619974 0 2026-03-09T20:47:58.364 INFO:tasks.workunit.client.1.vm10.stdout:3/645: getdents dc 0 2026-03-09T20:47:58.370 INFO:tasks.workunit.client.1.vm10.stdout:9/730: rename d2/d28/f63 to d2/db8/ff8 0 2026-03-09T20:47:58.372 INFO:tasks.workunit.client.1.vm10.stdout:9/731: read - d2/d12/d5a/fc5 zero size 2026-03-09T20:47:58.374 INFO:tasks.workunit.client.0.vm07.stdout:4/670: sync 2026-03-09T20:47:58.375 INFO:tasks.workunit.client.0.vm07.stdout:4/671: stat d2/d55/d5d/d3f/d4a/d4b/d52/d5c/d90/l92 0 2026-03-09T20:47:58.376 INFO:tasks.workunit.client.0.vm07.stdout:9/701: dread d4/d8/dc/dbb/fad [0,4194304] 0 2026-03-09T20:47:58.386 INFO:tasks.workunit.client.0.vm07.stdout:7/788: dwrite d3/f88 [0,4194304] 0 2026-03-09T20:47:58.386 INFO:tasks.workunit.client.0.vm07.stdout:8/674: write d1/d5d/d6f/f61 [2517502,912] 0 2026-03-09T20:47:58.391 INFO:tasks.workunit.client.0.vm07.stdout:0/753: dwrite d1/f1a [0,4194304] 0 2026-03-09T20:47:58.401 INFO:tasks.workunit.client.0.vm07.stdout:2/734: dwrite d2/db/d49/f81 [0,4194304] 0 2026-03-09T20:47:58.412 INFO:tasks.workunit.client.0.vm07.stdout:3/720: creat d1/d5/d9/d2f/d34/fe9 x:0 0 0 2026-03-09T20:47:58.414 INFO:tasks.workunit.client.1.vm10.stdout:7/683: mkdir db/d28/d2b/d36/d40/d8a/dd4 0 2026-03-09T20:47:58.415 INFO:tasks.workunit.client.1.vm10.stdout:7/684: write db/d46/f5a [4212564,40516] 0 2026-03-09T20:47:58.415 INFO:tasks.workunit.client.0.vm07.stdout:1/762: symlink d3/d97/da1/dc5/d90/de8/lfc 0 2026-03-09T20:47:58.416 
INFO:tasks.workunit.client.1.vm10.stdout:7/685: write db/d28/d2b/d36/d3f/fcb [303615,66389] 0 2026-03-09T20:47:58.418 INFO:tasks.workunit.client.0.vm07.stdout:5/801: symlink d5/d33/d39/d8d/l114 0 2026-03-09T20:47:58.424 INFO:tasks.workunit.client.1.vm10.stdout:4/645: symlink d1/d47/db9/ld0 0 2026-03-09T20:47:58.429 INFO:tasks.workunit.client.1.vm10.stdout:1/682: symlink d2/da/ld7 0 2026-03-09T20:47:58.437 INFO:tasks.workunit.client.1.vm10.stdout:6/699: mkdir d3/d30/d7f/d36/d6d/dbe/ddc 0 2026-03-09T20:47:58.459 INFO:tasks.workunit.client.1.vm10.stdout:2/678: creat d5/d18/d27/da6/fe5 x:0 0 0 2026-03-09T20:47:58.472 INFO:tasks.workunit.client.0.vm07.stdout:8/675: rename d1/d5d/d6f/d2f/d4d/d55/f78 to d1/dc/d16/dad/d87/dd3/fda 0 2026-03-09T20:47:58.485 INFO:tasks.workunit.client.1.vm10.stdout:0/667: dwrite d2/d9/da/d35/f7e [0,4194304] 0 2026-03-09T20:47:58.486 INFO:tasks.workunit.client.1.vm10.stdout:9/732: mknod d2/d28/da2/cf9 0 2026-03-09T20:47:58.490 INFO:tasks.workunit.client.0.vm07.stdout:9/702: dwrite d4/d8/d59/f66 [0,4194304] 0 2026-03-09T20:47:58.500 INFO:tasks.workunit.client.0.vm07.stdout:0/754: dread d1/d2/d4b/f61 [0,4194304] 0 2026-03-09T20:47:58.501 INFO:tasks.workunit.client.0.vm07.stdout:0/755: readlink d1/d2/l5f 0 2026-03-09T20:47:58.502 INFO:tasks.workunit.client.0.vm07.stdout:0/756: write d1/d2/d33/d35/f5c [3784727,88450] 0 2026-03-09T20:47:58.516 INFO:tasks.workunit.client.0.vm07.stdout:2/735: rmdir d2/d11 39 2026-03-09T20:47:58.516 INFO:tasks.workunit.client.0.vm07.stdout:1/763: creat d3/dc6/ffd x:0 0 0 2026-03-09T20:47:58.517 INFO:tasks.workunit.client.0.vm07.stdout:2/736: chown d2/db/l43 0 1 2026-03-09T20:47:58.521 INFO:tasks.workunit.client.0.vm07.stdout:5/802: symlink d5/df/d13/d3e/d5e/l115 0 2026-03-09T20:47:58.521 INFO:tasks.workunit.client.0.vm07.stdout:6/745: getdents d8/d5d/d97 0 2026-03-09T20:47:58.522 INFO:tasks.workunit.client.0.vm07.stdout:4/672: rmdir d2/d55/d5d/d86 39 2026-03-09T20:47:58.524 
INFO:tasks.workunit.client.1.vm10.stdout:1/683: mknod d2/da/d25/d46/d51/d5d/cd8 0 2026-03-09T20:47:58.524 INFO:tasks.workunit.client.0.vm07.stdout:7/789: unlink d3/da/db/d32/d3e/dac/c1b 0 2026-03-09T20:47:58.529 INFO:tasks.workunit.client.0.vm07.stdout:8/676: chown d1/f85 238486 1 2026-03-09T20:47:58.529 INFO:tasks.workunit.client.0.vm07.stdout:5/803: dwrite d5/d69/fc5 [0,4194304] 0 2026-03-09T20:47:58.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:58 vm10.local ceph-mon[57011]: [09/Mar/2026:20:47:56] ENGINE Client ('192.168.123.107', 49306) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:47:58.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:58 vm10.local ceph-mon[57011]: [09/Mar/2026:20:47:56] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T20:47:58.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:58 vm10.local ceph-mon[57011]: [09/Mar/2026:20:47:56] ENGINE Bus STARTED 2026-03-09T20:47:58.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:58 vm10.local ceph-mon[57011]: pgmap v5: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail 2026-03-09T20:47:58.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:58 vm10.local ceph-mon[57011]: mgrmap e30: vm07.xjrvch(active, since 4s), standbys: vm10.byqahe 2026-03-09T20:47:58.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:58 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm10.byqahe", "id": "vm10.byqahe"}]: dispatch 2026-03-09T20:47:58.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:58 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:47:58.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:58 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' 
entity='mgr.vm07.xjrvch' 2026-03-09T20:47:58.542 INFO:tasks.workunit.client.1.vm10.stdout:2/679: fdatasync d5/d18/d1b/f23 0 2026-03-09T20:47:58.542 INFO:tasks.workunit.client.0.vm07.stdout:0/757: creat d1/d2/dc/d17/da6/ff1 x:0 0 0 2026-03-09T20:47:58.542 INFO:tasks.workunit.client.1.vm10.stdout:0/668: symlink d2/d9/da/led 0 2026-03-09T20:47:58.543 INFO:tasks.workunit.client.0.vm07.stdout:0/758: readlink d1/d2/d33/d35/l68 0 2026-03-09T20:47:58.545 INFO:tasks.workunit.client.1.vm10.stdout:9/733: creat d2/d3/d85/ffa x:0 0 0 2026-03-09T20:47:58.547 INFO:tasks.workunit.client.0.vm07.stdout:3/721: fdatasync d1/d5/d9/d2f/d3d/f75 0 2026-03-09T20:47:58.548 INFO:tasks.workunit.client.1.vm10.stdout:6/700: dread d3/fe [0,4194304] 0 2026-03-09T20:47:58.548 INFO:tasks.workunit.client.1.vm10.stdout:6/701: chown d3/da/d11/d31/d47/l86 1871 1 2026-03-09T20:47:58.549 INFO:tasks.workunit.client.1.vm10.stdout:6/702: chown d3/d30/d7f/d51/f94 1521 1 2026-03-09T20:47:58.550 INFO:tasks.workunit.client.1.vm10.stdout:6/703: dread - d3/d30/d7f/d24/f99 zero size 2026-03-09T20:47:58.555 INFO:tasks.workunit.client.1.vm10.stdout:7/686: mkdir db/d28/d2b/d36/d3b/dd5 0 2026-03-09T20:47:58.555 INFO:tasks.workunit.client.0.vm07.stdout:2/737: chown d2/db/d1c/f45 28148 1 2026-03-09T20:47:58.555 INFO:tasks.workunit.client.0.vm07.stdout:1/764: truncate d3/d97/da1/fbb 1112697 0 2026-03-09T20:47:58.562 INFO:tasks.workunit.client.0.vm07.stdout:6/746: mkdir d8/d16/d4b/d88/dc3/dd5/def 0 2026-03-09T20:47:58.565 INFO:tasks.workunit.client.0.vm07.stdout:7/790: fsync d3/da/d53/db7/dde/fbc 0 2026-03-09T20:47:58.566 INFO:tasks.workunit.client.0.vm07.stdout:8/677: symlink d1/d5d/d6f/d2f/d4d/d63/ldb 0 2026-03-09T20:47:58.567 INFO:tasks.workunit.client.1.vm10.stdout:8/741: getdents d0/d54 0 2026-03-09T20:47:58.571 INFO:tasks.workunit.client.0.vm07.stdout:5/804: truncate d5/df/d13/d3e/d5e/f7c 7424450 0 2026-03-09T20:47:58.573 INFO:tasks.workunit.client.1.vm10.stdout:6/704: sync 2026-03-09T20:47:58.573 
INFO:tasks.workunit.client.0.vm07.stdout:3/722: sync
2026-03-09T20:47:58.574 INFO:tasks.workunit.client.0.vm07.stdout:0/759: dread - d1/fc7 zero size
2026-03-09T20:47:58.576 INFO:tasks.workunit.client.1.vm10.stdout:1/684: symlink d2/da/d25/d46/dbe/ld9 0
2026-03-09T20:47:58.577 INFO:tasks.workunit.client.1.vm10.stdout:7/687: dread db/d28/d2b/d36/d40/f48 [0,4194304] 0
2026-03-09T20:47:58.581 INFO:tasks.workunit.client.0.vm07.stdout:7/791: dread d3/da/db/d32/d3e/dac/d43/d62/db1/ff0 [0,4194304] 0
2026-03-09T20:47:58.588 INFO:tasks.workunit.client.0.vm07.stdout:2/738: symlink d2/db/d28/d90/da4/le7 0
2026-03-09T20:47:58.590 INFO:tasks.workunit.client.0.vm07.stdout:1/765: rename d3/d23/d55/de0 to d3/d97/da1/dd7/dfe 0
2026-03-09T20:47:58.596 INFO:tasks.workunit.client.1.vm10.stdout:6/705: truncate d3/d30/d7f/f25 830777 0
2026-03-09T20:47:58.597 INFO:tasks.workunit.client.0.vm07.stdout:6/747: symlink d8/d16/d4b/d88/d99/lf0 0
2026-03-09T20:47:58.597 INFO:tasks.workunit.client.0.vm07.stdout:6/748: chown d8/d16/d22/d9b/de4/d85/c45 248638 1
2026-03-09T20:47:58.612 INFO:tasks.workunit.client.1.vm10.stdout:1/685: symlink d2/da/d25/d46/d80/da0/d92/db5/dc7/lda 0
2026-03-09T20:47:58.614 INFO:tasks.workunit.client.0.vm07.stdout:8/678: truncate d1/fb5 2521221 0
2026-03-09T20:47:58.620 INFO:tasks.workunit.client.1.vm10.stdout:3/646: getdents dc 0
2026-03-09T20:47:58.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:58 vm07.local ceph-mon[49120]: [09/Mar/2026:20:47:56] ENGINE Client ('192.168.123.107', 49306) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T20:47:58.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:58 vm07.local ceph-mon[49120]: [09/Mar/2026:20:47:56] ENGINE Serving on http://192.168.123.107:8765
2026-03-09T20:47:58.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:58 vm07.local ceph-mon[49120]: [09/Mar/2026:20:47:56] ENGINE Bus STARTED
2026-03-09T20:47:58.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:58 vm07.local ceph-mon[49120]: pgmap v5: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail
2026-03-09T20:47:58.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:58 vm07.local ceph-mon[49120]: mgrmap e30: vm07.xjrvch(active, since 4s), standbys: vm10.byqahe
2026-03-09T20:47:58.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:58 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm10.byqahe", "id": "vm10.byqahe"}]: dispatch
2026-03-09T20:47:58.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:58 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch'
2026-03-09T20:47:58.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:58 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch'
2026-03-09T20:47:58.638 INFO:tasks.workunit.client.0.vm07.stdout:0/760: read d1/d2/d4b/f70 [846438,121615] 0
2026-03-09T20:47:58.643 INFO:tasks.workunit.client.1.vm10.stdout:5/638: write d2/d27/d37/d46/d5d/fe2 [1138543,56212] 0
2026-03-09T20:47:58.645 INFO:tasks.workunit.client.1.vm10.stdout:7/688: mknod db/d28/cd6 0
2026-03-09T20:47:58.649 INFO:tasks.workunit.client.1.vm10.stdout:1/686: mkdir d2/da/d25/d46/ddb 0
2026-03-09T20:47:58.650 INFO:tasks.workunit.client.1.vm10.stdout:3/647: mknod dc/db4/cd5 0
2026-03-09T20:47:58.651 INFO:tasks.workunit.client.0.vm07.stdout:1/766: rename d3/d9c/fae to d3/d14/d54/d3e/fff 0
2026-03-09T20:47:58.662 INFO:tasks.workunit.client.0.vm07.stdout:4/673: mkdir d2/d55/d5d/d86/db9 0
2026-03-09T20:47:58.664 INFO:tasks.workunit.client.0.vm07.stdout:9/703: write d4/d16/d29/d24/f77 [2864000,110775] 0
2026-03-09T20:47:58.667 INFO:tasks.workunit.client.0.vm07.stdout:9/704: dread - d4/d8/d19/d5f/d73/dbc/fe9 zero size
2026-03-09T20:47:58.669 INFO:tasks.workunit.client.0.vm07.stdout:6/749: dread - d8/d16/d4b/d88/d99/fc7 zero size
2026-03-09T20:47:58.672 INFO:tasks.workunit.client.1.vm10.stdout:5/639: symlink d2/d39/dbf/d63/lf1 0
2026-03-09T20:47:58.673 INFO:tasks.workunit.client.1.vm10.stdout:4/646: truncate d1/d2/d5c/d64/d6b/d81/f8a 3963696 0
2026-03-09T20:47:58.674 INFO:tasks.workunit.client.1.vm10.stdout:7/689: fsync db/d28/d2b/f8f 0
2026-03-09T20:47:58.678 INFO:tasks.workunit.client.1.vm10.stdout:1/687: creat d2/da/d25/d3e/dca/da2/fdc x:0 0 0
2026-03-09T20:47:58.679 INFO:tasks.workunit.client.0.vm07.stdout:8/679: symlink d1/d5d/d6f/d2f/d4d/dd4/ldc 0
2026-03-09T20:47:58.683 INFO:tasks.workunit.client.1.vm10.stdout:3/648: rename dc/d14/d20/d21/d3b/f4f to dc/d14/d20/d21/fd6 0
2026-03-09T20:47:58.689 INFO:tasks.workunit.client.0.vm07.stdout:8/680: dwrite d1/d5d/d6f/d2f/d4d/d63/fd5 [0,4194304] 0
2026-03-09T20:47:58.690 INFO:tasks.workunit.client.0.vm07.stdout:8/681: stat d1/dc/d6a/cc4 0
2026-03-09T20:47:58.705 INFO:tasks.workunit.client.1.vm10.stdout:8/742: write d0/d22/d25/d2e/f33 [558071,49552] 0
2026-03-09T20:47:58.705 INFO:tasks.workunit.client.0.vm07.stdout:5/805: write d5/df/d13/f5b [4155146,33363] 0
2026-03-09T20:47:58.708 INFO:tasks.workunit.client.1.vm10.stdout:2/680: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/fc9 [0,4194304] 0
2026-03-09T20:47:58.709 INFO:tasks.workunit.client.1.vm10.stdout:9/734: dwrite d2/d3/f1c [0,4194304] 0
2026-03-09T20:47:58.710 INFO:tasks.workunit.client.0.vm07.stdout:0/761: fsync d1/d1f/d30/f8f 0
2026-03-09T20:47:58.710 INFO:tasks.workunit.client.0.vm07.stdout:7/792: creat d3/da4/df2/dff/f108 x:0 0 0
2026-03-09T20:47:58.713 INFO:tasks.workunit.client.0.vm07.stdout:2/739: fdatasync d2/db/d1c/d4a/d88/fd1 0
2026-03-09T20:47:58.714 INFO:tasks.workunit.client.1.vm10.stdout:6/706: dwrite d3/f40 [0,4194304] 0
2026-03-09T20:47:58.716 INFO:tasks.workunit.client.1.vm10.stdout:0/669: dwrite d2/d9/da/d11/dd1/fe6 [0,4194304] 0
2026-03-09T20:47:58.716 INFO:tasks.workunit.client.1.vm10.stdout:0/670: write d2/d4a/fcf [954722,115708] 0
2026-03-09T20:47:58.717 INFO:tasks.workunit.client.1.vm10.stdout:7/690: dread - db/d28/fac zero size
2026-03-09T20:47:58.722 INFO:tasks.workunit.client.1.vm10.stdout:3/649: mkdir dc/d14/d26/d29/d40/da2/dd7 0
2026-03-09T20:47:58.723 INFO:tasks.workunit.client.0.vm07.stdout:9/705: mknod d4/d16/d78/dc4/cfc 0
2026-03-09T20:47:58.731 INFO:tasks.workunit.client.0.vm07.stdout:4/674: dwrite d2/d55/d5d/d3f/d4a/f5e [0,4194304] 0
2026-03-09T20:47:58.751 INFO:tasks.workunit.client.0.vm07.stdout:8/682: mknod d1/d5d/d6f/d2f/d4d/d95/dc1/cdd 0
2026-03-09T20:47:58.751 INFO:tasks.workunit.client.0.vm07.stdout:5/806: truncate d5/d33/d75/ffd 637435 0
2026-03-09T20:47:58.752 INFO:tasks.workunit.client.0.vm07.stdout:8/683: chown d1/dc/d16/d26/l8c 82105 1
2026-03-09T20:47:58.763 INFO:tasks.workunit.client.0.vm07.stdout:0/762: creat d1/d2/dc/d17/ff2 x:0 0 0
2026-03-09T20:47:58.766 INFO:tasks.workunit.client.0.vm07.stdout:0/763: chown d1/d1f/dc3/dca 705164 1
2026-03-09T20:47:58.766 INFO:tasks.workunit.client.0.vm07.stdout:7/793: creat d3/d58/d82/f109 x:0 0 0
2026-03-09T20:47:58.791 INFO:tasks.workunit.client.0.vm07.stdout:4/675: dread d2/d55/d5d/d3f/d4a/f84 [0,4194304] 0
2026-03-09T20:47:58.793 INFO:tasks.workunit.client.0.vm07.stdout:8/684: symlink d1/dc/d16/dad/d87/dd3/lde 0
2026-03-09T20:47:58.797 INFO:tasks.workunit.client.0.vm07.stdout:0/764: rmdir d1/d2/d33 39
2026-03-09T20:47:58.802 INFO:tasks.workunit.client.0.vm07.stdout:0/765: chown d1/d1f/d20/fbd 125 1
2026-03-09T20:47:58.802 INFO:tasks.workunit.client.0.vm07.stdout:7/794: mknod d3/da/d53/db7/dde/d96/c10a 0
2026-03-09T20:47:58.807 INFO:tasks.workunit.client.0.vm07.stdout:2/740: creat d2/d11/ddb/d6e/dbe/fe8 x:0 0 0
2026-03-09T20:47:58.812 INFO:tasks.workunit.client.0.vm07.stdout:6/750: creat d8/d16/d22/ff1 x:0 0 0
2026-03-09T20:47:58.815 INFO:tasks.workunit.client.0.vm07.stdout:4/676: mkdir d2/d55/d5d/d3f/d4a/d4b/d52/dba 0
2026-03-09T20:47:58.824 INFO:tasks.workunit.client.0.vm07.stdout:8/685: dread d1/dc/d16/f4a [0,4194304] 0
2026-03-09T20:47:58.827 INFO:tasks.workunit.client.0.vm07.stdout:2/741: sync
2026-03-09T20:47:58.829 INFO:tasks.workunit.client.0.vm07.stdout:7/795: creat d3/d58/d82/f10b x:0 0 0
2026-03-09T20:47:58.834 INFO:tasks.workunit.client.0.vm07.stdout:5/807: symlink d5/df/d13/d30/d56/l116 0
2026-03-09T20:47:58.841 INFO:tasks.workunit.client.0.vm07.stdout:3/723: write d1/d5/d9/d11/f4d [1769618,116070] 0
2026-03-09T20:47:58.845 INFO:tasks.workunit.client.0.vm07.stdout:1/767: write d3/d9c/fd2 [861994,21318] 0
2026-03-09T20:47:58.859 INFO:tasks.workunit.client.0.vm07.stdout:2/742: dread - d2/da7/fbc zero size
2026-03-09T20:47:58.862 INFO:tasks.workunit.client.0.vm07.stdout:2/743: dread d2/db/d28/d5c/f91 [0,4194304] 0
2026-03-09T20:47:58.865 INFO:tasks.workunit.client.0.vm07.stdout:2/744: dwrite d2/db/d28/d90/fd5 [0,4194304] 0
2026-03-09T20:47:58.879 INFO:tasks.workunit.client.0.vm07.stdout:4/677: write d2/d1f/fa5 [606436,9713] 0
2026-03-09T20:47:58.881 INFO:tasks.workunit.client.0.vm07.stdout:6/751: rename d8/d5d/d97/dc4/lbf to d8/d16/da3/d9a/lf2 0
2026-03-09T20:47:58.884 INFO:tasks.workunit.client.0.vm07.stdout:9/706: dwrite d4/d8/f1c [0,4194304] 0
2026-03-09T20:47:58.894 INFO:tasks.workunit.client.0.vm07.stdout:8/686: write d1/dc/d16/d26/f59 [33418,20858] 0
2026-03-09T20:47:58.895 INFO:tasks.workunit.client.0.vm07.stdout:8/687: chown d1/d5d/d6f/f64 825 1
2026-03-09T20:47:58.900 INFO:tasks.workunit.client.1.vm10.stdout:4/647: creat d1/d2/d5c/d64/d6b/d81/dac/d39/fd1 x:0 0 0
2026-03-09T20:47:58.906 INFO:tasks.workunit.client.1.vm10.stdout:2/681: truncate d5/fb 1138274 0
2026-03-09T20:47:58.907 INFO:tasks.workunit.client.0.vm07.stdout:9/707: dread d4/f5 [0,4194304] 0
2026-03-09T20:47:58.908 INFO:tasks.workunit.client.0.vm07.stdout:9/708: chown d4/d16/d29/d24/d37/d8d 595 1
2026-03-09T20:47:58.912 INFO:tasks.workunit.client.0.vm07.stdout:9/709: write d4/d8/d19/f42 [244598,90286] 0
2026-03-09T20:47:58.917 INFO:tasks.workunit.client.1.vm10.stdout:6/707: rmdir d3/da/d11/d26/d5b 39
2026-03-09T20:47:58.922 INFO:tasks.workunit.client.1.vm10.stdout:0/671: read d2/d9/da/fd [767275,120202] 0
2026-03-09T20:47:58.927 INFO:tasks.workunit.client.1.vm10.stdout:9/735: write d2/d28/d47/d67/f81 [1929464,94747] 0
2026-03-09T20:47:58.927 INFO:tasks.workunit.client.1.vm10.stdout:8/743: write d0/d22/d25/d2e/d41/d85/db9/fd4 [39325,104369] 0
2026-03-09T20:47:58.927 INFO:tasks.workunit.client.0.vm07.stdout:1/768: write d3/f34 [2094555,97019] 0
2026-03-09T20:47:58.928 INFO:tasks.workunit.client.1.vm10.stdout:9/736: read d2/d33/d37/f4c [1050441,85624] 0
2026-03-09T20:47:58.931 INFO:tasks.workunit.client.0.vm07.stdout:7/796: dwrite d3/da/db/d32/f3d [0,4194304] 0
2026-03-09T20:47:58.941 INFO:tasks.workunit.client.0.vm07.stdout:2/745: fdatasync d2/db/d28/f58 0
2026-03-09T20:47:58.946 INFO:tasks.workunit.client.0.vm07.stdout:4/678: mknod d2/df/d17/cbb 0
2026-03-09T20:47:58.946 INFO:tasks.workunit.client.1.vm10.stdout:1/688: dread d2/da/d25/d46/d51/d5d/d6e/f76 [0,4194304] 0
2026-03-09T20:47:58.955 INFO:tasks.workunit.client.1.vm10.stdout:5/640: creat d2/d27/d37/d46/d5d/ff2 x:0 0 0
2026-03-09T20:47:58.959 INFO:tasks.workunit.client.0.vm07.stdout:3/724: rename d1/l8b to d1/d5/d9/d11/d60/lea 0
2026-03-09T20:47:58.969 INFO:tasks.workunit.client.1.vm10.stdout:4/648: dread d1/d2/d5c/d64/d6b/d81/dac/d1b/f8b [0,4194304] 0
2026-03-09T20:47:58.971 INFO:tasks.workunit.client.0.vm07.stdout:8/688: symlink d1/dc/d16/dad/d87/dd3/ldf 0
2026-03-09T20:47:58.980 INFO:tasks.workunit.client.0.vm07.stdout:5/808: fdatasync d5/d33/d3b/f63 0
2026-03-09T20:47:58.986 INFO:tasks.workunit.client.1.vm10.stdout:6/708: read d3/da/f76 [2396745,72487] 0
2026-03-09T20:47:58.986 INFO:tasks.workunit.client.1.vm10.stdout:0/672: mkdir d2/d9/da/d11/dd1/d34/dee 0
2026-03-09T20:47:58.987 INFO:tasks.workunit.client.0.vm07.stdout:0/766: link d1/d1f/c74 d1/d2/dc/d17/da6/cf3 0
2026-03-09T20:47:58.988 INFO:tasks.workunit.client.1.vm10.stdout:9/737: creat d2/d3/de/d8f/ffb x:0 0 0
2026-03-09T20:47:58.989 INFO:tasks.workunit.client.1.vm10.stdout:9/738: chown d2/d12 109805 1
2026-03-09T20:47:58.990 INFO:tasks.workunit.client.1.vm10.stdout:8/744: dwrite d0/d22/d25/d2e/d41/fa3 [0,4194304] 0
2026-03-09T20:47:58.990 INFO:tasks.workunit.client.1.vm10.stdout:9/739: dread - d2/d33/d37/fef zero size
2026-03-09T20:47:58.995 INFO:tasks.workunit.client.0.vm07.stdout:7/797: symlink d3/da/db/d32/d3e/dac/d1f/d2b/d52/l10c 0
2026-03-09T20:47:58.995 INFO:tasks.workunit.client.0.vm07.stdout:2/746: symlink d2/db/d1c/d4a/d88/le9 0
2026-03-09T20:47:58.996 INFO:tasks.workunit.client.0.vm07.stdout:5/809: sync
2026-03-09T20:47:58.997 INFO:tasks.workunit.client.0.vm07.stdout:0/767: dread d1/f1a [0,4194304] 0
2026-03-09T20:47:58.998 INFO:tasks.workunit.client.0.vm07.stdout:0/768: chown d1/d1f/d30 126719 1
2026-03-09T20:47:59.005 INFO:tasks.workunit.client.0.vm07.stdout:3/725: truncate d1/d5/d9/fa1 460345 0
2026-03-09T20:47:59.005 INFO:tasks.workunit.client.0.vm07.stdout:8/689: dread - d1/dc/d16/dad/fb8 zero size
2026-03-09T20:47:59.008 INFO:tasks.workunit.client.1.vm10.stdout:0/673: symlink d2/d4a/d58/d82/lef 0
2026-03-09T20:47:59.009 INFO:tasks.workunit.client.1.vm10.stdout:0/674: chown d2/d9/d69/l44 30 1
2026-03-09T20:47:59.009 INFO:tasks.workunit.client.1.vm10.stdout:7/691: creat db/d28/d2b/d36/d3b/fd7 x:0 0 0
2026-03-09T20:47:59.010 INFO:tasks.workunit.client.0.vm07.stdout:0/769: dwrite d1/d1f/dc3/feb [0,4194304] 0
2026-03-09T20:47:59.012 INFO:tasks.workunit.client.1.vm10.stdout:3/650: rename dc/d14/d26/d29/d2a/f5e to dc/d14/d26/fd8 0
2026-03-09T20:47:59.014 INFO:tasks.workunit.client.0.vm07.stdout:8/690: chown d1/d5d/d6f/d2f/d4d/lae 66033177 1
2026-03-09T20:47:59.016 INFO:tasks.workunit.client.0.vm07.stdout:8/691: stat d1/dc/d16/d26/d94/faf 0
2026-03-09T20:47:59.018 INFO:tasks.workunit.client.0.vm07.stdout:0/770: dread - d1/d2/dc/d17/da6/ff1 zero size
2026-03-09T20:47:59.019 INFO:tasks.workunit.client.0.vm07.stdout:7/798: dread d3/d58/d77/fe1 [0,4194304] 0
2026-03-09T20:47:59.020 INFO:tasks.workunit.client.0.vm07.stdout:0/771: write d1/fe5 [1058969,115103] 0
2026-03-09T20:47:59.031 INFO:tasks.workunit.client.1.vm10.stdout:5/641: truncate d2/d1b/f5c 314156 0
2026-03-09T20:47:59.044 INFO:tasks.workunit.client.0.vm07.stdout:1/769: dwrite d3/d97/da1/dc5/d90/f93 [4194304,4194304] 0
2026-03-09T20:47:59.048 INFO:tasks.workunit.client.0.vm07.stdout:1/770: read - d3/f5c zero size
2026-03-09T20:47:59.050 INFO:tasks.workunit.client.0.vm07.stdout:1/771: chown d3/d97/da1/dd7/lde 214639 1
2026-03-09T20:47:59.058 INFO:tasks.workunit.client.1.vm10.stdout:1/689: dwrite d2/da/d25/d3e/f69 [0,4194304] 0
2026-03-09T20:47:59.058 INFO:tasks.workunit.client.0.vm07.stdout:5/810: creat d5/df/d13/d4f/d101/d10b/f117 x:0 0 0
2026-03-09T20:47:59.058 INFO:tasks.workunit.client.0.vm07.stdout:4/679: write d2/df/d17/f73 [1134117,76909] 0
2026-03-09T20:47:59.058 INFO:tasks.workunit.client.0.vm07.stdout:2/747: rename d2/db/d28/d57/fcd to d2/db/d49/d7d/d85/fea 0
2026-03-09T20:47:59.059 INFO:tasks.workunit.client.0.vm07.stdout:3/726: rename d1/d5/d9/d2f/d3d to d1/d5/d9/d2f/d3d/dd6/deb 22
2026-03-09T20:47:59.061 INFO:tasks.workunit.client.1.vm10.stdout:1/690: dread - d2/da/d25/d46/d80/da0/d92/db5/dc7/fcf zero size
2026-03-09T20:47:59.081 INFO:tasks.workunit.client.1.vm10.stdout:7/692: mkdir db/d28/d30/dd8 0
2026-03-09T20:47:59.081 INFO:tasks.workunit.client.1.vm10.stdout:2/682: truncate d5/d18/d27/d89/db6/d41/d77/db3/db5/fc9 3483623 0
2026-03-09T20:47:59.081 INFO:tasks.workunit.client.1.vm10.stdout:6/709: write d3/da/d11/f1d [4506570,82093] 0
2026-03-09T20:47:59.082 INFO:tasks.workunit.client.1.vm10.stdout:4/649: write d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f75 [360685,110469] 0
2026-03-09T20:47:59.084 INFO:tasks.workunit.client.1.vm10.stdout:8/745: rename d0/d95/cd7 to d0/d22/d25/d2e/d41/de9/ceb 0
2026-03-09T20:47:59.085 INFO:tasks.workunit.client.1.vm10.stdout:3/651: dread - dc/d14/d26/d29/d40/da8/fc6 zero size
2026-03-09T20:47:59.085 INFO:tasks.workunit.client.0.vm07.stdout:9/710: creat d4/d11/ffd x:0 0 0
2026-03-09T20:47:59.086 INFO:tasks.workunit.client.1.vm10.stdout:9/740: getdents d2/d3/db4/ddb 0
2026-03-09T20:47:59.087 INFO:tasks.workunit.client.1.vm10.stdout:9/741: dread - d2/d33/d37/fef zero size
2026-03-09T20:47:59.087 INFO:tasks.workunit.client.1.vm10.stdout:8/746: read d0/f97 [916325,99024] 0
2026-03-09T20:47:59.088 INFO:tasks.workunit.client.1.vm10.stdout:8/747: fdatasync d0/d22/d25/d2e/d41/d85/db9/fd4 0
2026-03-09T20:47:59.089 INFO:tasks.workunit.client.1.vm10.stdout:5/642: creat d2/d39/dbf/d66/ff3 x:0 0 0
2026-03-09T20:47:59.092 INFO:tasks.workunit.client.1.vm10.stdout:1/691: symlink d2/da/d25/d46/d51/d5d/d6e/ldd 0
2026-03-09T20:47:59.094 INFO:tasks.workunit.client.1.vm10.stdout:1/692: dread - d2/da/d25/d46/d80/da0/d92/fac zero size
2026-03-09T20:47:59.094 INFO:tasks.workunit.client.1.vm10.stdout:6/710: creat d3/d30/d6a/fdd x:0 0 0
2026-03-09T20:47:59.098 INFO:tasks.workunit.client.0.vm07.stdout:1/772: creat d3/d97/da1/dc5/d90/de8/dba/f100 x:0 0 0
2026-03-09T20:47:59.099 INFO:tasks.workunit.client.1.vm10.stdout:2/683: fdatasync d5/f59 0
2026-03-09T20:47:59.105 INFO:tasks.workunit.client.0.vm07.stdout:2/748: creat d2/d11/ddb/d72/feb x:0 0 0
2026-03-09T20:47:59.109 INFO:tasks.workunit.client.1.vm10.stdout:4/650: write d1/d47/f4f [969500,54313] 0
2026-03-09T20:47:59.113 INFO:tasks.workunit.client.1.vm10.stdout:3/652: creat dc/d14/d20/d21/fd9 x:0 0 0
2026-03-09T20:47:59.113 INFO:tasks.workunit.client.1.vm10.stdout:9/742: readlink d2/l2d 0
2026-03-09T20:47:59.118 INFO:tasks.workunit.client.1.vm10.stdout:8/748: mkdir d0/d54/dec 0
2026-03-09T20:47:59.118 INFO:tasks.workunit.client.0.vm07.stdout:3/727: creat d1/d5/d9/d11/d6d/dd0/d59/fec x:0 0 0
2026-03-09T20:47:59.118 INFO:tasks.workunit.client.1.vm10.stdout:5/643: rename d2/fb to d2/d27/d37/d46/d99/ff4 0
2026-03-09T20:47:59.119 INFO:tasks.workunit.client.0.vm07.stdout:3/728: stat d1/d5/d9/d2f/d34/c4c 0
2026-03-09T20:47:59.124 INFO:tasks.workunit.client.0.vm07.stdout:8/692: unlink d1/dc/d16/dad/f7f 0
2026-03-09T20:47:59.127 INFO:tasks.workunit.client.0.vm07.stdout:0/772: symlink d1/d2/d98/daf/lf4 0
2026-03-09T20:47:59.127 INFO:tasks.workunit.client.1.vm10.stdout:1/693: symlink d2/da/d25/d46/d80/da0/d92/db5/lde 0
2026-03-09T20:47:59.129 INFO:tasks.workunit.client.0.vm07.stdout:7/799: mknod d3/d58/c10d 0
2026-03-09T20:47:59.132 INFO:tasks.workunit.client.0.vm07.stdout:1/773: mkdir d3/d14/d54/d3e/d101 0
2026-03-09T20:47:59.134 INFO:tasks.workunit.client.1.vm10.stdout:7/693: symlink db/d28/d2b/d36/d3b/d88/dbd/ld9 0
2026-03-09T20:47:59.135 INFO:tasks.workunit.client.0.vm07.stdout:4/680: mkdir d2/d55/d5d/d3f/d4a/dbc 0
2026-03-09T20:47:59.137 INFO:tasks.workunit.client.0.vm07.stdout:2/749: mknod d2/db/d1c/d4a/cec 0
2026-03-09T20:47:59.138 INFO:tasks.workunit.client.0.vm07.stdout:2/750: readlink d2/db/d49/d7d/d85/dd9/le6 0
2026-03-09T20:47:59.149 INFO:tasks.workunit.client.1.vm10.stdout:2/684: dwrite d5/d18/d27/d38/d61/fa4 [4194304,4194304] 0
2026-03-09T20:47:59.150 INFO:tasks.workunit.client.1.vm10.stdout:2/685: chown d5/d18/d9f/fd8 4011 1
2026-03-09T20:47:59.151 INFO:tasks.workunit.client.1.vm10.stdout:4/651: fdatasync d1/d2/d3/d54/f7f 0
2026-03-09T20:47:59.154 INFO:tasks.workunit.client.1.vm10.stdout:9/743: unlink d2/d3/de/d8f/f9d 0
2026-03-09T20:47:59.170 INFO:tasks.workunit.client.1.vm10.stdout:8/749: mkdir d0/d54/ded 0
2026-03-09T20:47:59.171 INFO:tasks.workunit.client.1.vm10.stdout:5/644: mkdir d2/d58/df5 0
2026-03-09T20:47:59.173 INFO:tasks.workunit.client.1.vm10.stdout:0/675: getdents d2 0
2026-03-09T20:47:59.174 INFO:tasks.workunit.client.1.vm10.stdout:8/750: dwrite d0/d22/d25/d2e/d41/f67 [0,4194304] 0
2026-03-09T20:47:59.176 INFO:tasks.workunit.client.1.vm10.stdout:8/751: readlink d0/d22/d25/d2e/d41/l45 0
2026-03-09T20:47:59.176 INFO:tasks.workunit.client.1.vm10.stdout:8/752: readlink d0/d22/d25/d6c/d9b/lb4 0
2026-03-09T20:47:59.193 INFO:tasks.workunit.client.1.vm10.stdout:7/694: fsync db/d28/f4f 0
2026-03-09T20:47:59.193 INFO:tasks.workunit.client.1.vm10.stdout:7/695: fsync db/d21/f9c 0
2026-03-09T20:47:59.194 INFO:tasks.workunit.client.1.vm10.stdout:7/696: fsync db/d46/f5a 0
2026-03-09T20:47:59.195 INFO:tasks.workunit.client.1.vm10.stdout:7/697: chown db/d28/d86 7820647 1
2026-03-09T20:47:59.200 INFO:tasks.workunit.client.1.vm10.stdout:8/753: sync
2026-03-09T20:47:59.201 INFO:tasks.workunit.client.1.vm10.stdout:2/686: sync
2026-03-09T20:47:59.203 INFO:tasks.workunit.client.1.vm10.stdout:8/754: read d0/f14 [6853488,6016] 0
2026-03-09T20:47:59.204 INFO:tasks.workunit.client.1.vm10.stdout:4/652: read - d1/d2/d5c/d64/d6b/d79/d92/fb2 zero size
2026-03-09T20:47:59.204 INFO:tasks.workunit.client.1.vm10.stdout:2/687: write d5/d18/d1b/d22/f6d [1814500,52698] 0
2026-03-09T20:47:59.205 INFO:tasks.workunit.client.1.vm10.stdout:4/653: readlink d1/d2/d5c/lad 0
2026-03-09T20:47:59.211 INFO:tasks.workunit.client.1.vm10.stdout:5/645: chown d2/l7f 3 1
2026-03-09T20:47:59.215 INFO:tasks.workunit.client.1.vm10.stdout:0/676: symlink d2/d4a/d58/d82/d71/lf0 0
2026-03-09T20:47:59.223 INFO:tasks.workunit.client.1.vm10.stdout:3/653: write dc/d14/d26/d29/d2a/fa4 [631709,93757] 0
2026-03-09T20:47:59.225 INFO:tasks.workunit.client.0.vm07.stdout:5/811: mkdir d5/d19/d73/d9c/d10c/d118 0
2026-03-09T20:47:59.230 INFO:tasks.workunit.client.0.vm07.stdout:6/752: getdents d8/d26 0
2026-03-09T20:47:59.236 INFO:tasks.workunit.client.1.vm10.stdout:8/755: creat d0/d22/d25/d40/d86/fee x:0 0 0
2026-03-09T20:47:59.237 INFO:tasks.workunit.client.1.vm10.stdout:8/756: stat d0/d22/d25/d2e/d41/d47/l50 0
2026-03-09T20:47:59.244 INFO:tasks.workunit.client.1.vm10.stdout:9/744: mknod d2/d3/cfc 0
2026-03-09T20:47:59.249 INFO:tasks.workunit.client.0.vm07.stdout:8/693: dread d1/f1d [0,4194304] 0
2026-03-09T20:47:59.249 INFO:tasks.workunit.client.0.vm07.stdout:7/800: stat d3/da/d53/db7/dde/d96/c107 0
2026-03-09T20:47:59.250 INFO:tasks.workunit.client.1.vm10.stdout:1/694: dwrite d2/da/d25/d46/d51/d5d/d6e/d70/f83 [0,4194304] 0
2026-03-09T20:47:59.261 INFO:tasks.workunit.client.1.vm10.stdout:5/646: dread - d2/d58/d6c/fc4 zero size
2026-03-09T20:47:59.264 INFO:tasks.workunit.client.1.vm10.stdout:6/711: dwrite d3/d30/d7f/d24/d39/f88 [0,4194304] 0
2026-03-09T20:47:59.275 INFO:tasks.workunit.client.1.vm10.stdout:4/654: write d1/d2/d5c/d64/f83 [566976,5527] 0
2026-03-09T20:47:59.275 INFO:tasks.workunit.client.1.vm10.stdout:6/712: chown d3/da/d11/d31/d47 19 1
2026-03-09T20:47:59.275 INFO:tasks.workunit.client.1.vm10.stdout:8/757: creat d0/d92/de8/d64/db5/fef x:0 0 0
2026-03-09T20:47:59.276 INFO:tasks.workunit.client.0.vm07.stdout:2/751: symlink d2/d11/ddb/d72/led 0
2026-03-09T20:47:59.276 INFO:tasks.workunit.client.0.vm07.stdout:3/729: write d1/d5/d9/d11/d6d/dd0/f30 [737090,5287] 0
2026-03-09T20:47:59.276 INFO:tasks.workunit.client.0.vm07.stdout:5/812: symlink d5/d50/l119 0
2026-03-09T20:47:59.276 INFO:tasks.workunit.client.0.vm07.stdout:3/730: stat d1/d5/d9/d2f/d3d/d71/d76/db6 0
2026-03-09T20:47:59.276 INFO:tasks.workunit.client.0.vm07.stdout:9/711: link d4/fa d4/d8/d19/d5f/dcf/ffe 0
2026-03-09T20:47:59.276 INFO:tasks.workunit.client.0.vm07.stdout:7/801: creat d3/d58/d77/de3/f10e x:0 0 0
2026-03-09T20:47:59.278 INFO:tasks.workunit.client.1.vm10.stdout:7/698: dwrite db/d21/f9a [0,4194304] 0
2026-03-09T20:47:59.282 INFO:tasks.workunit.client.1.vm10.stdout:2/688: mknod d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/d93/da5/dda/ce6 0
2026-03-09T20:47:59.282 INFO:tasks.workunit.client.1.vm10.stdout:0/677: mknod d2/d9/da/d35/d30/cf1 0
2026-03-09T20:47:59.283 INFO:tasks.workunit.client.1.vm10.stdout:4/655: mknod d1/d2/d3/d70/d78/cd2 0
2026-03-09T20:47:59.286 INFO:tasks.workunit.client.1.vm10.stdout:4/656: chown d1/d2/d5c/f48 3344920 1
2026-03-09T20:47:59.286 INFO:tasks.workunit.client.1.vm10.stdout:4/657: chown d1/d67 1582 1
2026-03-09T20:47:59.288 INFO:tasks.workunit.client.1.vm10.stdout:2/689: dread d5/d18/d27/d89/f9a [0,4194304] 0
2026-03-09T20:47:59.290 INFO:tasks.workunit.client.1.vm10.stdout:0/678: dread d2/d9/da/d11/f15 [0,4194304] 0
2026-03-09T20:47:59.293 INFO:tasks.workunit.client.1.vm10.stdout:2/690: sync
2026-03-09T20:47:59.294 INFO:tasks.workunit.client.1.vm10.stdout:2/691: stat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f92 0
2026-03-09T20:47:59.296 INFO:tasks.workunit.client.1.vm10.stdout:6/713: fdatasync d3/d30/d7f/d36/d5c/fa5 0
2026-03-09T20:47:59.297 INFO:tasks.workunit.client.0.vm07.stdout:5/813: dread d5/df/d13/d6c/f79 [0,4194304] 0
2026-03-09T20:47:59.302 INFO:tasks.workunit.client.0.vm07.stdout:9/712: creat d4/d8/dc/dbb/fff x:0 0 0
2026-03-09T20:47:59.303 INFO:tasks.workunit.client.0.vm07.stdout:2/752: dread d2/db/d1c/d4a/d88/f7f [0,4194304] 0
2026-03-09T20:47:59.311 INFO:tasks.workunit.client.0.vm07.stdout:7/802: mkdir d3/d58/d77/d10f 0
2026-03-09T20:47:59.311 INFO:tasks.workunit.client.0.vm07.stdout:8/694: fdatasync d1/d5d/d6f/d80/faa 0
2026-03-09T20:47:59.311 INFO:tasks.workunit.client.0.vm07.stdout:4/681: link d2/df/l18 d2/d55/d5d/d3f/d4a/d4b/d52/d5c/lbd 0
2026-03-09T20:47:59.311 INFO:tasks.workunit.client.1.vm10.stdout:1/695: rename d2/l7b to d2/da/d25/d46/ldf 0
2026-03-09T20:47:59.311 INFO:tasks.workunit.client.1.vm10.stdout:4/658: fsync d1/d2/d5c/d64/d61/f68 0
2026-03-09T20:47:59.311 INFO:tasks.workunit.client.1.vm10.stdout:0/679: dread - d2/d9/da/d35/d30/f72 zero size
2026-03-09T20:47:59.311 INFO:tasks.workunit.client.1.vm10.stdout:4/659: chown d1/d2/d5c/d64/d6b/d79/d92 115 1
2026-03-09T20:47:59.311 INFO:tasks.workunit.client.0.vm07.stdout:0/773: link d1/d2/d33/d35/f45 d1/d2/dc/d17/ff5 0
2026-03-09T20:47:59.315 INFO:tasks.workunit.client.1.vm10.stdout:6/714: symlink d3/d30/d7f/d36/d5c/dad/lde 0
2026-03-09T20:47:59.315 INFO:tasks.workunit.client.0.vm07.stdout:9/713: creat d4/d8/d19/d5f/d73/dbc/f100 x:0 0 0
2026-03-09T20:47:59.315 INFO:tasks.workunit.client.0.vm07.stdout:7/803: mkdir d3/da/db/d32/d3e/dac/d1f/d50/d110 0
2026-03-09T20:47:59.318 INFO:tasks.workunit.client.1.vm10.stdout:5/647: creat d2/d27/d37/d46/ff6 x:0 0 0
2026-03-09T20:47:59.318 INFO:tasks.workunit.client.1.vm10.stdout:1/696: chown d2/f21 3492 1
2026-03-09T20:47:59.319 INFO:tasks.workunit.client.0.vm07.stdout:4/682: rmdir d2/df/d59 39
2026-03-09T20:47:59.320 INFO:tasks.workunit.client.1.vm10.stdout:0/680: unlink d2/d4a/d58/d82/d60/lea 0
2026-03-09T20:47:59.321 INFO:tasks.workunit.client.1.vm10.stdout:9/745: dread d2/d3/f2f [0,4194304] 0
2026-03-09T20:47:59.322 INFO:tasks.workunit.client.1.vm10.stdout:2/692: mknod d5/d18/d27/d89/db6/d41/de4/ce7 0
2026-03-09T20:47:59.325 INFO:tasks.workunit.client.1.vm10.stdout:4/660: dread d1/d2/d5c/d64/d6b/d81/dac/d1b/f5f [0,4194304] 0
2026-03-09T20:47:59.327 INFO:tasks.workunit.client.1.vm10.stdout:6/715: mkdir d3/da/d11/d89/ddf 0
2026-03-09T20:47:59.329 INFO:tasks.workunit.client.1.vm10.stdout:3/654: dwrite dc/d14/d90/fc9 [0,4194304] 0
2026-03-09T20:47:59.329 INFO:tasks.workunit.client.0.vm07.stdout:1/774: dwrite d3/d14/d54/fcc [0,4194304] 0
2026-03-09T20:47:59.351 INFO:tasks.workunit.client.0.vm07.stdout:3/731: write d1/d5/d9/f33 [1596160,4674] 0
2026-03-09T20:47:59.354 INFO:tasks.workunit.client.1.vm10.stdout:8/758: dwrite d0/d92/de8/f6d [0,4194304] 0
2026-03-09T20:47:59.363 INFO:tasks.workunit.client.0.vm07.stdout:5/814: dwrite d5/d33/fb6 [0,4194304] 0
2026-03-09T20:47:59.364 INFO:tasks.workunit.client.1.vm10.stdout:7/699: link db/d21/c92 db/d1f/cda 0
2026-03-09T20:47:59.365 INFO:tasks.workunit.client.0.vm07.stdout:5/815: dread - d5/df/d13/d3e/de1/f10a zero size
2026-03-09T20:47:59.381 INFO:tasks.workunit.client.1.vm10.stdout:5/648: rename d2/d1b/f28 to d2/d1b/d54/d78/de6/ff7 0
2026-03-09T20:47:59.383 INFO:tasks.workunit.client.0.vm07.stdout:6/753: link d8/d16/d22/c30 d8/d16/d22/d24/da0/dab/d40/cf3 0
2026-03-09T20:47:59.388 INFO:tasks.workunit.client.1.vm10.stdout:2/693: symlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/le8 0
2026-03-09T20:47:59.388 INFO:tasks.workunit.client.1.vm10.stdout:2/694: stat d5/d18/f67 0
2026-03-09T20:47:59.391 INFO:tasks.workunit.client.0.vm07.stdout:5/816: dwrite d5/d33/fb6 [0,4194304] 0
2026-03-09T20:47:59.395 INFO:tasks.workunit.client.0.vm07.stdout:9/714: dwrite d4/d11/f88 [0,4194304] 0
2026-03-09T20:47:59.395 INFO:tasks.workunit.client.1.vm10.stdout:8/759: unlink d0/f7e 0
2026-03-09T20:47:59.400 INFO:tasks.workunit.client.0.vm07.stdout:0/774: mkdir d1/df6 0
2026-03-09T20:47:59.400 INFO:tasks.workunit.client.0.vm07.stdout:1/775: creat d3/d97/da1/dc5/d90/de8/f102 x:0 0 0
2026-03-09T20:47:59.401 INFO:tasks.workunit.client.0.vm07.stdout:3/732: mknod d1/d5/d9/d2f/d34/da5/ced 0
2026-03-09T20:47:59.407 INFO:tasks.workunit.client.0.vm07.stdout:6/754: rename d8/d16/f92 to d8/d26/d7d/dc8/ff4 0
2026-03-09T20:47:59.408 INFO:tasks.workunit.client.0.vm07.stdout:1/776: stat d3/d14/d54/d9b/cbf 0
2026-03-09T20:47:59.423 INFO:tasks.workunit.client.0.vm07.stdout:2/753: getdents d2/db/d28/d57 0
2026-03-09T20:47:59.424 INFO:tasks.workunit.client.0.vm07.stdout:9/715: chown d4/d8/l3c 41793904 1
2026-03-09T20:47:59.431 INFO:tasks.workunit.client.0.vm07.stdout:5/817: creat d5/d33/db2/de8/f11a x:0 0 0
2026-03-09T20:47:59.449 INFO:tasks.workunit.client.1.vm10.stdout:3/655: symlink dc/d14/d26/d29/lda 0
2026-03-09T20:47:59.458 INFO:tasks.workunit.client.1.vm10.stdout:7/700: symlink db/d28/d2b/dd0/ldb 0
2026-03-09T20:47:59.461 INFO:tasks.workunit.client.0.vm07.stdout:2/754: rename d2/d11/d56/cb5 to d2/d11/ddb/d72/d82/cee 0
2026-03-09T20:47:59.462 INFO:tasks.workunit.client.0.vm07.stdout:1/777: creat d3/d23/d67/f103 x:0 0 0
2026-03-09T20:47:59.464 INFO:tasks.workunit.client.0.vm07.stdout:4/683: getdents d2/d55/d5d/d3f/d4a 0
2026-03-09T20:47:59.466 INFO:tasks.workunit.client.1.vm10.stdout:8/760: symlink d0/lf0 0
2026-03-09T20:47:59.467 INFO:tasks.workunit.client.0.vm07.stdout:5/818: creat d5/df/d13/d6c/db1/f11b x:0 0 0
2026-03-09T20:47:59.473 INFO:tasks.workunit.client.1.vm10.stdout:7/701: creat db/d28/d4c/fdc x:0 0 0
2026-03-09T20:47:59.474 INFO:tasks.workunit.client.0.vm07.stdout:5/819: dread d5/d33/d39/d8d/dab/f5f [0,4194304] 0
2026-03-09T20:47:59.476 INFO:tasks.workunit.client.0.vm07.stdout:9/716: symlink d4/d16/d29/d24/d37/d8d/df3/l101 0
2026-03-09T20:47:59.479 INFO:tasks.workunit.client.1.vm10.stdout:5/649: creat d2/d1b/ff8 x:0 0 0
2026-03-09T20:47:59.479 INFO:tasks.workunit.client.0.vm07.stdout:1/778: symlink d3/d9c/l104 0
2026-03-09T20:47:59.481 INFO:tasks.workunit.client.0.vm07.stdout:8/695: truncate d1/d5d/d6f/f61 2803996 0
2026-03-09T20:47:59.481 INFO:tasks.workunit.client.0.vm07.stdout:8/696: dread - d1/dc/d16/fbe zero size
2026-03-09T20:47:59.482 INFO:tasks.workunit.client.0.vm07.stdout:8/697: chown d1/ld0 6651 1
2026-03-09T20:47:59.482 INFO:tasks.workunit.client.1.vm10.stdout:0/681: getdents d2/d4a/d58/d82/d71/d5d 0
2026-03-09T20:47:59.484 INFO:tasks.workunit.client.1.vm10.stdout:1/697: write d2/da/f88 [2082108,37294] 0
2026-03-09T20:47:59.485 INFO:tasks.workunit.client.1.vm10.stdout:5/650: dwrite d2/d39/dbf/fb6 [0,4194304] 0
2026-03-09T20:47:59.486 INFO:tasks.workunit.client.0.vm07.stdout:7/804: truncate d3/da/db/d32/d3e/dac/d43/fae 1067723 0
2026-03-09T20:47:59.500 INFO:tasks.workunit.client.0.vm07.stdout:7/805: dread - d3/da/d53/db7/dde/dc5/ffc zero size
2026-03-09T20:47:59.500 INFO:tasks.workunit.client.0.vm07.stdout:7/806: write d3/da/db/f27 [3879953,61903] 0
2026-03-09T20:47:59.500 INFO:tasks.workunit.client.0.vm07.stdout:0/775: write d1/d2/d4b/f70 [4300179,57723] 0
2026-03-09T20:47:59.500 INFO:tasks.workunit.client.1.vm10.stdout:1/698: chown d2/da/d25/d3e/d42/f7d 222 1
2026-03-09T20:47:59.500 INFO:tasks.workunit.client.1.vm10.stdout:4/661: dwrite d1/f94 [0,4194304] 0
2026-03-09T20:47:59.500 INFO:tasks.workunit.client.1.vm10.stdout:5/651: write d2/d39/d4b/d7a/fed [254754,41302] 0
2026-03-09T20:47:59.500 INFO:tasks.workunit.client.1.vm10.stdout:2/695: write d5/f16 [1249413,25822] 0
2026-03-09T20:47:59.500 INFO:tasks.workunit.client.1.vm10.stdout:6/716: dwrite d3/d30/d7f/d4a/f9a [0,4194304] 0
2026-03-09T20:47:59.513 INFO:tasks.workunit.client.1.vm10.stdout:9/746: dwrite d2/db8/ff8 [0,4194304] 0
2026-03-09T20:47:59.513 INFO:tasks.workunit.client.1.vm10.stdout:2/696: dread d5/d18/d27/d89/db6/d41/fc7 [0,4194304] 0
2026-03-09T20:47:59.516 INFO:tasks.workunit.client.0.vm07.stdout:9/717: dread d4/d11/f1a [0,4194304] 0
2026-03-09T20:47:59.518 INFO:tasks.workunit.client.0.vm07.stdout:9/718: stat f2 0
2026-03-09T20:47:59.519 INFO:tasks.workunit.client.1.vm10.stdout:8/761: truncate d0/d22/d2c/f32 2357018 0
2026-03-09T20:47:59.519 INFO:tasks.workunit.client.0.vm07.stdout:2/755: sync
2026-03-09T20:47:59.520 INFO:tasks.workunit.client.0.vm07.stdout:9/719: chown d4/d16/d78 188802 1
2026-03-09T20:47:59.521 INFO:tasks.workunit.client.0.vm07.stdout:7/807: dread d3/da/db/d32/d3e/dac/d1f/d2b/d52/f5e [0,4194304] 0
2026-03-09T20:47:59.522 INFO:tasks.workunit.client.1.vm10.stdout:7/702: symlink db/d28/d2b/d36/d40/d8a/ldd 0
2026-03-09T20:47:59.523 INFO:tasks.workunit.client.1.vm10.stdout:7/703: write db/d28/d2b/d36/d3b/d88/fd3 [467655,53504] 0
2026-03-09T20:47:59.528 INFO:tasks.workunit.client.1.vm10.stdout:8/762: dread d0/d92/de8/f43 [0,4194304] 0
2026-03-09T20:47:59.529 INFO:tasks.workunit.client.1.vm10.stdout:0/682: truncate d2/f9b 164846 0
2026-03-09T20:47:59.545 INFO:tasks.workunit.client.0.vm07.stdout:1/779: dread d3/d97/da1/dc5/d90/f96 [0,4194304] 0
2026-03-09T20:47:59.559 INFO:tasks.workunit.client.0.vm07.stdout:1/780: dwrite d3/d23/d67/d8a/ff4 [0,4194304] 0
2026-03-09T20:47:59.564 INFO:tasks.workunit.client.1.vm10.stdout:5/652: mknod d2/d27/d37/d46/d5d/d77/cf9 0
2026-03-09T20:47:59.569 INFO:tasks.workunit.client.0.vm07.stdout:3/733: dwrite d1/d5/d9/d2f/d34/f40 [4194304,4194304] 0
2026-03-09T20:47:59.574 INFO:tasks.workunit.client.0.vm07.stdout:3/734: stat d1/d5/d9/d11/l50 0
2026-03-09T20:47:59.576 INFO:tasks.workunit.client.1.vm10.stdout:1/699: creat d2/da/d25/d46/dbe/fe0 x:0 0 0
2026-03-09T20:47:59.598 INFO:tasks.workunit.client.1.vm10.stdout:9/747: dread - d2/d33/dcf/fe2 zero size
2026-03-09T20:47:59.612 INFO:tasks.workunit.client.1.vm10.stdout:2/697: symlink d5/d18/d27/d89/db6/d41/d77/db3/db5/le9 0
2026-03-09T20:47:59.620 INFO:tasks.workunit.client.0.vm07.stdout:9/720: creat d4/d16/d29/d24/d37/d44/d62/d8e/f102 x:0 0 0
2026-03-09T20:47:59.621 INFO:tasks.workunit.client.1.vm10.stdout:0/683: rmdir d2/d4a/d58/d82/d71 39
2026-03-09T20:47:59.626 INFO:tasks.workunit.client.1.vm10.stdout:5/653: dread d2/d39/dbf/d63/d95/fd7 [0,4194304] 0
2026-03-09T20:47:59.639 INFO:tasks.workunit.client.1.vm10.stdout:1/700: creat d2/da/d25/d3e/dca/da2/fe1 x:0 0 0
2026-03-09T20:47:59.645 INFO:tasks.workunit.client.1.vm10.stdout:6/717: creat d3/da/d11/d26/dcf/fe0 x:0 0 0
2026-03-09T20:47:59.652 INFO:tasks.workunit.client.0.vm07.stdout:1/781: dread - d3/d23/f5d zero size
2026-03-09T20:47:59.656 INFO:tasks.workunit.client.0.vm07.stdout:6/755: dwrite d8/d16/d22/d24/da0/dab/dc1/fcb [0,4194304] 0
2026-03-09T20:47:59.667 INFO:tasks.workunit.client.1.vm10.stdout:9/748: rename d2/d3/d6d/la0 to d2/d12/d5a/da7/lfd 0
2026-03-09T20:47:59.667 INFO:tasks.workunit.client.1.vm10.stdout:9/749: fsync d2/d3/d85/f8b 0
2026-03-09T20:47:59.672 INFO:tasks.workunit.client.1.vm10.stdout:9/750: dwrite d2/d3/d85/f8b [4194304,4194304] 0
2026-03-09T20:47:59.691 INFO:tasks.workunit.client.1.vm10.stdout:3/656: write dc/d14/d26/d29/f60 [7441901,101953] 0
2026-03-09T20:47:59.691 INFO:tasks.workunit.client.1.vm10.stdout:3/657: chown dc/fbb 441671486 1
2026-03-09T20:47:59.720 INFO:tasks.workunit.client.0.vm07.stdout:2/756: unlink d2/db/d28/l39 0
2026-03-09T20:47:59.723 INFO:tasks.workunit.client.0.vm07.stdout:7/808: chown d3/da/db/d79/dc3 1 1
2026-03-09T20:47:59.732 INFO:tasks.workunit.client.0.vm07.stdout:7/809: stat d3/da/db/d32/d3e/dac/c23 0
2026-03-09T20:47:59.733 INFO:tasks.workunit.client.0.vm07.stdout:2/757: sync
2026-03-09T20:47:59.734 INFO:tasks.workunit.client.0.vm07.stdout:7/810: chown d3/c5b 3475 1
2026-03-09T20:47:59.736 INFO:tasks.workunit.client.0.vm07.stdout:7/811: fdatasync d3/da/db/d32/f3d 0
2026-03-09T20:47:59.741 INFO:tasks.workunit.client.0.vm07.stdout:6/756: fsync d8/d5d/d97/dc4/fbe 0
2026-03-09T20:47:59.741 INFO:tasks.workunit.client.0.vm07.stdout:5/820: write d5/d33/d39/ff1 [611094,96821] 0
2026-03-09T20:47:59.744 INFO:tasks.workunit.client.0.vm07.stdout:4/684: dwrite d2/df/f23 [8388608,4194304] 0
2026-03-09T20:47:59.744 INFO:tasks.workunit.client.1.vm10.stdout:1/701: creat d2/da/d25/d46/d80/da0/d92/db5/dc7/fe2 x:0 0 0
2026-03-09T20:47:59.745 INFO:tasks.workunit.client.1.vm10.stdout:1/702: write d2/da/d25/d46/dbe/fe0 [461803,80022] 0
2026-03-09T20:47:59.759 INFO:tasks.workunit.client.1.vm10.stdout:6/718: dread d3/f4d [0,4194304] 0
2026-03-09T20:47:59.760 INFO:tasks.workunit.client.0.vm07.stdout:2/758: dwrite d2/dc8/fe3 [0,4194304] 0
2026-03-09T20:47:59.761 INFO:tasks.workunit.client.1.vm10.stdout:4/662: dwrite d1/d2/d5c/f53 [0,4194304] 0
2026-03-09T20:47:59.763 INFO:tasks.workunit.client.1.vm10.stdout:4/663: stat d1/d2/d3/d70/d99/la0 0
2026-03-09T20:47:59.764 INFO:tasks.workunit.client.0.vm07.stdout:8/698: write d1/dc/f4c [4410844,25659] 0
2026-03-09T20:47:59.765 INFO:tasks.workunit.client.1.vm10.stdout:3/658: fsync dc/d14/d26/d29/d40/da8/fc6 0
2026-03-09T20:47:59.779 INFO:tasks.workunit.client.0.vm07.stdout:6/757: rmdir d8/d16/d4b 39
2026-03-09T20:47:59.779 INFO:tasks.workunit.client.1.vm10.stdout:2/698: dwrite d5/d18/f24 [0,4194304] 0
2026-03-09T20:47:59.779 INFO:tasks.workunit.client.0.vm07.stdout:6/758: chown d8/d16/dcd 0 1
2026-03-09T20:47:59.780 INFO:tasks.workunit.client.0.vm07.stdout:4/685: chown d2/d55/d5d/d3f/d4a/d7d/c8f 117 1
2026-03-09T20:47:59.785 INFO:tasks.workunit.client.0.vm07.stdout:0/776: getdents d1/d1f/dc3 0
2026-03-09T20:47:59.785 INFO:tasks.workunit.client.1.vm10.stdout:7/704: truncate db/d21/f81 3981877 0
2026-03-09T20:47:59.786 INFO:tasks.workunit.client.0.vm07.stdout:6/759: sync
2026-03-09T20:47:59.792 INFO:tasks.workunit.client.0.vm07.stdout:3/735: dwrite d1/d5/d9/d11/d6d/dd0/d43/f90 [0,4194304] 0
2026-03-09T20:47:59.803 INFO:tasks.workunit.client.0.vm07.stdout:9/721: rename d4/d16/c79 to d4/d8/d19/d5f/da5/c103 0
2026-03-09T20:47:59.822 INFO:tasks.workunit.client.0.vm07.stdout:5/821: fsync d5/d33/d75/ffd 0
2026-03-09T20:47:59.824 INFO:tasks.workunit.client.0.vm07.stdout:5/822: chown d5/df/d13/d6c/db1/dcc/cf5 14808561 1
2026-03-09T20:47:59.826 INFO:tasks.workunit.client.0.vm07.stdout:2/759: dread d2/db/f76 [0,4194304] 0
2026-03-09T20:47:59.831 INFO:tasks.workunit.client.0.vm07.stdout:7/812: truncate d3/da/f26 2441257 0
2026-03-09T20:47:59.835 INFO:tasks.workunit.client.0.vm07.stdout:3/736: dwrite d1/d5/d9/d11/d6d/dd0/d59/fd1 [0,4194304] 0
2026-03-09T20:47:59.839 INFO:tasks.workunit.client.0.vm07.stdout:6/760: truncate d8/d16/f18 8078171 0
2026-03-09T20:47:59.841 INFO:tasks.workunit.client.0.vm07.stdout:0/777: chown d1/d1f/cbc 67641406 1
2026-03-09T20:47:59.846 INFO:tasks.workunit.client.1.vm10.stdout:9/751: getdents d2/d28/d47/d50/dd1 0
2026-03-09T20:47:59.847 INFO:tasks.workunit.client.1.vm10.stdout:9/752: truncate d2/d3/de/f42 5465216 0
2026-03-09T20:47:59.848 INFO:tasks.workunit.client.1.vm10.stdout:3/659: chown dc/d14/d90/c92 15313 1
2026-03-09T20:47:59.851 INFO:tasks.workunit.client.1.vm10.stdout:2/699: mkdir d5/d18/d27/d38/d61/dc8/ddb/dea 0
2026-03-09T20:47:59.853 INFO:tasks.workunit.client.1.vm10.stdout:8/763: getdents d0/d22/d25/d2e/d41/d85/db9 0
2026-03-09T20:47:59.853 INFO:tasks.workunit.client.1.vm10.stdout:1/703: fsync d2/da/f26 0
2026-03-09T20:47:59.859 INFO:tasks.workunit.client.1.vm10.stdout:8/764: dwrite d0/d22/d25/d2e/f33 [0,4194304] 0
2026-03-09T20:47:59.859 INFO:tasks.workunit.client.1.vm10.stdout:6/719: unlink d3/d30/d7f/f28 0
2026-03-09T20:47:59.863 INFO:tasks.workunit.client.1.vm10.stdout:3/660: truncate f6 3017372 0
2026-03-09T20:47:59.865 INFO:tasks.workunit.client.1.vm10.stdout:3/661: chown dc/d14/d22/c7c 58 1
2026-03-09T20:47:59.866 INFO:tasks.workunit.client.0.vm07.stdout:6/761: mknod d8/d16/da3/cf5 0
2026-03-09T20:47:59.869 INFO:tasks.workunit.client.1.vm10.stdout:3/662: fsync dc/d14/d22/fbf 0
2026-03-09T20:47:59.870 INFO:tasks.workunit.client.0.vm07.stdout:3/737: dwrite d1/d5/d9/d11/f4d [0,4194304] 0
2026-03-09T20:47:59.871 INFO:tasks.workunit.client.0.vm07.stdout:0/778: symlink d1/d2/dc/db1/lf7 0
2026-03-09T20:47:59.873 INFO:tasks.workunit.client.1.vm10.stdout:2/700: mkdir d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/deb 0
2026-03-09T20:47:59.877 INFO:tasks.workunit.client.1.vm10.stdout:4/664: getdents d1/d2/d5c/d64/d6b/d81/dac/d1c/d69 0
2026-03-09T20:47:59.881 INFO:tasks.workunit.client.1.vm10.stdout:4/665: fsync d1/d2/d5c/d64/d6b/d81/dac/f29 0
2026-03-09T20:47:59.883 INFO:tasks.workunit.client.1.vm10.stdout:8/765: dread d0/f11 [0,4194304] 0
2026-03-09T20:47:59.883 INFO:tasks.workunit.client.1.vm10.stdout:8/766: chown d0/d92/fb3 62903 1
2026-03-09T20:47:59.890 INFO:tasks.workunit.client.0.vm07.stdout:3/738: sync
2026-03-09T20:47:59.893 INFO:tasks.workunit.client.1.vm10.stdout:1/704: rename d2/f21 to d2/da/d25/d3e/fe3 0
2026-03-09T20:47:59.901 INFO:tasks.workunit.client.0.vm07.stdout:9/722: creat d4/d8/d19/d5f/da5/f104 x:0 0 0
2026-03-09T20:47:59.901 INFO:tasks.workunit.client.0.vm07.stdout:6/762: creat d8/d16/d22/d24/da0/daa/ff6 x:0 0 0
2026-03-09T20:47:59.902 INFO:tasks.workunit.client.0.vm07.stdout:7/813: dread d3/da/db/d32/d3e/dac/d43/d62/db1/ff0 [0,4194304] 0
2026-03-09T20:47:59.903 INFO:tasks.workunit.client.0.vm07.stdout:9/723: chown d4/d8/d19/d89/da7/ddd 30165003 1
2026-03-09T20:47:59.905 INFO:tasks.workunit.client.0.vm07.stdout:9/724: read d4/d8/dc/ff [41279,121175] 0
2026-03-09T20:47:59.906 INFO:tasks.workunit.client.0.vm07.stdout:2/760: creat d2/d11/fef x:0 0 0
2026-03-09T20:47:59.908 INFO:tasks.workunit.client.0.vm07.stdout:5/823: dread d5/d69/f82 [0,4194304] 0
2026-03-09T20:47:59.908 INFO:tasks.workunit.client.0.vm07.stdout:0/779: rmdir d1/d1f/d9f 39
2026-03-09T20:47:59.908
INFO:tasks.workunit.client.1.vm10.stdout:4/666: dread d1/d2/d5c/d64/d6b/d81/da9/fc7 [0,4194304] 0 2026-03-09T20:47:59.909 INFO:tasks.workunit.client.1.vm10.stdout:3/663: rename dc/d14/lc5 to dc/d14/d20/d21/daf/ldb 0 2026-03-09T20:47:59.910 INFO:tasks.workunit.client.1.vm10.stdout:4/667: chown d1/d2/d5c/d64/d6b/d81/dac/d39/f4b 458539 1 2026-03-09T20:47:59.911 INFO:tasks.workunit.client.0.vm07.stdout:3/739: dread - d1/d5/fc5 zero size 2026-03-09T20:47:59.916 INFO:tasks.workunit.client.1.vm10.stdout:0/684: dread d2/d9/da/d35/f3a [0,4194304] 0 2026-03-09T20:47:59.921 INFO:tasks.workunit.client.1.vm10.stdout:1/705: mknod d2/ce4 0 2026-03-09T20:47:59.930 INFO:tasks.workunit.client.0.vm07.stdout:9/725: fdatasync d4/d8/d19/d89/f93 0 2026-03-09T20:47:59.931 INFO:tasks.workunit.client.1.vm10.stdout:4/668: sync 2026-03-09T20:47:59.933 INFO:tasks.workunit.client.0.vm07.stdout:9/726: dwrite d4/d8/dc/dbb/fff [0,4194304] 0 2026-03-09T20:47:59.944 INFO:tasks.workunit.client.1.vm10.stdout:2/701: rename d5/d18/f83 to d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/ddc/fec 0 2026-03-09T20:47:59.945 INFO:tasks.workunit.client.0.vm07.stdout:1/782: write d3/d14/f4d [966937,78559] 0 2026-03-09T20:47:59.947 INFO:tasks.workunit.client.0.vm07.stdout:2/761: symlink d2/d11/ddb/d6e/dbe/d96/lf0 0 2026-03-09T20:47:59.948 INFO:tasks.workunit.client.1.vm10.stdout:5/654: dwrite d2/d39/dbf/fb6 [4194304,4194304] 0 2026-03-09T20:47:59.948 INFO:tasks.workunit.client.0.vm07.stdout:2/762: chown d2/d11/ddb/db0/db3 1684 1 2026-03-09T20:47:59.959 INFO:tasks.workunit.client.1.vm10.stdout:6/720: link d3/d30/d7f/d36/d5c/daa/fae d3/da/d11/d89/db9/dd1/dd2/dc3/fe1 0 2026-03-09T20:47:59.959 INFO:tasks.workunit.client.1.vm10.stdout:6/721: chown d3/da/fd 544 1 2026-03-09T20:47:59.960 INFO:tasks.workunit.client.1.vm10.stdout:1/706: rmdir d2/da/d25/d46/d51 39 2026-03-09T20:47:59.960 INFO:tasks.workunit.client.1.vm10.stdout:6/722: chown d3/d30/d7f/d36/d5c/d8d 103982448 1 2026-03-09T20:47:59.961 
INFO:tasks.workunit.client.1.vm10.stdout:4/669: fsync d1/d2/d5c/d64/d6b/d81/dac/d1b/f24 0 2026-03-09T20:47:59.962 INFO:tasks.workunit.client.0.vm07.stdout:7/814: fsync d3/d58/dc1/fc8 0 2026-03-09T20:47:59.982 INFO:tasks.workunit.client.1.vm10.stdout:4/670: dread d1/d2/d5c/d64/d61/f62 [0,4194304] 0 2026-03-09T20:47:59.982 INFO:tasks.workunit.client.1.vm10.stdout:6/723: creat d3/da/d11/d31/d47/d87/fe2 x:0 0 0 2026-03-09T20:47:59.982 INFO:tasks.workunit.client.0.vm07.stdout:1/783: creat d3/d9c/f105 x:0 0 0 2026-03-09T20:47:59.982 INFO:tasks.workunit.client.0.vm07.stdout:1/784: truncate d3/d97/da1/dc5/f99 4884410 0 2026-03-09T20:47:59.982 INFO:tasks.workunit.client.0.vm07.stdout:6/763: creat d8/d16/d22/d9b/de4/ff7 x:0 0 0 2026-03-09T20:47:59.982 INFO:tasks.workunit.client.0.vm07.stdout:7/815: symlink d3/da4/l111 0 2026-03-09T20:47:59.982 INFO:tasks.workunit.client.0.vm07.stdout:3/740: rename d1/f36 to d1/d5/d9/d11/d6d/fee 0 2026-03-09T20:47:59.991 INFO:tasks.workunit.client.1.vm10.stdout:4/671: rename d1/d2/d5c/d64/d6b/d81/dac/d1b/c6d to d1/d47/cd3 0 2026-03-09T20:48:00.003 INFO:tasks.workunit.client.0.vm07.stdout:0/780: getdents d1/d2 0 2026-03-09T20:48:00.004 INFO:tasks.workunit.client.1.vm10.stdout:5/655: mkdir d2/d39/dbf/d69/de9/dfa 0 2026-03-09T20:48:00.008 INFO:tasks.workunit.client.1.vm10.stdout:3/664: getdents dc/d14/d20/d2e/d56 0 2026-03-09T20:48:00.009 INFO:tasks.workunit.client.1.vm10.stdout:4/672: creat d1/d2/d5c/fd4 x:0 0 0 2026-03-09T20:48:00.009 INFO:tasks.workunit.client.0.vm07.stdout:2/763: getdents d2/db/d1c/d4a 0 2026-03-09T20:48:00.015 INFO:tasks.workunit.client.0.vm07.stdout:0/781: dwrite d1/d2/dc/d17/da6/ff1 [0,4194304] 0 2026-03-09T20:48:00.018 INFO:tasks.workunit.client.0.vm07.stdout:0/782: chown d1/d2/dc/d17/da6/cf3 7884097 1 2026-03-09T20:48:00.021 INFO:tasks.workunit.client.1.vm10.stdout:3/665: dread dc/d14/d26/d8f/fb8 [0,4194304] 0 2026-03-09T20:48:00.027 INFO:tasks.workunit.client.0.vm07.stdout:9/727: rename d4/d8/d19/d5f/d73/f97 to 
d4/d11/f105 0 2026-03-09T20:48:00.027 INFO:tasks.workunit.client.0.vm07.stdout:2/764: mknod d2/db/d49/d7d/d85/dde/cf1 0 2026-03-09T20:48:00.029 INFO:tasks.workunit.client.1.vm10.stdout:4/673: dread d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f44 [0,4194304] 0 2026-03-09T20:48:00.031 INFO:tasks.workunit.client.1.vm10.stdout:4/674: readlink d1/d2/d5c/d64/d6b/d81/dac/d39/l4e 0 2026-03-09T20:48:00.040 INFO:tasks.workunit.client.1.vm10.stdout:5/656: symlink d2/d1b/lfb 0 2026-03-09T20:48:00.047 INFO:tasks.workunit.client.0.vm07.stdout:4/686: write d2/d55/d5d/d3f/d4a/d85/f8c [4754950,66000] 0 2026-03-09T20:48:00.073 INFO:tasks.workunit.client.1.vm10.stdout:3/666: mkdir dc/d14/d26/d29/d2a/ddc 0 2026-03-09T20:48:00.073 INFO:tasks.workunit.client.0.vm07.stdout:2/765: fsync d2/db/d28/d57/f75 0 2026-03-09T20:48:00.076 INFO:tasks.workunit.client.0.vm07.stdout:0/783: mkdir d1/d1f/d9f/df8 0 2026-03-09T20:48:00.077 INFO:tasks.workunit.client.0.vm07.stdout:0/784: chown d1/d2/d4b/la8 30714 1 2026-03-09T20:48:00.082 INFO:tasks.workunit.client.1.vm10.stdout:3/667: mkdir dc/d14/d26/d8f/ddd 0 2026-03-09T20:48:00.083 INFO:tasks.workunit.client.1.vm10.stdout:5/657: dwrite d2/d39/dbf/d63/d95/fd7 [0,4194304] 0 2026-03-09T20:48:00.089 INFO:tasks.workunit.client.1.vm10.stdout:5/658: dwrite d2/d39/d4b/d7a/ff0 [0,4194304] 0 2026-03-09T20:48:00.090 INFO:tasks.workunit.client.0.vm07.stdout:0/785: unlink d1/d1f/d53/lb0 0 2026-03-09T20:48:00.094 INFO:tasks.workunit.client.0.vm07.stdout:4/687: mkdir d2/d55/d5d/d93/dbe 0 2026-03-09T20:48:00.105 INFO:tasks.workunit.client.1.vm10.stdout:9/753: dwrite d2/d28/f79 [0,4194304] 0 2026-03-09T20:48:00.111 INFO:tasks.workunit.client.0.vm07.stdout:6/764: rename d8/d16/d22/d24/da0/daa to d8/d16/d22/d9b/de4/d85/df8 0 2026-03-09T20:48:00.115 INFO:tasks.workunit.client.1.vm10.stdout:8/767: write d0/d22/f35 [5029630,13584] 0 2026-03-09T20:48:00.121 INFO:tasks.workunit.client.0.vm07.stdout:5/824: write d5/d69/fc5 [4558572,27897] 0 2026-03-09T20:48:00.134 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:59 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:00.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:59 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:00.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:47:59 vm07.local ceph-mon[49120]: pgmap v6: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail 2026-03-09T20:48:00.143 INFO:tasks.workunit.client.1.vm10.stdout:0/685: write d2/d9/da/d48/fb9 [511970,65910] 0 2026-03-09T20:48:00.144 INFO:tasks.workunit.client.1.vm10.stdout:5/659: dread - d2/d39/dbf/d63/fbe zero size 2026-03-09T20:48:00.147 INFO:tasks.workunit.client.1.vm10.stdout:1/707: dwrite d2/da/d25/d3e/f94 [4194304,4194304] 0 2026-03-09T20:48:00.148 INFO:tasks.workunit.client.1.vm10.stdout:4/675: getdents d1/d2/d3 0 2026-03-09T20:48:00.149 INFO:tasks.workunit.client.1.vm10.stdout:2/702: dwrite d5/d18/d27/d89/db6/d41/f6e [0,4194304] 0 2026-03-09T20:48:00.150 INFO:tasks.workunit.client.0.vm07.stdout:1/785: dwrite d3/d23/f37 [0,4194304] 0 2026-03-09T20:48:00.159 INFO:tasks.workunit.client.0.vm07.stdout:4/688: truncate d2/f3 327513 0 2026-03-09T20:48:00.168 INFO:tasks.workunit.client.0.vm07.stdout:7/816: write d3/da/d53/db7/dde/dc5/fec [536043,121121] 0 2026-03-09T20:48:00.181 INFO:tasks.workunit.client.0.vm07.stdout:3/741: write d1/d5/d9/d2f/d3d/d71/fb0 [326679,56229] 0 2026-03-09T20:48:00.181 INFO:tasks.workunit.client.0.vm07.stdout:3/742: dread - d1/d5/fc5 zero size 2026-03-09T20:48:00.186 INFO:tasks.workunit.client.1.vm10.stdout:6/724: dwrite d3/da/d11/d89/db9/dd1/dd2/d60/fb1 [0,4194304] 0 2026-03-09T20:48:00.189 INFO:tasks.workunit.client.1.vm10.stdout:5/660: truncate d2/d27/d75/d81/fd0 153223 0 2026-03-09T20:48:00.196 INFO:tasks.workunit.client.1.vm10.stdout:4/676: creat d1/d47/db9/fd5 x:0 0 0 2026-03-09T20:48:00.201 
INFO:tasks.workunit.client.1.vm10.stdout:3/668: rmdir dc/d14/d20/d21/daf/dc1 0 2026-03-09T20:48:00.205 INFO:tasks.workunit.client.1.vm10.stdout:6/725: sync 2026-03-09T20:48:00.209 INFO:tasks.workunit.client.0.vm07.stdout:5/825: mkdir d5/df/d13/d4f/d11c 0 2026-03-09T20:48:00.215 INFO:tasks.workunit.client.1.vm10.stdout:7/705: dread db/d28/d2b/d36/d3f/f6f [0,4194304] 0 2026-03-09T20:48:00.220 INFO:tasks.workunit.client.0.vm07.stdout:1/786: mkdir d3/d97/da1/dc5/d90/dd3/d106 0 2026-03-09T20:48:00.230 INFO:tasks.workunit.client.1.vm10.stdout:1/708: dread d2/da/d25/f78 [0,4194304] 0 2026-03-09T20:48:00.233 INFO:tasks.workunit.client.0.vm07.stdout:4/689: dread d2/df/d17/f6a [0,4194304] 0 2026-03-09T20:48:00.235 INFO:tasks.workunit.client.1.vm10.stdout:2/703: rename d5/d18/d1b/d22/l8f to d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/led 0 2026-03-09T20:48:00.247 INFO:tasks.workunit.client.1.vm10.stdout:3/669: mkdir dc/d14/d26/d29/d40/da8/dde 0 2026-03-09T20:48:00.247 INFO:tasks.workunit.client.1.vm10.stdout:5/661: rmdir d2/d39/d4b/dde 0 2026-03-09T20:48:00.254 INFO:tasks.workunit.client.0.vm07.stdout:8/699: dread d1/d5d/d6f/d2f/d53/f5f [0,4194304] 0 2026-03-09T20:48:00.260 INFO:tasks.workunit.client.1.vm10.stdout:4/677: rename d1/d67/f9a to d1/d2/d5c/fd6 0 2026-03-09T20:48:00.270 INFO:tasks.workunit.client.0.vm07.stdout:5/826: read d5/df/d13/f1f [3518221,59007] 0 2026-03-09T20:48:00.270 INFO:tasks.workunit.client.0.vm07.stdout:7/817: fsync d3/da/db/d79/fd1 0 2026-03-09T20:48:00.270 INFO:tasks.workunit.client.0.vm07.stdout:7/818: dread - d3/d58/d82/f100 zero size 2026-03-09T20:48:00.278 INFO:tasks.workunit.client.1.vm10.stdout:2/704: write d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/fb2 [601065,77782] 0 2026-03-09T20:48:00.279 INFO:tasks.workunit.client.1.vm10.stdout:2/705: stat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/fe3 0 2026-03-09T20:48:00.280 INFO:tasks.workunit.client.1.vm10.stdout:2/706: readlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/l3a 0 
2026-03-09T20:48:00.285 INFO:tasks.workunit.client.0.vm07.stdout:1/787: truncate d3/d14/f30 10668 0 2026-03-09T20:48:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:59 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:59 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:47:59 vm10.local ceph-mon[57011]: pgmap v6: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail 2026-03-09T20:48:00.294 INFO:tasks.workunit.client.0.vm07.stdout:9/728: write d4/d8/d19/d5f/dcf/ffe [2061910,100754] 0 2026-03-09T20:48:00.295 INFO:tasks.workunit.client.0.vm07.stdout:9/729: chown d4/d16/d78/dc4/ff1 12892 1 2026-03-09T20:48:00.295 INFO:tasks.workunit.client.0.vm07.stdout:1/788: dread d3/d23/d52/f73 [0,4194304] 0 2026-03-09T20:48:00.300 INFO:tasks.workunit.client.0.vm07.stdout:5/827: unlink d5/df/d13/d30/f36 0 2026-03-09T20:48:00.301 INFO:tasks.workunit.client.1.vm10.stdout:4/678: mkdir d1/d2/d3/d54/dd7 0 2026-03-09T20:48:00.302 INFO:tasks.workunit.client.1.vm10.stdout:4/679: chown d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f75 7 1 2026-03-09T20:48:00.308 INFO:tasks.workunit.client.0.vm07.stdout:2/766: dwrite d2/f3e [4194304,4194304] 0 2026-03-09T20:48:00.309 INFO:tasks.workunit.client.0.vm07.stdout:2/767: chown d2/d11/ddb/db0/cd0 1 1 2026-03-09T20:48:00.312 INFO:tasks.workunit.client.0.vm07.stdout:0/786: write d1/d2/dc/d80/fbe [4591123,18454] 0 2026-03-09T20:48:00.315 INFO:tasks.workunit.client.1.vm10.stdout:9/754: write d2/d28/da2/fcc [685746,56869] 0 2026-03-09T20:48:00.316 INFO:tasks.workunit.client.0.vm07.stdout:6/765: dwrite d8/d16/f23 [0,4194304] 0 2026-03-09T20:48:00.320 INFO:tasks.workunit.client.1.vm10.stdout:8/768: dwrite d0/d92/de8/f43 [4194304,4194304] 0 2026-03-09T20:48:00.322 
INFO:tasks.workunit.client.1.vm10.stdout:8/769: chown d0/lf0 1287583 1 2026-03-09T20:48:00.335 INFO:tasks.workunit.client.1.vm10.stdout:7/706: creat db/d28/d2b/d36/d3b/fde x:0 0 0 2026-03-09T20:48:00.339 INFO:tasks.workunit.client.0.vm07.stdout:7/819: mkdir d3/da/d53/db7/dde/d96/d112 0 2026-03-09T20:48:00.342 INFO:tasks.workunit.client.1.vm10.stdout:0/686: truncate d2/d9/da/d11/f1f 1019630 0 2026-03-09T20:48:00.352 INFO:tasks.workunit.client.1.vm10.stdout:2/707: truncate d5/f59 277465 0 2026-03-09T20:48:00.353 INFO:tasks.workunit.client.1.vm10.stdout:2/708: dread - d5/d18/d27/d89/db6/f7e zero size 2026-03-09T20:48:00.356 INFO:tasks.workunit.client.1.vm10.stdout:2/709: dwrite d5/f16 [0,4194304] 0 2026-03-09T20:48:00.359 INFO:tasks.workunit.client.0.vm07.stdout:3/743: creat d1/d5/d9/d2f/d3d/fef x:0 0 0 2026-03-09T20:48:00.359 INFO:tasks.workunit.client.1.vm10.stdout:2/710: read - d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/fdd zero size 2026-03-09T20:48:00.359 INFO:tasks.workunit.client.1.vm10.stdout:6/726: write d3/d30/d7f/d24/d39/f6c [4134077,78480] 0 2026-03-09T20:48:00.359 INFO:tasks.workunit.client.0.vm07.stdout:8/700: creat d1/db0/fe0 x:0 0 0 2026-03-09T20:48:00.383 INFO:tasks.workunit.client.0.vm07.stdout:9/730: rmdir d4/d16/d29/d24/d37 39 2026-03-09T20:48:00.385 INFO:tasks.workunit.client.1.vm10.stdout:1/709: dwrite d2/da/d25/d46/f61 [0,4194304] 0 2026-03-09T20:48:00.388 INFO:tasks.workunit.client.0.vm07.stdout:4/690: write d2/d55/d5d/d3f/d4a/d85/fa0 [86403,39566] 0 2026-03-09T20:48:00.389 INFO:tasks.workunit.client.0.vm07.stdout:4/691: read - d2/d55/d5d/d93/fb8 zero size 2026-03-09T20:48:00.390 INFO:tasks.workunit.client.0.vm07.stdout:4/692: stat d2/df/l32 0 2026-03-09T20:48:00.392 INFO:tasks.workunit.client.0.vm07.stdout:1/789: creat d3/dc6/f107 x:0 0 0 2026-03-09T20:48:00.399 INFO:tasks.workunit.client.1.vm10.stdout:5/662: dwrite d2/d58/d6c/fc4 [0,4194304] 0 2026-03-09T20:48:00.415 INFO:tasks.workunit.client.1.vm10.stdout:3/670: dwrite 
dc/d14/d26/d29/d2a/f66 [4194304,4194304] 0 2026-03-09T20:48:00.417 INFO:tasks.workunit.client.1.vm10.stdout:3/671: write dc/d14/d20/d2e/d56/f15 [2368514,29378] 0 2026-03-09T20:48:00.435 INFO:tasks.workunit.client.0.vm07.stdout:7/820: fdatasync d3/da/d53/db7/dde/f84 0 2026-03-09T20:48:00.435 INFO:tasks.workunit.client.0.vm07.stdout:3/744: creat d1/d5/d9/d2f/d3d/d71/ff0 x:0 0 0 2026-03-09T20:48:00.438 INFO:tasks.workunit.client.1.vm10.stdout:8/770: mkdir d0/dd1/df1 0 2026-03-09T20:48:00.439 INFO:tasks.workunit.client.0.vm07.stdout:1/790: mknod d3/d97/da1/dc5/d90/de8/c108 0 2026-03-09T20:48:00.449 INFO:tasks.workunit.client.0.vm07.stdout:2/768: rename d2/db/d28/d5c/cba to d2/d11/ddb/db0/db3/cf2 0 2026-03-09T20:48:00.450 INFO:tasks.workunit.client.0.vm07.stdout:1/791: dread d3/d9c/fd2 [0,4194304] 0 2026-03-09T20:48:00.457 INFO:tasks.workunit.client.1.vm10.stdout:3/672: creat dc/d14/d20/d21/fdf x:0 0 0 2026-03-09T20:48:00.472 INFO:tasks.workunit.client.0.vm07.stdout:6/766: mknod d8/d16/d4b/d88/dc3/cf9 0 2026-03-09T20:48:00.472 INFO:tasks.workunit.client.1.vm10.stdout:0/687: fsync d2/d4a/d58/d82/d71/d5d/f5f 0 2026-03-09T20:48:00.472 INFO:tasks.workunit.client.1.vm10.stdout:2/711: creat d5/d18/d27/db4/fee x:0 0 0 2026-03-09T20:48:00.472 INFO:tasks.workunit.client.1.vm10.stdout:4/680: rename d1/d2/d5c/d64/d6b/d81/dac/d1b/da1 to d1/dd8 0 2026-03-09T20:48:00.504 INFO:tasks.workunit.client.1.vm10.stdout:8/771: unlink d0/d22/d25/d2e/d41/d85/db9/fd4 0 2026-03-09T20:48:00.513 INFO:tasks.workunit.client.1.vm10.stdout:3/673: mkdir dc/d14/d26/d29/d40/da2/de0 0 2026-03-09T20:48:00.515 INFO:tasks.workunit.client.1.vm10.stdout:9/755: write d2/d33/fb3 [53330,101887] 0 2026-03-09T20:48:00.521 INFO:tasks.workunit.client.0.vm07.stdout:5/828: write d5/df/f2b [8157063,7816] 0 2026-03-09T20:48:00.521 INFO:tasks.workunit.client.1.vm10.stdout:7/707: dwrite db/d28/fa5 [0,4194304] 0 2026-03-09T20:48:00.526 INFO:tasks.workunit.client.0.vm07.stdout:0/787: dwrite d1/d2/d4b/f61 [0,4194304] 0 
2026-03-09T20:48:00.527 INFO:tasks.workunit.client.0.vm07.stdout:0/788: readlink d1/d2/dc/db1/lf7 0 2026-03-09T20:48:00.529 INFO:tasks.workunit.client.1.vm10.stdout:6/727: dwrite d3/d30/d7f/d36/d5c/f78 [0,4194304] 0 2026-03-09T20:48:00.533 INFO:tasks.workunit.client.0.vm07.stdout:9/731: write f2 [8856903,107956] 0 2026-03-09T20:48:00.537 INFO:tasks.workunit.client.1.vm10.stdout:6/728: dwrite d3/d30/d7f/f18 [4194304,4194304] 0 2026-03-09T20:48:00.538 INFO:tasks.workunit.client.0.vm07.stdout:2/769: creat d2/db/d28/d57/ff3 x:0 0 0 2026-03-09T20:48:00.538 INFO:tasks.workunit.client.0.vm07.stdout:2/770: readlink d2/d11/l3f 0 2026-03-09T20:48:00.539 INFO:tasks.workunit.client.1.vm10.stdout:6/729: read - d3/da/d11/d26/fdb zero size 2026-03-09T20:48:00.543 INFO:tasks.workunit.client.1.vm10.stdout:0/688: creat d2/d4a/d58/d82/d71/d5d/ff2 x:0 0 0 2026-03-09T20:48:00.544 INFO:tasks.workunit.client.1.vm10.stdout:0/689: readlink d2/d9/da/d11/dd1/d34/l3c 0 2026-03-09T20:48:00.547 INFO:tasks.workunit.client.0.vm07.stdout:1/792: mkdir d3/d23/d109 0 2026-03-09T20:48:00.550 INFO:tasks.workunit.client.0.vm07.stdout:3/745: write d1/d5/d9/d2f/d34/d46/d5d/fb8 [491415,75417] 0 2026-03-09T20:48:00.557 INFO:tasks.workunit.client.0.vm07.stdout:7/821: write d3/da/db/d32/d3e/dac/d1f/f37 [921046,88880] 0 2026-03-09T20:48:00.560 INFO:tasks.workunit.client.0.vm07.stdout:8/701: dwrite d1/dc/f29 [0,4194304] 0 2026-03-09T20:48:00.563 INFO:tasks.workunit.client.0.vm07.stdout:6/767: rename d8/d16/d22/d9b/de4/d85/f3c to d8/d16/d61/ffa 0 2026-03-09T20:48:00.566 INFO:tasks.workunit.client.1.vm10.stdout:1/710: write d2/da/d25/d46/d51/d5d/d6e/f76 [1346114,96860] 0 2026-03-09T20:48:00.567 INFO:tasks.workunit.client.1.vm10.stdout:1/711: write d2/da/d25/d46/d80/da0/d92/db5/dc7/fcf [234053,7860] 0 2026-03-09T20:48:00.571 INFO:tasks.workunit.client.1.vm10.stdout:5/663: rename d2/d27/f2d to d2/d39/d4b/d7a/ffc 0 2026-03-09T20:48:00.579 INFO:tasks.workunit.client.1.vm10.stdout:8/772: write d0/f14 [7734159,15706] 0 
2026-03-09T20:48:00.594 INFO:tasks.workunit.client.0.vm07.stdout:9/732: fdatasync d4/f17 0 2026-03-09T20:48:00.600 INFO:tasks.workunit.client.0.vm07.stdout:2/771: symlink d2/db/d1c/lf4 0 2026-03-09T20:48:00.604 INFO:tasks.workunit.client.0.vm07.stdout:1/793: chown d3/d97/da1/fbb 253435817 1 2026-03-09T20:48:00.608 INFO:tasks.workunit.client.1.vm10.stdout:0/690: rmdir d2/d9/da/d11/d92 39 2026-03-09T20:48:00.609 INFO:tasks.workunit.client.1.vm10.stdout:0/691: write d2/d4a/d58/d82/d93/fbc [5049061,23233] 0 2026-03-09T20:48:00.613 INFO:tasks.workunit.client.0.vm07.stdout:3/746: fsync d1/d5/d9/d2f/d3d/f75 0 2026-03-09T20:48:00.621 INFO:tasks.workunit.client.1.vm10.stdout:4/681: rename d1/l6f to d1/d2/d5c/d64/d61/ld9 0 2026-03-09T20:48:00.621 INFO:tasks.workunit.client.1.vm10.stdout:4/682: chown d1/d2/d3/c8c 25095 1 2026-03-09T20:48:00.629 INFO:tasks.workunit.client.0.vm07.stdout:8/702: dread d1/fb5 [0,4194304] 0 2026-03-09T20:48:00.636 INFO:tasks.workunit.client.1.vm10.stdout:8/773: mknod d0/d54/cf2 0 2026-03-09T20:48:00.661 INFO:tasks.workunit.client.1.vm10.stdout:0/692: truncate d2/d4a/d58/d82/d71/f38 4404226 0 2026-03-09T20:48:00.661 INFO:tasks.workunit.client.0.vm07.stdout:1/794: creat d3/d97/da1/dc5/d60/d9f/f10a x:0 0 0 2026-03-09T20:48:00.662 INFO:tasks.workunit.client.0.vm07.stdout:2/772: dread d2/db/d28/d57/f68 [0,4194304] 0 2026-03-09T20:48:00.663 INFO:tasks.workunit.client.0.vm07.stdout:2/773: read - d2/db/d28/d57/ff3 zero size 2026-03-09T20:48:00.667 INFO:tasks.workunit.client.0.vm07.stdout:3/747: creat d1/d5/d9/d2f/d34/d46/d5d/ff1 x:0 0 0 2026-03-09T20:48:00.668 INFO:tasks.workunit.client.0.vm07.stdout:3/748: dread - d1/d5/d9/d11/d6d/dd0/d43/fe5 zero size 2026-03-09T20:48:00.671 INFO:tasks.workunit.client.0.vm07.stdout:2/774: dwrite d2/d11/ddb/d6e/dbe/fe8 [0,4194304] 0 2026-03-09T20:48:00.687 INFO:tasks.workunit.client.0.vm07.stdout:7/822: rename d3/da/db/d32/d7a to d3/da4/df2/d113 0 2026-03-09T20:48:00.687 INFO:tasks.workunit.client.0.vm07.stdout:6/768: 
fdatasync d8/d16/d22/d9b/de4/d85/f4a 0 2026-03-09T20:48:00.689 INFO:tasks.workunit.client.1.vm10.stdout:4/683: creat d1/d67/fda x:0 0 0 2026-03-09T20:48:00.693 INFO:tasks.workunit.client.0.vm07.stdout:8/703: symlink d1/d5d/d6f/d2f/le1 0 2026-03-09T20:48:00.694 INFO:tasks.workunit.client.0.vm07.stdout:8/704: chown d1/db0 486464 1 2026-03-09T20:48:00.695 INFO:tasks.workunit.client.0.vm07.stdout:5/829: link d5/df/d13/d6c/f77 d5/d33/db2/de8/f11d 0 2026-03-09T20:48:00.702 INFO:tasks.workunit.client.1.vm10.stdout:8/774: creat d0/d22/d25/d2e/d41/d47/d63/ff3 x:0 0 0 2026-03-09T20:48:00.703 INFO:tasks.workunit.client.1.vm10.stdout:6/730: rmdir d3/da/d11/d89/ddf 0 2026-03-09T20:48:00.704 INFO:tasks.workunit.client.1.vm10.stdout:6/731: truncate d3/d30/d7f/d36/f4f 6017471 0 2026-03-09T20:48:00.706 INFO:tasks.workunit.client.0.vm07.stdout:1/795: fdatasync d3/d23/d52/f73 0 2026-03-09T20:48:00.709 INFO:tasks.workunit.client.0.vm07.stdout:0/789: dread d1/d2/d33/d35/f59 [0,4194304] 0 2026-03-09T20:48:00.709 INFO:tasks.workunit.client.0.vm07.stdout:0/790: fsync d1/d2/dc/d17/ff2 0 2026-03-09T20:48:00.709 INFO:tasks.workunit.client.1.vm10.stdout:0/693: dread d2/d4a/f5a [0,4194304] 0 2026-03-09T20:48:00.716 INFO:tasks.workunit.client.0.vm07.stdout:0/791: dread d1/f31 [0,4194304] 0 2026-03-09T20:48:00.724 INFO:tasks.workunit.client.0.vm07.stdout:2/775: mknod d2/db/d1c/d4a/d88/cf5 0 2026-03-09T20:48:00.733 INFO:tasks.workunit.client.1.vm10.stdout:2/712: write d5/d18/d1b/f26 [2650613,108939] 0 2026-03-09T20:48:00.733 INFO:tasks.workunit.client.1.vm10.stdout:2/713: readlink d5/d18/d27/d38/d61/l9d 0 2026-03-09T20:48:00.734 INFO:tasks.workunit.client.1.vm10.stdout:2/714: write d5/d18/d27/d89/f9a [1407469,96129] 0 2026-03-09T20:48:00.742 INFO:tasks.workunit.client.0.vm07.stdout:5/830: mkdir d5/d19/d73/d9c/d11e 0 2026-03-09T20:48:00.746 INFO:tasks.workunit.client.1.vm10.stdout:9/756: write d2/d3/de/d35/f9c [1456142,51062] 0 2026-03-09T20:48:00.746 
INFO:tasks.workunit.client.1.vm10.stdout:7/708: write db/d46/f47 [4009570,113610] 0 2026-03-09T20:48:00.749 INFO:tasks.workunit.client.1.vm10.stdout:1/712: write d2/da/d25/f48 [2395120,23848] 0 2026-03-09T20:48:00.749 INFO:tasks.workunit.client.1.vm10.stdout:3/674: dwrite dc/d14/d26/d29/d2a/d76/f97 [0,4194304] 0 2026-03-09T20:48:00.750 INFO:tasks.workunit.client.1.vm10.stdout:1/713: chown d2/da/d25/d46/d51/l54 4752 1 2026-03-09T20:48:00.752 INFO:tasks.workunit.client.1.vm10.stdout:5/664: write d2/d39/dbf/d66/fc7 [542988,76539] 0 2026-03-09T20:48:00.763 INFO:tasks.workunit.client.0.vm07.stdout:9/733: dwrite d4/d16/f41 [0,4194304] 0 2026-03-09T20:48:00.766 INFO:tasks.workunit.client.0.vm07.stdout:9/734: write d4/d8/dc/dbb/fff [4344788,14475] 0 2026-03-09T20:48:00.778 INFO:tasks.workunit.client.0.vm07.stdout:0/792: creat d1/dc0/dcc/dd9/ff9 x:0 0 0 2026-03-09T20:48:00.779 INFO:tasks.workunit.client.0.vm07.stdout:3/749: mknod d1/d5/d9/d11/d6d/dd0/d43/cf2 0 2026-03-09T20:48:00.785 INFO:tasks.workunit.client.0.vm07.stdout:7/823: unlink d3/da4/df2/d113/cb8 0 2026-03-09T20:48:00.787 INFO:tasks.workunit.client.0.vm07.stdout:8/705: write d1/dc/d16/dad/fb8 [843206,104077] 0 2026-03-09T20:48:00.793 INFO:tasks.workunit.client.0.vm07.stdout:4/693: truncate d2/df/d59/d8a/d9d/fa8 1031127 0 2026-03-09T20:48:00.805 INFO:tasks.workunit.client.0.vm07.stdout:2/776: write d2/db/d49/fb2 [1940623,65923] 0 2026-03-09T20:48:00.811 INFO:tasks.workunit.client.0.vm07.stdout:5/831: dwrite d5/d33/d39/d8d/f8e [0,4194304] 0 2026-03-09T20:48:00.841 INFO:tasks.workunit.client.0.vm07.stdout:1/796: truncate d3/d14/d54/fcc 1112748 0 2026-03-09T20:48:00.846 INFO:tasks.workunit.client.0.vm07.stdout:3/750: write d1/d5/d9/d2f/d34/f68 [797898,74605] 0 2026-03-09T20:48:00.846 INFO:tasks.workunit.client.0.vm07.stdout:9/735: dwrite d4/d8/fd [0,4194304] 0 2026-03-09T20:48:00.848 INFO:tasks.workunit.client.0.vm07.stdout:0/793: dwrite d1/d2/d98/fa5 [0,4194304] 0 2026-03-09T20:48:00.853 
INFO:tasks.workunit.client.0.vm07.stdout:9/736: stat d4/d8/d19/d5f/da5/cea 0 2026-03-09T20:48:00.866 INFO:tasks.workunit.client.0.vm07.stdout:8/706: fdatasync d1/d5d/d6f/d2f/d4d/d63/f77 0 2026-03-09T20:48:00.867 INFO:tasks.workunit.client.0.vm07.stdout:8/707: write d1/d5d/d6f/d2f/f51 [1442846,88327] 0 2026-03-09T20:48:00.882 INFO:tasks.workunit.client.0.vm07.stdout:5/832: mkdir d5/d33/d39/d8d/dab/d11f 0 2026-03-09T20:48:00.885 INFO:tasks.workunit.client.1.vm10.stdout:0/694: creat d2/d4a/d58/d82/d93/ff3 x:0 0 0 2026-03-09T20:48:00.888 INFO:tasks.workunit.client.1.vm10.stdout:8/775: mknod d0/d22/d25/cf4 0 2026-03-09T20:48:00.907 INFO:tasks.workunit.client.0.vm07.stdout:4/694: dwrite d2/df/d17/f80 [0,4194304] 0 2026-03-09T20:48:00.914 INFO:tasks.workunit.client.0.vm07.stdout:4/695: dwrite d2/d55/d5d/d3f/d4a/fad [0,4194304] 0 2026-03-09T20:48:00.929 INFO:tasks.workunit.client.1.vm10.stdout:9/757: unlink d2/d28/f32 0 2026-03-09T20:48:00.931 INFO:tasks.workunit.client.0.vm07.stdout:3/751: mkdir d1/d5/d9/d11/d60/df3 0 2026-03-09T20:48:00.935 INFO:tasks.workunit.client.1.vm10.stdout:7/709: rename db/d28/d2b/d36/d3f/f6f to db/d28/d2b/d36/d3f/fdf 0 2026-03-09T20:48:00.939 INFO:tasks.workunit.client.1.vm10.stdout:3/675: creat dc/d14/d27/fe1 x:0 0 0 2026-03-09T20:48:00.939 INFO:tasks.workunit.client.1.vm10.stdout:1/714: creat d2/da/d25/d3e/d42/fe5 x:0 0 0 2026-03-09T20:48:00.939 INFO:tasks.workunit.client.0.vm07.stdout:6/769: getdents d8/d16/dbb 0 2026-03-09T20:48:00.940 INFO:tasks.workunit.client.0.vm07.stdout:9/737: dread d4/d8/d19/fc2 [0,4194304] 0 2026-03-09T20:48:00.943 INFO:tasks.workunit.client.1.vm10.stdout:5/665: chown d2/d39/d4b/f85 0 1 2026-03-09T20:48:00.944 INFO:tasks.workunit.client.0.vm07.stdout:8/708: rmdir d1 39 2026-03-09T20:48:00.947 INFO:tasks.workunit.client.1.vm10.stdout:0/695: rename d2/d4a/d58/d82/d71/d8e/fd7 to d2/d9/d69/de2/ff4 0 2026-03-09T20:48:00.947 INFO:tasks.workunit.client.0.vm07.stdout:2/777: mkdir d2/db/df6 0 2026-03-09T20:48:00.949 
INFO:tasks.workunit.client.0.vm07.stdout:7/824: link d3/c5b d3/da/db/d32/d3e/dac/d1f/d50/c114 0 2026-03-09T20:48:00.951 INFO:tasks.workunit.client.1.vm10.stdout:2/715: creat d5/d18/d27/d38/d61/dc8/ddb/dea/fef x:0 0 0 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.0.vm07.stdout:4/696: creat d2/d55/d5d/d3f/d4a/fbf x:0 0 0 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.0.vm07.stdout:0/794: fsync d1/d2/dc/f40 0 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.0.vm07.stdout:0/795: stat d1/d2/d4b/f70 0 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.1.vm10.stdout:9/758: creat d2/d28/da2/ded/ffe x:0 0 0 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.1.vm10.stdout:7/710: rmdir db 39 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.1.vm10.stdout:9/759: write d2/d3/de/f80 [110066,108800] 0 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.1.vm10.stdout:1/715: mkdir d2/d89/de6 0 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.1.vm10.stdout:5/666: creat d2/d80/ffd x:0 0 0 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.1.vm10.stdout:4/684: getdents d1 0 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.1.vm10.stdout:0/696: dread d2/d4a/d58/d82/d71/d5d/f76 [4194304,4194304] 0 2026-03-09T20:48:00.961 INFO:tasks.workunit.client.0.vm07.stdout:3/752: read d1/d5/d9/f1c [1944404,45178] 0 2026-03-09T20:48:00.963 INFO:tasks.workunit.client.0.vm07.stdout:6/770: read d8/d16/d22/d9b/de4/f91 [2676248,63110] 0 2026-03-09T20:48:00.964 INFO:tasks.workunit.client.1.vm10.stdout:3/676: rename dc/f58 to dc/db4/fe2 0 2026-03-09T20:48:00.965 INFO:tasks.workunit.client.1.vm10.stdout:3/677: chown dc/d14/d26/d29/d40/da8/dde 4 1 2026-03-09T20:48:00.965 INFO:tasks.workunit.client.1.vm10.stdout:8/776: fsync d0/d22/d2c/f32 0 2026-03-09T20:48:00.969 INFO:tasks.workunit.client.0.vm07.stdout:2/778: symlink d2/d11/ddb/d6e/dbe/lf7 0 2026-03-09T20:48:00.972 INFO:tasks.workunit.client.0.vm07.stdout:7/825: unlink d3/fb0 0 2026-03-09T20:48:00.973 
INFO:tasks.workunit.client.0.vm07.stdout:5/833: sync
2026-03-09T20:48:00.974 INFO:tasks.workunit.client.1.vm10.stdout:2/716: dread - d5/d18/fc6 zero size
2026-03-09T20:48:00.977 INFO:tasks.workunit.client.1.vm10.stdout:7/711: write db/d21/d23/f22 [357031,45784] 0
2026-03-09T20:48:00.985 INFO:tasks.workunit.client.0.vm07.stdout:4/697: creat d2/df/d59/d8a/fc0 x:0 0 0
2026-03-09T20:48:00.989 INFO:tasks.workunit.client.1.vm10.stdout:9/760: read - d2/d12/d5a/fe9 zero size
2026-03-09T20:48:00.992 INFO:tasks.workunit.client.0.vm07.stdout:0/796: rmdir d1/d1f/d20 39
2026-03-09T20:48:01.002 INFO:tasks.workunit.client.0.vm07.stdout:1/797: dwrite d3/f7d [0,4194304] 0
2026-03-09T20:48:01.008 INFO:tasks.workunit.client.1.vm10.stdout:1/716: creat d2/da/d25/d46/d80/da0/d92/fe7 x:0 0 0
2026-03-09T20:48:01.014 INFO:tasks.workunit.client.0.vm07.stdout:3/753: rename d1/d5/d9/d11/le6 to d1/d5/d9/d11/d60/df3/lf4 0
2026-03-09T20:48:01.025 INFO:tasks.workunit.client.1.vm10.stdout:0/697: mknod d2/d9/da/d35/cf5 0
2026-03-09T20:48:01.025 INFO:tasks.workunit.client.0.vm07.stdout:6/771: fsync d8/d16/d4b/f95 0
2026-03-09T20:48:01.025 INFO:tasks.workunit.client.0.vm07.stdout:9/738: symlink d4/d16/d29/d24/d7c/l106 0
2026-03-09T20:48:01.025 INFO:tasks.workunit.client.0.vm07.stdout:7/826: creat d3/da4/df2/dfb/f115 x:0 0 0
2026-03-09T20:48:01.026 INFO:tasks.workunit.client.0.vm07.stdout:7/827: chown d3/da/db/d32/d3e/dac/d1f/d50/la5 50 1
2026-03-09T20:48:01.031 INFO:tasks.workunit.client.1.vm10.stdout:3/678: mkdir dc/db4/de3 0
2026-03-09T20:48:01.032 INFO:tasks.workunit.client.1.vm10.stdout:5/667: write d2/d27/d37/d46/f94 [4845133,112106] 0
2026-03-09T20:48:01.035 INFO:tasks.workunit.client.1.vm10.stdout:8/777: rmdir d0/d22/d25/d2e/d41/d47/d78 39
2026-03-09T20:48:01.036 INFO:tasks.workunit.client.1.vm10.stdout:8/778: readlink d0/d22/d25/d6c/d9b/le5 0
2026-03-09T20:48:01.045 INFO:tasks.workunit.client.1.vm10.stdout:6/732: link d3/d30/d7f/d36/d5c/dad/fce d3/da/d11/d26/d5b/fe3 0
2026-03-09T20:48:01.057 INFO:tasks.workunit.client.0.vm07.stdout:0/797: write d1/f11 [2983763,72302] 0
2026-03-09T20:48:01.061 INFO:tasks.workunit.client.1.vm10.stdout:9/761: dwrite d2/d3/d6d/db7/fc9 [0,4194304] 0
2026-03-09T20:48:01.064 INFO:tasks.workunit.client.0.vm07.stdout:1/798: dwrite d3/d97/da1/dc5/d90/f96 [0,4194304] 0
2026-03-09T20:48:01.087 INFO:tasks.workunit.client.0.vm07.stdout:2/779: rename d2/db/d28/d5c/dc7/ddc to d2/db/d28/d57/df8 0
2026-03-09T20:48:01.090 INFO:tasks.workunit.client.0.vm07.stdout:3/754: chown d1/d5/d9/d11/d6d/c92 24857 1
2026-03-09T20:48:01.094 INFO:tasks.workunit.client.1.vm10.stdout:1/717: dread d2/da/fb1 [0,4194304] 0
2026-03-09T20:48:01.095 INFO:tasks.workunit.client.1.vm10.stdout:1/718: truncate d2/da/d25/d46/d80/da0/d92/fe7 775123 0
2026-03-09T20:48:01.096 INFO:tasks.workunit.client.0.vm07.stdout:6/772: fsync d8/d16/d4b/fbc 0
2026-03-09T20:48:01.096 INFO:tasks.workunit.client.1.vm10.stdout:1/719: write d2/da/d25/d3e/f69 [4724737,98378] 0
2026-03-09T20:48:01.100 INFO:tasks.workunit.client.1.vm10.stdout:0/698: mkdir d2/d4a/d58/df6 0
2026-03-09T20:48:01.109 INFO:tasks.workunit.client.1.vm10.stdout:8/779: creat d0/d22/d25/d2e/d41/d85/d8b/ff5 x:0 0 0
2026-03-09T20:48:01.119 INFO:tasks.workunit.client.0.vm07.stdout:4/698: dread d2/d55/d5d/d3f/d4a/f5e [0,4194304] 0
2026-03-09T20:48:01.120 INFO:tasks.workunit.client.0.vm07.stdout:4/699: readlink d2/df/l79 0
2026-03-09T20:48:01.121 INFO:tasks.workunit.client.1.vm10.stdout:2/717: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d91/ff0 x:0 0 0
2026-03-09T20:48:01.125 INFO:tasks.workunit.client.0.vm07.stdout:5/834: truncate d5/d69/fc5 2302163 0
2026-03-09T20:48:01.125 INFO:tasks.workunit.client.1.vm10.stdout:7/712: dwrite db/d28/d2b/d36/d3f/fae [0,4194304] 0
2026-03-09T20:48:01.126 INFO:tasks.workunit.client.0.vm07.stdout:0/798: symlink d1/d1f/d30/lfa 0
2026-03-09T20:48:01.126 INFO:tasks.workunit.client.1.vm10.stdout:7/713: dread - db/d28/d30/fa4 zero size
2026-03-09T20:48:01.127 INFO:tasks.workunit.client.1.vm10.stdout:5/668: dwrite d2/d39/dbf/d84/fe7 [0,4194304] 0
2026-03-09T20:48:01.131 INFO:tasks.workunit.client.0.vm07.stdout:0/799: dwrite d1/dc0/dcc/dd9/ff9 [0,4194304] 0
2026-03-09T20:48:01.155 INFO:tasks.workunit.client.1.vm10.stdout:4/685: creat d1/d2/d5c/d64/d6b/d81/dac/d1c/fdb x:0 0 0
2026-03-09T20:48:01.155 INFO:tasks.workunit.client.0.vm07.stdout:2/780: symlink d2/db/d49/d7d/d85/lf9 0
2026-03-09T20:48:01.156 INFO:tasks.workunit.client.0.vm07.stdout:3/755: symlink d1/d5/d9/d11/d6d/dd0/d95/ddb/lf5 0
2026-03-09T20:48:01.157 INFO:tasks.workunit.client.1.vm10.stdout:1/720: symlink d2/da/d25/d46/d80/da0/d92/db5/le8 0
2026-03-09T20:48:01.158 INFO:tasks.workunit.client.0.vm07.stdout:6/773: mkdir d8/d16/d22/d24/da0/dab/d40/d69/dfb 0
2026-03-09T20:48:01.158 INFO:tasks.workunit.client.1.vm10.stdout:1/721: write d2/da/d25/d3e/dca/da2/fe1 [775110,1457] 0
2026-03-09T20:48:01.166 INFO:tasks.workunit.client.1.vm10.stdout:6/733: creat d3/d30/d7f/d24/d39/d9e/fe4 x:0 0 0
2026-03-09T20:48:01.170 INFO:tasks.workunit.client.0.vm07.stdout:7/828: truncate d3/da/db/d32/d3e/dac/d1f/d50/ffa 4086151 0
2026-03-09T20:48:01.174 INFO:tasks.workunit.client.1.vm10.stdout:2/718: mkdir d5/d18/d27/d89/db6/d41/df1 0
2026-03-09T20:48:01.175 INFO:tasks.workunit.client.1.vm10.stdout:2/719: write d5/d18/d1b/f26 [785324,30071] 0
2026-03-09T20:48:01.177 INFO:tasks.workunit.client.1.vm10.stdout:2/720: read - d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/fe3 zero size
2026-03-09T20:48:01.177 INFO:tasks.workunit.client.1.vm10.stdout:2/721: chown d5/f15 66187104 1
2026-03-09T20:48:01.187 INFO:tasks.workunit.client.1.vm10.stdout:3/679: creat dc/d14/d26/d29/d40/da8/fe4 x:0 0 0
2026-03-09T20:48:01.188 INFO:tasks.workunit.client.1.vm10.stdout:0/699: creat d2/d4a/d58/d82/d60/d98/ff7 x:0 0 0
2026-03-09T20:48:01.194 INFO:tasks.workunit.client.1.vm10.stdout:6/734: mkdir d3/d30/d7f/d36/d5c/dad/de5 0
2026-03-09T20:48:01.209 INFO:tasks.workunit.client.1.vm10.stdout:9/762: write d2/d33/dcf/fe2 [209952,34937] 0
2026-03-09T20:48:01.210 INFO:tasks.workunit.client.1.vm10.stdout:9/763: truncate d2/d3/d85/f8b 8762163 0
2026-03-09T20:48:01.221 INFO:tasks.workunit.client.1.vm10.stdout:2/722: chown d5/d18/la3 113009 1
2026-03-09T20:48:01.222 INFO:tasks.workunit.client.1.vm10.stdout:7/714: creat db/d28/d2b/d36/d3b/dd5/fe0 x:0 0 0
2026-03-09T20:48:01.222 INFO:tasks.workunit.client.1.vm10.stdout:7/715: stat db/d28/d2b/d36/d63/d6d/fa8 0
2026-03-09T20:48:01.223 INFO:tasks.workunit.client.1.vm10.stdout:7/716: chown db/d21/d23/f34 3727021 1
2026-03-09T20:48:01.266 INFO:tasks.workunit.client.0.vm07.stdout:9/739: write d4/d16/d29/d24/d37/d44/d62/d8e/fe0 [238157,72004] 0
2026-03-09T20:48:01.273 INFO:tasks.workunit.client.1.vm10.stdout:3/680: truncate dc/f88 4060359 0
2026-03-09T20:48:01.274 INFO:tasks.workunit.client.1.vm10.stdout:1/722: rename d2/da/f9f to d2/da/d25/d46/ddb/fe9 0
2026-03-09T20:48:01.278 INFO:tasks.workunit.client.1.vm10.stdout:5/669: dwrite d2/f64 [4194304,4194304] 0
2026-03-09T20:48:01.281 INFO:tasks.workunit.client.1.vm10.stdout:8/780: dwrite d0/d22/d25/d2e/d41/d47/d63/f8c [4194304,4194304] 0
2026-03-09T20:48:01.286 INFO:tasks.workunit.client.1.vm10.stdout:7/717: creat db/d21/d26/d72/fe1 x:0 0 0
2026-03-09T20:48:01.306 INFO:tasks.workunit.client.1.vm10.stdout:3/681: unlink dc/d14/d26/d29/d40/d8c/l98 0
2026-03-09T20:48:01.310 INFO:tasks.workunit.client.0.vm07.stdout:2/781: rename d2/db/d28/d5c/fae to d2/db/d1c/d8d/ffa 0
2026-03-09T20:48:01.311 INFO:tasks.workunit.client.1.vm10.stdout:1/723: sync
2026-03-09T20:48:01.314 INFO:tasks.workunit.client.0.vm07.stdout:2/782: dwrite d2/d11/fef [0,4194304] 0
2026-03-09T20:48:01.316 INFO:tasks.workunit.client.0.vm07.stdout:3/756: dread - d1/d5/d9/d2f/d99/dd8/de0/fe1 zero size
2026-03-09T20:48:01.328 INFO:tasks.workunit.client.1.vm10.stdout:0/700: creat d2/d9/da/d11/dd1/db7/dcd/d63/ff8 x:0 0 0
2026-03-09T20:48:01.337 INFO:tasks.workunit.client.1.vm10.stdout:4/686: write d1/d67/f8f [2275000,54093] 0
2026-03-09T20:48:01.338 INFO:tasks.workunit.client.0.vm07.stdout:8/709: getdents d1/db0 0
2026-03-09T20:48:01.340 INFO:tasks.workunit.client.1.vm10.stdout:7/718: chown db/d21/d23/l20 1686413562 1
2026-03-09T20:48:01.341 INFO:tasks.workunit.client.0.vm07.stdout:9/740: mknod d4/d8/db9/c107 0
2026-03-09T20:48:01.342 INFO:tasks.workunit.client.1.vm10.stdout:3/682: fdatasync dc/db4/fca 0
2026-03-09T20:48:01.342 INFO:tasks.workunit.client.0.vm07.stdout:9/741: chown d4/d8/d19/d89/da7/ld3 20481653 1
2026-03-09T20:48:01.347 INFO:tasks.workunit.client.1.vm10.stdout:5/670: symlink d2/d27/lfe 0
2026-03-09T20:48:01.348 INFO:tasks.workunit.client.1.vm10.stdout:0/701: creat d2/d9/da/d35/d30/ff9 x:0 0 0
2026-03-09T20:48:01.348 INFO:tasks.workunit.client.1.vm10.stdout:4/687: rename d1/d2/d5c/d64/d6b/d81/dac/d1b/dbe/fcf to d1/d2/d5c/fdc 0
2026-03-09T20:48:01.356 INFO:tasks.workunit.client.0.vm07.stdout:2/783: symlink d2/db/d28/d5c/dc7/lfb 0
2026-03-09T20:48:01.356 INFO:tasks.workunit.client.0.vm07.stdout:6/774: dread d8/d16/da3/f93 [0,4194304] 0
2026-03-09T20:48:01.360 INFO:tasks.workunit.client.1.vm10.stdout:8/781: mkdir d0/d22/d25/df6 0
2026-03-09T20:48:01.363 INFO:tasks.workunit.client.1.vm10.stdout:9/764: write d2/d33/f76 [406714,37478] 0
2026-03-09T20:48:01.363 INFO:tasks.workunit.client.1.vm10.stdout:7/719: dwrite db/d28/f31 [4194304,4194304] 0
2026-03-09T20:48:01.373 INFO:tasks.workunit.client.1.vm10.stdout:6/735: truncate d3/f40 1368322 0
2026-03-09T20:48:01.374 INFO:tasks.workunit.client.1.vm10.stdout:3/683: creat dc/db4/fe5 x:0 0 0
2026-03-09T20:48:01.381 INFO:tasks.workunit.client.0.vm07.stdout:4/700: write d2/d1f/f25 [1428938,79666] 0
2026-03-09T20:48:01.381 INFO:tasks.workunit.client.0.vm07.stdout:8/710: rename d1/d5d/d6f/d2f/d4d/d95 to d1/dc/d16/d26/de2 0
2026-03-09T20:48:01.384 INFO:tasks.workunit.client.1.vm10.stdout:1/724: mkdir d2/da/dbc/dea 0
2026-03-09T20:48:01.386 INFO:tasks.workunit.client.1.vm10.stdout:1/725: fsync d2/da/d25/d46/d51/d5d/d6e/d70/f83 0
2026-03-09T20:48:01.386 INFO:tasks.workunit.client.0.vm07.stdout:9/742: dread - d4/d8/dc/dbb/db6/fc6 zero size
2026-03-09T20:48:01.391 INFO:tasks.workunit.client.0.vm07.stdout:5/835: mkdir d5/df/d13/d30/d56/d120 0
2026-03-09T20:48:01.392 INFO:tasks.workunit.client.0.vm07.stdout:9/743: write d4/d8/d19/d5f/da5/ff5 [974174,31009] 0
2026-03-09T20:48:01.392 INFO:tasks.workunit.client.1.vm10.stdout:0/702: readlink d2/d9/da/l9d 0
2026-03-09T20:48:01.393 INFO:tasks.workunit.client.0.vm07.stdout:0/800: link d1/d1f/d53/d72/f95 d1/d2/d33/ffb 0
2026-03-09T20:48:01.395 INFO:tasks.workunit.client.1.vm10.stdout:6/736: dread d3/da/fd [0,4194304] 0
2026-03-09T20:48:01.397 INFO:tasks.workunit.client.1.vm10.stdout:6/737: readlink d3/da/l5a 0
2026-03-09T20:48:01.399 INFO:tasks.workunit.client.0.vm07.stdout:1/799: getdents d3/d23 0
2026-03-09T20:48:01.404 INFO:tasks.workunit.client.1.vm10.stdout:2/723: getdents d5/d18/d27/d38 0
2026-03-09T20:48:01.406 INFO:tasks.workunit.client.0.vm07.stdout:2/784: dread - d2/d11/ddb/db0/fd4 zero size
2026-03-09T20:48:01.407 INFO:tasks.workunit.client.1.vm10.stdout:6/738: sync
2026-03-09T20:48:01.408 INFO:tasks.workunit.client.1.vm10.stdout:7/720: fdatasync db/d28/f31 0
2026-03-09T20:48:01.409 INFO:tasks.workunit.client.1.vm10.stdout:8/782: mknod d0/d22/d25/d2e/d41/cf7 0
2026-03-09T20:48:01.416 INFO:tasks.workunit.client.0.vm07.stdout:7/829: write d3/da/db/d32/d3e/dac/f106 [122102,27496] 0
2026-03-09T20:48:01.421 INFO:tasks.workunit.client.0.vm07.stdout:3/757: fdatasync d1/d5/d9/d11/d6d/fee 0
2026-03-09T20:48:01.421 INFO:tasks.workunit.client.1.vm10.stdout:9/765: chown d2/d3/c39 8 1
2026-03-09T20:48:01.430 INFO:tasks.workunit.client.1.vm10.stdout:3/684: truncate dc/d14/d26/d29/d40/f71 6339602 0
2026-03-09T20:48:01.447 INFO:tasks.workunit.client.0.vm07.stdout:9/744: dread d4/d16/f27 [0,4194304] 0
2026-03-09T20:48:01.454 INFO:tasks.workunit.client.0.vm07.stdout:0/801: rename d1/dc0/fe4 to d1/d2/d98/ffc 0
2026-03-09T20:48:01.463 INFO:tasks.workunit.client.0.vm07.stdout:5/836: write d5/df/d13/d3e/d5e/ffa [592287,111478] 0
2026-03-09T20:48:01.469 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.467+0000 7f9eccf61640 1 -- 192.168.123.107:0/1404504438 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ec8072370 msgr2=0x7f9ec810c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:48:01.469 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.467+0000 7f9eccf61640 1 --2- 192.168.123.107:0/1404504438 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ec8072370 0x7f9ec810c590 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f9ebc0099b0 tx=0x7f9ebc02f240 comp rx=0 tx=0).stop
2026-03-09T20:48:01.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.471+0000 7f9eccf61640 1 -- 192.168.123.107:0/1404504438 shutdown_connections
2026-03-09T20:48:01.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.471+0000 7f9eccf61640 1 --2- 192.168.123.107:0/1404504438 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ec8072370 0x7f9ec810c590 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:48:01.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.471+0000 7f9eccf61640 1 --2- 192.168.123.107:0/1404504438 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ec80719a0 0x7f9ec8071da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:48:01.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.471+0000 7f9eccf61640 1 -- 192.168.123.107:0/1404504438 >> 192.168.123.107:0/1404504438 conn(0x7f9ec806d4f0 msgr2=0x7f9ec806f930 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:48:01.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.471+0000 7f9eccf61640 1 -- 192.168.123.107:0/1404504438 shutdown_connections
2026-03-09T20:48:01.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.471+0000 7f9eccf61640 1 -- 192.168.123.107:0/1404504438 wait complete.
2026-03-09T20:48:01.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.471+0000 7f9eccf61640 1 Processor -- start
2026-03-09T20:48:01.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.472+0000 7f9eccf61640 1 -- start start
2026-03-09T20:48:01.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.472+0000 7f9eccf61640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ec80719a0 0x7f9ec81159b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:48:01.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.472+0000 7f9eccf61640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ec8072370 0x7f9ec8115ef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:48:01.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.473+0000 7f9eccf61640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9ec81173f0 con 0x7f9ec8072370
2026-03-09T20:48:01.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.473+0000 7f9eccf61640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9ec8117560 con 0x7f9ec80719a0
2026-03-09T20:48:01.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.473+0000 7f9ec77fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ec80719a0 0x7f9ec81159b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:48:01.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.473+0000 7f9ec77fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ec80719a0 0x7f9ec81159b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:34598/0 (socket says 192.168.123.107:34598)
2026-03-09T20:48:01.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.473+0000 7f9ec77fe640 1 -- 192.168.123.107:0/188303236 learned_addr learned my addr 192.168.123.107:0/188303236 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:48:01.475 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.474+0000 7f9ec77fe640 1 -- 192.168.123.107:0/188303236 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ec8072370 msgr2=0x7f9ec8115ef0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:48:01.475 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.474+0000 7f9ec77fe640 1 --2- 192.168.123.107:0/188303236 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ec8072370 0x7f9ec8115ef0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:48:01.475 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.474+0000 7f9ec77fe640 1 -- 192.168.123.107:0/188303236 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9ebc009660 con 0x7f9ec80719a0
2026-03-09T20:48:01.475 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.474+0000 7f9ec77fe640 1 --2- 192.168.123.107:0/188303236 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ec80719a0 0x7f9ec81159b0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f9eb800efc0 tx=0x7f9eb800c490 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:48:01.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.475+0000 7f9ec4ff9640 1 -- 192.168.123.107:0/188303236 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9eb8009280 con 0x7f9ec80719a0
2026-03-09T20:48:01.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.475+0000 7f9eccf61640 1 -- 192.168.123.107:0/188303236 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9ec81164f0 con 0x7f9ec80719a0
2026-03-09T20:48:01.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.475+0000 7f9eccf61640 1 -- 192.168.123.107:0/188303236 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9ec81b58d0 con 0x7f9ec80719a0
2026-03-09T20:48:01.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.475+0000 7f9ec4ff9640 1 -- 192.168.123.107:0/188303236 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9eb800f040 con 0x7f9ec80719a0
2026-03-09T20:48:01.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.475+0000 7f9ec4ff9640 1 -- 192.168.123.107:0/188303236 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9eb8004940 con 0x7f9ec80719a0
2026-03-09T20:48:01.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.479+0000 7f9eccf61640 1 -- 192.168.123.107:0/188303236 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9e94005350 con 0x7f9ec80719a0
2026-03-09T20:48:01.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.479+0000 7f9ec4ff9640 1 -- 192.168.123.107:0/188303236 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 30) v1 ==== 100078+0+0 (secure 0 0 0) 0x7f9eb8007500 con 0x7f9ec80719a0
2026-03-09T20:48:01.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.479+0000 7f9ec4ff9640 1 --2- 192.168.123.107:0/188303236 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f9e98077680 0x7f9e98079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:48:01.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.479+0000 7f9ec4ff9640 1 -- 192.168.123.107:0/188303236 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6050+0+0 (secure 0 0 0) 0x7f9eb8099430 con 0x7f9ec80719a0
2026-03-09T20:48:01.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.482+0000 7f9ec4ff9640 1 -- 192.168.123.107:0/188303236 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7f9eb8061ec0 con 0x7f9ec80719a0
2026-03-09T20:48:01.484 INFO:tasks.workunit.client.0.vm07.stdout:4/701: dwrite d2/d1f/f53 [0,4194304] 0
2026-03-09T20:48:01.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.485+0000 7f9ec6ffd640 1 --2- 192.168.123.107:0/188303236 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f9e98077680 0x7f9e98079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:48:01.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.488+0000 7f9ec6ffd640 1 --2- 192.168.123.107:0/188303236 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f9e98077680 0x7f9e98079b40 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f9ebc02f750 tx=0x7f9ebc005b00 comp rx=0 tx=0).ready entity=mgr.24495 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:48:01.520 INFO:tasks.workunit.client.0.vm07.stdout:5/837: sync
2026-03-09T20:48:01.528 INFO:tasks.workunit.client.0.vm07.stdout:5/838: sync
2026-03-09T20:48:01.696 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.694+0000 7f9eccf61640 1 -- 192.168.123.107:0/188303236 --> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f9e94002bf0 con 0x7f9e98077680
2026-03-09T20:48:01.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.700+0000 7f9ec4ff9640 1 -- 192.168.123.107:0/188303236 <== mgr.24495 v2:192.168.123.107:6800/39551776 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+337 (secure 0 0 0) 0x7f9e94002bf0 con 0x7f9e98077680
2026-03-09T20:48:01.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.702+0000 7f9ea67fc640 1 -- 192.168.123.107:0/188303236 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f9e98077680 msgr2=0x7f9e98079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:48:01.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.702+0000 7f9ea67fc640 1 --2- 192.168.123.107:0/188303236 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f9e98077680 0x7f9e98079b40 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f9ebc02f750 tx=0x7f9ebc005b00 comp rx=0 tx=0).stop
2026-03-09T20:48:01.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.703+0000 7f9ea67fc640 1 -- 192.168.123.107:0/188303236 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ec80719a0 msgr2=0x7f9ec81159b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:48:01.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.703+0000 7f9ea67fc640 1 --2- 192.168.123.107:0/188303236 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ec80719a0 0x7f9ec81159b0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f9eb800efc0 tx=0x7f9eb800c490 comp rx=0 tx=0).stop
2026-03-09T20:48:01.704 INFO:tasks.workunit.client.0.vm07.stdout:6/775: mknod d8/cfc 0
2026-03-09T20:48:01.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.703+0000 7f9ea67fc640 1 -- 192.168.123.107:0/188303236 shutdown_connections
2026-03-09T20:48:01.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.703+0000 7f9ea67fc640 1 --2- 192.168.123.107:0/188303236 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f9e98077680 0x7f9e98079b40 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:48:01.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.703+0000 7f9ea67fc640 1 --2- 192.168.123.107:0/188303236 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ec8072370 0x7f9ec8115ef0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:48:01.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.703+0000 7f9ea67fc640 1 --2- 192.168.123.107:0/188303236 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ec80719a0 0x7f9ec81159b0 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:48:01.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.703+0000 7f9ea67fc640 1 -- 192.168.123.107:0/188303236 >> 192.168.123.107:0/188303236 conn(0x7f9ec806d4f0 msgr2=0x7f9ec810a7b0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:48:01.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.705+0000 7f9ea67fc640 1 -- 192.168.123.107:0/188303236 shutdown_connections
2026-03-09T20:48:01.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.706+0000 7f9ea67fc640 1 -- 192.168.123.107:0/188303236 wait complete.
2026-03-09T20:48:01.714 INFO:tasks.workunit.client.0.vm07.stdout:2/785: dread - d2/d11/ddb/d72/f78 zero size
2026-03-09T20:48:01.718 INFO:teuthology.orchestra.run.vm07.stdout:true
2026-03-09T20:48:01.722 INFO:tasks.workunit.client.0.vm07.stdout:2/786: dwrite d2/d11/f44 [0,4194304] 0
2026-03-09T20:48:01.727 INFO:tasks.workunit.client.0.vm07.stdout:7/830: symlink d3/da/d53/db7/dde/d96/l116 0
2026-03-09T20:48:01.751 INFO:tasks.workunit.client.0.vm07.stdout:3/758: readlink d1/d5/d9/d2f/d34/d46/d5d/lbe 0
2026-03-09T20:48:01.762 INFO:tasks.workunit.client.1.vm10.stdout:7/721: dread db/d28/d2b/d36/f1c [0,4194304] 0
2026-03-09T20:48:01.762 INFO:tasks.workunit.client.1.vm10.stdout:7/722: write db/d21/f9c [1296661,49866] 0
2026-03-09T20:48:01.771 INFO:tasks.workunit.client.1.vm10.stdout:3/685: symlink dc/d14/d20/d21/le6 0
2026-03-09T20:48:01.795 INFO:tasks.workunit.client.0.vm07.stdout:1/800: mknod d3/d23/d55/c10b 0
2026-03-09T20:48:01.819 INFO:tasks.workunit.client.0.vm07.stdout:6/776: mkdir d8/d26/d7d/dfd 0
2026-03-09T20:48:01.836 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.834+0000 7f1ea7518640 1 -- 192.168.123.107:0/2517509549 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ea0071a50 msgr2=0x7f1ea0071e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:48:01.868 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.834+0000 7f1ea7518640 1 --2- 192.168.123.107:0/2517509549 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ea0071a50 0x7f1ea0071e50 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f1e9c0099b0 tx=0x7f1e9c031250 comp rx=0 tx=0).stop
2026-03-09T20:48:01.870 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.834+0000 7f1ea7518640 1 -- 192.168.123.107:0/2517509549 shutdown_connections
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.834+0000 7f1ea7518640 1 --2- 192.168.123.107:0/2517509549 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ea0072420 0x7f1ea0077190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.834+0000 7f1ea7518640 1 --2- 192.168.123.107:0/2517509549 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ea0071a50 0x7f1ea0071e50 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.834+0000 7f1ea7518640 1 -- 192.168.123.107:0/2517509549 >> 192.168.123.107:0/2517509549 conn(0x7f1ea006d4f0 msgr2=0x7f1ea006f930 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.834+0000 7f1ea7518640 1 -- 192.168.123.107:0/2517509549 shutdown_connections
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.834+0000 7f1ea7518640 1 -- 192.168.123.107:0/2517509549 wait complete.
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea7518640 1 Processor -- start
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea7518640 1 -- start start
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea7518640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ea0072420 0x7f1ea0084080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea7518640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ea00826d0 0x7f1ea0082b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea7518640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1ea00845c0 con 0x7f1ea0072420
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea7518640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1ea0083090 con 0x7f1ea00826d0
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea5d15640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ea00826d0 0x7f1ea0082b50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea5d15640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ea00826d0 0x7f1ea0082b50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:34620/0 (socket says 192.168.123.107:34620)
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea5d15640 1 -- 192.168.123.107:0/4261624747 learned_addr learned my addr 192.168.123.107:0/4261624747 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea5d15640 1 -- 192.168.123.107:0/4261624747 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ea0072420 msgr2=0x7f1ea0084080 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea5d15640 1 --2- 192.168.123.107:0/4261624747 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ea0072420 0x7f1ea0084080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.835+0000 7f1ea5d15640 1 -- 192.168.123.107:0/4261624747 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1e9c009660 con 0x7f1ea00826d0
2026-03-09T20:48:01.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.836+0000 7f1ea5d15640 1 --2- 192.168.123.107:0/4261624747 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ea00826d0 0x7f1ea0082b50 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f1e9000a9b0 tx=0x7f1e9000e500 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:48:01.872 INFO:tasks.workunit.client.0.vm07.stdout:2/787: fsync d2/f17 0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.837+0000 7f1e977fe640 1 -- 192.168.123.107:0/4261624747 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1e9000f040 con 0x7f1ea00826d0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.837+0000 7f1ea7518640 1 -- 192.168.123.107:0/4261624747 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1ea0083310 con 0x7f1ea00826d0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.838+0000 7f1ea7518640 1 -- 192.168.123.107:0/4261624747 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1ea012ef70 con 0x7f1ea00826d0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.838+0000 7f1ea7518640 1 -- 192.168.123.107:0/4261624747 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1ea0079e40 con 0x7f1ea00826d0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.839+0000 7f1e977fe640 1 -- 192.168.123.107:0/4261624747 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1e90004590 con 0x7f1ea00826d0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.839+0000 7f1e977fe640 1 -- 192.168.123.107:0/4261624747 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1e90002930 con 0x7f1ea00826d0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.841+0000 7f1e977fe640 1 -- 192.168.123.107:0/4261624747 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 30) v1 ==== 100078+0+0 (secure 0 0 0) 0x7f1e90012030 con 0x7f1ea00826d0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.841+0000 7f1e977fe640 1 --2- 192.168.123.107:0/4261624747 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1e6c077680 0x7f1e6c079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.841+0000 7f1e977fe640 1 -- 192.168.123.107:0/4261624747 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6050+0+0 (secure 0 0 0) 0x7f1e9009a2b0 con 0x7f1ea00826d0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.841+0000 7f1ea6516640 1 --2- 192.168.123.107:0/4261624747 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1e6c077680 0x7f1e6c079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.843+0000 7f1ea6516640 1 --2- 192.168.123.107:0/4261624747 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1e6c077680 0x7f1e6c079b40 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f1e9c0099b0 tx=0x7f1e9c005c50 comp rx=0 tx=0).ready entity=mgr.24495 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:48:01.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:01.844+0000 7f1e977fe640 1 -- 192.168.123.107:0/4261624747 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7f1e90062c80 con 0x7f1ea00826d0
2026-03-09T20:48:01.872 INFO:tasks.workunit.client.0.vm07.stdout:7/831: mknod d3/d58/d82/d90/c117 0
2026-03-09T20:48:01.872 INFO:tasks.workunit.client.0.vm07.stdout:3/759: unlink d1/d5/d9/d2f/d34/d46/d5d/l5f 0
2026-03-09T20:48:01.872 INFO:tasks.workunit.client.0.vm07.stdout:8/711: truncate d1/d5d/d6f/d2f/d4d/f73 1147108 0
2026-03-09T20:48:01.877 INFO:tasks.workunit.client.0.vm07.stdout:0/802: truncate d1/d1f/d53/d72/f6b 568469 0
2026-03-09T20:48:01.878 INFO:tasks.workunit.client.0.vm07.stdout:1/801: truncate d3/d23/f39 1403672 0
2026-03-09T20:48:01.880 INFO:tasks.workunit.client.0.vm07.stdout:4/702: fsync
d2/d55/d5d/d3f/d4a/f99 0 2026-03-09T20:48:01.893 INFO:tasks.workunit.client.1.vm10.stdout:4/688: creat d1/d2/d5c/d64/d6b/d81/dac/fdd x:0 0 0 2026-03-09T20:48:01.928 INFO:tasks.workunit.client.0.vm07.stdout:1/802: unlink d3/d14/d54/d3e/f80 0 2026-03-09T20:48:01.929 INFO:tasks.workunit.client.0.vm07.stdout:4/703: write d2/d55/d5d/d3f/d4a/fad [4527855,111633] 0 2026-03-09T20:48:01.929 INFO:tasks.workunit.client.1.vm10.stdout:4/689: dread - d1/d2/d5c/fdc zero size 2026-03-09T20:48:01.929 INFO:tasks.workunit.client.1.vm10.stdout:5/671: link d2/d27/d37/dc8/da1/faa d2/fff 0 2026-03-09T20:48:01.944 INFO:tasks.workunit.client.0.vm07.stdout:0/803: dread d1/d2/dc/f6d [0,4194304] 0 2026-03-09T20:48:01.948 INFO:tasks.workunit.client.1.vm10.stdout:4/690: creat d1/d2/d3/d70/d78/d86/fde x:0 0 0 2026-03-09T20:48:01.952 INFO:tasks.workunit.client.1.vm10.stdout:4/691: dwrite d1/d47/db9/fd5 [0,4194304] 0 2026-03-09T20:48:02.016 INFO:tasks.workunit.client.0.vm07.stdout:3/760: link d1/d5/d9/d2f/d34/d46/cd5 d1/d5/d9/d2f/d86/cf6 0 2026-03-09T20:48:02.016 INFO:tasks.workunit.client.1.vm10.stdout:7/723: getdents db/d28/d30 0 2026-03-09T20:48:02.017 INFO:tasks.workunit.client.1.vm10.stdout:7/724: write db/d28/d2b/d36/d3f/fcb [2673913,88736] 0 2026-03-09T20:48:02.030 INFO:tasks.workunit.client.0.vm07.stdout:0/804: mknod d1/d2/d98/daf/cfd 0 2026-03-09T20:48:02.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.046+0000 7f1ea7518640 1 -- 192.168.123.107:0/4261624747 --> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f1ea00834a0 con 0x7f1e6c077680 2026-03-09T20:48:02.050 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.049+0000 7f1e977fe640 1 -- 192.168.123.107:0/4261624747 <== mgr.24495 v2:192.168.123.107:6800/39551776 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+337 (secure 0 0 0) 0x7f1ea00834a0 con 0x7f1e6c077680 2026-03-09T20:48:02.055 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 -- 192.168.123.107:0/4261624747 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1e6c077680 msgr2=0x7f1e6c079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 --2- 192.168.123.107:0/4261624747 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1e6c077680 0x7f1e6c079b40 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f1e9c0099b0 tx=0x7f1e9c005c50 comp rx=0 tx=0).stop 2026-03-09T20:48:02.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 -- 192.168.123.107:0/4261624747 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ea00826d0 msgr2=0x7f1ea0082b50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 --2- 192.168.123.107:0/4261624747 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ea00826d0 0x7f1ea0082b50 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f1e9000a9b0 tx=0x7f1e9000e500 comp rx=0 tx=0).stop 2026-03-09T20:48:02.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 -- 192.168.123.107:0/4261624747 shutdown_connections 2026-03-09T20:48:02.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 --2- 192.168.123.107:0/4261624747 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1e6c077680 0x7f1e6c079b40 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 --2- 192.168.123.107:0/4261624747 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1ea00826d0 
0x7f1ea0082b50 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 --2- 192.168.123.107:0/4261624747 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1ea0072420 0x7f1ea0084080 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 -- 192.168.123.107:0/4261624747 >> 192.168.123.107:0/4261624747 conn(0x7f1ea006d4f0 msgr2=0x7f1ea007b3a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:02.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 -- 192.168.123.107:0/4261624747 shutdown_connections 2026-03-09T20:48:02.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.054+0000 7f1e957fa640 1 -- 192.168.123.107:0/4261624747 wait complete. 2026-03-09T20:48:02.098 INFO:tasks.workunit.client.0.vm07.stdout:4/704: link d2/df/l32 d2/df/lc1 0 2026-03-09T20:48:02.100 INFO:tasks.workunit.client.0.vm07.stdout:4/705: readlink d2/df/d17/l24 0 2026-03-09T20:48:02.112 INFO:tasks.workunit.client.0.vm07.stdout:4/706: getdents d2/d55/d5d/d86/db9 0 2026-03-09T20:48:02.115 INFO:tasks.workunit.client.0.vm07.stdout:4/707: stat d2/d55/d5d/d93/ca1 0 2026-03-09T20:48:02.116 INFO:tasks.workunit.client.0.vm07.stdout:4/708: stat d2/df/d59/l9a 0 2026-03-09T20:48:02.122 INFO:tasks.workunit.client.1.vm10.stdout:6/739: dwrite d3/da/d11/d89/fb0 [0,4194304] 0 2026-03-09T20:48:02.131 INFO:tasks.workunit.client.0.vm07.stdout:4/709: dread d2/df/d17/f80 [0,4194304] 0 2026-03-09T20:48:02.158 INFO:tasks.workunit.client.1.vm10.stdout:6/740: dread - d3/d30/d7f/fcd zero size 2026-03-09T20:48:02.158 INFO:tasks.workunit.client.1.vm10.stdout:6/741: readlink d3/da/l5a 0 2026-03-09T20:48:02.168 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.167+0000 
7fb81daea640 1 -- 192.168.123.107:0/3403148550 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb818106db0 msgr2=0x7fb8181071b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.168 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.167+0000 7fb81daea640 1 --2- 192.168.123.107:0/3403148550 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb818106db0 0x7fb8181071b0 secure :-1 s=READY pgs=327 cs=0 l=1 rev1=1 crypto rx=0x7fb80c00b0a0 tx=0x7fb80c02f4a0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.168 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.167+0000 7fb81daea640 1 -- 192.168.123.107:0/3403148550 shutdown_connections 2026-03-09T20:48:02.168 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.167+0000 7fb81daea640 1 --2- 192.168.123.107:0/3403148550 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb818107fb0 0x7fb818108430 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.168 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.167+0000 7fb81daea640 1 --2- 192.168.123.107:0/3403148550 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb818106db0 0x7fb8181071b0 unknown :-1 s=CLOSED pgs=327 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.168 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.167+0000 7fb81daea640 1 -- 192.168.123.107:0/3403148550 >> 192.168.123.107:0/3403148550 conn(0x7fb818075ee0 msgr2=0x7fb818078300 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.167+0000 7fb81daea640 1 -- 192.168.123.107:0/3403148550 shutdown_connections 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.167+0000 7fb81daea640 1 -- 192.168.123.107:0/3403148550 wait complete. 
2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.168+0000 7fb81daea640 1 Processor -- start 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.168+0000 7fb81daea640 1 -- start start 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.168+0000 7fb81daea640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb818106db0 0x7fb81819e700 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.168+0000 7fb81daea640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb818107fb0 0x7fb81819ec40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.168+0000 7fb81daea640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb81819f210 con 0x7fb818107fb0 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.168+0000 7fb81daea640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb81819f380 con 0x7fb818106db0 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.168+0000 7fb81cae8640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb818106db0 0x7fb81819e700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.168+0000 7fb81cae8640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb818106db0 0x7fb81819e700 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:34636/0 (socket says 192.168.123.107:34636) 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.168+0000 7fb81cae8640 1 -- 192.168.123.107:0/2863656946 learned_addr learned my addr 192.168.123.107:0/2863656946 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:02.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.169+0000 7fb81cae8640 1 -- 192.168.123.107:0/2863656946 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb818107fb0 msgr2=0x7fb81819ec40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.170 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.169+0000 7fb81cae8640 1 --2- 192.168.123.107:0/2863656946 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb818107fb0 0x7fb81819ec40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.170 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.169+0000 7fb81cae8640 1 -- 192.168.123.107:0/2863656946 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb80c009d00 con 0x7fb818106db0 2026-03-09T20:48:02.170 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.169+0000 7fb81cae8640 1 --2- 192.168.123.107:0/2863656946 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb818106db0 0x7fb81819e700 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fb80c009840 tx=0x7fb80c002bd0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:02.170 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.169+0000 7fb815ffb640 1 -- 192.168.123.107:0/2863656946 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb80c0090d0 con 0x7fb818106db0 2026-03-09T20:48:02.170 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.169+0000 7fb81daea640 1 -- 
192.168.123.107:0/2863656946 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb8181a3dc0 con 0x7fb818106db0 2026-03-09T20:48:02.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.169+0000 7fb81daea640 1 -- 192.168.123.107:0/2863656946 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb8181a42e0 con 0x7fb818106db0 2026-03-09T20:48:02.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.170+0000 7fb815ffb640 1 -- 192.168.123.107:0/2863656946 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fb80c02fe90 con 0x7fb818106db0 2026-03-09T20:48:02.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.170+0000 7fb815ffb640 1 -- 192.168.123.107:0/2863656946 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb80c038790 con 0x7fb818106db0 2026-03-09T20:48:02.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.170+0000 7fb81daea640 1 -- 192.168.123.107:0/2863656946 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb7e0005350 con 0x7fb818106db0 2026-03-09T20:48:02.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.171+0000 7fb815ffb640 1 -- 192.168.123.107:0/2863656946 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 30) v1 ==== 100078+0+0 (secure 0 0 0) 0x7fb80c009230 con 0x7fb818106db0 2026-03-09T20:48:02.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.171+0000 7fb815ffb640 1 --2- 192.168.123.107:0/2863656946 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fb7f8077470 0x7fb7f8079930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:02.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.171+0000 7fb815ffb640 1 -- 192.168.123.107:0/2863656946 
<== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6050+0+0 (secure 0 0 0) 0x7fb80c0be380 con 0x7fb818106db0 2026-03-09T20:48:02.174 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.174+0000 7fb815ffb640 1 -- 192.168.123.107:0/2863656946 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7fb80c086de0 con 0x7fb818106db0 2026-03-09T20:48:02.176 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.175+0000 7fb817fff640 1 --2- 192.168.123.107:0/2863656946 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fb7f8077470 0x7fb7f8079930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:02.181 INFO:tasks.workunit.client.0.vm07.stdout:9/745: write d4/d8/d19/d89/f9e [4055180,65315] 0 2026-03-09T20:48:02.183 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.182+0000 7fb817fff640 1 --2- 192.168.123.107:0/2863656946 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fb7f8077470 0x7fb7f8079930 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fb804005fd0 tx=0x7fb804005950 comp rx=0 tx=0).ready entity=mgr.24495 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:02.244 INFO:tasks.workunit.client.0.vm07.stdout:9/746: truncate d4/d16/d29/d24/d37/d44/d62/d74/fa6 175722 0 2026-03-09T20:48:02.255 INFO:tasks.workunit.client.0.vm07.stdout:3/761: read d1/d5/d9/d2f/d34/f8f [1375481,71709] 0 2026-03-09T20:48:02.311 INFO:tasks.workunit.client.1.vm10.stdout:8/783: mkdir d0/df8 0 2026-03-09T20:48:02.311 INFO:tasks.workunit.client.0.vm07.stdout:4/710: mkdir d2/d55/d5d/dc2 0 2026-03-09T20:48:02.318 INFO:tasks.workunit.client.0.vm07.stdout:9/747: rmdir d4/d8/d19/d5f/da5 39 2026-03-09T20:48:02.320 INFO:tasks.workunit.client.1.vm10.stdout:8/784: dread d0/d22/d2c/f96 
[0,4194304] 0 2026-03-09T20:48:02.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.344+0000 7fb81daea640 1 -- 192.168.123.107:0/2863656946 --> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fb7e0002bf0 con 0x7fb7f8077470 2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (4m) 4s ago 5m 42.3M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (5m) 4s ago 5m 9026k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (4m) 6s ago 4m 9181k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (5m) 4s ago 5m 7620k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 8dda9981b08b 2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (4m) 6s ago 4m 7616k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 eba80e79586f 2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (4m) 4s ago 4m 156M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (2m) 4s ago 2m 27.4M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (3m) 4s ago 3m 216M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (2m) 6s ago 2m 132M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 
2026-03-09T20:48:02.358 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (3m) 6s ago 3m 24.0M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (19s) 4s ago 5m 581M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (44s) 6s ago 4m 327M - 19.2.3-678-ge911bdeb 654f31e6858e 72000f76daa6 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (5m) 4s ago 5m 53.7M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 f3e88bdaa0dd 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (4m) 6s ago 4m 44.9M 2048M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 4e5d7d18c660 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (5m) 4s ago 5m 15.8M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (4m) 6s ago 4m 15.7M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (4m) 4s ago 4m 310M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 482878bd7721 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (4m) 4s ago 4m 349M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 15564e5032c9 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (3m) 4s ago 3m 297M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (3m) 6s ago 3m 429M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (3m) 6s ago 3m 360M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 
2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (3m) 6s ago 3m 368M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (23s) 4s ago 4m 49.2M - 2.43.0 a07b618ecd1d 09d0c0bf3f23 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.355+0000 7fb815ffb640 1 -- 192.168.123.107:0/2863656946 <== mgr.24495 v2:192.168.123.107:6800/39551776 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7fb7e0002bf0 con 0x7fb7f8077470 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 -- 192.168.123.107:0/2863656946 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fb7f8077470 msgr2=0x7fb7f8079930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 --2- 192.168.123.107:0/2863656946 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fb7f8077470 0x7fb7f8079930 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fb804005fd0 tx=0x7fb804005950 comp rx=0 tx=0).stop 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 -- 192.168.123.107:0/2863656946 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb818106db0 msgr2=0x7fb81819e700 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 --2- 192.168.123.107:0/2863656946 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb818106db0 0x7fb81819e700 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fb80c009840 tx=0x7fb80c002bd0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.359 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 -- 192.168.123.107:0/2863656946 shutdown_connections 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 --2- 192.168.123.107:0/2863656946 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fb7f8077470 0x7fb7f8079930 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 --2- 192.168.123.107:0/2863656946 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb818107fb0 0x7fb81819ec40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 --2- 192.168.123.107:0/2863656946 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb818106db0 0x7fb81819e700 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 -- 192.168.123.107:0/2863656946 >> 192.168.123.107:0/2863656946 conn(0x7fb818075ee0 msgr2=0x7fb818078af0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 -- 192.168.123.107:0/2863656946 shutdown_connections 2026-03-09T20:48:02.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.357+0000 7fb7eb7fe640 1 -- 192.168.123.107:0/2863656946 wait complete. 
2026-03-09T20:48:02.382 INFO:tasks.workunit.client.1.vm10.stdout:9/766: write d2/d3/de/f7c [2718338,76053] 0 2026-03-09T20:48:02.382 INFO:tasks.workunit.client.1.vm10.stdout:5/672: write d2/d27/d37/d46/d5d/fa4 [635962,16868] 0 2026-03-09T20:48:02.384 INFO:tasks.workunit.client.0.vm07.stdout:3/762: mkdir d1/d5/d9/d11/df7 0 2026-03-09T20:48:02.385 INFO:tasks.workunit.client.0.vm07.stdout:7/832: rename d3/da/db/d32/d3e/dac/c4e to d3/da/c118 0 2026-03-09T20:48:02.385 INFO:tasks.workunit.client.0.vm07.stdout:6/777: write d8/d16/d4b/d88/f70 [1479481,25435] 0 2026-03-09T20:48:02.385 INFO:tasks.workunit.client.0.vm07.stdout:5/839: write d5/df/d13/d6c/f77 [1137385,119518] 0 2026-03-09T20:48:02.385 INFO:tasks.workunit.client.0.vm07.stdout:8/712: write d1/f13 [1774078,32366] 0 2026-03-09T20:48:02.387 INFO:tasks.workunit.client.0.vm07.stdout:7/833: readlink d3/da/db/d32/d3e/le6 0 2026-03-09T20:48:02.390 INFO:tasks.workunit.client.1.vm10.stdout:0/703: dwrite d2/d9/da/d11/f1f [0,4194304] 0 2026-03-09T20:48:02.391 INFO:tasks.workunit.client.0.vm07.stdout:3/763: read d1/d5/d9/d11/d6d/dd0/d59/fd1 [408524,96048] 0 2026-03-09T20:48:02.391 INFO:tasks.workunit.client.0.vm07.stdout:2/788: dwrite d2/d11/ddb/db0/fd4 [0,4194304] 0 2026-03-09T20:48:02.417 INFO:tasks.workunit.client.0.vm07.stdout:9/748: mkdir d4/d16/d29/d24/d37/d44/d62/d108 0 2026-03-09T20:48:02.423 INFO:tasks.workunit.client.1.vm10.stdout:8/785: creat d0/df8/ff9 x:0 0 0 2026-03-09T20:48:02.436 INFO:tasks.workunit.client.0.vm07.stdout:6/778: creat d8/d16/d61/ffe x:0 0 0 2026-03-09T20:48:02.437 INFO:tasks.workunit.client.0.vm07.stdout:0/805: creat d1/d2/dc/ffe x:0 0 0 2026-03-09T20:48:02.437 INFO:tasks.workunit.client.0.vm07.stdout:1/803: write d3/d97/da1/fbb [1426508,19402] 0 2026-03-09T20:48:02.454 INFO:tasks.workunit.client.1.vm10.stdout:8/786: mkdir d0/d22/d25/d2e/dfa 0 2026-03-09T20:48:02.456 INFO:tasks.workunit.client.0.vm07.stdout:8/713: unlink d1/dc/d16/f6d 0 2026-03-09T20:48:02.456 
INFO:tasks.workunit.client.0.vm07.stdout:3/764: rename d1/l6 to d1/d5/dcd/lf8 0 2026-03-09T20:48:02.458 INFO:tasks.workunit.client.1.vm10.stdout:2/724: mknod d5/d18/d27/d38/d61/cf2 0 2026-03-09T20:48:02.459 INFO:tasks.workunit.client.1.vm10.stdout:4/692: write d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/d4a/d9b/fc3 [527620,70452] 0 2026-03-09T20:48:02.460 INFO:tasks.workunit.client.1.vm10.stdout:1/726: rename d2/da/fe to d2/da/d25/d46/d80/da0/d92/feb 0 2026-03-09T20:48:02.461 INFO:tasks.workunit.client.1.vm10.stdout:1/727: fdatasync d2/da/d25/d46/d80/da0/d92/db5/dc7/fcf 0 2026-03-09T20:48:02.465 INFO:tasks.workunit.client.1.vm10.stdout:9/767: creat d2/d3/db4/ddb/fff x:0 0 0 2026-03-09T20:48:02.468 INFO:tasks.workunit.client.1.vm10.stdout:7/725: dwrite db/d21/fbc [0,4194304] 0 2026-03-09T20:48:02.468 INFO:tasks.workunit.client.0.vm07.stdout:5/840: symlink d5/d33/db2/de8/dee/l121 0 2026-03-09T20:48:02.470 INFO:tasks.workunit.client.1.vm10.stdout:7/726: write db/d28/f41 [508292,5916] 0 2026-03-09T20:48:02.470 INFO:tasks.workunit.client.0.vm07.stdout:5/841: chown d5/df/d13/d6c/db1/fdf 92 1 2026-03-09T20:48:02.474 INFO:tasks.workunit.client.1.vm10.stdout:7/727: dwrite db/d46/f5a [0,4194304] 0 2026-03-09T20:48:02.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:02 vm07.local ceph-mon[49120]: pgmap v7: 65 pgs: 65 active+clean; 3.1 GiB data, 11 GiB used, 109 GiB / 120 GiB avail; 32 MiB/s rd, 62 MiB/s wr, 208 op/s 2026-03-09T20:48:02.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:02 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:02.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:02 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:02.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:02 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", 
"who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:48:02.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:02 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:02.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:02 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:02.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:02 vm07.local ceph-mon[49120]: Updating vm07:/etc/ceph/ceph.conf 2026-03-09T20:48:02.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:02 vm07.local ceph-mon[49120]: Updating vm10:/etc/ceph/ceph.conf 2026-03-09T20:48:02.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:02 vm07.local ceph-mon[49120]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:48:02.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.476+0000 7f0ccf5c7640 1 -- 192.168.123.107:0/3560397708 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0cc8071a50 msgr2=0x7f0cc8071e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.476+0000 7f0ccf5c7640 1 --2- 192.168.123.107:0/3560397708 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0cc8071a50 0x7f0cc8071e50 secure :-1 s=READY pgs=328 cs=0 l=1 rev1=1 crypto rx=0x7f0cbc00bb70 tx=0x7f0cbc030ff0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.476+0000 7f0ccf5c7640 1 -- 192.168.123.107:0/3560397708 shutdown_connections 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.476+0000 7f0ccf5c7640 1 --2- 192.168.123.107:0/3560397708 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0cc8072420 0x7f0cc8077190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.476+0000 7f0ccf5c7640 1 --2- 192.168.123.107:0/3560397708 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0cc8071a50 0x7f0cc8071e50 unknown :-1 s=CLOSED pgs=328 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.476+0000 7f0ccf5c7640 1 -- 192.168.123.107:0/3560397708 >> 192.168.123.107:0/3560397708 conn(0x7f0cc806d4f0 msgr2=0x7f0cc806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.476+0000 7f0ccf5c7640 1 -- 192.168.123.107:0/3560397708 shutdown_connections 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.476+0000 7f0ccf5c7640 1 -- 192.168.123.107:0/3560397708 wait complete. 
2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.477+0000 7f0ccf5c7640 1 Processor -- start 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.477+0000 7f0ccf5c7640 1 -- start start 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.477+0000 7f0ccf5c7640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0cc8072420 0x7f0cc8084030 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.477+0000 7f0ccf5c7640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0cc8082680 0x7f0cc8082b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.477+0000 7f0ccf5c7640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0cc8084570 con 0x7f0cc8082680 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.477+0000 7f0ccf5c7640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0cc8083040 con 0x7f0cc8072420 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.478+0000 7f0cce5c5640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0cc8072420 0x7f0cc8084030 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.478+0000 7f0cce5c5640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0cc8072420 0x7f0cc8084030 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:34656/0 (socket says 192.168.123.107:34656) 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.478+0000 7f0cce5c5640 1 -- 192.168.123.107:0/4212757325 learned_addr learned my addr 192.168.123.107:0/4212757325 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.479+0000 7f0cce5c5640 1 -- 192.168.123.107:0/4212757325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0cc8082680 msgr2=0x7f0cc8082b00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.479+0000 7f0cce5c5640 1 --2- 192.168.123.107:0/4212757325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0cc8082680 0x7f0cc8082b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.479+0000 7f0cce5c5640 1 -- 192.168.123.107:0/4212757325 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0cbc00b820 con 0x7f0cc8072420 2026-03-09T20:48:02.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.479+0000 7f0cce5c5640 1 --2- 192.168.123.107:0/4212757325 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0cc8072420 0x7f0cc8084030 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f0cbc004840 tx=0x7f0cbc004690 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:02.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.479+0000 7f0cbb7fe640 1 -- 192.168.123.107:0/4212757325 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0cbc0317d0 con 0x7f0cc8072420 2026-03-09T20:48:02.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.479+0000 7f0ccf5c7640 1 -- 
192.168.123.107:0/4212757325 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0cc80832c0 con 0x7f0cc8072420 2026-03-09T20:48:02.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.479+0000 7f0ccf5c7640 1 -- 192.168.123.107:0/4212757325 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0cc81be5d0 con 0x7f0cc8072420 2026-03-09T20:48:02.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.480+0000 7f0cbb7fe640 1 -- 192.168.123.107:0/4212757325 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0cbc031df0 con 0x7f0cc8072420 2026-03-09T20:48:02.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.480+0000 7f0cbb7fe640 1 -- 192.168.123.107:0/4212757325 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0cbc03acb0 con 0x7f0cc8072420 2026-03-09T20:48:02.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.481+0000 7f0cbb7fe640 1 -- 192.168.123.107:0/4212757325 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 30) v1 ==== 100078+0+0 (secure 0 0 0) 0x7f0cbc03a640 con 0x7f0cc8072420 2026-03-09T20:48:02.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.482+0000 7f0cbb7fe640 1 --2- 192.168.123.107:0/4212757325 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f0ca8077750 0x7f0ca8079c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:02.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.482+0000 7f0cbb7fe640 1 -- 192.168.123.107:0/4212757325 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6050+0+0 (secure 0 0 0) 0x7f0cbc0bdf90 con 0x7f0cc8072420 2026-03-09T20:48:02.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.482+0000 7f0cb97fa640 1 -- 192.168.123.107:0/4212757325 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0c98005350 con 0x7f0cc8072420 2026-03-09T20:48:02.489 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.485+0000 7f0cbb7fe640 1 -- 192.168.123.107:0/4212757325 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7f0cbc086a70 con 0x7f0cc8072420 2026-03-09T20:48:02.492 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.491+0000 7f0ccddc4640 1 --2- 192.168.123.107:0/4212757325 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f0ca8077750 0x7f0ca8079c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:02.496 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.495+0000 7f0ccddc4640 1 --2- 192.168.123.107:0/4212757325 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f0ca8077750 0x7f0ca8079c10 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f0cc40062a0 tx=0x7f0cc4006210 comp rx=0 tx=0).ready entity=mgr.24495 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:02.501 INFO:tasks.workunit.client.1.vm10.stdout:5/673: link d2/d27/d37/d46/d5d/fe2 d2/d39/dbf/d84/f100 0 2026-03-09T20:48:02.502 INFO:tasks.workunit.client.1.vm10.stdout:5/674: readlink d2/d1b/d54/d7b/l90 0 2026-03-09T20:48:02.512 INFO:tasks.workunit.client.1.vm10.stdout:3/686: creat dc/d14/d26/d29/fe7 x:0 0 0 2026-03-09T20:48:02.518 INFO:tasks.workunit.client.1.vm10.stdout:0/704: unlink d2/d9/da/d11/d92/dc1/lcc 0 2026-03-09T20:48:02.522 INFO:tasks.workunit.client.1.vm10.stdout:6/742: dwrite d3/d30/d7f/d36/f6e [0,4194304] 0 2026-03-09T20:48:02.527 INFO:tasks.workunit.client.1.vm10.stdout:6/743: dwrite d3/d30/d7f/d24/d39/d9e/fe4 [0,4194304] 0 2026-03-09T20:48:02.547 
INFO:tasks.workunit.client.1.vm10.stdout:4/693: chown d1/d2/d5c/fd6 296024992 1 2026-03-09T20:48:02.556 INFO:tasks.workunit.client.0.vm07.stdout:0/806: symlink d1/dc0/dcc/lff 0 2026-03-09T20:48:02.566 INFO:tasks.workunit.client.0.vm07.stdout:0/807: dwrite d1/dc0/dcc/dd9/ff9 [4194304,4194304] 0 2026-03-09T20:48:02.574 INFO:tasks.workunit.client.0.vm07.stdout:0/808: truncate d1/d2/d33/f4e 5188983 0 2026-03-09T20:48:02.583 INFO:tasks.workunit.client.0.vm07.stdout:4/711: dwrite d2/d55/d5d/d3f/d4a/d4b/d52/f82 [0,4194304] 0 2026-03-09T20:48:02.592 INFO:tasks.workunit.client.1.vm10.stdout:1/728: unlink d2/da/d25/d3e/f94 0 2026-03-09T20:48:02.593 INFO:tasks.workunit.client.1.vm10.stdout:1/729: write d2/da/d25/d46/d80/da0/d92/db5/dc7/fcf [642519,42804] 0 2026-03-09T20:48:02.596 INFO:tasks.workunit.client.1.vm10.stdout:9/768: dread - d2/d3/d6d/d88/fd4 zero size 2026-03-09T20:48:02.597 INFO:tasks.workunit.client.0.vm07.stdout:6/779: symlink d8/d16/d22/d24/da0/dab/d40/d69/lff 0 2026-03-09T20:48:02.597 INFO:tasks.workunit.client.1.vm10.stdout:9/769: write d2/d28/d47/d67/f81 [1802085,6397] 0 2026-03-09T20:48:02.611 INFO:tasks.workunit.client.0.vm07.stdout:3/765: dwrite d1/d5/d9/d11/d6d/dd0/f1a [4194304,4194304] 0 2026-03-09T20:48:02.637 INFO:tasks.workunit.client.1.vm10.stdout:7/728: dread db/d28/d2b/d36/f35 [0,4194304] 0 2026-03-09T20:48:02.639 INFO:tasks.workunit.client.1.vm10.stdout:5/675: fsync d2/d80/fa8 0 2026-03-09T20:48:02.670 INFO:tasks.workunit.client.0.vm07.stdout:7/834: dwrite d3/f18 [0,4194304] 0 2026-03-09T20:48:02.671 INFO:tasks.workunit.client.0.vm07.stdout:7/835: dread - d3/da4/df2/dff/f108 zero size 2026-03-09T20:48:02.673 INFO:tasks.workunit.client.0.vm07.stdout:7/836: truncate d3/da/db/d32/d3e/dac/d1f/d2b/f2c 4771257 0 2026-03-09T20:48:02.689 INFO:tasks.workunit.client.1.vm10.stdout:3/687: rename dc/d14/d26/c63 to dc/d9e/ce8 0 2026-03-09T20:48:02.697 INFO:tasks.workunit.client.1.vm10.stdout:0/705: stat d2/d4a/d58/d82/d71/caf 0 2026-03-09T20:48:02.705 
INFO:tasks.workunit.client.1.vm10.stdout:0/706: dread d2/d4a/fcf [0,4194304] 0 2026-03-09T20:48:02.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.717+0000 7f0cb97fa640 1 -- 192.168.123.107:0/4212757325 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f0c980058d0 con 0x7f0cc8072420 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.718+0000 7f0cbb7fe640 1 -- 192.168.123.107:0/4212757325 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+739 (secure 0 0 0) 0x7f0cbc0861c0 con 0x7f0cc8072420 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 2 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 6 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:48:02.719 
INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 12, 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:48:02.719 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:48:02.722 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 -- 192.168.123.107:0/4212757325 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f0ca8077750 msgr2=0x7f0ca8079c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 --2- 192.168.123.107:0/4212757325 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f0ca8077750 0x7f0ca8079c10 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f0cc40062a0 tx=0x7f0cc4006210 comp rx=0 tx=0).stop 2026-03-09T20:48:02.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 -- 192.168.123.107:0/4212757325 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0cc8072420 msgr2=0x7f0cc8084030 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 --2- 192.168.123.107:0/4212757325 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0cc8072420 0x7f0cc8084030 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f0cbc004840 tx=0x7f0cbc004690 comp rx=0 tx=0).stop 2026-03-09T20:48:02.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 -- 192.168.123.107:0/4212757325 shutdown_connections 
2026-03-09T20:48:02.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 --2- 192.168.123.107:0/4212757325 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f0ca8077750 0x7f0ca8079c10 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 --2- 192.168.123.107:0/4212757325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0cc8082680 0x7f0cc8082b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 --2- 192.168.123.107:0/4212757325 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0cc8072420 0x7f0cc8084030 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 -- 192.168.123.107:0/4212757325 >> 192.168.123.107:0/4212757325 conn(0x7f0cc806d4f0 msgr2=0x7f0cc807b340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:02.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 -- 192.168.123.107:0/4212757325 shutdown_connections 2026-03-09T20:48:02.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.721+0000 7f0cb97fa640 1 -- 192.168.123.107:0/4212757325 wait complete. 
2026-03-09T20:48:02.749 INFO:tasks.workunit.client.0.vm07.stdout:1/804: write d3/d23/fa8 [2129025,39207] 0 2026-03-09T20:48:02.752 INFO:tasks.workunit.client.1.vm10.stdout:8/787: mknod d0/d22/d25/cfb 0 2026-03-09T20:48:02.757 INFO:tasks.workunit.client.0.vm07.stdout:0/809: rename d1/d2/c2d to d1/df6/c100 0 2026-03-09T20:48:02.767 INFO:tasks.workunit.client.0.vm07.stdout:6/780: truncate d8/d16/d22/d24/da0/faf 673180 0 2026-03-09T20:48:02.767 INFO:tasks.workunit.client.0.vm07.stdout:6/781: readlink d8/le 0 2026-03-09T20:48:02.769 INFO:tasks.workunit.client.1.vm10.stdout:4/694: mknod d1/d2/d3/cdf 0 2026-03-09T20:48:02.776 INFO:tasks.workunit.client.1.vm10.stdout:1/730: truncate d2/f81 808094 0 2026-03-09T20:48:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:02 vm10.local ceph-mon[57011]: pgmap v7: 65 pgs: 65 active+clean; 3.1 GiB data, 11 GiB used, 109 GiB / 120 GiB avail; 32 MiB/s rd, 62 MiB/s wr, 208 op/s 2026-03-09T20:48:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:02 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:02 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:02 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:48:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:02 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:02 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:02 vm10.local ceph-mon[57011]: Updating vm07:/etc/ceph/ceph.conf 2026-03-09T20:48:02.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:02 vm10.local ceph-mon[57011]: Updating vm10:/etc/ceph/ceph.conf 2026-03-09T20:48:02.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:02 vm10.local ceph-mon[57011]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:48:02.809 INFO:tasks.workunit.client.0.vm07.stdout:3/766: dwrite d1/d5/d9/d11/f58 [0,4194304] 0 2026-03-09T20:48:02.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.817+0000 7f10770ba640 1 -- 192.168.123.107:0/790425890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1070071a70 msgr2=0x7f1070071e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.817+0000 7f10770ba640 1 --2- 192.168.123.107:0/790425890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1070071a70 0x7f1070071e70 secure :-1 s=READY pgs=329 cs=0 l=1 rev1=1 crypto rx=0x7f1060007920 tx=0x7f106002ffe0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.817+0000 7f10770ba640 1 -- 192.168.123.107:0/790425890 shutdown_connections 2026-03-09T20:48:02.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.817+0000 7f10770ba640 1 --2- 192.168.123.107:0/790425890 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1070072440 0x7f10700771b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.817+0000 7f10770ba640 1 --2- 192.168.123.107:0/790425890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1070071a70 0x7f1070071e70 unknown 
:-1 s=CLOSED pgs=329 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.817+0000 7f10770ba640 1 -- 192.168.123.107:0/790425890 >> 192.168.123.107:0/790425890 conn(0x7f107006d4f0 msgr2=0x7f107006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:02.819 INFO:tasks.workunit.client.1.vm10.stdout:9/770: unlink d2/d28/d47/d6a/ce5 0 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.817+0000 7f10770ba640 1 -- 192.168.123.107:0/790425890 shutdown_connections 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.817+0000 7f10770ba640 1 -- 192.168.123.107:0/790425890 wait complete. 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.820+0000 7f10770ba640 1 Processor -- start 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.820+0000 7f10770ba640 1 -- start start 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.820+0000 7f10770ba640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1070072440 0x7f10701319c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.820+0000 7f10770ba640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1070133370 0x7f1070131f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.820+0000 7f10770ba640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f10701324d0 con 0x7f1070133370 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.820+0000 7f10770ba640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f1070132640 con 0x7f1070072440 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.820+0000 7f1074e2f640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1070072440 0x7f10701319c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.820+0000 7f1074e2f640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1070072440 0x7f10701319c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:34680/0 (socket says 192.168.123.107:34680) 2026-03-09T20:48:02.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.820+0000 7f1074e2f640 1 -- 192.168.123.107:0/1088171383 learned_addr learned my addr 192.168.123.107:0/1088171383 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:02.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.821+0000 7f106ffff640 1 --2- 192.168.123.107:0/1088171383 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1070133370 0x7f1070131f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:02.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.821+0000 7f1074e2f640 1 -- 192.168.123.107:0/1088171383 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1070133370 msgr2=0x7f1070131f00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.821+0000 7f1074e2f640 1 --2- 192.168.123.107:0/1088171383 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1070133370 0x7f1070131f00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.821+0000 7f1074e2f640 1 -- 192.168.123.107:0/1088171383 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f10600075d0 con 0x7f1070072440 2026-03-09T20:48:02.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.821+0000 7f1074e2f640 1 --2- 192.168.123.107:0/1088171383 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1070072440 0x7f10701319c0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f1060002410 tx=0x7f1060030a70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:02.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.821+0000 7f106dffb640 1 -- 192.168.123.107:0/1088171383 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1060030df0 con 0x7f1070072440 2026-03-09T20:48:02.823 INFO:tasks.workunit.client.0.vm07.stdout:2/789: rmdir d2/db/d28/d57/df8/de0 0 2026-03-09T20:48:02.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.821+0000 7f10770ba640 1 -- 192.168.123.107:0/1088171383 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f107007fb70 con 0x7f1070072440 2026-03-09T20:48:02.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.821+0000 7f10770ba640 1 -- 192.168.123.107:0/1088171383 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1070080060 con 0x7f1070072440 2026-03-09T20:48:02.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.822+0000 7f106dffb640 1 -- 192.168.123.107:0/1088171383 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1060002d70 con 0x7f1070072440 2026-03-09T20:48:02.823 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.822+0000 7f106dffb640 1 -- 192.168.123.107:0/1088171383 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1060038780 con 0x7f1070072440 2026-03-09T20:48:02.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.824+0000 7f106dffb640 1 -- 192.168.123.107:0/1088171383 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 30) v1 ==== 100078+0+0 (secure 0 0 0) 0x7f10600388e0 con 0x7f1070072440 2026-03-09T20:48:02.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.824+0000 7f106dffb640 1 --2- 192.168.123.107:0/1088171383 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1050077680 0x7f1050079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:02.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.824+0000 7f106dffb640 1 -- 192.168.123.107:0/1088171383 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6050+0+0 (secure 0 0 0) 0x7f1060048030 con 0x7f1070072440 2026-03-09T20:48:02.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.825+0000 7f106ffff640 1 --2- 192.168.123.107:0/1088171383 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1050077680 0x7f1050079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:02.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.824+0000 7f10770ba640 1 -- 192.168.123.107:0/1088171383 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1070132800 con 0x7f1070072440 2026-03-09T20:48:02.826 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.825+0000 7f106ffff640 1 --2- 192.168.123.107:0/1088171383 >> 
[v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1050077680 0x7f1050079b40 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f10701330f0 tx=0x7f1068002a60 comp rx=0 tx=0).ready entity=mgr.24495 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:02.829 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.828+0000 7f106dffb640 1 -- 192.168.123.107:0/1088171383 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7f1060087a30 con 0x7f1070072440 2026-03-09T20:48:02.843 INFO:tasks.workunit.client.0.vm07.stdout:9/749: rmdir d4/de5 0 2026-03-09T20:48:02.849 INFO:tasks.workunit.client.1.vm10.stdout:7/729: dwrite db/d28/d2b/d36/f35 [4194304,4194304] 0 2026-03-09T20:48:02.853 INFO:tasks.workunit.client.1.vm10.stdout:3/688: rmdir dc/d14/d26/d29/d40/da8/d69 39 2026-03-09T20:48:02.861 INFO:tasks.workunit.client.1.vm10.stdout:5/676: dwrite d2/d27/d37/f38 [0,4194304] 0 2026-03-09T20:48:02.861 INFO:tasks.workunit.client.1.vm10.stdout:5/677: readlink d2/d58/lb0 0 2026-03-09T20:48:02.861 INFO:tasks.workunit.client.0.vm07.stdout:1/805: creat d3/d23/d67/d8a/f10c x:0 0 0 2026-03-09T20:48:02.861 INFO:tasks.workunit.client.0.vm07.stdout:1/806: chown d3/d97/da1/ddd 690190010 1 2026-03-09T20:48:02.869 INFO:tasks.workunit.client.1.vm10.stdout:0/707: mknod d2/d4a/d58/d82/d71/cfa 0 2026-03-09T20:48:02.897 INFO:tasks.workunit.client.0.vm07.stdout:4/712: fsync d2/d55/d5d/f6f 0 2026-03-09T20:48:02.918 INFO:tasks.workunit.client.0.vm07.stdout:0/810: dwrite d1/f57 [0,4194304] 0 2026-03-09T20:48:02.948 INFO:tasks.workunit.client.0.vm07.stdout:6/782: dwrite d8/d16/d4b/d88/dc3/dd5/fd3 [0,4194304] 0 2026-03-09T20:48:02.970 INFO:tasks.workunit.client.0.vm07.stdout:9/750: mknod d4/d8/db9/c109 0 2026-03-09T20:48:02.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.972+0000 7f10770ba640 1 -- 192.168.123.107:0/1088171383 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f107007a6b0 con 0x7f1070072440 2026-03-09T20:48:02.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.972+0000 7f106dffb640 1 -- 192.168.123.107:0/1088171383 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1867 (secure 0 0 0) 0x7f1060087180 con 0x7f1070072440 2026-03-09T20:48:02.973 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:48:02.974 
INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr 
[v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:48:02.974 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:48:02.977 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 -- 192.168.123.107:0/1088171383 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1050077680 msgr2=0x7f1050079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.977 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 --2- 192.168.123.107:0/1088171383 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1050077680 0x7f1050079b40 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f10701330f0 tx=0x7f1068002a60 comp rx=0 tx=0).stop 2026-03-09T20:48:02.977 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 -- 192.168.123.107:0/1088171383 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1070072440 msgr2=0x7f10701319c0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:02.977 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 --2- 192.168.123.107:0/1088171383 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1070072440 0x7f10701319c0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f1060002410 tx=0x7f1060030a70 comp rx=0 tx=0).stop 2026-03-09T20:48:02.977 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 -- 192.168.123.107:0/1088171383 shutdown_connections 2026-03-09T20:48:02.977 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 --2- 192.168.123.107:0/1088171383 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f1050077680 0x7f1050079b40 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.977 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 --2- 192.168.123.107:0/1088171383 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1070133370 0x7f1070131f00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.977 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 --2- 192.168.123.107:0/1088171383 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f1070072440 0x7f10701319c0 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:02.977 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 -- 192.168.123.107:0/1088171383 >> 192.168.123.107:0/1088171383 conn(0x7f107006d4f0 msgr2=0x7f10700753f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:02.977 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 -- 192.168.123.107:0/1088171383 shutdown_connections 2026-03-09T20:48:02.977 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:02.976+0000 7f103f7fe640 1 -- 192.168.123.107:0/1088171383 wait complete. 2026-03-09T20:48:02.993 INFO:tasks.workunit.client.0.vm07.stdout:1/807: dwrite d3/d9c/fd2 [0,4194304] 0 2026-03-09T20:48:03.029 INFO:tasks.workunit.client.0.vm07.stdout:8/714: creat d1/d3b/fe3 x:0 0 0 2026-03-09T20:48:03.035 INFO:tasks.workunit.client.0.vm07.stdout:5/842: getdents d5/df/d13/d4f/d101/d10b 0 2026-03-09T20:48:03.056 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.054+0000 7f63fd5c7640 1 -- 192.168.123.107:0/1755120450 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63f8071a70 msgr2=0x7f63f8071e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:03.056 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.054+0000 7f63fd5c7640 1 --2- 192.168.123.107:0/1755120450 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63f8071a70 0x7f63f8071e70 secure :-1 s=READY pgs=330 cs=0 l=1 rev1=1 crypto rx=0x7f63e800bb70 tx=0x7f63e8030ff0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.056 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.054+0000 7f63fd5c7640 1 -- 192.168.123.107:0/1755120450 shutdown_connections 2026-03-09T20:48:03.056 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.054+0000 7f63fd5c7640 1 --2- 192.168.123.107:0/1755120450 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63f8072440 0x7f63f80771b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.056 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.054+0000 7f63fd5c7640 1 --2- 192.168.123.107:0/1755120450 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63f8071a70 0x7f63f8071e70 unknown :-1 s=CLOSED pgs=330 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.056 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.054+0000 7f63fd5c7640 1 -- 
192.168.123.107:0/1755120450 >> 192.168.123.107:0/1755120450 conn(0x7f63f806d4f0 msgr2=0x7f63f806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:03.056 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63fd5c7640 1 -- 192.168.123.107:0/1755120450 shutdown_connections 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63fd5c7640 1 -- 192.168.123.107:0/1755120450 wait complete. 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63fd5c7640 1 Processor -- start 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63fd5c7640 1 -- start start 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63fd5c7640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63f8072440 0x7f63f8084060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63fd5c7640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63f80826b0 0x7f63f8082b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63fd5c7640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f63f80845a0 con 0x7f63f8072440 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63fd5c7640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f63f8083070 con 0x7f63f80826b0 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63f67fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63f80826b0 0x7f63f8082b30 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63f67fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63f80826b0 0x7f63f8082b30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:34700/0 (socket says 192.168.123.107:34700) 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.055+0000 7f63f67fc640 1 -- 192.168.123.107:0/3344251563 learned_addr learned my addr 192.168.123.107:0/3344251563 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:03.057 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.056+0000 7f63f6ffd640 1 --2- 192.168.123.107:0/3344251563 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63f8072440 0x7f63f8084060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:03.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.058+0000 7f63f67fc640 1 -- 192.168.123.107:0/3344251563 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63f8072440 msgr2=0x7f63f8084060 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:03.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.058+0000 7f63f67fc640 1 --2- 192.168.123.107:0/3344251563 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63f8072440 0x7f63f8084060 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.058+0000 7f63f67fc640 1 -- 192.168.123.107:0/3344251563 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f63e800b820 con 0x7f63f80826b0 2026-03-09T20:48:03.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.058+0000 7f63f67fc640 1 --2- 192.168.123.107:0/3344251563 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63f80826b0 0x7f63f8082b30 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7f63f000ea70 tx=0x7f63f000ef40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:03.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.058+0000 7f63d7fff640 1 -- 192.168.123.107:0/3344251563 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f63f0002c10 con 0x7f63f80826b0 2026-03-09T20:48:03.060 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.058+0000 7f63fd5c7640 1 -- 192.168.123.107:0/3344251563 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f63f8083350 con 0x7f63f80826b0 2026-03-09T20:48:03.060 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.058+0000 7f63fd5c7640 1 -- 192.168.123.107:0/3344251563 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f63f812ef70 con 0x7f63f80826b0 2026-03-09T20:48:03.061 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.059+0000 7f63d7fff640 1 -- 192.168.123.107:0/3344251563 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f63f000be10 con 0x7f63f80826b0 2026-03-09T20:48:03.061 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.059+0000 7f63fd5c7640 1 -- 192.168.123.107:0/3344251563 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f63f807a920 con 0x7f63f80826b0 2026-03-09T20:48:03.061 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.059+0000 7f63d7fff640 1 -- 192.168.123.107:0/3344251563 
<== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f63f00082c0 con 0x7f63f80826b0 2026-03-09T20:48:03.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.060+0000 7f63d7fff640 1 -- 192.168.123.107:0/3344251563 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 30) v1 ==== 100078+0+0 (secure 0 0 0) 0x7f63f0007a00 con 0x7f63f80826b0 2026-03-09T20:48:03.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.061+0000 7f63d7fff640 1 --2- 192.168.123.107:0/3344251563 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f63d8077630 0x7f63d8079af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:03.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.061+0000 7f63d7fff640 1 -- 192.168.123.107:0/3344251563 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6050+0+0 (secure 0 0 0) 0x7f63f009a540 con 0x7f63f80826b0 2026-03-09T20:48:03.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.062+0000 7f63f6ffd640 1 --2- 192.168.123.107:0/3344251563 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f63d8077630 0x7f63d8079af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:03.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.063+0000 7f63d7fff640 1 -- 192.168.123.107:0/3344251563 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7f63f0062f10 con 0x7f63f80826b0 2026-03-09T20:48:03.073 INFO:tasks.workunit.client.0.vm07.stdout:9/751: fsync d4/d16/f27 0 2026-03-09T20:48:03.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.074+0000 7f63f6ffd640 1 --2- 192.168.123.107:0/3344251563 >> 
[v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f63d8077630 0x7f63d8079af0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f63e800bb70 tx=0x7f63e80023d0 comp rx=0 tx=0).ready entity=mgr.24495 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:03.076 INFO:tasks.workunit.client.0.vm07.stdout:9/752: dread d4/d8/dc/ff [0,4194304] 0 2026-03-09T20:48:03.093 INFO:tasks.workunit.client.1.vm10.stdout:8/788: rename d0/d22/d25/d2e/d41/d47 to d0/d22/d25/d2e/d41/de9/dfc 0 2026-03-09T20:48:03.094 INFO:tasks.workunit.client.1.vm10.stdout:8/789: chown d0/d22/d2f/dd0/le4 13 1 2026-03-09T20:48:03.105 INFO:tasks.workunit.client.1.vm10.stdout:6/744: fsync d3/f52 0 2026-03-09T20:48:03.108 INFO:tasks.workunit.client.0.vm07.stdout:1/808: dwrite d3/d97/da1/dc5/fc3 [0,4194304] 0 2026-03-09T20:48:03.123 INFO:tasks.workunit.client.1.vm10.stdout:2/725: rmdir d5/d18/d27/d89/db6/d41/df1 0 2026-03-09T20:48:03.124 INFO:tasks.workunit.client.1.vm10.stdout:2/726: truncate d5/d18/d27/d38/d61/dc8/ddb/dea/fef 72990 0 2026-03-09T20:48:03.128 INFO:tasks.workunit.client.1.vm10.stdout:1/731: unlink d2/da/d25/l49 0 2026-03-09T20:48:03.133 INFO:tasks.workunit.client.0.vm07.stdout:3/767: link d1/d5/d9/d11/d6d/dd0/d59/fd1 d1/d5/d9/d2f/d3d/d71/dcc/ff9 0 2026-03-09T20:48:03.138 INFO:tasks.workunit.client.1.vm10.stdout:7/730: truncate db/d46/f85 4459327 0 2026-03-09T20:48:03.140 INFO:tasks.workunit.client.0.vm07.stdout:7/837: getdents d3/da/db/d32 0 2026-03-09T20:48:03.142 INFO:tasks.workunit.client.0.vm07.stdout:4/713: truncate d2/d55/f71 4436284 0 2026-03-09T20:48:03.142 INFO:tasks.workunit.client.0.vm07.stdout:4/714: chown d2/l22 25593816 1 2026-03-09T20:48:03.144 INFO:tasks.workunit.client.1.vm10.stdout:3/689: creat dc/d14/d20/d21/fe9 x:0 0 0 2026-03-09T20:48:03.149 INFO:tasks.workunit.client.0.vm07.stdout:9/753: creat d4/d8/db9/f10a x:0 0 0 2026-03-09T20:48:03.151 INFO:tasks.workunit.client.1.vm10.stdout:9/771: dread d2/d12/f2a [0,4194304] 0 
2026-03-09T20:48:03.154 INFO:tasks.workunit.client.1.vm10.stdout:5/678: rename d2/d27/d37/dc8/fe8 to d2/d27/d37/dc8/da1/f101 0 2026-03-09T20:48:03.155 INFO:tasks.workunit.client.1.vm10.stdout:5/679: chown d2/d39/d4b/d7a/dd9 0 1 2026-03-09T20:48:03.171 INFO:tasks.workunit.client.0.vm07.stdout:1/809: mknod d3/d97/da1/dc5/d90/dd3/c10d 0 2026-03-09T20:48:03.181 INFO:tasks.workunit.client.1.vm10.stdout:8/790: dwrite d0/d92/de8/fa5 [0,4194304] 0 2026-03-09T20:48:03.191 INFO:tasks.workunit.client.1.vm10.stdout:6/745: fsync d3/d30/d7f/d36/d5c/dad/fce 0 2026-03-09T20:48:03.206 INFO:tasks.workunit.client.1.vm10.stdout:1/732: creat d2/da/d25/d46/d80/da0/fec x:0 0 0 2026-03-09T20:48:03.207 INFO:tasks.workunit.client.1.vm10.stdout:7/731: stat db/d21/f81 0 2026-03-09T20:48:03.211 INFO:tasks.workunit.client.1.vm10.stdout:7/732: dread db/d28/f31 [0,4194304] 0 2026-03-09T20:48:03.218 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.217+0000 7f63fd5c7640 1 -- 192.168.123.107:0/3344251563 --> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f63f807ba90 con 0x7f63d8077630 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.220+0000 7f63d7fff640 1 -- 192.168.123.107:0/3344251563 <== mgr.24495 v2:192.168.123.107:6800/39551776 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+337 (secure 0 0 0) 0x7f63f807ba90 con 0x7f63d8077630 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:48:03.221 
INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stdout: "mgr" 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "2/23 daemons upgraded", 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stdout: "message": "", 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:48:03.221 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:48:03.223 INFO:tasks.workunit.client.1.vm10.stdout:3/690: write dc/d14/d26/f65 [675624,125808] 0 2026-03-09T20:48:03.223 INFO:tasks.workunit.client.1.vm10.stdout:3/691: chown l8 3 1 2026-03-09T20:48:03.224 INFO:tasks.workunit.client.1.vm10.stdout:9/772: mknod d2/d33/dcf/c100 0 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 -- 192.168.123.107:0/3344251563 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f63d8077630 msgr2=0x7f63d8079af0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 --2- 192.168.123.107:0/3344251563 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f63d8077630 0x7f63d8079af0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f63e800bb70 tx=0x7f63e80023d0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 -- 192.168.123.107:0/3344251563 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63f80826b0 msgr2=0x7f63f8082b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 --2- 192.168.123.107:0/3344251563 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63f80826b0 0x7f63f8082b30 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7f63f000ea70 tx=0x7f63f000ef40 comp rx=0 tx=0).stop 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 -- 192.168.123.107:0/3344251563 shutdown_connections 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 --2- 192.168.123.107:0/3344251563 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7f63d8077630 0x7f63d8079af0 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 --2- 192.168.123.107:0/3344251563 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63f80826b0 0x7f63f8082b30 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 --2- 192.168.123.107:0/3344251563 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63f8072440 0x7f63f8084060 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 -- 192.168.123.107:0/3344251563 >> 192.168.123.107:0/3344251563 conn(0x7f63f806d4f0 msgr2=0x7f63f807b380 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 -- 192.168.123.107:0/3344251563 shutdown_connections 2026-03-09T20:48:03.226 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.225+0000 7f63d5ffb640 1 -- 192.168.123.107:0/3344251563 wait complete. 
2026-03-09T20:48:03.236 INFO:tasks.workunit.client.1.vm10.stdout:0/708: rename d2/f65 to d2/d4a/d58/d82/d60/ffb 0 2026-03-09T20:48:03.248 INFO:tasks.workunit.client.1.vm10.stdout:5/680: symlink d2/d27/d37/d46/d5d/d6d/l102 0 2026-03-09T20:48:03.264 INFO:tasks.workunit.client.1.vm10.stdout:8/791: mknod d0/d22/d25/d2e/cfd 0 2026-03-09T20:48:03.278 INFO:tasks.workunit.client.1.vm10.stdout:2/727: symlink d5/d18/d27/lf3 0 2026-03-09T20:48:03.287 INFO:tasks.workunit.client.1.vm10.stdout:7/733: truncate db/d1f/f5f 4654713 0 2026-03-09T20:48:03.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 -- 192.168.123.107:0/3265006114 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcffc072440 msgr2=0x7fcffc0771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:03.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 --2- 192.168.123.107:0/3265006114 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcffc072440 0x7fcffc0771b0 secure :-1 s=READY pgs=331 cs=0 l=1 rev1=1 crypto rx=0x7fcff4009040 tx=0x7fcff402fc10 comp rx=0 tx=0).stop 2026-03-09T20:48:03.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 -- 192.168.123.107:0/3265006114 shutdown_connections 2026-03-09T20:48:03.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 --2- 192.168.123.107:0/3265006114 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcffc072440 0x7fcffc0771b0 unknown :-1 s=CLOSED pgs=331 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 --2- 192.168.123.107:0/3265006114 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcffc071a70 0x7fcffc071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.288 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 -- 192.168.123.107:0/3265006114 >> 192.168.123.107:0/3265006114 conn(0x7fcffc06d4f0 msgr2=0x7fcffc06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 -- 192.168.123.107:0/3265006114 shutdown_connections 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 -- 192.168.123.107:0/3265006114 wait complete. 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 Processor -- start 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 -- start start 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcffc071a70 0x7fcffc084090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcffc0826e0 0x7fcffc082b60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcffc0845d0 con 0x7fcffc0826e0 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.287+0000 7fd002ef8640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcffc0830a0 con 0x7fcffc071a70 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fd000c6d640 1 --2- >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcffc071a70 0x7fcffc084090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fcffbfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcffc0826e0 0x7fcffc082b60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fcffbfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcffc0826e0 0x7fcffc082b60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:60984/0 (socket says 192.168.123.107:60984) 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fcffbfff640 1 -- 192.168.123.107:0/1069690559 learned_addr learned my addr 192.168.123.107:0/1069690559 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fd000c6d640 1 -- 192.168.123.107:0/1069690559 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcffc0826e0 msgr2=0x7fcffc082b60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fd000c6d640 1 --2- 192.168.123.107:0/1069690559 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcffc0826e0 0x7fcffc082b60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fd000c6d640 1 -- 192.168.123.107:0/1069690559 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcff4008cf0 con 0x7fcffc071a70 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fd000c6d640 1 --2- 192.168.123.107:0/1069690559 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcffc071a70 0x7fcffc084090 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fcfec009800 tx=0x7fcfec009cd0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:03.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fcff9ffb640 1 -- 192.168.123.107:0/1069690559 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcfec010040 con 0x7fcffc071a70 2026-03-09T20:48:03.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fd002ef8640 1 -- 192.168.123.107:0/1069690559 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcffc083380 con 0x7fcffc071a70 2026-03-09T20:48:03.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.288+0000 7fd002ef8640 1 -- 192.168.123.107:0/1069690559 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcffc12ef70 con 0x7fcffc071a70 2026-03-09T20:48:03.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.289+0000 7fcff9ffb640 1 -- 192.168.123.107:0/1069690559 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fcfec00ecf0 con 0x7fcffc071a70 2026-03-09T20:48:03.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.289+0000 7fcff9ffb640 1 -- 192.168.123.107:0/1069690559 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcfec002c00 con 0x7fcffc071a70 2026-03-09T20:48:03.293 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.292+0000 7fcff9ffb640 1 -- 
192.168.123.107:0/1069690559 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 30) v1 ==== 100078+0+0 (secure 0 0 0) 0x7fcfec004410 con 0x7fcffc071a70 2026-03-09T20:48:03.293 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.292+0000 7fcff9ffb640 1 --2- 192.168.123.107:0/1069690559 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fcfe8077750 0x7fcfe8079c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:03.293 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.292+0000 7fcff9ffb640 1 -- 192.168.123.107:0/1069690559 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6050+0+0 (secure 0 0 0) 0x7fcfec098cf0 con 0x7fcffc071a70 2026-03-09T20:48:03.293 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.292+0000 7fd002ef8640 1 -- 192.168.123.107:0/1069690559 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcffc072440 con 0x7fcffc071a70 2026-03-09T20:48:03.293 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.293+0000 7fcffbfff640 1 --2- 192.168.123.107:0/1069690559 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fcfe8077750 0x7fcfe8079c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:03.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.293+0000 7fcffbfff640 1 --2- 192.168.123.107:0/1069690559 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fcfe8077750 0x7fcfe8079c10 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7fcff4002790 tx=0x7fcff4007660 comp rx=0 tx=0).ready entity=mgr.24495 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:03.298 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.297+0000 7fcff9ffb640 1 -- 
192.168.123.107:0/1069690559 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+193909 (secure 0 0 0) 0x7fcfec061770 con 0x7fcffc071a70 2026-03-09T20:48:03.309 INFO:tasks.workunit.client.1.vm10.stdout:3/692: truncate dc/d14/d26/d29/d40/d8c/d9c/fb6 4465404 0 2026-03-09T20:48:03.314 INFO:tasks.workunit.client.1.vm10.stdout:4/695: rename d1/f94 to d1/d2/d3/d70/d78/d86/fe0 0 2026-03-09T20:48:03.316 INFO:tasks.workunit.client.1.vm10.stdout:8/792: mkdir d0/d22/d25/d8f/dfe 0 2026-03-09T20:48:03.328 INFO:tasks.workunit.client.1.vm10.stdout:9/773: write d2/d3/d6d/d88/fd4 [988465,116114] 0 2026-03-09T20:48:03.331 INFO:tasks.workunit.client.1.vm10.stdout:0/709: write d2/d9/da/d11/f42 [893442,99100] 0 2026-03-09T20:48:03.334 INFO:tasks.workunit.client.1.vm10.stdout:0/710: truncate d2/d9/da/d11/dd1/d34/f77 864730 0 2026-03-09T20:48:03.352 INFO:tasks.workunit.client.0.vm07.stdout:5/843: rename d5/df/d13/d6c/c102 to d5/d19/d73/d94/c122 0 2026-03-09T20:48:03.372 INFO:tasks.workunit.client.1.vm10.stdout:1/733: symlink d2/da/dbc/dea/led 0 2026-03-09T20:48:03.382 INFO:tasks.workunit.client.1.vm10.stdout:7/734: symlink db/d21/le2 0 2026-03-09T20:48:03.383 INFO:tasks.workunit.client.1.vm10.stdout:7/735: write db/d28/d2b/d36/d3f/fae [4211873,39294] 0 2026-03-09T20:48:03.385 INFO:tasks.workunit.client.0.vm07.stdout:0/811: dwrite d1/d2/d33/fb5 [0,4194304] 0 2026-03-09T20:48:03.428 INFO:tasks.workunit.client.0.vm07.stdout:6/783: creat d8/d16/d22/d24/da0/dab/d40/f100 x:0 0 0 2026-03-09T20:48:03.446 INFO:tasks.workunit.client.0.vm07.stdout:2/790: creat d2/d11/ddb/d6e/dbe/d96/dcc/ffc x:0 0 0 2026-03-09T20:48:03.453 INFO:tasks.workunit.client.1.vm10.stdout:3/693: creat dc/d14/d20/d21/d3b/fea x:0 0 0 2026-03-09T20:48:03.453 INFO:tasks.workunit.client.1.vm10.stdout:3/694: chown dc/d14/d26/d29/d40/da8/c5f 7 1 2026-03-09T20:48:03.468 INFO:tasks.workunit.client.0.vm07.stdout:4/715: dread d2/d55/d5d/d3f/f68 [0,4194304] 0 
2026-03-09T20:48:03.483 INFO:tasks.workunit.client.0.vm07.stdout:0/812: sync 2026-03-09T20:48:03.490 INFO:tasks.workunit.client.0.vm07.stdout:1/810: rmdir d3/d23/d52 39 2026-03-09T20:48:03.496 INFO:tasks.workunit.client.0.vm07.stdout:0/813: dread d1/dc0/dcc/dd9/ff9 [4194304,4194304] 0 2026-03-09T20:48:03.505 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:48:03.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.500+0000 7fd002ef8640 1 -- 192.168.123.107:0/1069690559 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fcffc07a060 con 0x7fcffc071a70 2026-03-09T20:48:03.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.501+0000 7fcff9ffb640 1 -- 192.168.123.107:0/1069690559 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7fcfec01a090 con 0x7fcffc071a70 2026-03-09T20:48:03.506 INFO:tasks.workunit.client.1.vm10.stdout:8/793: fdatasync d0/d22/f29 0 2026-03-09T20:48:03.507 INFO:tasks.workunit.client.1.vm10.stdout:8/794: chown d0/d22/d25/d2e/dfa 29 1 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 -- 192.168.123.107:0/1069690559 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fcfe8077750 msgr2=0x7fcfe8079c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 --2- 192.168.123.107:0/1069690559 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fcfe8077750 0x7fcfe8079c10 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7fcff4002790 tx=0x7fcff4007660 comp rx=0 tx=0).stop 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 -- 192.168.123.107:0/1069690559 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcffc071a70 msgr2=0x7fcffc084090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 --2- 192.168.123.107:0/1069690559 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcffc071a70 0x7fcffc084090 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fcfec009800 tx=0x7fcfec009cd0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 -- 192.168.123.107:0/1069690559 shutdown_connections 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 --2- 192.168.123.107:0/1069690559 >> [v2:192.168.123.107:6800/39551776,v1:192.168.123.107:6801/39551776] conn(0x7fcfe8077750 0x7fcfe8079c10 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 --2- 192.168.123.107:0/1069690559 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcffc0826e0 0x7fcffc082b60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 --2- 192.168.123.107:0/1069690559 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcffc071a70 0x7fcffc084090 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 -- 192.168.123.107:0/1069690559 >> 192.168.123.107:0/1069690559 conn(0x7fcffc06d4f0 msgr2=0x7fcffc073150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 -- 
192.168.123.107:0/1069690559 shutdown_connections 2026-03-09T20:48:03.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:03.506+0000 7fcfdb7fe640 1 -- 192.168.123.107:0/1069690559 wait complete. 2026-03-09T20:48:03.511 INFO:tasks.workunit.client.0.vm07.stdout:3/768: write d1/d5/d9/d11/d1f/f7f [2333130,115967] 0 2026-03-09T20:48:03.524 INFO:tasks.workunit.client.0.vm07.stdout:8/715: link d1/dc/d16/f8d d1/dc/d16/d26/d94/fe4 0 2026-03-09T20:48:03.525 INFO:tasks.workunit.client.0.vm07.stdout:8/716: dread - d1/db0/fe0 zero size 2026-03-09T20:48:03.526 INFO:tasks.workunit.client.0.vm07.stdout:9/754: rename d4/d16/d29/d9c/fc9 to d4/d16/d78/dc4/f10b 0 2026-03-09T20:48:03.526 INFO:tasks.workunit.client.1.vm10.stdout:6/746: link d3/d30/d7f/d36/d6d/d8c/lba d3/d30/d7f/d51/le6 0 2026-03-09T20:48:03.527 INFO:tasks.workunit.client.0.vm07.stdout:5/844: creat d5/d33/d39/f123 x:0 0 0 2026-03-09T20:48:03.529 INFO:tasks.workunit.client.1.vm10.stdout:1/734: chown d2/da/d25/d46/d51/d5d/f67 513199 1 2026-03-09T20:48:03.530 INFO:tasks.workunit.client.1.vm10.stdout:6/747: dread d3/da/d11/f1d [4194304,4194304] 0 2026-03-09T20:48:03.531 INFO:tasks.workunit.client.0.vm07.stdout:6/784: chown d8/d16/d22/d24/da0/dab/f41 10351676 1 2026-03-09T20:48:03.535 INFO:tasks.workunit.client.0.vm07.stdout:5/845: dwrite d5/d33/db2/de8/f11d [4194304,4194304] 0 2026-03-09T20:48:03.564 INFO:tasks.workunit.client.0.vm07.stdout:7/838: mknod d3/da/db/d32/d3e/c119 0 2026-03-09T20:48:03.570 INFO:tasks.workunit.client.1.vm10.stdout:7/736: dwrite db/d28/d2b/f51 [0,4194304] 0 2026-03-09T20:48:03.577 INFO:tasks.workunit.client.0.vm07.stdout:7/839: sync 2026-03-09T20:48:03.584 INFO:tasks.workunit.client.1.vm10.stdout:5/681: rename d2/fff to d2/d39/f103 0 2026-03-09T20:48:03.586 INFO:tasks.workunit.client.0.vm07.stdout:4/716: dread d2/df/d17/f63 [0,4194304] 0 2026-03-09T20:48:03.586 INFO:tasks.workunit.client.1.vm10.stdout:8/795: stat d0/c7 0 2026-03-09T20:48:03.593 
INFO:tasks.workunit.client.0.vm07.stdout:3/769: fdatasync d1/d5/d9/d2f/d66/fa4 0 2026-03-09T20:48:03.595 INFO:tasks.workunit.client.0.vm07.stdout:4/717: dwrite d2/d1f/f26 [4194304,4194304] 0 2026-03-09T20:48:03.602 INFO:tasks.workunit.client.1.vm10.stdout:2/728: link d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/ld5 d5/d18/d27/db4/lf4 0 2026-03-09T20:48:03.608 INFO:tasks.workunit.client.0.vm07.stdout:7/840: fsync d3/da/db/d79/fd1 0 2026-03-09T20:48:03.608 INFO:tasks.workunit.client.0.vm07.stdout:1/811: write d3/d97/da1/dc5/d60/f7c [1703334,90676] 0 2026-03-09T20:48:03.609 INFO:tasks.workunit.client.1.vm10.stdout:7/737: creat db/d28/d86/fe3 x:0 0 0 2026-03-09T20:48:03.616 INFO:tasks.workunit.client.1.vm10.stdout:9/774: rename d2/d12/f65 to d2/d28/d47/d67/f101 0 2026-03-09T20:48:03.620 INFO:tasks.workunit.client.1.vm10.stdout:4/696: mknod d1/d2/d5c/d64/d61/ce1 0 2026-03-09T20:48:03.633 INFO:tasks.workunit.client.1.vm10.stdout:8/796: creat d0/d92/de8/d64/d7f/fff x:0 0 0 2026-03-09T20:48:03.635 INFO:tasks.workunit.client.0.vm07.stdout:8/717: dread d1/dc/d16/f4b [0,4194304] 0 2026-03-09T20:48:03.642 INFO:tasks.workunit.client.1.vm10.stdout:0/711: creat d2/d9/da/d11/ffc x:0 0 0 2026-03-09T20:48:03.642 INFO:tasks.workunit.client.0.vm07.stdout:3/770: unlink d1/d5/d9/d2f/d34/c4c 0 2026-03-09T20:48:03.643 INFO:tasks.workunit.client.0.vm07.stdout:0/814: dread d1/d2/dc/fe2 [0,4194304] 0 2026-03-09T20:48:03.648 INFO:tasks.workunit.client.0.vm07.stdout:4/718: dread d2/df/d59/d8a/d9d/fa8 [0,4194304] 0 2026-03-09T20:48:03.652 INFO:tasks.workunit.client.1.vm10.stdout:2/729: fdatasync d5/d18/d27/d89/db6/d41/d77/f85 0 2026-03-09T20:48:03.656 INFO:tasks.workunit.client.0.vm07.stdout:6/785: symlink d8/d16/d22/d24/da0/dab/d40/d69/dfb/l101 0 2026-03-09T20:48:03.658 INFO:tasks.workunit.client.0.vm07.stdout:9/755: dwrite d4/f5 [0,4194304] 0 2026-03-09T20:48:03.660 INFO:tasks.workunit.client.1.vm10.stdout:8/797: sync 2026-03-09T20:48:03.676 INFO:tasks.workunit.client.0.vm07.stdout:2/791: 
creat d2/d11/ffd x:0 0 0 2026-03-09T20:48:03.676 INFO:tasks.workunit.client.0.vm07.stdout:2/792: chown d2/db/d49/d7d 31424 1 2026-03-09T20:48:03.677 INFO:tasks.workunit.client.1.vm10.stdout:3/695: write dc/d14/d27/fa7 [5087794,114447] 0 2026-03-09T20:48:03.681 INFO:tasks.workunit.client.1.vm10.stdout:6/748: creat d3/d30/d6a/dd6/fe7 x:0 0 0 2026-03-09T20:48:03.720 INFO:tasks.workunit.client.0.vm07.stdout:1/812: fdatasync d3/d97/da1/dab/de2/ff8 0 2026-03-09T20:48:03.720 INFO:tasks.workunit.client.1.vm10.stdout:4/697: stat d1/l76 0 2026-03-09T20:48:03.724 INFO:tasks.workunit.client.0.vm07.stdout:7/841: creat d3/d58/d77/f11a x:0 0 0 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: from='client.24521 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: from='client.24525 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: Updating 
vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: from='client.24529 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: from='client.? 192.168.123.107:0/4212757325' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:03.726 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:03 vm07.local ceph-mon[49120]: from='client.? 
192.168.123.107:0/1088171383' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:48:03.732 INFO:tasks.workunit.client.0.vm07.stdout:8/718: readlink d1/dc/lbd 0 2026-03-09T20:48:03.746 INFO:tasks.workunit.client.0.vm07.stdout:3/771: creat d1/d5/d9/d2f/d34/d46/d5d/ffa x:0 0 0 2026-03-09T20:48:03.746 INFO:tasks.workunit.client.1.vm10.stdout:0/712: mkdir d2/d4a/d58/dd5/dfd 0 2026-03-09T20:48:03.748 INFO:tasks.workunit.client.0.vm07.stdout:0/815: stat d1/d2/f47 0 2026-03-09T20:48:03.749 INFO:tasks.workunit.client.1.vm10.stdout:0/713: read d2/d9/da/d48/fb9 [189042,45313] 0 2026-03-09T20:48:03.750 INFO:tasks.workunit.client.1.vm10.stdout:0/714: chown d2/d9/da/d11/dd1/d34/dee 4024 1 2026-03-09T20:48:03.752 INFO:tasks.workunit.client.0.vm07.stdout:4/719: truncate d2/d55/d5d/f6f 3643903 0 2026-03-09T20:48:03.752 INFO:tasks.workunit.client.0.vm07.stdout:8/719: dread d1/d5d/d6f/d80/faa [0,4194304] 0 2026-03-09T20:48:03.766 INFO:tasks.workunit.client.1.vm10.stdout:7/738: write db/d28/fac [25283,107420] 0 2026-03-09T20:48:03.771 INFO:tasks.workunit.client.0.vm07.stdout:6/786: read d8/d16/d22/f75 [3835763,23409] 0 2026-03-09T20:48:03.777 INFO:tasks.workunit.client.1.vm10.stdout:3/696: creat dc/d14/d26/d37/feb x:0 0 0 2026-03-09T20:48:03.780 INFO:tasks.workunit.client.1.vm10.stdout:9/775: dwrite d2/d3/de/f84 [0,4194304] 0 2026-03-09T20:48:03.781 INFO:tasks.workunit.client.1.vm10.stdout:5/682: dwrite d2/d39/d4b/d7a/fc0 [0,4194304] 0 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: from='client.24521 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: 
Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: from='client.24525 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: from='client.24529 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:03.791 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/4212757325' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:03.791 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:03 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/1088171383' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:48:03.802 INFO:tasks.workunit.client.1.vm10.stdout:2/730: dwrite d5/d18/d9f/fd8 [0,4194304] 0 2026-03-09T20:48:03.819 INFO:tasks.workunit.client.0.vm07.stdout:3/772: fsync d1/d5/d9/d2f/d34/d46/d5d/fd2 0 2026-03-09T20:48:03.828 INFO:tasks.workunit.client.0.vm07.stdout:4/720: truncate d2/f69 3262831 0 2026-03-09T20:48:03.844 INFO:tasks.workunit.client.0.vm07.stdout:5/846: getdents d5/d19/d73/dbc/dc7 0 2026-03-09T20:48:03.849 INFO:tasks.workunit.client.0.vm07.stdout:5/847: dwrite d5/df/d13/d3e/de1/f10a [0,4194304] 0 2026-03-09T20:48:03.879 INFO:tasks.workunit.client.0.vm07.stdout:9/756: dwrite d4/d11/d23/d32/fe6 [0,4194304] 0 2026-03-09T20:48:03.890 INFO:tasks.workunit.client.1.vm10.stdout:6/749: dwrite f1 [4194304,4194304] 0 2026-03-09T20:48:03.901 INFO:tasks.workunit.client.0.vm07.stdout:8/720: write d1/dc/d16/f6e [254756,87384] 0 2026-03-09T20:48:03.945 INFO:tasks.workunit.client.0.vm07.stdout:6/787: creat d8/d5d/d97/da1/f102 x:0 0 0 2026-03-09T20:48:03.951 INFO:tasks.workunit.client.0.vm07.stdout:6/788: dwrite d8/d16/d22/ff1 [0,4194304] 0 2026-03-09T20:48:03.953 INFO:tasks.workunit.client.0.vm07.stdout:2/793: creat d2/d11/ffe x:0 0 0 2026-03-09T20:48:03.956 INFO:tasks.workunit.client.0.vm07.stdout:3/773: write d1/d5/d9/d2f/d34/f8f [178873,125004] 0 2026-03-09T20:48:03.959 INFO:tasks.workunit.client.0.vm07.stdout:0/816: write d1/d1f/d20/f2c [1952912,91616] 0 2026-03-09T20:48:03.969 INFO:tasks.workunit.client.0.vm07.stdout:5/848: rename d5/d19/d73/d97/dff to d5/df/d13/d6c/db1/d124 0 2026-03-09T20:48:03.977 INFO:tasks.workunit.client.0.vm07.stdout:8/721: 
fdatasync d1/d5d/d6f/d2f/d4d/d63/fd5 0 2026-03-09T20:48:03.977 INFO:tasks.workunit.client.0.vm07.stdout:7/842: creat d3/da/db/d32/d3e/d5c/f11b x:0 0 0 2026-03-09T20:48:03.978 INFO:tasks.workunit.client.1.vm10.stdout:4/698: write d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f8e [747466,115769] 0 2026-03-09T20:48:03.979 INFO:tasks.workunit.client.0.vm07.stdout:4/721: fsync d2/d55/d5d/d3f/d4a/f7f 0 2026-03-09T20:48:03.979 INFO:tasks.workunit.client.1.vm10.stdout:4/699: chown d1/d2/d5c/d64/d6b/d81/dac/d1c/d69/dbd 1016546785 1 2026-03-09T20:48:03.989 INFO:tasks.workunit.client.0.vm07.stdout:6/789: write d8/d16/d22/d9b/fdb [95582,106280] 0 2026-03-09T20:48:03.993 INFO:tasks.workunit.client.1.vm10.stdout:0/715: chown d2/d9/d2a/lab 8988735 1 2026-03-09T20:48:03.993 INFO:tasks.workunit.client.1.vm10.stdout:0/716: readlink d2/d9/d2a/lab 0 2026-03-09T20:48:04.000 INFO:tasks.workunit.client.1.vm10.stdout:1/735: getdents d2/da/d25/d46/d51/d7e 0 2026-03-09T20:48:04.001 INFO:tasks.workunit.client.0.vm07.stdout:2/794: creat d2/db/d49/d7d/fff x:0 0 0 2026-03-09T20:48:04.002 INFO:tasks.workunit.client.1.vm10.stdout:7/739: rename db/d21/d26/d72/fe1 to db/d21/d23/fe4 0 2026-03-09T20:48:04.003 INFO:tasks.workunit.client.0.vm07.stdout:3/774: read - d1/d5/d9/daf/fdf zero size 2026-03-09T20:48:04.005 INFO:tasks.workunit.client.0.vm07.stdout:0/817: truncate d1/d2/dc/f97 930654 0 2026-03-09T20:48:04.019 INFO:tasks.workunit.client.1.vm10.stdout:5/683: mknod d2/d1b/d54/c104 0 2026-03-09T20:48:04.022 INFO:tasks.workunit.client.0.vm07.stdout:9/757: mknod d4/d8/d19/d89/da7/ddd/c10c 0 2026-03-09T20:48:04.023 INFO:tasks.workunit.client.0.vm07.stdout:9/758: chown d4/d16/d29/d24/d37/d8d/c4c 59178 1 2026-03-09T20:48:04.025 INFO:tasks.workunit.client.1.vm10.stdout:9/776: truncate d2/d3/ff6 81828 0 2026-03-09T20:48:04.025 INFO:tasks.workunit.client.0.vm07.stdout:1/813: getdents d3/d97/da1/ddd 0 2026-03-09T20:48:04.057 INFO:tasks.workunit.client.0.vm07.stdout:8/722: symlink d1/d5d/d6f/d2f/le5 0 
2026-03-09T20:48:04.062 INFO:tasks.workunit.client.0.vm07.stdout:8/723: dwrite d1/d5d/d6f/d2f/d4d/d63/fd5 [0,4194304] 0 2026-03-09T20:48:04.067 INFO:tasks.workunit.client.1.vm10.stdout:2/731: unlink d5/d18/d27/c75 0 2026-03-09T20:48:04.076 INFO:tasks.workunit.client.1.vm10.stdout:2/732: dwrite f1 [4194304,4194304] 0 2026-03-09T20:48:04.082 INFO:tasks.workunit.client.0.vm07.stdout:8/724: dread - d1/dc/d16/dad/fa1 zero size 2026-03-09T20:48:04.109 INFO:tasks.workunit.client.1.vm10.stdout:4/700: rmdir d1/d2/d5c/d64/d6b/d81/dac/d1b/dbe 39 2026-03-09T20:48:04.110 INFO:tasks.workunit.client.1.vm10.stdout:4/701: stat d1/d2/d5c/d64/d61/f85 0 2026-03-09T20:48:04.121 INFO:tasks.workunit.client.0.vm07.stdout:0/818: rmdir d1/d2/d33 39 2026-03-09T20:48:04.123 INFO:tasks.workunit.client.0.vm07.stdout:3/775: truncate d1/d5/d9/d11/d6d/dd0/f7b 1273771 0 2026-03-09T20:48:04.127 INFO:tasks.workunit.client.1.vm10.stdout:0/717: dread d2/d4a/d58/d82/d71/d5d/f8c [0,4194304] 0 2026-03-09T20:48:04.129 INFO:tasks.workunit.client.1.vm10.stdout:3/697: rename dc/d14/d20/d2e/c35 to dc/d14/d26/d8f/ddd/cec 0 2026-03-09T20:48:04.131 INFO:tasks.workunit.client.0.vm07.stdout:7/843: write d3/da/db/d32/d3e/d5c/f9c [1548,54824] 0 2026-03-09T20:48:04.132 INFO:tasks.workunit.client.0.vm07.stdout:9/759: fsync d4/d11/f4f 0 2026-03-09T20:48:04.142 INFO:tasks.workunit.client.0.vm07.stdout:1/814: dread d3/d14/d54/fa2 [0,4194304] 0 2026-03-09T20:48:04.145 INFO:tasks.workunit.client.1.vm10.stdout:9/777: creat d2/d3/de/d8f/dbc/f102 x:0 0 0 2026-03-09T20:48:04.146 INFO:tasks.workunit.client.1.vm10.stdout:2/733: truncate d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f92 1495805 0 2026-03-09T20:48:04.147 INFO:tasks.workunit.client.1.vm10.stdout:4/702: truncate d1/d2/d5c/d64/d6b/d81/dac/d1c/f91 1468386 0 2026-03-09T20:48:04.152 INFO:tasks.workunit.client.1.vm10.stdout:6/750: dwrite d3/da/d11/f17 [0,4194304] 0 2026-03-09T20:48:04.152 INFO:tasks.workunit.client.0.vm07.stdout:4/722: write d2/d55/d5d/d3f/f68 [245780,24610] 0 
2026-03-09T20:48:04.153 INFO:tasks.workunit.client.0.vm07.stdout:4/723: write d2/d1f/f53 [5184989,71939] 0 2026-03-09T20:48:04.157 INFO:tasks.workunit.client.1.vm10.stdout:3/698: truncate dc/d14/d26/d29/f5c 4610488 0 2026-03-09T20:48:04.157 INFO:tasks.workunit.client.1.vm10.stdout:8/798: getdents d0/d92/de8/d64/d7f 0 2026-03-09T20:48:04.157 INFO:tasks.workunit.client.1.vm10.stdout:0/718: dread - d2/d4a/f7b zero size 2026-03-09T20:48:04.159 INFO:tasks.workunit.client.1.vm10.stdout:3/699: write dc/d14/d90/fc9 [3160690,8863] 0 2026-03-09T20:48:04.162 INFO:tasks.workunit.client.1.vm10.stdout:2/734: dread - d5/d5b/fe2 zero size 2026-03-09T20:48:04.176 INFO:tasks.workunit.client.0.vm07.stdout:2/795: dwrite d2/db/d28/fb8 [0,4194304] 0 2026-03-09T20:48:04.184 INFO:tasks.workunit.client.1.vm10.stdout:7/740: creat db/d28/d2b/d36/d3b/fe5 x:0 0 0 2026-03-09T20:48:04.184 INFO:tasks.workunit.client.1.vm10.stdout:7/741: chown db/d28/d2b/d36/d3f/fae 513 1 2026-03-09T20:48:04.194 INFO:tasks.workunit.client.1.vm10.stdout:9/778: mknod d2/d3/d6d/de8/c103 0 2026-03-09T20:48:04.197 INFO:tasks.workunit.client.1.vm10.stdout:3/700: creat dc/d14/d20/d21/daf/fed x:0 0 0 2026-03-09T20:48:04.203 INFO:tasks.workunit.client.1.vm10.stdout:0/719: mkdir d2/d4a/d58/d82/d71/dca/dfe 0 2026-03-09T20:48:04.205 INFO:tasks.workunit.client.1.vm10.stdout:0/720: chown d2/f99 45978 1 2026-03-09T20:48:04.205 INFO:tasks.workunit.client.1.vm10.stdout:5/684: getdents d2/d39/d4b/d7a/dd9 0 2026-03-09T20:48:04.206 INFO:tasks.workunit.client.1.vm10.stdout:3/701: sync 2026-03-09T20:48:04.209 INFO:tasks.workunit.client.1.vm10.stdout:9/779: rename d2/d12/l22 to d2/d28/d47/d6a/l104 0 2026-03-09T20:48:04.211 INFO:tasks.workunit.client.1.vm10.stdout:1/736: getdents d2/da/d25/d46/d51/d5d/da6 0 2026-03-09T20:48:04.217 INFO:tasks.workunit.client.1.vm10.stdout:6/751: link d3/da/f76 d3/d30/d7f/d36/d6d/dbe/fe8 0 2026-03-09T20:48:04.220 INFO:tasks.workunit.client.0.vm07.stdout:8/725: write d1/f1d [642416,23046] 0 
2026-03-09T20:48:04.228 INFO:tasks.workunit.client.0.vm07.stdout:3/776: rename d1/d5/d9/d2f/d34/d46/d5d/ld4 to d1/d5/d9/d11/d60/df3/lfb 0 2026-03-09T20:48:04.229 INFO:tasks.workunit.client.0.vm07.stdout:3/777: chown d1/d5/d9/d2f/d3d/d71/db5/fc9 3510397 1 2026-03-09T20:48:04.235 INFO:tasks.workunit.client.0.vm07.stdout:5/849: symlink d5/df/d13/d30/d56/l125 0 2026-03-09T20:48:04.236 INFO:tasks.workunit.client.1.vm10.stdout:4/703: dwrite d1/d2/f2d [0,4194304] 0 2026-03-09T20:48:04.238 INFO:tasks.workunit.client.1.vm10.stdout:2/735: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/f3f [0,4194304] 0 2026-03-09T20:48:04.240 INFO:tasks.workunit.client.1.vm10.stdout:8/799: link d0/d22/d25/d40/lac d0/d92/de8/d64/db5/l100 0 2026-03-09T20:48:04.252 INFO:tasks.workunit.client.1.vm10.stdout:7/742: truncate db/d46/f47 2377552 0 2026-03-09T20:48:04.259 INFO:tasks.workunit.client.0.vm07.stdout:9/760: symlink d4/d11/d23/d32/l10d 0 2026-03-09T20:48:04.260 INFO:tasks.workunit.client.1.vm10.stdout:0/721: dwrite d2/d9/d69/d80/f8d [4194304,4194304] 0 2026-03-09T20:48:04.262 INFO:tasks.workunit.client.0.vm07.stdout:1/815: truncate d3/d23/d67/f69 2334950 0 2026-03-09T20:48:04.267 INFO:tasks.workunit.client.1.vm10.stdout:9/780: mknod d2/d12/c105 0 2026-03-09T20:48:04.269 INFO:tasks.workunit.client.1.vm10.stdout:1/737: creat d2/da/dbc/fee x:0 0 0 2026-03-09T20:48:04.269 INFO:tasks.workunit.client.0.vm07.stdout:6/790: rmdir d8/d26/de5 0 2026-03-09T20:48:04.280 INFO:tasks.workunit.client.0.vm07.stdout:4/724: creat d2/d1f/fc3 x:0 0 0 2026-03-09T20:48:04.280 INFO:tasks.workunit.client.0.vm07.stdout:4/725: chown d2/df/d59/l9a 11431 1 2026-03-09T20:48:04.302 INFO:tasks.workunit.client.1.vm10.stdout:4/704: mkdir d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/d4a/d9b/de2 0 2026-03-09T20:48:04.305 INFO:tasks.workunit.client.0.vm07.stdout:8/726: dread d1/dc/d16/d26/f36 [0,4194304] 0 2026-03-09T20:48:04.306 INFO:tasks.workunit.client.1.vm10.stdout:4/705: dwrite d1/d2/d5c/d64/d6b/d81/dac/f29 [0,4194304] 0 
2026-03-09T20:48:04.319 INFO:tasks.workunit.client.0.vm07.stdout:2/796: rename d2/db/d1c/d4a/db6/fdd to d2/db/d49/d7d/d85/dde/f100 0 2026-03-09T20:48:04.326 INFO:tasks.workunit.client.0.vm07.stdout:7/844: mkdir d3/da/db/d32/d3e/d11c 0 2026-03-09T20:48:04.326 INFO:tasks.workunit.client.0.vm07.stdout:7/845: fsync d3/da/db/d32/d3e/d5c/f9c 0 2026-03-09T20:48:04.329 INFO:tasks.workunit.client.1.vm10.stdout:2/736: truncate d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f60 2354488 0 2026-03-09T20:48:04.330 INFO:tasks.workunit.client.1.vm10.stdout:3/702: write dc/d14/d20/d2e/d56/f23 [471835,29467] 0 2026-03-09T20:48:04.337 INFO:tasks.workunit.client.1.vm10.stdout:2/737: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/fd2 [0,4194304] 0 2026-03-09T20:48:04.342 INFO:tasks.workunit.client.1.vm10.stdout:7/743: rename db/d28/d2b/d36/d40/l9d to db/d28/d2b/d36/d63/le6 0 2026-03-09T20:48:04.346 INFO:tasks.workunit.client.0.vm07.stdout:5/850: truncate d5/d33/d3b/fb4 1025881 0 2026-03-09T20:48:04.346 INFO:tasks.workunit.client.0.vm07.stdout:5/851: readlink d5/df/l8c 0 2026-03-09T20:48:04.346 INFO:tasks.workunit.client.0.vm07.stdout:9/761: chown d4/d8/d19/d5f/f94 713 1 2026-03-09T20:48:04.350 INFO:tasks.workunit.client.1.vm10.stdout:5/685: mkdir d2/d39/d4b/de0/d105 0 2026-03-09T20:48:04.353 INFO:tasks.workunit.client.1.vm10.stdout:0/722: unlink d2/d9/da/fd 0 2026-03-09T20:48:04.357 INFO:tasks.workunit.client.1.vm10.stdout:8/800: dwrite d0/d54/fa4 [0,4194304] 0 2026-03-09T20:48:04.371 INFO:tasks.workunit.client.0.vm07.stdout:6/791: dread - d8/d16/d22/d9b/de4/fd6 zero size 2026-03-09T20:48:04.375 INFO:tasks.workunit.client.1.vm10.stdout:1/738: fsync d2/f19 0 2026-03-09T20:48:04.377 INFO:tasks.workunit.client.1.vm10.stdout:6/752: unlink f0 0 2026-03-09T20:48:04.380 INFO:tasks.workunit.client.0.vm07.stdout:2/797: rename d2/db/d49/f64 to d2/dc8/f101 0 2026-03-09T20:48:04.384 INFO:tasks.workunit.client.1.vm10.stdout:3/703: mkdir dc/d14/d26/d29/d40/da2/dee 0 2026-03-09T20:48:04.384 
INFO:tasks.workunit.client.0.vm07.stdout:7/846: symlink d3/da/d53/l11d 0 2026-03-09T20:48:04.385 INFO:tasks.workunit.client.1.vm10.stdout:2/738: truncate d5/d18/d27/d5f/fad 4710701 0 2026-03-09T20:48:04.398 INFO:tasks.workunit.client.0.vm07.stdout:9/762: dread d4/d8/dc/d4e/d54/fac [0,4194304] 0 2026-03-09T20:48:04.403 INFO:tasks.workunit.client.0.vm07.stdout:1/816: creat d3/d66/d86/f10e x:0 0 0 2026-03-09T20:48:04.405 INFO:tasks.workunit.client.1.vm10.stdout:7/744: write db/d28/d4c/f65 [138510,111291] 0 2026-03-09T20:48:04.418 INFO:tasks.workunit.client.1.vm10.stdout:8/801: creat d0/d22/d25/d2e/d41/d85/db9/dc6/f101 x:0 0 0 2026-03-09T20:48:04.418 INFO:tasks.workunit.client.0.vm07.stdout:0/819: getdents d1/d2/dc 0 2026-03-09T20:48:04.419 INFO:tasks.workunit.client.1.vm10.stdout:8/802: chown d0/d22/d25/d40/d86/l99 11297372 1 2026-03-09T20:48:04.419 INFO:tasks.workunit.client.0.vm07.stdout:5/852: rename d5/df/d13/d3e/d5e/ffa to d5/df/d13/d6c/db1/f126 0 2026-03-09T20:48:04.419 INFO:tasks.workunit.client.0.vm07.stdout:5/853: chown d5/df/d13/d4f/d101 53579 1 2026-03-09T20:48:04.420 INFO:tasks.workunit.client.0.vm07.stdout:7/847: fsync d3/da/db/d32/d3e/dac/ff4 0 2026-03-09T20:48:04.428 INFO:tasks.workunit.client.1.vm10.stdout:4/706: dread d1/d47/fb4 [0,4194304] 0 2026-03-09T20:48:04.439 INFO:tasks.workunit.client.1.vm10.stdout:2/739: write d5/d18/d27/f74 [3149751,1863] 0 2026-03-09T20:48:04.441 INFO:tasks.workunit.client.0.vm07.stdout:1/817: dwrite d3/d23/fc9 [0,4194304] 0 2026-03-09T20:48:04.443 INFO:tasks.workunit.client.0.vm07.stdout:1/818: truncate d3/d9c/fcd 217732 0 2026-03-09T20:48:04.444 INFO:tasks.workunit.client.0.vm07.stdout:1/819: chown d3/d66/cfa 5 1 2026-03-09T20:48:04.452 INFO:tasks.workunit.client.1.vm10.stdout:5/686: write d2/d39/d4b/f4e [3785222,124191] 0 2026-03-09T20:48:04.455 INFO:tasks.workunit.client.1.vm10.stdout:0/723: mkdir d2/d9/da/d35/dff 0 2026-03-09T20:48:04.461 INFO:tasks.workunit.client.0.vm07.stdout:4/726: link d2/df/f49 d2/df/d17/d83/fc4 
0 2026-03-09T20:48:04.463 INFO:tasks.workunit.client.0.vm07.stdout:2/798: mknod d2/db/df6/c102 0 2026-03-09T20:48:04.471 INFO:tasks.workunit.client.1.vm10.stdout:1/739: symlink d2/da/d25/d3e/lef 0 2026-03-09T20:48:04.472 INFO:tasks.workunit.client.0.vm07.stdout:5/854: creat d5/df/d13/d6c/f127 x:0 0 0 2026-03-09T20:48:04.472 INFO:tasks.workunit.client.1.vm10.stdout:7/745: dread db/d28/d2b/d36/d40/f44 [0,4194304] 0 2026-03-09T20:48:04.472 INFO:tasks.workunit.client.1.vm10.stdout:7/746: write db/d28/fac [25759,50013] 0 2026-03-09T20:48:04.476 INFO:tasks.workunit.client.1.vm10.stdout:6/753: getdents d3/d30/d7f/d36/d5c/dad/de5 0 2026-03-09T20:48:04.478 INFO:tasks.workunit.client.0.vm07.stdout:7/848: chown d3/da4/df2/d113/cd2 983816232 1 2026-03-09T20:48:04.478 INFO:tasks.workunit.client.0.vm07.stdout:3/778: link d1/d5/d9/fa1 d1/d5/dcd/ffc 0 2026-03-09T20:48:04.479 INFO:tasks.workunit.client.1.vm10.stdout:4/707: readlink d1/d2/d5c/d64/d6b/lb6 0 2026-03-09T20:48:04.479 INFO:tasks.workunit.client.0.vm07.stdout:3/779: fsync d1/d5/d9/d11/f4d 0 2026-03-09T20:48:04.482 INFO:tasks.workunit.client.0.vm07.stdout:6/792: truncate d8/d26/f87 1218149 0 2026-03-09T20:48:04.489 INFO:tasks.workunit.client.0.vm07.stdout:4/727: creat d2/d55/d5d/d3f/fc5 x:0 0 0 2026-03-09T20:48:04.493 INFO:tasks.workunit.client.0.vm07.stdout:1/820: dread d3/d14/d54/d3e/f59 [0,4194304] 0 2026-03-09T20:48:04.499 INFO:tasks.workunit.client.0.vm07.stdout:8/727: getdents d1/dc/d16/dad/d87/dd3 0 2026-03-09T20:48:04.499 INFO:tasks.workunit.client.0.vm07.stdout:8/728: dread - d1/dc/d16/dad/fc7 zero size 2026-03-09T20:48:04.503 INFO:tasks.workunit.client.1.vm10.stdout:2/740: dwrite d5/d18/f90 [0,4194304] 0 2026-03-09T20:48:04.504 INFO:tasks.workunit.client.0.vm07.stdout:0/820: dwrite d1/d2/f5e [0,4194304] 0 2026-03-09T20:48:04.506 INFO:tasks.workunit.client.1.vm10.stdout:2/741: readlink d5/d5b/lb9 0 2026-03-09T20:48:04.509 INFO:tasks.workunit.client.1.vm10.stdout:9/781: getdents d2/d3/d6d/d88 0 
2026-03-09T20:48:04.514 INFO:tasks.workunit.client.1.vm10.stdout:1/740: fdatasync d2/da/d25/d3e/d42/f7d 0 2026-03-09T20:48:04.515 INFO:tasks.workunit.client.1.vm10.stdout:9/782: sync 2026-03-09T20:48:04.519 INFO:tasks.workunit.client.0.vm07.stdout:2/799: dwrite d2/db/d49/f81 [0,4194304] 0 2026-03-09T20:48:04.538 INFO:tasks.workunit.client.0.vm07.stdout:5/855: mkdir d5/d33/d39/d128 0 2026-03-09T20:48:04.541 INFO:tasks.workunit.client.0.vm07.stdout:5/856: dwrite d5/df/d13/d6c/f77 [0,4194304] 0 2026-03-09T20:48:04.552 INFO:tasks.workunit.client.1.vm10.stdout:7/747: dread - db/d28/d2b/d36/d3b/fcf zero size 2026-03-09T20:48:04.555 INFO:tasks.workunit.client.1.vm10.stdout:7/748: fdatasync db/d28/d4c/f65 0 2026-03-09T20:48:04.555 INFO:tasks.workunit.client.1.vm10.stdout:7/749: chown db/d28/d2b/d36/d63/d8b/la7 25518 1 2026-03-09T20:48:04.563 INFO:tasks.workunit.client.0.vm07.stdout:3/780: creat d1/d5/d9/d2f/d34/d46/ffd x:0 0 0 2026-03-09T20:48:04.564 INFO:tasks.workunit.client.0.vm07.stdout:3/781: dread - d1/d5/d9/d2f/d34/da5/fa9 zero size 2026-03-09T20:48:04.565 INFO:tasks.workunit.client.0.vm07.stdout:9/763: rename d4/d8/db9/c107 to d4/d8/d19/d5f/da5/db8/dc1/c10e 0 2026-03-09T20:48:04.569 INFO:tasks.workunit.client.1.vm10.stdout:3/704: rename dc/d14/d26/d29/d93/l8d to dc/d14/d26/d29/d40/da8/d69/d75/d91/lef 0 2026-03-09T20:48:04.570 INFO:tasks.workunit.client.0.vm07.stdout:6/793: creat d8/d16/dbb/f103 x:0 0 0 2026-03-09T20:48:04.573 INFO:tasks.workunit.client.1.vm10.stdout:5/687: fdatasync d2/d27/d37/dc8/da1/f101 0 2026-03-09T20:48:04.574 INFO:tasks.workunit.client.0.vm07.stdout:1/821: truncate d3/d97/da1/dc5/d90/f93 5056866 0 2026-03-09T20:48:04.582 INFO:tasks.workunit.client.1.vm10.stdout:8/803: creat d0/d22/d25/d2e/d41/de9/dfc/f102 x:0 0 0 2026-03-09T20:48:04.584 INFO:tasks.workunit.client.1.vm10.stdout:7/750: creat db/d28/d30/fe7 x:0 0 0 2026-03-09T20:48:04.588 INFO:tasks.workunit.client.1.vm10.stdout:3/705: creat dc/d14/d22/ff0 x:0 0 0 2026-03-09T20:48:04.591 
INFO:tasks.workunit.client.0.vm07.stdout:3/782: readlink d1/d5/l70 0 2026-03-09T20:48:04.591 INFO:tasks.workunit.client.1.vm10.stdout:3/706: fdatasync dc/d14/d20/d21/d3b/fea 0 2026-03-09T20:48:04.592 INFO:tasks.workunit.client.0.vm07.stdout:0/821: dread d1/d2/dc/d17/f6e [0,4194304] 0 2026-03-09T20:48:04.592 INFO:tasks.workunit.client.1.vm10.stdout:8/804: sync 2026-03-09T20:48:04.597 INFO:tasks.workunit.client.0.vm07.stdout:9/764: rename d4/d16/d29/d24/d37/d44/d62/d8e/cb0 to d4/d16/d78/dc4/c10f 0 2026-03-09T20:48:04.599 INFO:tasks.workunit.client.0.vm07.stdout:7/849: write d3/da/db/d32/d3e/dac/d43/d62/fcd [114888,106696] 0 2026-03-09T20:48:04.599 INFO:tasks.workunit.client.1.vm10.stdout:6/754: write d3/d30/d7f/d36/d5c/daa/fc9 [545736,19176] 0 2026-03-09T20:48:04.612 INFO:tasks.workunit.client.1.vm10.stdout:0/724: write d2/f9b [441515,117601] 0 2026-03-09T20:48:04.617 INFO:tasks.workunit.client.0.vm07.stdout:4/728: write d2/df/d59/f81 [3527828,117377] 0 2026-03-09T20:48:04.617 INFO:tasks.workunit.client.0.vm07.stdout:4/729: readlink d2/l30 0 2026-03-09T20:48:04.617 INFO:tasks.workunit.client.0.vm07.stdout:8/729: write d1/dc/d16/fbe [706173,24860] 0 2026-03-09T20:48:04.618 INFO:tasks.workunit.client.0.vm07.stdout:4/730: write d2/df/d59/d8a/fc0 [15515,50616] 0 2026-03-09T20:48:04.635 INFO:tasks.workunit.client.0.vm07.stdout:6/794: write d8/d16/d4b/fbc [3921546,8166] 0 2026-03-09T20:48:04.635 INFO:tasks.workunit.client.0.vm07.stdout:6/795: dread - d8/d5d/d97/dc4/fbe zero size 2026-03-09T20:48:04.642 INFO:tasks.workunit.client.0.vm07.stdout:1/822: dwrite d3/f24 [0,4194304] 0 2026-03-09T20:48:04.643 INFO:tasks.workunit.client.0.vm07.stdout:2/800: fsync d2/dc8/f101 0 2026-03-09T20:48:04.643 INFO:tasks.workunit.client.0.vm07.stdout:5/857: fdatasync d5/d33/d3b/fb4 0 2026-03-09T20:48:04.644 INFO:tasks.workunit.client.1.vm10.stdout:7/751: creat db/d28/d2b/d36/d63/d6d/fe8 x:0 0 0 2026-03-09T20:48:04.646 INFO:tasks.workunit.client.1.vm10.stdout:9/783: dwrite d2/fc [0,4194304] 0 
2026-03-09T20:48:04.653 INFO:tasks.workunit.client.1.vm10.stdout:4/708: creat d1/fe3 x:0 0 0 2026-03-09T20:48:04.654 INFO:tasks.workunit.client.1.vm10.stdout:4/709: readlink d1/d2/d3/d70/d99/la0 0 2026-03-09T20:48:04.655 INFO:tasks.workunit.client.1.vm10.stdout:4/710: write d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f8e [1707499,40589] 0 2026-03-09T20:48:04.669 INFO:tasks.workunit.client.0.vm07.stdout:0/822: rmdir d1/df6 39 2026-03-09T20:48:04.670 INFO:tasks.workunit.client.1.vm10.stdout:8/805: rename d0/d22/l23 to d0/d22/d25/d8f/dfe/l103 0 2026-03-09T20:48:04.675 INFO:tasks.workunit.client.0.vm07.stdout:7/850: rename d3/d58/d82/la7 to d3/da/db/d32/d3e/dac/d43/d62/db1/l11e 0 2026-03-09T20:48:04.676 INFO:tasks.workunit.client.1.vm10.stdout:5/688: mkdir d2/d27/d37/d46/d5d/d106 0 2026-03-09T20:48:04.676 INFO:tasks.workunit.client.0.vm07.stdout:4/731: symlink d2/d55/lc6 0 2026-03-09T20:48:04.679 INFO:tasks.workunit.client.1.vm10.stdout:7/752: mknod db/d21/ce9 0 2026-03-09T20:48:04.681 INFO:tasks.workunit.client.1.vm10.stdout:7/753: dread f3 [0,4194304] 0 2026-03-09T20:48:04.686 INFO:tasks.workunit.client.1.vm10.stdout:7/754: dread db/f7c [0,4194304] 0 2026-03-09T20:48:04.686 INFO:tasks.workunit.client.1.vm10.stdout:7/755: stat db/d28/d30/f73 0 2026-03-09T20:48:04.687 INFO:tasks.workunit.client.1.vm10.stdout:9/784: truncate d2/d3/f2f 397233 0 2026-03-09T20:48:04.689 INFO:tasks.workunit.client.1.vm10.stdout:4/711: write d1/d67/fda [79519,20049] 0 2026-03-09T20:48:04.690 INFO:tasks.workunit.client.1.vm10.stdout:4/712: write d1/d2/d5c/fd4 [186288,7357] 0 2026-03-09T20:48:04.694 INFO:tasks.workunit.client.1.vm10.stdout:7/756: dwrite db/d28/d2b/d36/d63/d6d/fe8 [0,4194304] 0 2026-03-09T20:48:04.696 INFO:tasks.workunit.client.0.vm07.stdout:0/823: readlink d1/d2/l27 0 2026-03-09T20:48:04.698 INFO:tasks.workunit.client.1.vm10.stdout:2/742: getdents d5/d18/d27/d89/db6/d41/d77/db3/db5/d32 0 2026-03-09T20:48:04.707 INFO:tasks.workunit.client.0.vm07.stdout:9/765: write d4/d11/d23/f2f 
[1313141,29486] 0 2026-03-09T20:48:04.708 INFO:tasks.workunit.client.1.vm10.stdout:6/755: write d3/da/f58 [1027095,34152] 0 2026-03-09T20:48:04.713 INFO:tasks.workunit.client.1.vm10.stdout:3/707: dwrite dc/d14/d26/d29/f70 [0,4194304] 0 2026-03-09T20:48:04.713 INFO:tasks.workunit.client.1.vm10.stdout:5/689: creat d2/d27/d37/d46/d99/f107 x:0 0 0 2026-03-09T20:48:04.716 INFO:tasks.workunit.client.1.vm10.stdout:3/708: readlink dc/d14/d26/d29/d93/lab 0 2026-03-09T20:48:04.717 INFO:tasks.workunit.client.1.vm10.stdout:1/741: link d2/da/d25/d46/c7a d2/da/d25/d46/d51/d5d/cf0 0 2026-03-09T20:48:04.717 INFO:tasks.workunit.client.1.vm10.stdout:1/742: chown d2/da 1 1 2026-03-09T20:48:04.718 INFO:tasks.workunit.client.1.vm10.stdout:1/743: write d2/da/d25/d46/f61 [2691003,64098] 0 2026-03-09T20:48:04.718 INFO:tasks.workunit.client.0.vm07.stdout:4/732: fdatasync d2/d55/d5d/d3f/fa3 0 2026-03-09T20:48:04.719 INFO:tasks.workunit.client.1.vm10.stdout:1/744: dread - d2/da/d25/d46/d51/d5d/d6e/f93 zero size 2026-03-09T20:48:04.719 INFO:tasks.workunit.client.1.vm10.stdout:1/745: fdatasync d2/da/d25/d46/d80/da0/fec 0 2026-03-09T20:48:04.721 INFO:tasks.workunit.client.0.vm07.stdout:4/733: dwrite d2/f4c [8388608,4194304] 0 2026-03-09T20:48:04.735 INFO:tasks.workunit.client.0.vm07.stdout:6/796: write d8/d16/d22/d9b/de4/d85/fad [1042756,129604] 0 2026-03-09T20:48:04.735 INFO:tasks.workunit.client.1.vm10.stdout:0/725: dwrite d2/f8a [0,4194304] 0 2026-03-09T20:48:04.738 INFO:tasks.workunit.client.1.vm10.stdout:4/713: read - d1/d2/d5c/d64/d6b/d79/fc0 zero size 2026-03-09T20:48:04.738 INFO:tasks.workunit.client.1.vm10.stdout:8/806: mknod d0/d92/de8/d64/c104 0 2026-03-09T20:48:04.747 INFO:tasks.workunit.client.1.vm10.stdout:7/757: mknod db/d28/d2b/d36/d3b/dd5/cea 0 2026-03-09T20:48:04.747 INFO:tasks.workunit.client.1.vm10.stdout:7/758: chown db/d28/d2b/d36/d63/d8b 5524483 1 2026-03-09T20:48:04.748 INFO:tasks.workunit.client.1.vm10.stdout:7/759: chown db/d28/d2b/d36/d40/d8a/ldd 893 1 
2026-03-09T20:48:04.757 INFO:tasks.workunit.client.1.vm10.stdout:2/743: symlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/lf5 0 2026-03-09T20:48:04.759 INFO:tasks.workunit.client.0.vm07.stdout:2/801: truncate d2/db/d1c/d8d/ffa 848404 0 2026-03-09T20:48:04.759 INFO:tasks.workunit.client.0.vm07.stdout:0/824: truncate d1/f3b 2583305 0 2026-03-09T20:48:04.763 INFO:tasks.workunit.client.0.vm07.stdout:8/730: dread d1/d5d/d6f/d2f/d4d/d55/fac [0,4194304] 0 2026-03-09T20:48:04.763 INFO:tasks.workunit.client.0.vm07.stdout:0/825: dwrite d1/d2/dc/d17/f3c [0,4194304] 0 2026-03-09T20:48:04.767 INFO:tasks.workunit.client.0.vm07.stdout:1/823: link d3/d23/cef d3/d14/d94/c10f 0 2026-03-09T20:48:04.767 INFO:tasks.workunit.client.0.vm07.stdout:6/797: readlink d8/d16/da3/d9a/lf2 0 2026-03-09T20:48:04.777 INFO:tasks.workunit.client.0.vm07.stdout:3/783: getdents d1 0 2026-03-09T20:48:04.778 INFO:tasks.workunit.client.0.vm07.stdout:3/784: chown d1/d5/d9/d2f/d34/f8f 11 1 2026-03-09T20:48:04.779 INFO:tasks.workunit.client.0.vm07.stdout:8/731: fsync d1/dc/d16/dad/fc7 0 2026-03-09T20:48:04.783 INFO:tasks.workunit.client.0.vm07.stdout:1/824: dread d3/d14/f30 [0,4194304] 0 2026-03-09T20:48:04.784 INFO:tasks.workunit.client.0.vm07.stdout:2/802: symlink d2/d11/d56/l103 0 2026-03-09T20:48:04.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:04 vm10.local ceph-mon[57011]: Reconfiguring prometheus.vm07 (dependencies changed)... 
2026-03-09T20:48:04.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:04 vm10.local ceph-mon[57011]: pgmap v8: 65 pgs: 65 active+clean; 3.1 GiB data, 11 GiB used, 109 GiB / 120 GiB avail; 25 MiB/s rd, 48 MiB/s wr, 160 op/s 2026-03-09T20:48:04.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:04 vm10.local ceph-mon[57011]: from='client.24539 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:04.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:04 vm10.local ceph-mon[57011]: Reconfiguring daemon prometheus.vm07 on vm07 2026-03-09T20:48:04.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:04 vm10.local ceph-mon[57011]: from='client.? 192.168.123.107:0/1069690559' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:48:04.790 INFO:tasks.workunit.client.0.vm07.stdout:2/803: unlink d2/d11/ddb/d6e/f7a 0 2026-03-09T20:48:04.793 INFO:tasks.workunit.client.0.vm07.stdout:3/785: symlink d1/d5/d9/d2f/lfe 0 2026-03-09T20:48:04.793 INFO:tasks.workunit.client.0.vm07.stdout:9/766: getdents d4/d8/d19/d5f/d73 0 2026-03-09T20:48:04.793 INFO:tasks.workunit.client.0.vm07.stdout:9/767: chown d4/d8/db9 1016 1 2026-03-09T20:48:04.796 INFO:tasks.workunit.client.0.vm07.stdout:2/804: creat d2/db/d1c/f104 x:0 0 0 2026-03-09T20:48:04.806 INFO:tasks.workunit.client.0.vm07.stdout:6/798: getdents d8/d16/d22/d9b/de4/d85/df8 0 2026-03-09T20:48:04.817 INFO:tasks.workunit.client.1.vm10.stdout:4/714: creat d1/d2/d5c/d64/d6b/fe4 x:0 0 0 2026-03-09T20:48:04.829 INFO:tasks.workunit.client.1.vm10.stdout:7/760: fdatasync db/d28/d4c/fdc 0 2026-03-09T20:48:04.829 INFO:tasks.workunit.client.1.vm10.stdout:7/761: chown db/d21/fbc 1 1 2026-03-09T20:48:04.829 INFO:tasks.workunit.client.1.vm10.stdout:7/762: chown db/d46/d89/dbf/d78/lba 529315656 1 2026-03-09T20:48:04.829 INFO:tasks.workunit.client.1.vm10.stdout:2/744: rmdir d5/d18/d9f 39 2026-03-09T20:48:04.830 
INFO:tasks.workunit.client.0.vm07.stdout:6/799: readlink d8/d16/d22/d24/l9d 0 2026-03-09T20:48:04.830 INFO:tasks.workunit.client.0.vm07.stdout:3/786: mknod d1/d5/d9/d2f/cff 0 2026-03-09T20:48:04.830 INFO:tasks.workunit.client.0.vm07.stdout:9/768: creat d4/d8/d59/de4/f110 x:0 0 0 2026-03-09T20:48:04.830 INFO:tasks.workunit.client.0.vm07.stdout:6/800: creat d8/d16/d61/f104 x:0 0 0 2026-03-09T20:48:04.830 INFO:tasks.workunit.client.0.vm07.stdout:3/787: rmdir d1/d5/d9/d11/d60 39 2026-03-09T20:48:04.830 INFO:tasks.workunit.client.0.vm07.stdout:3/788: dread - d1/d5/fc5 zero size 2026-03-09T20:48:04.830 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:04 vm07.local ceph-mon[49120]: Reconfiguring prometheus.vm07 (dependencies changed)... 2026-03-09T20:48:04.830 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:04 vm07.local ceph-mon[49120]: pgmap v8: 65 pgs: 65 active+clean; 3.1 GiB data, 11 GiB used, 109 GiB / 120 GiB avail; 25 MiB/s rd, 48 MiB/s wr, 160 op/s 2026-03-09T20:48:04.830 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:04 vm07.local ceph-mon[49120]: from='client.24539 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:04.830 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:04 vm07.local ceph-mon[49120]: Reconfiguring daemon prometheus.vm07 on vm07 2026-03-09T20:48:04.830 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:04 vm07.local ceph-mon[49120]: from='client.? 
192.168.123.107:0/1069690559' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:48:04.831 INFO:tasks.workunit.client.0.vm07.stdout:5/858: write d5/d19/f95 [1286772,124626] 0 2026-03-09T20:48:04.831 INFO:tasks.workunit.client.1.vm10.stdout:5/690: symlink d2/d39/dbf/l108 0 2026-03-09T20:48:04.831 INFO:tasks.workunit.client.1.vm10.stdout:1/746: mkdir d2/da/d25/d46/d51/d5d/d6e/d70/db3/dd4/df1 0 2026-03-09T20:48:04.831 INFO:tasks.workunit.client.0.vm07.stdout:6/801: mkdir d8/d16/d22/d24/da0/dab/d40/d105 0 2026-03-09T20:48:04.833 INFO:tasks.workunit.client.0.vm07.stdout:7/851: dwrite d3/da/db/d32/d3e/dac/f2a [0,4194304] 0 2026-03-09T20:48:04.835 INFO:tasks.workunit.client.0.vm07.stdout:7/852: stat d3/da/db/d32/d3e/dac/d1f/l6f 0 2026-03-09T20:48:04.845 INFO:tasks.workunit.client.0.vm07.stdout:3/789: creat d1/d5/d9/d2f/d3d/d71/d76/db6/f100 x:0 0 0 2026-03-09T20:48:04.848 INFO:tasks.workunit.client.1.vm10.stdout:8/807: mknod d0/d22/c105 0 2026-03-09T20:48:04.848 INFO:tasks.workunit.client.1.vm10.stdout:4/715: rename d1/d2/d5c/d64/d6b/lb6 to d1/d67/le5 0 2026-03-09T20:48:04.848 INFO:tasks.workunit.client.0.vm07.stdout:5/859: chown d5/df/d13/c35 111927771 1 2026-03-09T20:48:04.848 INFO:tasks.workunit.client.0.vm07.stdout:7/853: chown d3/d58/d77/de3/le9 74 1 2026-03-09T20:48:04.848 INFO:tasks.workunit.client.0.vm07.stdout:7/854: stat d3/da/db/d32/d3e/l51 0 2026-03-09T20:48:04.849 INFO:tasks.workunit.client.1.vm10.stdout:8/808: readlink d0/d22/d25/d6c/d9b/le5 0 2026-03-09T20:48:04.849 INFO:tasks.workunit.client.1.vm10.stdout:8/809: read - d0/df8/ff9 zero size 2026-03-09T20:48:04.858 INFO:tasks.workunit.client.0.vm07.stdout:3/790: chown d1/d5/d9/d2f/d34/l9d 4 1 2026-03-09T20:48:04.861 INFO:tasks.workunit.client.0.vm07.stdout:7/855: symlink d3/da4/df2/dff/l11f 0 2026-03-09T20:48:04.862 INFO:tasks.workunit.client.1.vm10.stdout:1/747: truncate d2/da/d25/d46/fa7 11804 0 2026-03-09T20:48:04.862 INFO:tasks.workunit.client.0.vm07.stdout:6/802: 
creat d8/d26/d7d/dfd/f106 x:0 0 0 2026-03-09T20:48:04.864 INFO:tasks.workunit.client.1.vm10.stdout:5/691: dread d2/d39/d4b/d7a/fed [0,4194304] 0 2026-03-09T20:48:04.865 INFO:tasks.workunit.client.1.vm10.stdout:5/692: chown d2/d27/d37/f57 10665205 1 2026-03-09T20:48:04.866 INFO:tasks.workunit.client.1.vm10.stdout:2/745: rename d5/d18/fc6 to d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/ddc/ff6 0 2026-03-09T20:48:04.870 INFO:tasks.workunit.client.0.vm07.stdout:2/805: sync 2026-03-09T20:48:04.870 INFO:tasks.workunit.client.0.vm07.stdout:4/734: sync 2026-03-09T20:48:04.873 INFO:tasks.workunit.client.1.vm10.stdout:8/810: dread d0/d22/f66 [0,4194304] 0 2026-03-09T20:48:04.881 INFO:tasks.workunit.client.1.vm10.stdout:4/716: dread d1/f26 [0,4194304] 0 2026-03-09T20:48:04.886 INFO:tasks.workunit.client.1.vm10.stdout:6/756: getdents d3/d30/d7f/d36/d5c 0 2026-03-09T20:48:04.886 INFO:tasks.workunit.client.1.vm10.stdout:6/757: chown d3/da/d11/l66 14678 1 2026-03-09T20:48:04.889 INFO:tasks.workunit.client.0.vm07.stdout:2/806: mknod d2/db/d28/d57/df8/c105 0 2026-03-09T20:48:04.911 INFO:tasks.workunit.client.0.vm07.stdout:0/826: dwrite d1/fa1 [0,4194304] 0 2026-03-09T20:48:04.912 INFO:tasks.workunit.client.0.vm07.stdout:0/827: stat d1/d1f/dc3/feb 0 2026-03-09T20:48:04.914 INFO:tasks.workunit.client.0.vm07.stdout:4/735: getdents d2/d55/d5d/d3f/d4a/d4b/d52/dba 0 2026-03-09T20:48:04.916 INFO:tasks.workunit.client.0.vm07.stdout:8/732: dwrite d1/dc/d16/d31/f52 [0,4194304] 0 2026-03-09T20:48:04.926 INFO:tasks.workunit.client.0.vm07.stdout:2/807: rename d2/db/d28/f58 to d2/db/d1c/d8d/f106 0 2026-03-09T20:48:04.926 INFO:tasks.workunit.client.0.vm07.stdout:1/825: write d3/d23/f58 [1282416,122475] 0 2026-03-09T20:48:04.926 INFO:tasks.workunit.client.0.vm07.stdout:1/826: write d3/d9c/fd2 [4577861,28667] 0 2026-03-09T20:48:04.941 INFO:tasks.workunit.client.1.vm10.stdout:1/748: truncate d2/f59 3647436 0 2026-03-09T20:48:04.942 INFO:tasks.workunit.client.1.vm10.stdout:6/758: chown d3/c1a 33 1 
2026-03-09T20:48:04.951 INFO:tasks.workunit.client.0.vm07.stdout:8/733: mkdir d1/dc/d16/d31/db4/de6 0 2026-03-09T20:48:04.953 INFO:tasks.workunit.client.0.vm07.stdout:4/736: dread d2/d55/d5d/d3f/d4a/d4b/d52/f9e [0,4194304] 0 2026-03-09T20:48:04.958 INFO:tasks.workunit.client.1.vm10.stdout:2/746: unlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/led 0 2026-03-09T20:48:04.961 INFO:tasks.workunit.client.1.vm10.stdout:3/709: dwrite dc/d14/d26/d29/d40/d8c/fbc [0,4194304] 0 2026-03-09T20:48:04.973 INFO:tasks.workunit.client.1.vm10.stdout:3/710: dread dc/d14/d27/f3c [0,4194304] 0 2026-03-09T20:48:04.977 INFO:tasks.workunit.client.1.vm10.stdout:8/811: symlink d0/d92/de8/d64/l106 0 2026-03-09T20:48:04.979 INFO:tasks.workunit.client.0.vm07.stdout:2/808: rename d2/db/faf to d2/d11/ddb/db0/db3/f107 0 2026-03-09T20:48:04.980 INFO:tasks.workunit.client.1.vm10.stdout:9/785: dwrite d2/d3/d6d/ff3 [0,4194304] 0 2026-03-09T20:48:04.980 INFO:tasks.workunit.client.0.vm07.stdout:2/809: chown d2/db/d28/d90/dd6 0 1 2026-03-09T20:48:04.987 INFO:tasks.workunit.client.1.vm10.stdout:4/717: creat d1/d2/d3/d54/fe6 x:0 0 0 2026-03-09T20:48:04.989 INFO:tasks.workunit.client.0.vm07.stdout:1/827: truncate d3/d14/d54/d3e/f59 1798003 0 2026-03-09T20:48:04.992 INFO:tasks.workunit.client.0.vm07.stdout:9/769: write d4/d8/f1c [3164965,101061] 0 2026-03-09T20:48:05.012 INFO:tasks.workunit.client.0.vm07.stdout:1/828: creat d3/d97/da1/dc5/d90/de8/dba/f110 x:0 0 0 2026-03-09T20:48:05.012 INFO:tasks.workunit.client.0.vm07.stdout:5/860: write d5/d33/d39/d8d/dab/f5f [1429047,23600] 0 2026-03-09T20:48:05.012 INFO:tasks.workunit.client.1.vm10.stdout:7/763: write db/d21/fb2 [243535,9754] 0 2026-03-09T20:48:05.012 INFO:tasks.workunit.client.1.vm10.stdout:7/764: chown db/d46/f5a 770292 1 2026-03-09T20:48:05.012 INFO:tasks.workunit.client.1.vm10.stdout:0/726: truncate d2/d9/da/d11/f1f 1927763 0 2026-03-09T20:48:05.013 INFO:tasks.workunit.client.0.vm07.stdout:5/861: read - d5/d33/db2/de8/f11a zero size 
2026-03-09T20:48:05.017 INFO:tasks.workunit.client.1.vm10.stdout:3/711: mkdir dc/d14/df1 0 2026-03-09T20:48:05.018 INFO:tasks.workunit.client.1.vm10.stdout:3/712: truncate dc/d14/d22/ff0 548457 0 2026-03-09T20:48:05.022 INFO:tasks.workunit.client.0.vm07.stdout:3/791: dwrite d1/d5/d9/d2f/d66/dc0/fde [0,4194304] 0 2026-03-09T20:48:05.030 INFO:tasks.workunit.client.0.vm07.stdout:7/856: write d3/f3f [8633713,28389] 0 2026-03-09T20:48:05.034 INFO:tasks.workunit.client.0.vm07.stdout:6/803: dwrite d8/d16/d22/d24/da0/faf [0,4194304] 0 2026-03-09T20:48:05.040 INFO:tasks.workunit.client.0.vm07.stdout:9/770: truncate d4/d16/fdb 9038 0 2026-03-09T20:48:05.040 INFO:tasks.workunit.client.0.vm07.stdout:9/771: stat d4/d11 0 2026-03-09T20:48:05.041 INFO:tasks.workunit.client.0.vm07.stdout:6/804: read d8/d16/d22/d9b/de4/f91 [1853524,111658] 0 2026-03-09T20:48:05.042 INFO:tasks.workunit.client.0.vm07.stdout:8/734: rename d1/dc/lbd to d1/d5d/d6f/d2f/d4d/dd4/le7 0 2026-03-09T20:48:05.053 INFO:tasks.workunit.client.0.vm07.stdout:2/810: mkdir d2/db/d108 0 2026-03-09T20:48:05.060 INFO:tasks.workunit.client.1.vm10.stdout:1/749: creat d2/d89/de6/ff2 x:0 0 0 2026-03-09T20:48:05.060 INFO:tasks.workunit.client.1.vm10.stdout:5/693: dwrite d2/d1b/f5c [0,4194304] 0 2026-03-09T20:48:05.060 INFO:tasks.workunit.client.1.vm10.stdout:5/694: chown d2/d39/d4b/de0 252252 1 2026-03-09T20:48:05.060 INFO:tasks.workunit.client.0.vm07.stdout:2/811: chown d2/da7/db4/cca 8844494 1 2026-03-09T20:48:05.060 INFO:tasks.workunit.client.0.vm07.stdout:1/829: mknod d3/d97/da1/dc5/d60/d9f/dd0/c111 0 2026-03-09T20:48:05.061 INFO:tasks.workunit.client.1.vm10.stdout:2/747: link d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d91/ff0 d5/d18/d27/d38/ff7 0 2026-03-09T20:48:05.064 INFO:tasks.workunit.client.0.vm07.stdout:3/792: creat d1/d5/d9/daf/d9f/f101 x:0 0 0 2026-03-09T20:48:05.064 INFO:tasks.workunit.client.1.vm10.stdout:2/748: chown d5/d18/d27/d89/db6/d41/d77/db3/db5/c5e 41383141 1 2026-03-09T20:48:05.065 
INFO:tasks.workunit.client.0.vm07.stdout:3/793: chown d1/d5/d9/d2f/d34/lae 1035845090 1 2026-03-09T20:48:05.078 INFO:tasks.workunit.client.0.vm07.stdout:7/857: symlink d3/da/db/d32/d3e/dac/d43/d62/de0/l120 0 2026-03-09T20:48:05.084 INFO:tasks.workunit.client.1.vm10.stdout:0/727: sync 2026-03-09T20:48:05.089 INFO:tasks.workunit.client.1.vm10.stdout:8/812: dread d0/d22/d25/d2e/d41/de9/dfc/d78/f9a [0,4194304] 0 2026-03-09T20:48:05.098 INFO:tasks.workunit.client.0.vm07.stdout:9/772: symlink d4/d16/d29/d24/d37/l111 0 2026-03-09T20:48:05.100 INFO:tasks.workunit.client.0.vm07.stdout:9/773: write d4/d11/d23/d32/fe6 [3162746,95362] 0 2026-03-09T20:48:05.111 INFO:tasks.workunit.client.1.vm10.stdout:6/759: write d3/da/d11/d89/db9/dd1/dd2/dc3/fca [159024,83234] 0 2026-03-09T20:48:05.115 INFO:tasks.workunit.client.1.vm10.stdout:5/695: dread - d2/d39/dbf/d63/fd8 zero size 2026-03-09T20:48:05.117 INFO:tasks.workunit.client.1.vm10.stdout:1/750: truncate d2/da/d25/d46/d51/d5d/d6e/d70/f79 1121686 0 2026-03-09T20:48:05.117 INFO:tasks.workunit.client.1.vm10.stdout:5/696: readlink d2/d27/l2e 0 2026-03-09T20:48:05.117 INFO:tasks.workunit.client.1.vm10.stdout:1/751: dread - d2/da/d25/d46/d80/da0/d92/db5/dc7/fe2 zero size 2026-03-09T20:48:05.118 INFO:tasks.workunit.client.1.vm10.stdout:1/752: chown d2/fd2 9486903 1 2026-03-09T20:48:05.120 INFO:tasks.workunit.client.1.vm10.stdout:7/765: symlink db/d28/d2b/d36/d63/d6d/dc4/leb 0 2026-03-09T20:48:05.121 INFO:tasks.workunit.client.1.vm10.stdout:1/753: readlink d2/da/d25/d46/d80/da0/d92/db5/le8 0 2026-03-09T20:48:05.133 INFO:tasks.workunit.client.0.vm07.stdout:0/828: dwrite d1/d2/d33/d35/f45 [0,4194304] 0 2026-03-09T20:48:05.139 INFO:tasks.workunit.client.1.vm10.stdout:3/713: mknod dc/db4/de3/cf2 0 2026-03-09T20:48:05.142 INFO:tasks.workunit.client.1.vm10.stdout:2/749: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d94/f9b [0,4194304] 0 2026-03-09T20:48:05.146 INFO:tasks.workunit.client.1.vm10.stdout:0/728: rmdir d2/d9/db8 39 
2026-03-09T20:48:05.149 INFO:tasks.workunit.client.0.vm07.stdout:9/774: mknod d4/d16/d29/d24/d37/d44/d62/d74/c112 0 2026-03-09T20:48:05.149 INFO:tasks.workunit.client.1.vm10.stdout:9/786: creat d2/d3/de/d35/f106 x:0 0 0 2026-03-09T20:48:05.149 INFO:tasks.workunit.client.1.vm10.stdout:6/760: mknod d3/d30/d7f/d24/d39/d9e/ce9 0 2026-03-09T20:48:05.150 INFO:tasks.workunit.client.1.vm10.stdout:6/761: chown d3/d30/d7f/d24/lc5 620 1 2026-03-09T20:48:05.151 INFO:tasks.workunit.client.1.vm10.stdout:6/762: chown d3/da/d11/d26/d5b 772134 1 2026-03-09T20:48:05.152 INFO:tasks.workunit.client.0.vm07.stdout:8/735: read d1/d3b/f9a [101969,86583] 0 2026-03-09T20:48:05.155 INFO:tasks.workunit.client.1.vm10.stdout:6/763: dwrite d3/d30/d7f/d36/f4f [4194304,4194304] 0 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:7/766: mknod db/d46/d89/cec 0 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:7/767: write db/d21/d23/f29 [3532136,1044] 0 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:7/768: truncate db/d28/fac 391235 0 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:7/769: dread - db/d28/d2b/d36/d3b/fde zero size 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:0/729: chown d2/d4a/d58/d82/d71/d8e/lc2 192365 1 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:8/813: symlink d0/d54/dec/l107 0 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:0/730: write d2/d4a/d58/d82/d60/d98/ff7 [97490,128855] 0 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:8/814: readlink d0/d22/d25/d40/d86/l99 0 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:8/815: chown d0/d54/c73 54971 1 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:8/816: chown d0/d22/f76 399983 1 2026-03-09T20:48:05.190 INFO:tasks.workunit.client.1.vm10.stdout:6/764: dread - d3/d30/d7f/d36/d5c/dad/fce zero size 2026-03-09T20:48:05.191 
INFO:tasks.workunit.client.1.vm10.stdout:7/770: mknod db/d28/d2b/d36/ced 0 2026-03-09T20:48:05.191 INFO:tasks.workunit.client.1.vm10.stdout:6/765: dread d3/da/d11/f17 [0,4194304] 0 2026-03-09T20:48:05.191 INFO:tasks.workunit.client.1.vm10.stdout:8/817: symlink d0/d92/de8/d64/db5/l108 0 2026-03-09T20:48:05.191 INFO:tasks.workunit.client.1.vm10.stdout:8/818: chown d0/d54/dec 27 1 2026-03-09T20:48:05.191 INFO:tasks.workunit.client.1.vm10.stdout:3/714: link dc/d14/d20/d2e/d56/lae dc/d14/d26/d29/d40/da8/d69/lf3 0 2026-03-09T20:48:05.191 INFO:tasks.workunit.client.1.vm10.stdout:7/771: symlink db/d21/d95/lee 0 2026-03-09T20:48:05.191 INFO:tasks.workunit.client.1.vm10.stdout:6/766: dwrite d3/d30/d7f/d36/f4f [0,4194304] 0 2026-03-09T20:48:05.191 INFO:tasks.workunit.client.1.vm10.stdout:3/715: readlink dc/d14/d26/d29/d2a/d76/l9d 0 2026-03-09T20:48:05.191 INFO:tasks.workunit.client.1.vm10.stdout:1/754: dread d2/da/f35 [0,4194304] 0 2026-03-09T20:48:05.191 INFO:tasks.workunit.client.1.vm10.stdout:3/716: write dc/d14/d26/d29/d40/da8/fe4 [952613,83443] 0 2026-03-09T20:48:05.194 INFO:tasks.workunit.client.1.vm10.stdout:0/731: read d2/d4a/d58/d82/d93/fbc [1308013,104496] 0 2026-03-09T20:48:05.203 INFO:tasks.workunit.client.1.vm10.stdout:0/732: dread - d2/d9/da/d11/dd1/db7/dcd/d63/ff8 zero size 2026-03-09T20:48:05.205 INFO:tasks.workunit.client.1.vm10.stdout:8/819: fsync d0/d22/d25/d2e/d41/d85/db9/dc6/fc9 0 2026-03-09T20:48:05.212 INFO:tasks.workunit.client.1.vm10.stdout:7/772: symlink db/d21/d95/lef 0 2026-03-09T20:48:05.213 INFO:tasks.workunit.client.0.vm07.stdout:8/736: dread d1/f85 [0,4194304] 0 2026-03-09T20:48:05.217 INFO:tasks.workunit.client.1.vm10.stdout:3/717: fdatasync dc/d14/d26/d29/d40/da8/fc6 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.1.vm10.stdout:6/767: creat d3/da/d11/d89/db9/dd1/dd2/da9/fea x:0 0 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.1.vm10.stdout:6/768: creat d3/d79/feb x:0 0 0 2026-03-09T20:48:05.245 
INFO:tasks.workunit.client.1.vm10.stdout:0/733: creat d2/d9/db8/f100 x:0 0 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.1.vm10.stdout:3/718: fdatasync dc/d14/d26/d29/f5c 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.1.vm10.stdout:0/734: mkdir d2/d9/db8/db4/d101 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.1.vm10.stdout:3/719: rename dc/d14/d26/d37/c47 to dc/db4/de3/cf4 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.1.vm10.stdout:6/769: getdents d3/da/d11/d89/db9/dd1/dd2/dc3 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.1.vm10.stdout:3/720: mknod dc/d14/d20/cf5 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.0.vm07.stdout:7/858: creat d3/da/db/d32/d3e/d11c/f121 x:0 0 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.0.vm07.stdout:4/737: link d2/d55/d5d/d3f/d4a/d85/cb1 d2/cc7 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.0.vm07.stdout:0/829: mknod d1/d2/d98/c101 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.0.vm07.stdout:8/737: dread d1/fb5 [0,4194304] 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.0.vm07.stdout:6/805: rename d8/d16/d61/ffa to d8/d16/d22/d24/da0/dab/d40/f107 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.0.vm07.stdout:1/830: getdents d3/d97/da1/dab 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.0.vm07.stdout:7/859: mkdir d3/da/db/d32/d3e/d5c/d122 0 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.0.vm07.stdout:6/806: dread - d8/d16/d22/d24/da0/dab/dc1/fee zero size 2026-03-09T20:48:05.245 INFO:tasks.workunit.client.0.vm07.stdout:6/807: dwrite d8/d16/d22/d9b/de4/d85/f5a [4194304,4194304] 0 2026-03-09T20:48:05.246 INFO:tasks.workunit.client.0.vm07.stdout:1/831: unlink d3/d23/d67/cdb 0 2026-03-09T20:48:05.247 INFO:tasks.workunit.client.1.vm10.stdout:0/735: creat d2/d9/da/d48/dac/de8/f102 x:0 0 0 2026-03-09T20:48:05.249 INFO:tasks.workunit.client.0.vm07.stdout:1/832: dread d3/d97/da1/dc5/fc3 [0,4194304] 0 2026-03-09T20:48:05.262 INFO:tasks.workunit.client.1.vm10.stdout:0/736: 
fsync d2/d4a/d58/d82/d71/fe7 0 2026-03-09T20:48:05.265 INFO:tasks.workunit.client.0.vm07.stdout:9/775: dread d4/d8/dc/f25 [0,4194304] 0 2026-03-09T20:48:05.266 INFO:tasks.workunit.client.0.vm07.stdout:9/776: truncate d4/d8/d59/de4/f110 516255 0 2026-03-09T20:48:05.270 INFO:tasks.workunit.client.0.vm07.stdout:7/860: creat d3/da/d53/df5/f123 x:0 0 0 2026-03-09T20:48:05.270 INFO:tasks.workunit.client.0.vm07.stdout:8/738: symlink d1/d5d/le8 0 2026-03-09T20:48:05.270 INFO:tasks.workunit.client.0.vm07.stdout:6/808: dread - d8/d16/d22/d24/da0/dab/d40/fe7 zero size 2026-03-09T20:48:05.274 INFO:tasks.workunit.client.0.vm07.stdout:9/777: truncate d4/d8/f34 939255 0 2026-03-09T20:48:05.275 INFO:tasks.workunit.client.1.vm10.stdout:0/737: creat d2/d9/da/d11/dd1/f103 x:0 0 0 2026-03-09T20:48:05.277 INFO:tasks.workunit.client.0.vm07.stdout:8/739: mkdir d1/dc/d16/dad/de9 0 2026-03-09T20:48:05.277 INFO:tasks.workunit.client.0.vm07.stdout:8/740: readlink d1/d5d/d6f/d2f/d53/la8 0 2026-03-09T20:48:05.278 INFO:tasks.workunit.client.0.vm07.stdout:7/861: mknod d3/da/d53/db7/dde/dc5/c124 0 2026-03-09T20:48:05.280 INFO:tasks.workunit.client.0.vm07.stdout:6/809: creat d8/d16/d4b/f108 x:0 0 0 2026-03-09T20:48:05.283 INFO:tasks.workunit.client.0.vm07.stdout:1/833: rename d3/d66/d86/caf to d3/d23/d52/c112 0 2026-03-09T20:48:05.284 INFO:tasks.workunit.client.0.vm07.stdout:9/778: creat d4/d8/d19/d89/da7/ddd/f113 x:0 0 0 2026-03-09T20:48:05.288 INFO:tasks.workunit.client.0.vm07.stdout:1/834: dread - d3/d97/da1/dc5/d90/de8/f102 zero size 2026-03-09T20:48:05.291 INFO:tasks.workunit.client.0.vm07.stdout:8/741: symlink d1/dc/d16/d26/de2/lea 0 2026-03-09T20:48:05.292 INFO:tasks.workunit.client.0.vm07.stdout:9/779: dread d4/fa [0,4194304] 0 2026-03-09T20:48:05.294 INFO:tasks.workunit.client.1.vm10.stdout:8/820: sync 2026-03-09T20:48:05.313 INFO:tasks.workunit.client.0.vm07.stdout:8/742: mkdir d1/d5d/d6f/d2f/d4d/d63/deb 0 2026-03-09T20:48:05.316 INFO:tasks.workunit.client.0.vm07.stdout:5/862: write 
d5/df/d13/d6c/fc9 [2358842,25866] 0 2026-03-09T20:48:05.318 INFO:tasks.workunit.client.1.vm10.stdout:4/718: dwrite d1/d67/fa5 [0,4194304] 0 2026-03-09T20:48:05.329 INFO:tasks.workunit.client.0.vm07.stdout:8/743: creat d1/d8f/fec x:0 0 0 2026-03-09T20:48:05.339 INFO:tasks.workunit.client.0.vm07.stdout:2/812: dwrite d2/d11/f52 [0,4194304] 0 2026-03-09T20:48:05.349 INFO:tasks.workunit.client.0.vm07.stdout:3/794: write d1/d5/d9/d11/f73 [3624438,16212] 0 2026-03-09T20:48:05.353 INFO:tasks.workunit.client.1.vm10.stdout:5/697: write d2/d39/dbf/d63/fcd [437101,86275] 0 2026-03-09T20:48:05.356 INFO:tasks.workunit.client.1.vm10.stdout:5/698: dread d2/d39/dbf/d66/fc7 [0,4194304] 0 2026-03-09T20:48:05.360 INFO:tasks.workunit.client.1.vm10.stdout:9/787: write d2/d12/f20 [2272382,63076] 0 2026-03-09T20:48:05.364 INFO:tasks.workunit.client.0.vm07.stdout:5/863: creat d5/d19/d73/d9c/d10c/f129 x:0 0 0 2026-03-09T20:48:05.368 INFO:tasks.workunit.client.0.vm07.stdout:9/780: link d4/d16/d78/dc4/cfc d4/d8/d19/d5f/d73/dbc/c114 0 2026-03-09T20:48:05.371 INFO:tasks.workunit.client.0.vm07.stdout:8/744: chown d1/dc/c5e 870 1 2026-03-09T20:48:05.373 INFO:tasks.workunit.client.1.vm10.stdout:5/699: mkdir d2/d39/dbf/d69/d109 0 2026-03-09T20:48:05.379 INFO:tasks.workunit.client.1.vm10.stdout:2/750: write d5/d18/d27/d89/db6/d41/d77/db3/db5/f69 [3348083,27388] 0 2026-03-09T20:48:05.381 INFO:tasks.workunit.client.0.vm07.stdout:5/864: rmdir d5/d19/d73 39 2026-03-09T20:48:05.381 INFO:tasks.workunit.client.1.vm10.stdout:9/788: dread d2/d3/f2f [0,4194304] 0 2026-03-09T20:48:05.385 INFO:tasks.workunit.client.1.vm10.stdout:5/700: dwrite d2/d39/dbf/f61 [0,4194304] 0 2026-03-09T20:48:05.385 INFO:tasks.workunit.client.0.vm07.stdout:9/781: read - d4/d16/d29/fab zero size 2026-03-09T20:48:05.385 INFO:tasks.workunit.client.0.vm07.stdout:8/745: creat d1/d5d/d6f/fed x:0 0 0 2026-03-09T20:48:05.396 INFO:tasks.workunit.client.1.vm10.stdout:1/755: write d2/fd2 [595404,17558] 0 2026-03-09T20:48:05.399 
INFO:tasks.workunit.client.1.vm10.stdout:9/789: getdents d2/d3/d85/df7 0 2026-03-09T20:48:05.410 INFO:tasks.workunit.client.1.vm10.stdout:1/756: mknod d2/da/d25/d46/d80/da0/d92/db5/dc7/cf3 0 2026-03-09T20:48:05.410 INFO:tasks.workunit.client.1.vm10.stdout:5/701: rename d2/d27/d37/d46/d5d/c9e to d2/d39/c10a 0 2026-03-09T20:48:05.410 INFO:tasks.workunit.client.0.vm07.stdout:9/782: fdatasync d4/d16/ff9 0 2026-03-09T20:48:05.410 INFO:tasks.workunit.client.0.vm07.stdout:3/795: creat d1/d5/f102 x:0 0 0 2026-03-09T20:48:05.410 INFO:tasks.workunit.client.0.vm07.stdout:3/796: chown d1/d5/dcd 1033669 1 2026-03-09T20:48:05.419 INFO:tasks.workunit.client.0.vm07.stdout:9/783: dread f2 [4194304,4194304] 0 2026-03-09T20:48:05.421 INFO:tasks.workunit.client.0.vm07.stdout:5/865: symlink d5/d19/d73/d94/l12a 0 2026-03-09T20:48:05.422 INFO:tasks.workunit.client.1.vm10.stdout:7/773: write db/d28/d4c/d6e/fc8 [1026893,57084] 0 2026-03-09T20:48:05.423 INFO:tasks.workunit.client.1.vm10.stdout:3/721: write dc/d14/d26/d29/f5c [400101,41001] 0 2026-03-09T20:48:05.430 INFO:tasks.workunit.client.0.vm07.stdout:0/830: write d1/d2/d33/d35/f5c [5198320,56903] 0 2026-03-09T20:48:05.435 INFO:tasks.workunit.client.0.vm07.stdout:4/738: truncate d2/f4c 1600354 0 2026-03-09T20:48:05.440 INFO:tasks.workunit.client.1.vm10.stdout:5/702: creat d2/d27/d37/d46/d5d/d77/f10b x:0 0 0 2026-03-09T20:48:05.442 INFO:tasks.workunit.client.1.vm10.stdout:6/770: dwrite d3/d30/d6a/fdd [0,4194304] 0 2026-03-09T20:48:05.453 INFO:tasks.workunit.client.0.vm07.stdout:3/797: mknod d1/d5/d9/d2f/d34/d9e/c103 0 2026-03-09T20:48:05.453 INFO:tasks.workunit.client.0.vm07.stdout:3/798: write d1/d5/d9/d2f/d3d/d71/d76/db6/f100 [912320,94659] 0 2026-03-09T20:48:05.465 INFO:tasks.workunit.client.0.vm07.stdout:9/784: rename d4/d16/d29/d24/c5e to d4/d16/d78/c115 0 2026-03-09T20:48:05.479 INFO:tasks.workunit.client.1.vm10.stdout:2/751: mkdir d5/d18/d27/d38/dcf/df8 0 2026-03-09T20:48:05.480 INFO:tasks.workunit.client.0.vm07.stdout:5/866: 
readlink d5/df/d13/l8a 0 2026-03-09T20:48:05.481 INFO:tasks.workunit.client.1.vm10.stdout:3/722: dread dc/d14/d90/fba [0,4194304] 0 2026-03-09T20:48:05.487 INFO:tasks.workunit.client.0.vm07.stdout:0/831: truncate d1/d2/dc/d17/f23 7834463 0 2026-03-09T20:48:05.488 INFO:tasks.workunit.client.1.vm10.stdout:0/738: write d2/d4a/d58/d82/d60/fd8 [9393902,57587] 0 2026-03-09T20:48:05.495 INFO:tasks.workunit.client.0.vm07.stdout:1/835: write d3/f28 [1007014,58859] 0 2026-03-09T20:48:05.495 INFO:tasks.workunit.client.0.vm07.stdout:7/862: write d3/da/db/d32/d3e/dac/d1f/d2b/d52/f74 [1187548,15737] 0 2026-03-09T20:48:05.496 INFO:tasks.workunit.client.1.vm10.stdout:8/821: dwrite d0/d22/d25/d40/d86/fee [0,4194304] 0 2026-03-09T20:48:05.500 INFO:tasks.workunit.client.0.vm07.stdout:6/810: dwrite d8/d26/f4d [0,4194304] 0 2026-03-09T20:48:05.508 INFO:tasks.workunit.client.0.vm07.stdout:4/739: truncate d2/d1f/f9c 206658 0 2026-03-09T20:48:05.512 INFO:tasks.workunit.client.1.vm10.stdout:5/703: mkdir d2/d39/d4b/d7a/dd9/d10c 0 2026-03-09T20:48:05.517 INFO:tasks.workunit.client.1.vm10.stdout:6/771: creat d3/d30/d7f/d36/d5c/d8d/fec x:0 0 0 2026-03-09T20:48:05.518 INFO:tasks.workunit.client.1.vm10.stdout:4/719: dwrite d1/d2/d5c/d64/d6b/d81/dac/d39/f56 [0,4194304] 0 2026-03-09T20:48:05.528 INFO:tasks.workunit.client.1.vm10.stdout:2/752: creat d5/d18/d27/d89/db6/d41/de4/ff9 x:0 0 0 2026-03-09T20:48:05.540 INFO:tasks.workunit.client.1.vm10.stdout:3/723: mknod dc/d14/d26/d29/d40/cf6 0 2026-03-09T20:48:05.541 INFO:tasks.workunit.client.1.vm10.stdout:3/724: readlink dc/d14/d26/d37/l4d 0 2026-03-09T20:48:05.541 INFO:tasks.workunit.client.0.vm07.stdout:8/746: dwrite d1/dc/d16/f4b [0,4194304] 0 2026-03-09T20:48:05.541 INFO:tasks.workunit.client.0.vm07.stdout:1/836: truncate d3/d97/da1/dc5/fc3 2529021 0 2026-03-09T20:48:05.541 INFO:tasks.workunit.client.0.vm07.stdout:2/813: dwrite d2/f7 [0,4194304] 0 2026-03-09T20:48:05.543 INFO:tasks.workunit.client.0.vm07.stdout:1/837: readlink d3/d23/l4e 0 
2026-03-09T20:48:05.548 INFO:tasks.workunit.client.1.vm10.stdout:0/739: dread - d2/d9/d2a/fdc zero size 2026-03-09T20:48:05.553 INFO:tasks.workunit.client.0.vm07.stdout:7/863: mkdir d3/da/db/d32/d3e/dac/d43/d62/de0/d125 0 2026-03-09T20:48:05.553 INFO:tasks.workunit.client.1.vm10.stdout:9/790: write d2/d3/f6c [1608575,87493] 0 2026-03-09T20:48:05.553 INFO:tasks.workunit.client.1.vm10.stdout:0/740: dwrite d2/d9/da/d11/f42 [0,4194304] 0 2026-03-09T20:48:05.558 INFO:tasks.workunit.client.1.vm10.stdout:1/757: write d2/da/f32 [1449653,105183] 0 2026-03-09T20:48:05.558 INFO:tasks.workunit.client.0.vm07.stdout:6/811: rmdir d8/d26/d7d 39 2026-03-09T20:48:05.569 INFO:tasks.workunit.client.0.vm07.stdout:3/799: rename d1/d5/d9/d11/d6d/d80/f93 to d1/d5/d9/d2f/d3d/d71/dcc/f104 0 2026-03-09T20:48:05.573 INFO:tasks.workunit.client.1.vm10.stdout:8/822: dread d0/d22/d25/d40/d86/d91/fa8 [4194304,4194304] 0 2026-03-09T20:48:05.577 INFO:tasks.workunit.client.1.vm10.stdout:6/772: truncate d3/d30/d33/f35 2830469 0 2026-03-09T20:48:05.577 INFO:tasks.workunit.client.1.vm10.stdout:7/774: dwrite db/f70 [0,4194304] 0 2026-03-09T20:48:05.583 INFO:tasks.workunit.client.0.vm07.stdout:5/867: truncate d5/df/d13/fef 598910 0 2026-03-09T20:48:05.594 INFO:tasks.workunit.client.1.vm10.stdout:2/753: dread d5/d18/d27/db8/fce [0,4194304] 0 2026-03-09T20:48:05.604 INFO:tasks.workunit.client.1.vm10.stdout:1/758: creat d2/da/d25/d46/ddb/ff4 x:0 0 0 2026-03-09T20:48:05.607 INFO:tasks.workunit.client.1.vm10.stdout:8/823: mknod d0/d54/dec/c109 0 2026-03-09T20:48:05.609 INFO:tasks.workunit.client.0.vm07.stdout:7/864: mkdir d3/da/db/d32/d126 0 2026-03-09T20:48:05.612 INFO:tasks.workunit.client.0.vm07.stdout:6/812: dread - d8/d16/da3/f9f zero size 2026-03-09T20:48:05.617 INFO:tasks.workunit.client.1.vm10.stdout:7/775: rmdir db/d28/d4c 39 2026-03-09T20:48:05.623 INFO:tasks.workunit.client.1.vm10.stdout:2/754: write d5/d18/d9f/fd8 [269765,30967] 0 2026-03-09T20:48:05.634 
INFO:tasks.workunit.client.0.vm07.stdout:9/785: write d4/d16/d29/f4a [2513457,69031] 0 2026-03-09T20:48:05.635 INFO:tasks.workunit.client.1.vm10.stdout:2/755: dread d5/d18/d1b/d22/f4f [0,4194304] 0 2026-03-09T20:48:05.635 INFO:tasks.workunit.client.1.vm10.stdout:2/756: chown d5 3314998 1 2026-03-09T20:48:05.640 INFO:tasks.workunit.client.0.vm07.stdout:8/747: mkdir d1/d5d/d6f/d2f/d4d/dd4/dd9/dee 0 2026-03-09T20:48:05.641 INFO:tasks.workunit.client.1.vm10.stdout:1/759: stat d2/da/d25/d3e/d55/cab 0 2026-03-09T20:48:05.643 INFO:tasks.workunit.client.0.vm07.stdout:7/865: creat d3/da4/df2/f127 x:0 0 0 2026-03-09T20:48:05.646 INFO:tasks.workunit.client.0.vm07.stdout:6/813: fdatasync d8/d16/d22/d24/da0/dab/dc1/fcb 0 2026-03-09T20:48:05.652 INFO:tasks.workunit.client.1.vm10.stdout:7/776: fsync db/d46/f66 0 2026-03-09T20:48:05.652 INFO:tasks.workunit.client.1.vm10.stdout:5/704: dread d2/d1b/d54/d78/fad [0,4194304] 0 2026-03-09T20:48:05.653 INFO:tasks.workunit.client.1.vm10.stdout:9/791: dread d2/d3/de/d35/f78 [0,4194304] 0 2026-03-09T20:48:05.665 INFO:tasks.workunit.client.0.vm07.stdout:3/800: mknod d1/d5/d9/d11/d6d/c105 0 2026-03-09T20:48:05.670 INFO:tasks.workunit.client.0.vm07.stdout:4/740: write d2/f9 [2762454,89978] 0 2026-03-09T20:48:05.671 INFO:tasks.workunit.client.0.vm07.stdout:4/741: chown d2/d1f/f2c 240 1 2026-03-09T20:48:05.677 INFO:tasks.workunit.client.1.vm10.stdout:4/720: rename d1/d2/d5c/d64/d6b/fe4 to d1/fe7 0 2026-03-09T20:48:05.687 INFO:tasks.workunit.client.0.vm07.stdout:5/868: mknod d5/df/c12b 0 2026-03-09T20:48:05.691 INFO:tasks.workunit.client.0.vm07.stdout:1/838: dread d3/f6f [0,4194304] 0 2026-03-09T20:48:05.706 INFO:tasks.workunit.client.0.vm07.stdout:8/748: creat d1/dc/d16/dad/d87/d93/fef x:0 0 0 2026-03-09T20:48:05.708 INFO:tasks.workunit.client.0.vm07.stdout:9/786: dread d4/d16/d29/d24/f8c [0,4194304] 0 2026-03-09T20:48:05.721 INFO:tasks.workunit.client.0.vm07.stdout:0/832: write d1/d2/dc/d80/f87 [2404951,61160] 0 2026-03-09T20:48:05.729 
INFO:tasks.workunit.client.0.vm07.stdout:2/814: link d2/d11/d56/f5a d2/db/d28/d90/dd6/f109 0 2026-03-09T20:48:05.735 INFO:tasks.workunit.client.1.vm10.stdout:3/725: dwrite dc/d14/d26/d29/d2a/d76/fc4 [0,4194304] 0 2026-03-09T20:48:05.735 INFO:tasks.workunit.client.1.vm10.stdout:7/777: mknod db/d21/d23/cf0 0 2026-03-09T20:48:05.736 INFO:tasks.workunit.client.0.vm07.stdout:6/814: symlink d8/d5d/d97/l109 0 2026-03-09T20:48:05.736 INFO:tasks.workunit.client.1.vm10.stdout:5/705: creat d2/d27/d37/d46/d99/f10d x:0 0 0 2026-03-09T20:48:05.736 INFO:tasks.workunit.client.0.vm07.stdout:3/801: chown d1/d5/d9/d11/c62 8297336 1 2026-03-09T20:48:05.745 INFO:tasks.workunit.client.1.vm10.stdout:3/726: fdatasync dc/d14/d26/d29/d40/d8c/fbc 0 2026-03-09T20:48:05.745 INFO:tasks.workunit.client.0.vm07.stdout:6/815: write d8/d16/d22/ff1 [3053573,13186] 0 2026-03-09T20:48:05.746 INFO:tasks.workunit.client.0.vm07.stdout:6/816: chown d8/cea 1 1 2026-03-09T20:48:05.755 INFO:tasks.workunit.client.1.vm10.stdout:9/792: dread d2/d3/de/d35/fca [0,4194304] 0 2026-03-09T20:48:05.777 INFO:tasks.workunit.client.0.vm07.stdout:5/869: dread - d5/d19/d73/dbc/ff9 zero size 2026-03-09T20:48:05.777 INFO:tasks.workunit.client.1.vm10.stdout:0/741: dwrite d2/f99 [0,4194304] 0 2026-03-09T20:48:05.803 INFO:tasks.workunit.client.0.vm07.stdout:8/749: rename d1/dc/d16/c65 to d1/d8f/cf0 0 2026-03-09T20:48:05.808 INFO:tasks.workunit.client.1.vm10.stdout:6/773: write d3/da/d11/d89/db9/dd1/dd2/f69 [638453,96080] 0 2026-03-09T20:48:05.808 INFO:tasks.workunit.client.1.vm10.stdout:8/824: write d0/dd1/fdf [2451439,48397] 0 2026-03-09T20:48:05.815 INFO:tasks.workunit.client.0.vm07.stdout:9/787: symlink d4/d8/d19/d89/da7/l116 0 2026-03-09T20:48:05.820 INFO:tasks.workunit.client.0.vm07.stdout:1/839: dwrite d3/d66/f7e [0,4194304] 0 2026-03-09T20:48:05.824 INFO:tasks.workunit.client.0.vm07.stdout:1/840: dread d3/d66/f7e [0,4194304] 0 2026-03-09T20:48:05.830 INFO:tasks.workunit.client.0.vm07.stdout:2/815: symlink 
d2/db/d28/d57/de1/l10a 0 2026-03-09T20:48:05.833 INFO:tasks.workunit.client.0.vm07.stdout:7/866: unlink d3/d58/d82/fd8 0 2026-03-09T20:48:05.852 INFO:tasks.workunit.client.0.vm07.stdout:3/802: rename d1/d5/d9/d2f/d34/d46/d5d/ffa to d1/d5/d9/d2f/d66/dc0/f106 0 2026-03-09T20:48:05.863 INFO:tasks.workunit.client.0.vm07.stdout:8/750: dread d1/dc/d6a/f62 [0,4194304] 0 2026-03-09T20:48:05.864 INFO:tasks.workunit.client.0.vm07.stdout:4/742: creat d2/d55/d5d/d3f/d4a/d4b/fc8 x:0 0 0 2026-03-09T20:48:05.868 INFO:tasks.workunit.client.0.vm07.stdout:3/803: unlink d1/d5/d9/d2f/d66/lb1 0 2026-03-09T20:48:05.869 INFO:tasks.workunit.client.0.vm07.stdout:9/788: mkdir d4/d8/d19/d5f/dcf/d117 0 2026-03-09T20:48:05.869 INFO:tasks.workunit.client.0.vm07.stdout:9/789: chown d4/d16/f41 170194 1 2026-03-09T20:48:05.872 INFO:tasks.workunit.client.0.vm07.stdout:1/841: link d3/f7d d3/d23/d52/f113 0 2026-03-09T20:48:05.882 INFO:tasks.workunit.client.0.vm07.stdout:0/833: dwrite d1/d2/dc/fd6 [0,4194304] 0 2026-03-09T20:48:05.892 INFO:tasks.workunit.client.0.vm07.stdout:5/870: dwrite d5/df/d13/d3e/de1/fe7 [0,4194304] 0 2026-03-09T20:48:05.893 INFO:tasks.workunit.client.0.vm07.stdout:2/816: fsync d2/f4 0 2026-03-09T20:48:05.900 INFO:tasks.workunit.client.0.vm07.stdout:6/817: dwrite d8/d16/d22/d9b/fc6 [0,4194304] 0 2026-03-09T20:48:05.908 INFO:tasks.workunit.client.0.vm07.stdout:6/818: dwrite d8/d16/d22/d24/da0/dab/d40/d69/f78 [8388608,4194304] 0 2026-03-09T20:48:05.920 INFO:tasks.workunit.client.0.vm07.stdout:7/867: rename d3/da/db/d79/dc3/fd5 to d3/da4/df2/f128 0 2026-03-09T20:48:05.921 INFO:tasks.workunit.client.0.vm07.stdout:7/868: chown d3/d58/d82/f109 3991434 1 2026-03-09T20:48:05.939 INFO:tasks.workunit.client.1.vm10.stdout:5/706: readlink d2/d58/ldf 0 2026-03-09T20:48:05.944 INFO:tasks.workunit.client.1.vm10.stdout:4/721: mkdir d1/dd8/de8 0 2026-03-09T20:48:05.944 INFO:tasks.workunit.client.0.vm07.stdout:2/817: creat d2/db/d28/d5c/f10b x:0 0 0 2026-03-09T20:48:05.948 
INFO:tasks.workunit.client.1.vm10.stdout:3/727: dwrite dc/d14/d26/fd8 [0,4194304] 0 2026-03-09T20:48:05.956 INFO:tasks.workunit.client.0.vm07.stdout:8/751: mknod d1/dc/d16/d26/de2/cf1 0 2026-03-09T20:48:05.960 INFO:tasks.workunit.client.1.vm10.stdout:6/774: mkdir d3/da/d11/d89/db9/dd1/dd2/da9/ded 0 2026-03-09T20:48:05.960 INFO:tasks.workunit.client.1.vm10.stdout:1/760: creat d2/da/ff5 x:0 0 0 2026-03-09T20:48:05.960 INFO:tasks.workunit.client.1.vm10.stdout:6/775: fdatasync d3/da/d11/d89/fb0 0 2026-03-09T20:48:05.960 INFO:tasks.workunit.client.0.vm07.stdout:8/752: write d1/db0/fe0 [973982,72051] 0 2026-03-09T20:48:05.960 INFO:tasks.workunit.client.0.vm07.stdout:8/753: write d1/dc/d16/f4b [4031421,130241] 0 2026-03-09T20:48:05.963 INFO:tasks.workunit.client.1.vm10.stdout:7/778: link db/d21/fb2 db/d46/dab/ff1 0 2026-03-09T20:48:05.972 INFO:tasks.workunit.client.0.vm07.stdout:6/819: mknod d8/d16/d22/d24/da0/dab/d40/c10a 0 2026-03-09T20:48:05.972 INFO:tasks.workunit.client.1.vm10.stdout:5/707: symlink d2/d39/dbf/d69/de9/l10e 0 2026-03-09T20:48:05.972 INFO:tasks.workunit.client.1.vm10.stdout:5/708: chown d2/d1b/d54/d7b/l90 29 1 2026-03-09T20:48:05.972 INFO:tasks.workunit.client.1.vm10.stdout:2/757: getdents d5/d18/d27/d38/dcf 0 2026-03-09T20:48:05.974 INFO:tasks.workunit.client.0.vm07.stdout:8/754: mkdir d1/dc/d6a/df2 0 2026-03-09T20:48:05.978 INFO:tasks.workunit.client.1.vm10.stdout:1/761: rename d2/da/d25/f40 to d2/da/d25/d3e/dca/da2/dd5/ff6 0 2026-03-09T20:48:05.978 INFO:tasks.workunit.client.0.vm07.stdout:6/820: mknod d8/db3/c10b 0 2026-03-09T20:48:05.980 INFO:tasks.workunit.client.1.vm10.stdout:5/709: mkdir d2/d39/dbf/d66/d10f 0 2026-03-09T20:48:05.982 INFO:tasks.workunit.client.0.vm07.stdout:0/834: read d1/d1f/d30/f8e [256608,77805] 0 2026-03-09T20:48:05.991 INFO:tasks.workunit.client.1.vm10.stdout:9/793: creat d2/d3/de/d35/f107 x:0 0 0 2026-03-09T20:48:05.994 INFO:tasks.workunit.client.1.vm10.stdout:3/728: dwrite dc/d14/d20/d2e/d56/f15 [0,4194304] 0 
2026-03-09T20:48:05.994 INFO:tasks.workunit.client.0.vm07.stdout:8/755: creat d1/db0/ff3 x:0 0 0 2026-03-09T20:48:06.006 INFO:tasks.workunit.client.0.vm07.stdout:6/821: fsync d8/d16/d22/d9b/de4/d85/f5a 0 2026-03-09T20:48:06.007 INFO:tasks.workunit.client.0.vm07.stdout:6/822: fsync d8/d16/d4b/d88/f70 0 2026-03-09T20:48:06.009 INFO:tasks.workunit.client.1.vm10.stdout:2/758: fsync d5/d18/d27/d89/db6/d41/f76 0 2026-03-09T20:48:06.010 INFO:tasks.workunit.client.0.vm07.stdout:7/869: creat d3/da/db/d32/d3e/d5c/f129 x:0 0 0 2026-03-09T20:48:06.012 INFO:tasks.workunit.client.0.vm07.stdout:5/871: getdents d5/df/d13/d4f 0 2026-03-09T20:48:06.024 INFO:tasks.workunit.client.0.vm07.stdout:8/756: dread - d1/dc/d16/d26/fd1 zero size 2026-03-09T20:48:06.030 INFO:tasks.workunit.client.1.vm10.stdout:1/762: fdatasync d2/da/d25/d46/d51/d5d/d6e/f76 0 2026-03-09T20:48:06.034 INFO:tasks.workunit.client.0.vm07.stdout:6/823: truncate d8/fdd 1763961 0 2026-03-09T20:48:06.035 INFO:tasks.workunit.client.0.vm07.stdout:6/824: read d8/d16/d4b/d88/dc3/dd5/fd3 [1615119,117331] 0 2026-03-09T20:48:06.038 INFO:tasks.workunit.client.1.vm10.stdout:3/729: truncate dc/d14/d22/fbf 843380 0 2026-03-09T20:48:06.042 INFO:tasks.workunit.client.0.vm07.stdout:4/743: write d2/f19 [759072,12418] 0 2026-03-09T20:48:06.043 INFO:tasks.workunit.client.0.vm07.stdout:4/744: stat d2/d55/d5d/d3f/d4a/d4b/f7a 0 2026-03-09T20:48:06.043 INFO:tasks.workunit.client.0.vm07.stdout:5/872: truncate d5/df/d13/d30/fe3 718013 0 2026-03-09T20:48:06.045 INFO:tasks.workunit.client.0.vm07.stdout:0/835: truncate d1/d2/d4b/f70 56486 0 2026-03-09T20:48:06.045 INFO:tasks.workunit.client.0.vm07.stdout:0/836: stat d1/dc0/dcc 0 2026-03-09T20:48:06.049 INFO:tasks.workunit.client.0.vm07.stdout:7/870: mknod d3/da4/df2/d113/c12a 0 2026-03-09T20:48:06.050 INFO:tasks.workunit.client.1.vm10.stdout:2/759: mkdir d5/d18/d27/db8/dfa 0 2026-03-09T20:48:06.050 INFO:tasks.workunit.client.1.vm10.stdout:6/776: getdents d3/da/d11/d89/db9/dd1 0 
2026-03-09T20:48:06.054 INFO:tasks.workunit.client.1.vm10.stdout:7/779: truncate db/d28/d2b/d36/d63/d84/fc1 137965 0 2026-03-09T20:48:06.061 INFO:tasks.workunit.client.0.vm07.stdout:3/804: dwrite d1/d5/d9/f1b [0,4194304] 0 2026-03-09T20:48:06.063 INFO:tasks.workunit.client.0.vm07.stdout:3/805: dread - d1/d5/d9/d2f/d99/dd8/de0/fe1 zero size 2026-03-09T20:48:06.073 INFO:tasks.workunit.client.0.vm07.stdout:9/790: dwrite d4/d11/f8a [0,4194304] 0 2026-03-09T20:48:06.073 INFO:tasks.workunit.client.1.vm10.stdout:8/825: write d0/d22/d25/d2e/d41/d85/fa7 [3356391,47564] 0 2026-03-09T20:48:06.080 INFO:tasks.workunit.client.0.vm07.stdout:5/873: dread d5/d19/d73/d9c/fcb [0,4194304] 0 2026-03-09T20:48:06.082 INFO:tasks.workunit.client.0.vm07.stdout:1/842: dwrite d3/d97/da1/dc5/d90/dd3/ff9 [0,4194304] 0 2026-03-09T20:48:06.086 INFO:tasks.workunit.client.0.vm07.stdout:2/818: write d2/db/d28/d57/f75 [676565,3612] 0 2026-03-09T20:48:06.086 INFO:tasks.workunit.client.0.vm07.stdout:1/843: chown d3/d97/da1/dc5/d90/de8 14 1 2026-03-09T20:48:06.087 INFO:tasks.workunit.client.1.vm10.stdout:0/742: dwrite d2/d4a/d58/d82/d93/fe3 [0,4194304] 0 2026-03-09T20:48:06.107 INFO:tasks.workunit.client.1.vm10.stdout:4/722: dwrite d1/d2/d5c/d64/d6b/d81/da9/fb8 [0,4194304] 0 2026-03-09T20:48:06.113 INFO:tasks.workunit.client.0.vm07.stdout:7/871: rmdir d3/da/d53/db7/dde/d96 39 2026-03-09T20:48:06.120 INFO:tasks.workunit.client.1.vm10.stdout:9/794: write d2/d3/fa [6480635,117088] 0 2026-03-09T20:48:06.120 INFO:tasks.workunit.client.0.vm07.stdout:4/745: rename d2/df/d17/cb0 to d2/d1f/cc9 0 2026-03-09T20:48:06.120 INFO:tasks.workunit.client.0.vm07.stdout:0/837: unlink d1/d1f/d53/d72/f6b 0 2026-03-09T20:48:06.126 INFO:tasks.workunit.client.1.vm10.stdout:5/710: dwrite d2/d1b/d54/d78/fad [0,4194304] 0 2026-03-09T20:48:06.138 INFO:tasks.workunit.client.1.vm10.stdout:1/763: write d2/da/d25/f27 [779496,97611] 0 2026-03-09T20:48:06.149 INFO:tasks.workunit.client.0.vm07.stdout:5/874: truncate d5/df/d13/f1f 2110574 
0 2026-03-09T20:48:06.153 INFO:tasks.workunit.client.0.vm07.stdout:2/819: mknod d2/d11/ddb/db0/db3/c10c 0 2026-03-09T20:48:06.159 INFO:tasks.workunit.client.0.vm07.stdout:1/844: symlink d3/d23/d67/l114 0 2026-03-09T20:48:06.159 INFO:tasks.workunit.client.1.vm10.stdout:7/780: creat db/d28/d2b/d36/d3b/dd5/ff2 x:0 0 0 2026-03-09T20:48:06.159 INFO:tasks.workunit.client.1.vm10.stdout:7/781: readlink db/d28/l33 0 2026-03-09T20:48:06.163 INFO:tasks.workunit.client.0.vm07.stdout:8/757: link d1/d5d/d6f/d2f/d4d/d63/l92 d1/d8f/d9d/lf4 0 2026-03-09T20:48:06.169 INFO:tasks.workunit.client.1.vm10.stdout:9/795: fsync d2/d28/d47/d6a/fc3 0 2026-03-09T20:48:06.177 INFO:tasks.workunit.client.1.vm10.stdout:9/796: readlink d2/d33/l4e 0 2026-03-09T20:48:06.178 INFO:tasks.workunit.client.0.vm07.stdout:6/825: link d8/d16/d22/d24/da0/dab/f7a d8/d16/d22/db1/dc2/f10c 0 2026-03-09T20:48:06.178 INFO:tasks.workunit.client.0.vm07.stdout:7/872: chown d3/da/d53/db7/dde/d96/c10a 4768669 1 2026-03-09T20:48:06.178 INFO:tasks.workunit.client.0.vm07.stdout:7/873: chown d3/da/db/d32/d3e/dac/d1f/d50 903826504 1 2026-03-09T20:48:06.178 INFO:tasks.workunit.client.0.vm07.stdout:4/746: creat d2/d1f/fca x:0 0 0 2026-03-09T20:48:06.178 INFO:tasks.workunit.client.0.vm07.stdout:0/838: creat d1/dc0/f102 x:0 0 0 2026-03-09T20:48:06.178 INFO:tasks.workunit.client.1.vm10.stdout:1/764: rename d2/da/d25/d3e/dca/da2/fad to d2/da/d25/d3e/dca/da2/db9/ff7 0 2026-03-09T20:48:06.188 INFO:tasks.workunit.client.1.vm10.stdout:7/782: unlink db/d28/d30/l7f 0 2026-03-09T20:48:06.190 INFO:tasks.workunit.client.1.vm10.stdout:3/730: getdents dc/d14/d20 0 2026-03-09T20:48:06.205 INFO:tasks.workunit.client.0.vm07.stdout:9/791: dwrite d4/d8/dc/d15/f30 [4194304,4194304] 0 2026-03-09T20:48:06.205 INFO:tasks.workunit.client.1.vm10.stdout:0/743: write d2/d4a/d58/d82/d71/d5d/ff2 [804888,15767] 0 2026-03-09T20:48:06.205 INFO:tasks.workunit.client.1.vm10.stdout:7/783: creat db/d28/d2b/d36/d63/ff3 x:0 0 0 2026-03-09T20:48:06.205 
INFO:tasks.workunit.client.1.vm10.stdout:4/723: write d1/d2/d5c/d64/d6b/d81/fca [203366,118025] 0 2026-03-09T20:48:06.206 INFO:tasks.workunit.client.1.vm10.stdout:2/760: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/ddc/ff6 [947833,68519] 0 2026-03-09T20:48:06.206 INFO:tasks.workunit.client.0.vm07.stdout:9/792: readlink d4/d8/d19/ld2 0 2026-03-09T20:48:06.210 INFO:tasks.workunit.client.1.vm10.stdout:6/777: dwrite d3/da/d11/f17 [0,4194304] 0 2026-03-09T20:48:06.222 INFO:tasks.workunit.client.1.vm10.stdout:5/711: dwrite d2/d27/d37/d46/ff6 [0,4194304] 0 2026-03-09T20:48:06.222 INFO:tasks.workunit.client.1.vm10.stdout:5/712: chown d2/d39/dbf/d63/fcd 280 1 2026-03-09T20:48:06.238 INFO:tasks.workunit.client.1.vm10.stdout:7/784: creat db/d28/d2b/d36/d3b/dd5/ff4 x:0 0 0 2026-03-09T20:48:06.241 INFO:tasks.workunit.client.1.vm10.stdout:2/761: mkdir d5/d18/d27/d89/db6/d41/dfb 0 2026-03-09T20:48:06.241 INFO:tasks.workunit.client.1.vm10.stdout:0/744: mknod d2/d4a/d58/d82/d71/d5d/c104 0 2026-03-09T20:48:06.243 INFO:tasks.workunit.client.1.vm10.stdout:0/745: fsync d2/d9/da/d11/ffc 0 2026-03-09T20:48:06.247 INFO:tasks.workunit.client.1.vm10.stdout:7/785: symlink db/d21/d95/lf5 0 2026-03-09T20:48:06.248 INFO:tasks.workunit.client.1.vm10.stdout:0/746: dread - d2/d9/da/d11/dd1/db7/dcd/d63/ff8 zero size 2026-03-09T20:48:06.248 INFO:tasks.workunit.client.1.vm10.stdout:3/731: creat dc/d14/d26/d29/ff7 x:0 0 0 2026-03-09T20:48:06.248 INFO:tasks.workunit.client.1.vm10.stdout:5/713: mknod d2/d39/d4b/d7a/dd9/d10c/c110 0 2026-03-09T20:48:06.249 INFO:tasks.workunit.client.1.vm10.stdout:3/732: chown dc/fbb 54578235 1 2026-03-09T20:48:06.250 INFO:tasks.workunit.client.1.vm10.stdout:5/714: stat d2/d27/d37/d46/d5d/l6b 0 2026-03-09T20:48:06.257 INFO:tasks.workunit.client.1.vm10.stdout:0/747: symlink d2/d4a/d58/d82/d71/l105 0 2026-03-09T20:48:06.265 INFO:tasks.workunit.client.1.vm10.stdout:2/762: fsync d5/f59 0 2026-03-09T20:48:06.266 INFO:tasks.workunit.client.0.vm07.stdout:7/874: dread - 
d3/da/db/d32/d3e/dac/d1f/f9e zero size 2026-03-09T20:48:06.266 INFO:tasks.workunit.client.0.vm07.stdout:3/806: rename d1/d5/d9/d11/d6d/dd0/d43/c4e to d1/d5/d9/d11/d6d/dd0/c107 0 2026-03-09T20:48:06.266 INFO:tasks.workunit.client.0.vm07.stdout:8/758: read d1/dc/d16/dad/d87/d93/fb3 [1057350,68285] 0 2026-03-09T20:48:06.266 INFO:tasks.workunit.client.0.vm07.stdout:8/759: chown d1/dc/d16/dad/d87/f97 475095 1 2026-03-09T20:48:06.266 INFO:tasks.workunit.client.1.vm10.stdout:8/826: dread d0/d22/f71 [0,4194304] 0 2026-03-09T20:48:06.266 INFO:tasks.workunit.client.1.vm10.stdout:6/778: dread d3/d30/d33/f37 [0,4194304] 0 2026-03-09T20:48:06.266 INFO:tasks.workunit.client.1.vm10.stdout:3/733: rename dc/c79 to dc/d14/d26/d29/d2a/d76/cf8 0 2026-03-09T20:48:06.266 INFO:tasks.workunit.client.1.vm10.stdout:3/734: readlink dc/d14/d20/d21/le6 0 2026-03-09T20:48:06.266 INFO:tasks.workunit.client.1.vm10.stdout:0/748: chown d2/d9/da/l88 156164 1 2026-03-09T20:48:06.267 INFO:tasks.workunit.client.1.vm10.stdout:2/763: read - d5/d18/d27/d38/d61/f81 zero size 2026-03-09T20:48:06.268 INFO:tasks.workunit.client.1.vm10.stdout:5/715: read d2/f23 [1800462,30496] 0 2026-03-09T20:48:06.270 INFO:tasks.workunit.client.1.vm10.stdout:2/764: readlink d5/d18/d1b/lcd 0 2026-03-09T20:48:06.293 INFO:tasks.workunit.client.0.vm07.stdout:9/793: write d4/d8/fd [3228806,28554] 0 2026-03-09T20:48:06.293 INFO:tasks.workunit.client.0.vm07.stdout:1/845: fdatasync d3/d14/d54/d3e/fff 0 2026-03-09T20:48:06.293 INFO:tasks.workunit.client.1.vm10.stdout:9/797: dwrite d2/d3/de/d8f/fbf [0,4194304] 0 2026-03-09T20:48:06.293 INFO:tasks.workunit.client.1.vm10.stdout:9/798: write d2/d3/d6d/d88/fd4 [882384,52494] 0 2026-03-09T20:48:06.293 INFO:tasks.workunit.client.1.vm10.stdout:9/799: dwrite d2/fc [4194304,4194304] 0 2026-03-09T20:48:06.293 INFO:tasks.workunit.client.1.vm10.stdout:6/779: symlink d3/da/d11/d89/db9/dd1/dd2/da9/lee 0 2026-03-09T20:48:06.295 INFO:tasks.workunit.client.0.vm07.stdout:6/826: mknod 
d8/d16/d22/d9b/de4/c10d 0 2026-03-09T20:48:06.305 INFO:tasks.workunit.client.0.vm07.stdout:5/875: rename d5/d33/db2/de8 to d5/df/d13/d4f/d12c 0 2026-03-09T20:48:06.305 INFO:tasks.workunit.client.0.vm07.stdout:3/807: dread d1/d5/d9/d2f/d3d/d71/fb0 [0,4194304] 0 2026-03-09T20:48:06.306 INFO:tasks.workunit.client.0.vm07.stdout:3/808: stat d1/f65 0 2026-03-09T20:48:06.306 INFO:tasks.workunit.client.0.vm07.stdout:3/809: chown d1/d5/d9/daf/fdf 222977 1 2026-03-09T20:48:06.308 INFO:tasks.workunit.client.1.vm10.stdout:2/765: truncate d5/d18/d27/da6/fe5 1012531 0 2026-03-09T20:48:06.313 INFO:tasks.workunit.client.0.vm07.stdout:9/794: creat d4/d16/d78/dc4/f118 x:0 0 0 2026-03-09T20:48:06.317 INFO:tasks.workunit.client.1.vm10.stdout:4/724: write d1/d2/d5c/d64/d6b/d79/d92/fb2 [120459,61079] 0 2026-03-09T20:48:06.317 INFO:tasks.workunit.client.0.vm07.stdout:9/795: chown d4/d16/d29/f6e 4019 1 2026-03-09T20:48:06.319 INFO:tasks.workunit.client.1.vm10.stdout:1/765: dwrite d2/da/d25/d3e/fe3 [4194304,4194304] 0 2026-03-09T20:48:06.329 INFO:tasks.workunit.client.0.vm07.stdout:2/820: rename d2/d11/d56/l103 to d2/d11/ddb/d6e/dbe/l10d 0 2026-03-09T20:48:06.330 INFO:tasks.workunit.client.1.vm10.stdout:5/716: creat d2/d27/d37/d46/d5d/d106/f111 x:0 0 0 2026-03-09T20:48:06.330 INFO:tasks.workunit.client.1.vm10.stdout:7/786: sync 2026-03-09T20:48:06.332 INFO:tasks.workunit.client.1.vm10.stdout:9/800: dread d2/d33/fb3 [0,4194304] 0 2026-03-09T20:48:06.334 INFO:tasks.workunit.client.0.vm07.stdout:5/876: dread - d5/d69/fd4 zero size 2026-03-09T20:48:06.334 INFO:tasks.workunit.client.0.vm07.stdout:8/760: mkdir d1/dc/d16/d26/de2/df5 0 2026-03-09T20:48:06.334 INFO:tasks.workunit.client.1.vm10.stdout:2/766: truncate d5/d5b/fe2 781357 0 2026-03-09T20:48:06.335 INFO:tasks.workunit.client.1.vm10.stdout:1/766: rmdir d2/da/dbc/dea 39 2026-03-09T20:48:06.341 INFO:tasks.workunit.client.0.vm07.stdout:3/810: dwrite d1/d5/d9/d2f/d34/d46/d5d/fb8 [0,4194304] 0 2026-03-09T20:48:06.351 
INFO:tasks.workunit.client.1.vm10.stdout:4/725: read d1/d2/d5c/d64/d6b/d81/dac/d1b/f8b [1294908,8660] 0 2026-03-09T20:48:06.351 INFO:tasks.workunit.client.0.vm07.stdout:0/839: link d1/d1f/cbc d1/dc0/dcc/dd9/c103 0 2026-03-09T20:48:06.359 INFO:tasks.workunit.client.1.vm10.stdout:5/717: fsync d2/d27/d75/d81/fd0 0 2026-03-09T20:48:06.361 INFO:tasks.workunit.client.1.vm10.stdout:7/787: dread db/d28/d2b/d36/d40/f44 [0,4194304] 0 2026-03-09T20:48:06.365 INFO:tasks.workunit.client.1.vm10.stdout:6/780: dread d3/d30/d33/f4e [0,4194304] 0 2026-03-09T20:48:06.370 INFO:tasks.workunit.client.1.vm10.stdout:2/767: mkdir d5/d18/d27/d38/d61/dc8/ddb/dea/dfc 0 2026-03-09T20:48:06.372 INFO:tasks.workunit.client.1.vm10.stdout:8/827: write d0/d95/fe6 [34832,75851] 0 2026-03-09T20:48:06.372 INFO:tasks.workunit.client.0.vm07.stdout:9/796: rename d4/d16/d29/l61 to d4/d8/d19/d89/l119 0 2026-03-09T20:48:06.380 INFO:tasks.workunit.client.0.vm07.stdout:3/811: truncate d1/d5/d9/d2f/d3d/fef 985628 0 2026-03-09T20:48:06.381 INFO:tasks.workunit.client.0.vm07.stdout:8/761: dread d1/dc/d16/dad/fb8 [0,4194304] 0 2026-03-09T20:48:06.381 INFO:tasks.workunit.client.0.vm07.stdout:3/812: readlink d1/d5/d9/d2f/d34/d46/l9a 0 2026-03-09T20:48:06.387 INFO:tasks.workunit.client.1.vm10.stdout:5/718: fsync d2/d27/d37/fa3 0 2026-03-09T20:48:06.389 INFO:tasks.workunit.client.1.vm10.stdout:6/781: read d3/d30/d7f/d24/d39/f88 [511818,64996] 0 2026-03-09T20:48:06.391 INFO:tasks.workunit.client.0.vm07.stdout:1/846: getdents d3/d97/da1/dab 0 2026-03-09T20:48:06.391 INFO:tasks.workunit.client.0.vm07.stdout:6/827: link d8/d16/dcd/lce d8/d5d/d97/dc4/l10e 0 2026-03-09T20:48:06.391 INFO:tasks.workunit.client.0.vm07.stdout:9/797: mkdir d4/d16/d29/d24/d37/d44/d62/d8e/d11a 0 2026-03-09T20:48:06.393 INFO:tasks.workunit.client.0.vm07.stdout:7/875: write d3/da/f11 [341522,25361] 0 2026-03-09T20:48:06.394 INFO:tasks.workunit.client.0.vm07.stdout:7/876: stat d3/da/db/d32/d3e/dac/d1f/d50/la5 0 2026-03-09T20:48:06.397 
INFO:tasks.workunit.client.1.vm10.stdout:7/788: dread db/d1f/f62 [0,4194304] 0 2026-03-09T20:48:06.398 INFO:tasks.workunit.client.1.vm10.stdout:0/749: dwrite d2/d9/da/d11/dd1/db7/dcd/d63/f9c [0,4194304] 0 2026-03-09T20:48:06.399 INFO:tasks.workunit.client.0.vm07.stdout:4/747: dwrite d2/f4c [0,4194304] 0 2026-03-09T20:48:06.409 INFO:tasks.workunit.client.1.vm10.stdout:1/767: mkdir d2/da/d25/d46/d51/d5d/d6e/d70/db3/dd4/df1/df8 0 2026-03-09T20:48:06.415 INFO:tasks.workunit.client.0.vm07.stdout:2/821: truncate d2/d11/d56/f5a 962657 0 2026-03-09T20:48:06.415 INFO:tasks.workunit.client.0.vm07.stdout:2/822: chown d2/db/d28/d90/dd6 1 1 2026-03-09T20:48:06.415 INFO:tasks.workunit.client.0.vm07.stdout:2/823: read - d2/d11/ffe zero size 2026-03-09T20:48:06.417 INFO:tasks.workunit.client.0.vm07.stdout:5/877: creat d5/d19/d73/dbc/d10f/f12d x:0 0 0 2026-03-09T20:48:06.419 INFO:tasks.workunit.client.1.vm10.stdout:3/735: truncate dc/d14/d26/d29/f70 1038834 0 2026-03-09T20:48:06.430 INFO:tasks.workunit.client.1.vm10.stdout:9/801: dwrite d2/d3/de/d8f/dbc/fda [0,4194304] 0 2026-03-09T20:48:06.440 INFO:tasks.workunit.client.0.vm07.stdout:3/813: rename d1/d5/f102 to d1/d5/d9/daf/d9f/f108 0 2026-03-09T20:48:06.444 INFO:tasks.workunit.client.1.vm10.stdout:8/828: dread d0/d92/de8/d64/fc3 [0,4194304] 0 2026-03-09T20:48:06.444 INFO:tasks.workunit.client.0.vm07.stdout:3/814: write d1/d5/d9/d2f/d66/dc0/fde [505254,26223] 0 2026-03-09T20:48:06.446 INFO:tasks.workunit.client.1.vm10.stdout:2/768: rename d5/fb to d5/d18/d27/d89/db6/d41/d77/db3/db5/ffd 0 2026-03-09T20:48:06.455 INFO:tasks.workunit.client.0.vm07.stdout:6/828: unlink d8/f12 0 2026-03-09T20:48:06.460 INFO:tasks.workunit.client.0.vm07.stdout:9/798: rmdir d4/d11/d23/d32 39 2026-03-09T20:48:06.462 INFO:tasks.workunit.client.1.vm10.stdout:0/750: creat d2/d9/da/d35/d30/f106 x:0 0 0 2026-03-09T20:48:06.463 INFO:tasks.workunit.client.1.vm10.stdout:0/751: readlink d2/d9/d69/l96 0 2026-03-09T20:48:06.471 
INFO:tasks.workunit.client.1.vm10.stdout:1/768: truncate d2/da/d25/d46/d80/da0/d92/fac 759652 0 2026-03-09T20:48:06.471 INFO:tasks.workunit.client.1.vm10.stdout:1/769: chown d2/da/d25/d46/d51/d7e/lce 0 1 2026-03-09T20:48:06.587 INFO:tasks.workunit.client.1.vm10.stdout:3/736: dread dc/d14/d27/f3f [0,4194304] 0 2026-03-09T20:48:06.634 INFO:tasks.workunit.client.0.vm07.stdout:2/824: stat d2/l18 0 2026-03-09T20:48:06.646 INFO:tasks.workunit.client.1.vm10.stdout:4/726: truncate d1/d67/f8f 1158448 0 2026-03-09T20:48:06.650 INFO:tasks.workunit.client.0.vm07.stdout:8/762: dwrite d1/dc/d16/d31/f54 [0,4194304] 0 2026-03-09T20:48:06.664 INFO:tasks.workunit.client.0.vm07.stdout:0/840: write d1/d1f/f7c [3866608,49961] 0 2026-03-09T20:48:06.670 INFO:tasks.workunit.client.1.vm10.stdout:5/719: getdents d2/d39/dbf/dcc 0 2026-03-09T20:48:06.672 INFO:tasks.workunit.client.0.vm07.stdout:7/877: dwrite d3/da/db/fe8 [0,4194304] 0 2026-03-09T20:48:06.674 INFO:tasks.workunit.client.0.vm07.stdout:7/878: chown d3/da/d53/df5/df6 1434323687 1 2026-03-09T20:48:06.676 INFO:tasks.workunit.client.0.vm07.stdout:6/829: chown d8/d26/d7d/dfd 16 1 2026-03-09T20:48:06.690 INFO:tasks.workunit.client.1.vm10.stdout:1/770: truncate d2/da/d25/d3e/f41 1576564 0 2026-03-09T20:48:06.694 INFO:tasks.workunit.client.1.vm10.stdout:3/737: dread - dc/d14/d20/d2e/fb2 zero size 2026-03-09T20:48:06.695 INFO:tasks.workunit.client.1.vm10.stdout:5/720: dread d2/f64 [0,4194304] 0 2026-03-09T20:48:06.702 INFO:tasks.workunit.client.1.vm10.stdout:8/829: mknod d0/dd1/df1/c10a 0 2026-03-09T20:48:06.705 INFO:tasks.workunit.client.1.vm10.stdout:6/782: rename d3/d30/d33/lbf to d3/d30/d7f/d36/lef 0 2026-03-09T20:48:06.711 INFO:tasks.workunit.client.0.vm07.stdout:2/825: dread d2/db/d28/d90/da4/fa5 [0,4194304] 0 2026-03-09T20:48:06.718 INFO:tasks.workunit.client.0.vm07.stdout:8/763: rmdir d1/dc/d16/d26/d94 39 2026-03-09T20:48:06.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:06 vm10.local ceph-mon[57011]: pgmap v9: 65 pgs: 
65 active+clean; 3.3 GiB data, 11 GiB used, 109 GiB / 120 GiB avail; 34 MiB/s rd, 77 MiB/s wr, 231 op/s 2026-03-09T20:48:06.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:06 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:06.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:06 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:06.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:06 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:48:06.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:06 vm10.local ceph-mon[57011]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:48:06.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:06 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:06.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:06 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm10.byqahe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:48:06.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:06 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:48:06.724 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:06 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:06.728 
INFO:tasks.workunit.client.1.vm10.stdout:0/752: mknod d2/d4a/d58/d82/d71/d8e/c107 0 2026-03-09T20:48:06.731 INFO:tasks.workunit.client.0.vm07.stdout:0/841: chown d1/d2/l5f 4698640 1 2026-03-09T20:48:06.738 INFO:tasks.workunit.client.0.vm07.stdout:0/842: dread d1/d2/dc/fe2 [0,4194304] 0 2026-03-09T20:48:06.739 INFO:tasks.workunit.client.0.vm07.stdout:0/843: chown d1/fb3 0 1 2026-03-09T20:48:06.741 INFO:tasks.workunit.client.0.vm07.stdout:0/844: dwrite d1/d2/d33/d35/f5c [0,4194304] 0 2026-03-09T20:48:06.743 INFO:tasks.workunit.client.0.vm07.stdout:0/845: write d1/d2/d33/d35/f5c [5622934,54833] 0 2026-03-09T20:48:06.751 INFO:tasks.workunit.client.1.vm10.stdout:5/721: truncate d2/d58/d6c/fc4 4369292 0 2026-03-09T20:48:06.762 INFO:tasks.workunit.client.1.vm10.stdout:5/722: dwrite d2/d1b/d54/d78/fad [0,4194304] 0 2026-03-09T20:48:06.764 INFO:tasks.workunit.client.0.vm07.stdout:7/879: fsync d3/da/db/d32/d3e/dac/d1f/d2b/f49 0 2026-03-09T20:48:06.771 INFO:tasks.workunit.client.1.vm10.stdout:8/830: fdatasync d0/d22/d2c/f96 0 2026-03-09T20:48:06.774 INFO:tasks.workunit.client.1.vm10.stdout:8/831: read d0/d22/d25/f34 [3369388,116301] 0 2026-03-09T20:48:06.776 INFO:tasks.workunit.client.1.vm10.stdout:7/789: rename db/d28/d2b/d36/d3f/c4a to db/d28/d30/dd8/cf6 0 2026-03-09T20:48:06.779 INFO:tasks.workunit.client.1.vm10.stdout:6/783: symlink d3/d30/d7f/d36/d6d/dbe/lf0 0 2026-03-09T20:48:06.780 INFO:tasks.workunit.client.0.vm07.stdout:9/799: mknod d4/d8/d19/d5f/da5/c11b 0 2026-03-09T20:48:06.782 INFO:tasks.workunit.client.1.vm10.stdout:2/769: creat d5/d18/d9f/ffe x:0 0 0 2026-03-09T20:48:06.800 INFO:tasks.workunit.client.0.vm07.stdout:7/880: dread - d3/da/db/d79/faf zero size 2026-03-09T20:48:06.803 INFO:tasks.workunit.client.1.vm10.stdout:4/727: creat d1/d2/d3/d70/fe9 x:0 0 0 2026-03-09T20:48:06.812 INFO:tasks.workunit.client.0.vm07.stdout:9/800: mkdir d4/d16/d29/d24/d37/d44/d62/d8e/dd4/d11c 0 2026-03-09T20:48:06.827 INFO:tasks.workunit.client.0.vm07.stdout:6/830: getdents 
d8/d16/d61 0 2026-03-09T20:48:06.829 INFO:tasks.workunit.client.0.vm07.stdout:9/801: truncate d4/d11/f88 3433334 0 2026-03-09T20:48:06.836 INFO:tasks.workunit.client.0.vm07.stdout:9/802: chown d4/d11/d23/d32/cda 5 1 2026-03-09T20:48:06.844 INFO:tasks.workunit.client.1.vm10.stdout:4/728: dread d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/d4a/d9b/fc3 [0,4194304] 0 2026-03-09T20:48:06.860 INFO:tasks.workunit.client.1.vm10.stdout:0/753: truncate d2/d9/d69/faa 127471 0 2026-03-09T20:48:06.865 INFO:tasks.workunit.client.1.vm10.stdout:5/723: creat d2/d27/d37/d46/d5d/d77/f112 x:0 0 0 2026-03-09T20:48:06.869 INFO:tasks.workunit.client.1.vm10.stdout:7/790: dread - db/d28/d2b/d36/d3b/fd7 zero size 2026-03-09T20:48:06.880 INFO:tasks.workunit.client.1.vm10.stdout:8/832: mknod d0/d22/d25/d2e/d41/de9/c10b 0 2026-03-09T20:48:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:06 vm07.local ceph-mon[49120]: pgmap v9: 65 pgs: 65 active+clean; 3.3 GiB data, 11 GiB used, 109 GiB / 120 GiB avail; 34 MiB/s rd, 77 MiB/s wr, 231 op/s 2026-03-09T20:48:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:06 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:06 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:06 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:48:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:06 vm07.local ceph-mon[49120]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:48:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:06 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:06 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm10.byqahe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:48:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:06 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:48:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:06 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:06.888 INFO:tasks.workunit.client.1.vm10.stdout:4/729: mkdir d1/d2/d5c/d64/d61/dea 0 2026-03-09T20:48:06.891 INFO:tasks.workunit.client.1.vm10.stdout:0/754: rename d2/d9/da/d35/f3a to d2/d9/da/d11/d92/f108 0 2026-03-09T20:48:06.893 INFO:tasks.workunit.client.1.vm10.stdout:1/771: creat d2/ff9 x:0 0 0 2026-03-09T20:48:06.894 INFO:tasks.workunit.client.1.vm10.stdout:1/772: write d2/da/d25/d46/ddb/ff4 [143007,89926] 0 2026-03-09T20:48:06.901 INFO:tasks.workunit.client.1.vm10.stdout:8/833: creat d0/d22/d25/d40/d86/f10c x:0 0 0 2026-03-09T20:48:06.901 INFO:tasks.workunit.client.1.vm10.stdout:8/834: chown d0/f21 128643588 1 2026-03-09T20:48:06.902 INFO:tasks.workunit.client.1.vm10.stdout:8/835: stat d0/d22/f35 0 2026-03-09T20:48:06.906 INFO:tasks.workunit.client.1.vm10.stdout:2/770: symlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/lff 0 2026-03-09T20:48:06.910 
INFO:tasks.workunit.client.1.vm10.stdout:8/836: dread d0/d22/d25/f2d [0,4194304] 0 2026-03-09T20:48:06.928 INFO:tasks.workunit.client.1.vm10.stdout:4/730: symlink d1/d2/d5c/leb 0 2026-03-09T20:48:06.939 INFO:tasks.workunit.client.1.vm10.stdout:3/738: getdents dc/d14/d26/d29/d2a 0 2026-03-09T20:48:06.941 INFO:tasks.workunit.client.1.vm10.stdout:3/739: dread - dc/d14/d26/d37/feb zero size 2026-03-09T20:48:06.948 INFO:tasks.workunit.client.1.vm10.stdout:8/837: mkdir d0/d22/d25/d2e/d41/d85/db9/d10d 0 2026-03-09T20:48:06.954 INFO:tasks.workunit.client.0.vm07.stdout:4/748: dwrite d2/d55/d5d/d3f/d4a/fbf [0,4194304] 0 2026-03-09T20:48:06.955 INFO:tasks.workunit.client.1.vm10.stdout:2/771: dwrite d5/d18/d1b/f26 [4194304,4194304] 0 2026-03-09T20:48:06.962 INFO:tasks.workunit.client.1.vm10.stdout:7/791: dread db/d28/d4c/f8c [0,4194304] 0 2026-03-09T20:48:06.964 INFO:tasks.workunit.client.1.vm10.stdout:4/731: getdents d1/dd8/de8 0 2026-03-09T20:48:06.977 INFO:tasks.workunit.client.1.vm10.stdout:2/772: dread d5/d18/d27/d89/db6/d41/f4b [0,4194304] 0 2026-03-09T20:48:06.977 INFO:tasks.workunit.client.0.vm07.stdout:4/749: mkdir d2/d55/d5d/dcb 0 2026-03-09T20:48:06.985 INFO:tasks.workunit.client.1.vm10.stdout:7/792: dread - db/d46/dab/db5/fce zero size 2026-03-09T20:48:06.990 INFO:tasks.workunit.client.1.vm10.stdout:4/732: symlink d1/d47/db3/lec 0 2026-03-09T20:48:06.994 INFO:tasks.workunit.client.1.vm10.stdout:4/733: fsync d1/d2/d5c/d64/d6b/d81/dac/d1c/d69/fcb 0 2026-03-09T20:48:06.995 INFO:tasks.workunit.client.1.vm10.stdout:4/734: dwrite d1/d2/d3/d70/fe9 [0,4194304] 0 2026-03-09T20:48:06.998 INFO:tasks.workunit.client.0.vm07.stdout:1/847: dwrite d3/d97/da1/dc5/d90/f93 [4194304,4194304] 0 2026-03-09T20:48:06.999 INFO:tasks.workunit.client.0.vm07.stdout:1/848: chown d3/d97/da1/dd7/dfe 38128835 1 2026-03-09T20:48:06.999 INFO:tasks.workunit.client.1.vm10.stdout:4/735: dread d1/d67/fda [0,4194304] 0 2026-03-09T20:48:07.004 INFO:tasks.workunit.client.1.vm10.stdout:4/736: dwrite 
d1/d2/d3/d70/d78/d86/fde [0,4194304] 0 2026-03-09T20:48:07.011 INFO:tasks.workunit.client.0.vm07.stdout:1/849: symlink d3/d97/da1/ddd/l115 0 2026-03-09T20:48:07.024 INFO:tasks.workunit.client.1.vm10.stdout:7/793: dread db/d21/d26/f2f [0,4194304] 0 2026-03-09T20:48:07.024 INFO:tasks.workunit.client.1.vm10.stdout:4/737: mknod d1/d2/d5c/d64/d61/ced 0 2026-03-09T20:48:07.028 INFO:tasks.workunit.client.1.vm10.stdout:7/794: write db/d28/d2b/d36/d3b/dd5/fe0 [923788,12528] 0 2026-03-09T20:48:07.030 INFO:tasks.workunit.client.0.vm07.stdout:5/878: rename d5/df/d13/f1f to d5/d33/d39/d8d/dec/f12e 0 2026-03-09T20:48:07.034 INFO:tasks.workunit.client.0.vm07.stdout:3/815: rename d1/d5/d9/d11/d6d/dd0/d59 to d1/d5/d9/d11/d6d/d80/db3/d109 0 2026-03-09T20:48:07.039 INFO:tasks.workunit.client.0.vm07.stdout:2/826: rename d2/db/d1c/l29 to d2/db/d49/d7d/d85/l10e 0 2026-03-09T20:48:07.044 INFO:tasks.workunit.client.0.vm07.stdout:9/803: rename d4/d11/f8a to d4/d8/dc/d4e/f11d 0 2026-03-09T20:48:07.051 INFO:tasks.workunit.client.0.vm07.stdout:3/816: getdents d1/d5/d9/d11/d6d/dd0/d95 0 2026-03-09T20:48:07.053 INFO:tasks.workunit.client.0.vm07.stdout:2/827: getdents d2/db/d1c/d4a/db6 0 2026-03-09T20:48:07.055 INFO:tasks.workunit.client.0.vm07.stdout:3/817: creat d1/d5/d9/d11/d6d/dd0/d95/f10a x:0 0 0 2026-03-09T20:48:07.064 INFO:tasks.workunit.client.0.vm07.stdout:3/818: dread d1/d5/d9/d11/d6d/dd0/f63 [0,4194304] 0 2026-03-09T20:48:07.079 INFO:tasks.workunit.client.0.vm07.stdout:8/764: write d1/dc/f4c [4685824,4719] 0 2026-03-09T20:48:07.083 INFO:tasks.workunit.client.0.vm07.stdout:8/765: fsync d1/d8f/d9d/fa7 0 2026-03-09T20:48:07.084 INFO:tasks.workunit.client.0.vm07.stdout:8/766: write d1/f1d [1091539,125431] 0 2026-03-09T20:48:07.094 INFO:tasks.workunit.client.0.vm07.stdout:3/819: link d1/d5/c4f d1/d5/d9/d2f/d99/dd8/de0/c10b 0 2026-03-09T20:48:07.120 INFO:tasks.workunit.client.1.vm10.stdout:9/802: symlink d2/d28/l108 0 2026-03-09T20:48:07.121 INFO:tasks.workunit.client.1.vm10.stdout:4/738: 
symlink d1/d2/d5c/d64/d6b/d81/lee 0 2026-03-09T20:48:07.123 INFO:tasks.workunit.client.1.vm10.stdout:9/803: read - d2/d3/d85/ffa zero size 2026-03-09T20:48:07.124 INFO:tasks.workunit.client.1.vm10.stdout:4/739: symlink d1/d2/d5c/d64/d6b/d81/da9/lef 0 2026-03-09T20:48:07.126 INFO:tasks.workunit.client.1.vm10.stdout:5/724: write d2/d27/d37/d46/d5d/d6d/f6e [1635498,108352] 0 2026-03-09T20:48:07.139 INFO:tasks.workunit.client.1.vm10.stdout:9/804: link d2/fc d2/d3/f109 0 2026-03-09T20:48:07.150 INFO:tasks.workunit.client.1.vm10.stdout:9/805: mkdir d2/d3/d6d/de8/d10a 0 2026-03-09T20:48:07.155 INFO:tasks.workunit.client.1.vm10.stdout:9/806: chown d2/d33/d37/c6b 0 1 2026-03-09T20:48:07.157 INFO:tasks.workunit.client.1.vm10.stdout:4/740: link d1/d2/d5c/d64/d61/ced d1/d2/d5c/d64/d6b/d81/dac/d1b/dbe/cf0 0 2026-03-09T20:48:07.162 INFO:tasks.workunit.client.1.vm10.stdout:6/784: write d3/d30/d33/f35 [3652200,35139] 0 2026-03-09T20:48:07.163 INFO:tasks.workunit.client.1.vm10.stdout:6/785: readlink d3/lb4 0 2026-03-09T20:48:07.165 INFO:tasks.workunit.client.1.vm10.stdout:0/755: write d2/d4a/d58/d82/d71/d5d/fc9 [888247,53693] 0 2026-03-09T20:48:07.165 INFO:tasks.workunit.client.1.vm10.stdout:1/773: write d2/da/d25/d3e/d55/f9a [1204518,14704] 0 2026-03-09T20:48:07.167 INFO:tasks.workunit.client.0.vm07.stdout:0/846: mknod d1/d1f/d53/c104 0 2026-03-09T20:48:07.169 INFO:tasks.workunit.client.1.vm10.stdout:6/786: truncate d3/d30/d7f/d4a/f4b 4823646 0 2026-03-09T20:48:07.169 INFO:tasks.workunit.client.1.vm10.stdout:0/756: truncate d2/d9/da/d11/f86 359794 0 2026-03-09T20:48:07.171 INFO:tasks.workunit.client.1.vm10.stdout:1/774: rename d2/da/d25/d3e/d42/cb4 to d2/da/d25/d46/dbe/cfa 0 2026-03-09T20:48:07.171 INFO:tasks.workunit.client.0.vm07.stdout:0/847: rename d1/d1f/d53/c104 to d1/df6/c105 0 2026-03-09T20:48:07.173 INFO:tasks.workunit.client.0.vm07.stdout:0/848: mkdir d1/d2/d4b/d106 0 2026-03-09T20:48:07.174 INFO:tasks.workunit.client.0.vm07.stdout:0/849: rmdir d1/d1f/d53/d72/d9a 39 
2026-03-09T20:48:07.175 INFO:tasks.workunit.client.0.vm07.stdout:0/850: readlink d1/d2/d33/d35/ld2 0 2026-03-09T20:48:07.175 INFO:tasks.workunit.client.1.vm10.stdout:9/807: dread d2/d12/f31 [0,4194304] 0 2026-03-09T20:48:07.176 INFO:tasks.workunit.client.1.vm10.stdout:9/808: dread - d2/d3/de/d35/f106 zero size 2026-03-09T20:48:07.177 INFO:tasks.workunit.client.1.vm10.stdout:3/740: write dc/d14/d20/d21/fd9 [294517,69063] 0 2026-03-09T20:48:07.179 INFO:tasks.workunit.client.1.vm10.stdout:3/741: write dc/d14/d20/d2e/d56/f23 [1896489,39049] 0 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.0.vm07.stdout:0/851: unlink d1/d2/dc/d17/f91 0 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.0.vm07.stdout:0/852: mkdir d1/d1f/d9f/df8/d107 0 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.0.vm07.stdout:0/853: mkdir d1/d2/dc/d108 0 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.0.vm07.stdout:4/750: dwrite d2/fa [0,4194304] 0 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.0.vm07.stdout:4/751: chown d2/df/d17/l24 540 1 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.1.vm10.stdout:1/775: dwrite d2/da/f3d [4194304,4194304] 0 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.1.vm10.stdout:6/787: rename d3/da/d11/d89/db9/dd1/dd2/dc3/fe1 to d3/d30/d7f/d24/d39/ff1 0 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.1.vm10.stdout:8/838: write d0/d22/d25/d6c/f68 [3233054,115884] 0 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.1.vm10.stdout:6/788: creat d3/d30/d6a/ff2 x:0 0 0 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.1.vm10.stdout:9/809: symlink d2/d3/d6d/d88/ddd/l10b 0 2026-03-09T20:48:07.207 INFO:tasks.workunit.client.0.vm07.stdout:4/752: dwrite d2/df/d59/f81 [4194304,4194304] 0 2026-03-09T20:48:07.210 INFO:tasks.workunit.client.1.vm10.stdout:2/773: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f96 [0,4194304] 0 2026-03-09T20:48:07.215 INFO:tasks.workunit.client.1.vm10.stdout:3/742: creat dc/d14/d26/dcb/ff9 x:0 0 0 2026-03-09T20:48:07.248 
INFO:tasks.workunit.client.0.vm07.stdout:4/753: mknod d2/d55/d5d/ccc 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.0.vm07.stdout:4/754: rename d2/df/f6b to d2/d55/d5d/d3f/d4a/dbc/fcd 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.1.vm10.stdout:8/839: mknod d0/c10e 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.1.vm10.stdout:6/789: symlink d3/da/d11/d89/lf3 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.1.vm10.stdout:6/790: stat d3/d79/feb 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.1.vm10.stdout:9/810: mkdir d2/d3/d6d/d10c 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.1.vm10.stdout:2/774: fdatasync d5/f9e 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.1.vm10.stdout:3/743: mkdir dc/d14/d90/dfa 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.1.vm10.stdout:9/811: readlink d2/d3/de/d35/l92 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.1.vm10.stdout:9/812: stat d2/d3/de/d35/f78 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.1.vm10.stdout:3/744: rename dc/d14/d26/d29/d40/da8/c5f to dc/d14/cfb 0 2026-03-09T20:48:07.249 INFO:tasks.workunit.client.1.vm10.stdout:3/745: write dc/d14/d26/d29/ff7 [802045,49429] 0 2026-03-09T20:48:07.251 INFO:tasks.workunit.client.1.vm10.stdout:6/791: truncate d3/da/d11/d26/d5b/fbd 680597 0 2026-03-09T20:48:07.252 INFO:tasks.workunit.client.0.vm07.stdout:4/755: rmdir d2/d55/d5d/d3f/d4a/d7d 39 2026-03-09T20:48:07.252 INFO:tasks.workunit.client.1.vm10.stdout:9/813: rename d2/d28/da2/ded/ffe to d2/d12/dad/f10d 0 2026-03-09T20:48:07.252 INFO:tasks.workunit.client.1.vm10.stdout:3/746: creat dc/d14/d26/d29/d2a/d76/ffc x:0 0 0 2026-03-09T20:48:07.253 INFO:tasks.workunit.client.1.vm10.stdout:3/747: truncate dc/d14/d20/d21/daf/fed 120026 0 2026-03-09T20:48:07.255 INFO:tasks.workunit.client.0.vm07.stdout:4/756: dread d2/d55/f71 [0,4194304] 0 2026-03-09T20:48:07.256 INFO:tasks.workunit.client.0.vm07.stdout:4/757: dread - d2/d55/d5d/d3f/fc5 zero size 2026-03-09T20:48:07.256 
INFO:tasks.workunit.client.1.vm10.stdout:1/776: dread d2/da/d25/d3e/dca/da2/dd5/ff6 [0,4194304] 0 2026-03-09T20:48:07.257 INFO:tasks.workunit.client.1.vm10.stdout:9/814: creat d2/d28/da2/ded/f10e x:0 0 0 2026-03-09T20:48:07.258 INFO:tasks.workunit.client.1.vm10.stdout:3/748: dwrite dc/d14/d26/d29/d2a/d76/ffc [0,4194304] 0 2026-03-09T20:48:07.268 INFO:tasks.workunit.client.0.vm07.stdout:4/758: dwrite d2/d55/d5d/d3f/d4a/f5e [0,4194304] 0 2026-03-09T20:48:07.268 INFO:tasks.workunit.client.1.vm10.stdout:9/815: chown d2/d28/d47/d6a/ce1 65 1 2026-03-09T20:48:07.268 INFO:tasks.workunit.client.1.vm10.stdout:3/749: dwrite dc/d14/d20/d21/fe9 [0,4194304] 0 2026-03-09T20:48:07.271 INFO:tasks.workunit.client.0.vm07.stdout:4/759: dread d2/df/d17/f46 [0,4194304] 0 2026-03-09T20:48:07.274 INFO:tasks.workunit.client.1.vm10.stdout:9/816: symlink d2/db8/l10f 0 2026-03-09T20:48:07.289 INFO:tasks.workunit.client.1.vm10.stdout:1/777: symlink d2/da/lfb 0 2026-03-09T20:48:07.290 INFO:tasks.workunit.client.1.vm10.stdout:9/817: chown d2/d33 73 1 2026-03-09T20:48:07.290 INFO:tasks.workunit.client.1.vm10.stdout:3/750: creat dc/d14/d26/d37/ffd x:0 0 0 2026-03-09T20:48:07.290 INFO:tasks.workunit.client.1.vm10.stdout:1/778: mkdir d2/da/d25/d46/dbe/dfc 0 2026-03-09T20:48:07.290 INFO:tasks.workunit.client.1.vm10.stdout:3/751: mknod dc/db4/de3/cfe 0 2026-03-09T20:48:07.290 INFO:tasks.workunit.client.1.vm10.stdout:9/818: creat d2/d28/d47/f110 x:0 0 0 2026-03-09T20:48:07.290 INFO:tasks.workunit.client.1.vm10.stdout:3/752: rename dc/d14/d26/f6f to dc/d14/d22/d4a/fff 0 2026-03-09T20:48:07.290 INFO:tasks.workunit.client.1.vm10.stdout:9/819: getdents d2/d3/d6d/de8/d10a 0 2026-03-09T20:48:07.290 INFO:tasks.workunit.client.1.vm10.stdout:9/820: unlink d2/d3/de/d35/f78 0 2026-03-09T20:48:07.290 INFO:tasks.workunit.client.1.vm10.stdout:9/821: creat d2/d3/f111 x:0 0 0 2026-03-09T20:48:07.291 INFO:tasks.workunit.client.0.vm07.stdout:8/767: sync 2026-03-09T20:48:07.294 
INFO:tasks.workunit.client.0.vm07.stdout:8/768: dwrite d1/d5d/d6f/f64 [0,4194304] 0 2026-03-09T20:48:07.310 INFO:tasks.workunit.client.1.vm10.stdout:9/822: dread d2/d28/d47/d67/f99 [0,4194304] 0 2026-03-09T20:48:07.311 INFO:tasks.workunit.client.1.vm10.stdout:9/823: rmdir d2/d12/dad 39 2026-03-09T20:48:07.312 INFO:tasks.workunit.client.1.vm10.stdout:9/824: dread - d2/d33/dcf/fd8 zero size 2026-03-09T20:48:07.318 INFO:tasks.workunit.client.1.vm10.stdout:9/825: dread d2/d28/d47/d6a/fc0 [0,4194304] 0 2026-03-09T20:48:07.324 INFO:tasks.workunit.client.1.vm10.stdout:9/826: creat d2/d12/d5a/da7/f112 x:0 0 0 2026-03-09T20:48:07.329 INFO:tasks.workunit.client.1.vm10.stdout:9/827: dwrite d2/d3/de/d8f/dbc/fda [4194304,4194304] 0 2026-03-09T20:48:07.333 INFO:tasks.workunit.client.1.vm10.stdout:9/828: dwrite d2/d28/d47/d50/f64 [0,4194304] 0 2026-03-09T20:48:07.338 INFO:tasks.workunit.client.1.vm10.stdout:9/829: link d2/d28/l9a d2/d28/d47/d50/l113 0 2026-03-09T20:48:07.338 INFO:tasks.workunit.client.1.vm10.stdout:9/830: readlink d2/l2d 0 2026-03-09T20:48:07.342 INFO:tasks.workunit.client.1.vm10.stdout:9/831: dwrite d2/d3/de/d8f/dbc/f102 [0,4194304] 0 2026-03-09T20:48:07.353 INFO:tasks.workunit.client.1.vm10.stdout:9/832: mknod d2/d33/dcf/c114 0 2026-03-09T20:48:07.361 INFO:tasks.workunit.client.1.vm10.stdout:9/833: link d2/c25 d2/d3/db4/c115 0 2026-03-09T20:48:07.368 INFO:tasks.workunit.client.0.vm07.stdout:7/881: mkdir d3/da/db/d12b 0 2026-03-09T20:48:07.373 INFO:tasks.workunit.client.0.vm07.stdout:7/882: fsync d3/f67 0 2026-03-09T20:48:07.389 INFO:tasks.workunit.client.0.vm07.stdout:1/850: write d3/d14/d54/d9b/fd9 [545469,103756] 0 2026-03-09T20:48:07.389 INFO:tasks.workunit.client.0.vm07.stdout:1/851: creat d3/d97/da1/dc5/d90/dd3/d106/f116 x:0 0 0 2026-03-09T20:48:07.389 INFO:tasks.workunit.client.0.vm07.stdout:7/883: link d3/l8d d3/da/d53/db7/dde/d96/l12c 0 2026-03-09T20:48:07.389 INFO:tasks.workunit.client.0.vm07.stdout:5/879: dwrite d5/df/f2b [8388608,4194304] 0 
2026-03-09T20:48:07.389 INFO:tasks.workunit.client.1.vm10.stdout:9/834: link d2/d28/d47/d67/f72 d2/d3/d6d/db7/f116 0 2026-03-09T20:48:07.389 INFO:tasks.workunit.client.1.vm10.stdout:9/835: chown d2/fc 367111 1 2026-03-09T20:48:07.389 INFO:tasks.workunit.client.1.vm10.stdout:9/836: readlink d2/d33/l89 0 2026-03-09T20:48:07.389 INFO:tasks.workunit.client.1.vm10.stdout:9/837: creat d2/da6/f117 x:0 0 0 2026-03-09T20:48:07.389 INFO:tasks.workunit.client.1.vm10.stdout:7/795: write db/d1f/f5f [127726,7397] 0 2026-03-09T20:48:07.393 INFO:tasks.workunit.client.0.vm07.stdout:7/884: chown d3/da/db/d32/d3e/c70 1 1 2026-03-09T20:48:07.394 INFO:tasks.workunit.client.1.vm10.stdout:7/796: creat db/d28/d4c/ff7 x:0 0 0 2026-03-09T20:48:07.394 INFO:tasks.workunit.client.1.vm10.stdout:7/797: readlink db/d21/d95/lf5 0 2026-03-09T20:48:07.402 INFO:tasks.workunit.client.0.vm07.stdout:1/852: read d3/d14/d54/fdc [730757,116929] 0 2026-03-09T20:48:07.404 INFO:tasks.workunit.client.0.vm07.stdout:5/880: creat d5/d19/f12f x:0 0 0 2026-03-09T20:48:07.409 INFO:tasks.workunit.client.0.vm07.stdout:9/804: dwrite d4/d16/d29/d24/d37/d44/d62/d8e/fe2 [0,4194304] 0 2026-03-09T20:48:07.414 INFO:tasks.workunit.client.0.vm07.stdout:2/828: dwrite d2/db/d1c/d4a/d88/fd1 [0,4194304] 0 2026-03-09T20:48:07.429 INFO:tasks.workunit.client.0.vm07.stdout:3/820: dwrite d1/d5/fc5 [0,4194304] 0 2026-03-09T20:48:07.429 INFO:tasks.workunit.client.0.vm07.stdout:6/831: dwrite d8/d16/d22/d9b/de4/d85/f4a [4194304,4194304] 0 2026-03-09T20:48:07.429 INFO:tasks.workunit.client.1.vm10.stdout:5/725: dwrite d2/d27/d37/dc8/da1/f101 [0,4194304] 0 2026-03-09T20:48:07.429 INFO:tasks.workunit.client.1.vm10.stdout:4/741: write d1/f26 [1906375,78574] 0 2026-03-09T20:48:07.430 INFO:tasks.workunit.client.0.vm07.stdout:1/853: dread - d3/d14/ff2 zero size 2026-03-09T20:48:07.431 INFO:tasks.workunit.client.1.vm10.stdout:5/726: readlink d2/l4d 0 2026-03-09T20:48:07.431 INFO:tasks.workunit.client.0.vm07.stdout:1/854: chown d3/d14/d54/d9b 29 1 
2026-03-09T20:48:07.431 INFO:tasks.workunit.client.1.vm10.stdout:5/727: stat d2/d39/dbf 0 2026-03-09T20:48:07.433 INFO:tasks.workunit.client.0.vm07.stdout:5/881: creat d5/df/d13/d4f/d101/d10b/f130 x:0 0 0 2026-03-09T20:48:07.436 INFO:tasks.workunit.client.1.vm10.stdout:0/757: write d2/d9/f61 [1452335,68182] 0 2026-03-09T20:48:07.459 INFO:tasks.workunit.client.1.vm10.stdout:4/742: unlink d1/d2/d3/d54/daa/lb5 0 2026-03-09T20:48:07.459 INFO:tasks.workunit.client.1.vm10.stdout:0/758: creat d2/d4a/d58/d82/d60/f109 x:0 0 0 2026-03-09T20:48:07.460 INFO:tasks.workunit.client.1.vm10.stdout:5/728: link d2/d27/d37/d46/d99/f10d d2/d39/dbf/d69/d109/f113 0 2026-03-09T20:48:07.460 INFO:tasks.workunit.client.1.vm10.stdout:5/729: chown d2/d39/dbf/l108 20 1 2026-03-09T20:48:07.460 INFO:tasks.workunit.client.0.vm07.stdout:5/882: dread d5/d33/d39/d8d/dab/f5f [0,4194304] 0 2026-03-09T20:48:07.460 INFO:tasks.workunit.client.0.vm07.stdout:7/885: dread d3/da/f47 [0,4194304] 0 2026-03-09T20:48:07.460 INFO:tasks.workunit.client.0.vm07.stdout:5/883: truncate d5/df/d13/d6c/f127 762850 0 2026-03-09T20:48:07.460 INFO:tasks.workunit.client.0.vm07.stdout:7/886: chown d3/da/db/d32/d3e/dac/d1f/c6a 8856667 1 2026-03-09T20:48:07.460 INFO:tasks.workunit.client.0.vm07.stdout:3/821: mknod d1/d5/d9/d2f/d34/d9e/c10c 0 2026-03-09T20:48:07.460 INFO:tasks.workunit.client.0.vm07.stdout:1/855: creat d3/d23/d67/d8a/f117 x:0 0 0 2026-03-09T20:48:07.469 INFO:tasks.workunit.client.1.vm10.stdout:4/743: creat d1/d2/d5c/d64/d6b/d81/dac/d1c/ff1 x:0 0 0 2026-03-09T20:48:07.475 INFO:tasks.workunit.client.0.vm07.stdout:7/887: symlink d3/da/db/d32/d3e/dac/d1f/d50/l12d 0 2026-03-09T20:48:07.476 INFO:tasks.workunit.client.1.vm10.stdout:0/759: symlink d2/d4a/d58/d82/d71/d8e/l10a 0 2026-03-09T20:48:07.476 INFO:tasks.workunit.client.1.vm10.stdout:0/760: dread d2/d4a/d58/d82/d93/fe3 [0,4194304] 0 2026-03-09T20:48:07.478 INFO:tasks.workunit.client.1.vm10.stdout:0/761: dread d2/d4a/d58/f62 [0,4194304] 0 2026-03-09T20:48:07.480 
INFO:tasks.workunit.client.0.vm07.stdout:5/884: creat d5/d33/d39/d8d/dab/d11f/f131 x:0 0 0 2026-03-09T20:48:07.481 INFO:tasks.workunit.client.0.vm07.stdout:5/885: chown d5/d19/d73/dbc/d10f/f12d 292136 1 2026-03-09T20:48:07.487 INFO:tasks.workunit.client.1.vm10.stdout:5/730: link d2/d1b/f2f d2/d39/dbf/d66/f114 0 2026-03-09T20:48:07.488 INFO:tasks.workunit.client.0.vm07.stdout:1/856: creat d3/d23/d109/f118 x:0 0 0 2026-03-09T20:48:07.490 INFO:tasks.workunit.client.0.vm07.stdout:5/886: stat d5/f25 0 2026-03-09T20:48:07.492 INFO:tasks.workunit.client.0.vm07.stdout:6/832: getdents d8/d16/d22/d24/da0 0 2026-03-09T20:48:07.493 INFO:tasks.workunit.client.0.vm07.stdout:1/857: rmdir d3/d23/d67 39 2026-03-09T20:48:07.495 INFO:tasks.workunit.client.0.vm07.stdout:5/887: creat d5/df/d13/d6c/f132 x:0 0 0 2026-03-09T20:48:07.496 INFO:tasks.workunit.client.1.vm10.stdout:5/731: mkdir d2/d39/dbf/d69/de9/dfa/d115 0 2026-03-09T20:48:07.500 INFO:tasks.workunit.client.1.vm10.stdout:5/732: rename d2/f7 to d2/d58/df5/f116 0 2026-03-09T20:48:07.503 INFO:tasks.workunit.client.1.vm10.stdout:9/838: sync 2026-03-09T20:48:07.503 INFO:tasks.workunit.client.1.vm10.stdout:7/798: sync 2026-03-09T20:48:07.503 INFO:tasks.workunit.client.0.vm07.stdout:5/888: truncate d5/d33/d39/d8d/dec/f12e 1312548 0 2026-03-09T20:48:07.503 INFO:tasks.workunit.client.0.vm07.stdout:9/805: sync 2026-03-09T20:48:07.503 INFO:tasks.workunit.client.0.vm07.stdout:7/888: sync 2026-03-09T20:48:07.504 INFO:tasks.workunit.client.1.vm10.stdout:7/799: chown db/d28 158 1 2026-03-09T20:48:07.504 INFO:tasks.workunit.client.1.vm10.stdout:9/839: fsync d2/d3/de/f84 0 2026-03-09T20:48:07.512 INFO:tasks.workunit.client.0.vm07.stdout:9/806: dwrite d4/d16/d78/dc4/f118 [0,4194304] 0 2026-03-09T20:48:07.528 INFO:tasks.workunit.client.1.vm10.stdout:5/733: symlink d2/d39/dbf/d69/d96/l117 0 2026-03-09T20:48:07.528 INFO:tasks.workunit.client.0.vm07.stdout:6/833: link d8/d16/d22/db1/fe1 d8/d16/d4b/d88/f10f 0 2026-03-09T20:48:07.529 
INFO:tasks.workunit.client.0.vm07.stdout:6/834: chown d8/d16/d22/d24/da0/dab/d40/d69/dfb 8980 1 2026-03-09T20:48:07.531 INFO:tasks.workunit.client.0.vm07.stdout:5/889: read d5/d33/d39/fe5 [158019,51155] 0 2026-03-09T20:48:07.533 INFO:tasks.workunit.client.0.vm07.stdout:7/889: creat d3/da/d53/db7/dde/d96/f12e x:0 0 0 2026-03-09T20:48:07.534 INFO:tasks.workunit.client.1.vm10.stdout:7/800: symlink db/d28/d2b/d36/d63/d8b/lf8 0 2026-03-09T20:48:07.535 INFO:tasks.workunit.client.1.vm10.stdout:5/734: symlink d2/d39/dbf/d66/l118 0 2026-03-09T20:48:07.535 INFO:tasks.workunit.client.0.vm07.stdout:9/807: mkdir d4/d8/dc/d4e/d54/d11e 0 2026-03-09T20:48:07.539 INFO:tasks.workunit.client.1.vm10.stdout:7/801: unlink db/d28/d2b/d36/d3f/c61 0 2026-03-09T20:48:07.539 INFO:tasks.workunit.client.1.vm10.stdout:5/735: mknod d2/d27/d37/d46/d5d/d6d/c119 0 2026-03-09T20:48:07.540 INFO:tasks.workunit.client.1.vm10.stdout:5/736: chown d2/d39/d4b/d7a/de1/ld5 8667783 1 2026-03-09T20:48:07.541 INFO:tasks.workunit.client.1.vm10.stdout:5/737: truncate d2/d1b/f41 4370894 0 2026-03-09T20:48:07.541 INFO:tasks.workunit.client.0.vm07.stdout:7/890: fsync d3/da/f38 0 2026-03-09T20:48:07.542 INFO:tasks.workunit.client.0.vm07.stdout:9/808: symlink d4/d8/l11f 0 2026-03-09T20:48:07.543 INFO:tasks.workunit.client.1.vm10.stdout:7/802: truncate db/d28/d2b/d36/d3b/f42 1211677 0 2026-03-09T20:48:07.544 INFO:tasks.workunit.client.0.vm07.stdout:6/835: rename d8/d16/dbb/fd7 to d8/d16/d22/d24/da0/dab/dc1/f110 0 2026-03-09T20:48:07.546 INFO:tasks.workunit.client.0.vm07.stdout:5/890: rmdir d5/d19/d73/d94/d112 0 2026-03-09T20:48:07.548 INFO:tasks.workunit.client.1.vm10.stdout:7/803: creat db/d46/d89/dbf/d87/ff9 x:0 0 0 2026-03-09T20:48:07.549 INFO:tasks.workunit.client.0.vm07.stdout:6/836: rmdir d8/d16/d22/d24/da0 39 2026-03-09T20:48:07.550 INFO:tasks.workunit.client.0.vm07.stdout:5/891: fsync d5/df/d13/d6c/db1/f126 0 2026-03-09T20:48:07.551 INFO:tasks.workunit.client.1.vm10.stdout:7/804: rename db/d28/d2b/d36/d40/laa 
to db/d1f/lfa 0 2026-03-09T20:48:07.556 INFO:tasks.workunit.client.1.vm10.stdout:7/805: write db/d46/f5a [18461,7270] 0 2026-03-09T20:48:07.557 INFO:tasks.workunit.client.0.vm07.stdout:9/809: symlink d4/d8/d19/d5f/da5/l120 0 2026-03-09T20:48:07.557 INFO:tasks.workunit.client.0.vm07.stdout:6/837: fsync d8/d16/d22/d9b/de4/d85/f5a 0 2026-03-09T20:48:07.562 INFO:tasks.workunit.client.0.vm07.stdout:5/892: dread d5/df/d13/f2a [0,4194304] 0 2026-03-09T20:48:07.564 INFO:tasks.workunit.client.0.vm07.stdout:6/838: creat d8/d50/f111 x:0 0 0 2026-03-09T20:48:07.566 INFO:tasks.workunit.client.0.vm07.stdout:5/893: mkdir d5/d133 0 2026-03-09T20:48:07.573 INFO:tasks.workunit.client.0.vm07.stdout:5/894: chown d5/d19/d73/ldc 18620800 1 2026-03-09T20:48:07.573 INFO:tasks.workunit.client.0.vm07.stdout:6/839: symlink d8/d16/d4b/d88/dc3/dd5/def/l112 0 2026-03-09T20:48:07.613 INFO:tasks.workunit.client.0.vm07.stdout:0/854: read d1/d2/dc/f97 [394679,89475] 0 2026-03-09T20:48:07.655 INFO:tasks.workunit.client.0.vm07.stdout:8/769: dread d1/d5d/d6f/d2f/d4d/f67 [0,4194304] 0 2026-03-09T20:48:07.655 INFO:tasks.workunit.client.0.vm07.stdout:8/770: fdatasync d1/d5d/d6f/fed 0 2026-03-09T20:48:07.658 INFO:tasks.workunit.client.1.vm10.stdout:2/775: write d5/d18/d27/d89/db6/f5a [4617821,86535] 0 2026-03-09T20:48:07.663 INFO:tasks.workunit.client.1.vm10.stdout:8/840: dwrite d0/d22/d25/d2e/d41/f80 [4194304,4194304] 0 2026-03-09T20:48:07.663 INFO:tasks.workunit.client.1.vm10.stdout:6/792: write d3/da/d11/d31/fd5 [722085,112264] 0 2026-03-09T20:48:07.664 INFO:tasks.workunit.client.1.vm10.stdout:8/841: chown d0/d22/d25/d6c/fbd 85 1 2026-03-09T20:48:07.668 INFO:tasks.workunit.client.0.vm07.stdout:8/771: creat d1/d5d/ff6 x:0 0 0 2026-03-09T20:48:07.672 INFO:tasks.workunit.client.0.vm07.stdout:4/760: dwrite d2/f7 [0,4194304] 0 2026-03-09T20:48:07.673 INFO:tasks.workunit.client.1.vm10.stdout:1/779: truncate d2/d89/f96 2273048 0 2026-03-09T20:48:07.691 INFO:tasks.workunit.client.1.vm10.stdout:3/753: dwrite 
dc/fb9 [0,4194304] 0 2026-03-09T20:48:07.696 INFO:tasks.workunit.client.1.vm10.stdout:6/793: mknod d3/da/d11/d26/d5b/cf4 0 2026-03-09T20:48:07.698 INFO:tasks.workunit.client.1.vm10.stdout:2/776: creat d5/d18/d27/d38/f100 x:0 0 0 2026-03-09T20:48:07.701 INFO:tasks.workunit.client.1.vm10.stdout:6/794: dwrite d3/da/d11/d31/fd5 [0,4194304] 0 2026-03-09T20:48:07.721 INFO:tasks.workunit.client.0.vm07.stdout:2/829: write d2/db/d1c/f45 [2260272,93201] 0 2026-03-09T20:48:07.723 INFO:tasks.workunit.client.1.vm10.stdout:2/777: fsync d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d94/f9b 0 2026-03-09T20:48:07.728 INFO:tasks.workunit.client.1.vm10.stdout:3/754: getdents dc/d14/d26/d29/d40/da2/de0 0 2026-03-09T20:48:07.729 INFO:tasks.workunit.client.1.vm10.stdout:6/795: stat d3/d30/d7f/d24/l2d 0 2026-03-09T20:48:07.730 INFO:tasks.workunit.client.0.vm07.stdout:3/822: dwrite d1/d5/d9/d2f/d34/d46/d5d/ff1 [0,4194304] 0 2026-03-09T20:48:07.751 INFO:tasks.workunit.client.0.vm07.stdout:1/858: write d3/d97/da1/ddd/fe5 [711695,37949] 0 2026-03-09T20:48:07.754 INFO:tasks.workunit.client.1.vm10.stdout:2/778: rename d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/fa7 to d5/f101 0 2026-03-09T20:48:07.755 INFO:tasks.workunit.client.1.vm10.stdout:0/762: dwrite d2/d9/db8/db4/fce [0,4194304] 0 2026-03-09T20:48:07.758 INFO:tasks.workunit.client.0.vm07.stdout:1/859: creat d3/d14/d54/d3e/d101/f119 x:0 0 0 2026-03-09T20:48:07.760 INFO:tasks.workunit.client.1.vm10.stdout:6/796: fsync d3/d30/d7f/d24/d39/f6c 0 2026-03-09T20:48:07.762 INFO:tasks.workunit.client.0.vm07.stdout:1/860: dwrite d3/f24 [0,4194304] 0 2026-03-09T20:48:07.763 INFO:tasks.workunit.client.1.vm10.stdout:0/763: fsync d2/d9/da/d35/d30/f106 0 2026-03-09T20:48:07.766 INFO:tasks.workunit.client.0.vm07.stdout:2/830: dread d2/db/f67 [0,4194304] 0 2026-03-09T20:48:07.767 INFO:tasks.workunit.client.0.vm07.stdout:2/831: chown d2/db/d28/fcb 213045 1 2026-03-09T20:48:07.774 INFO:tasks.workunit.client.0.vm07.stdout:2/832: creat d2/db/d1c/d4a/d88/f10f 
x:0 0 0 2026-03-09T20:48:07.775 INFO:tasks.workunit.client.1.vm10.stdout:3/755: truncate dc/d14/fd3 2664434 0 2026-03-09T20:48:07.778 INFO:tasks.workunit.client.1.vm10.stdout:9/840: write d2/d12/d5a/f82 [5104736,107448] 0 2026-03-09T20:48:07.778 INFO:tasks.workunit.client.0.vm07.stdout:2/833: mknod d2/db/d49/d7d/c110 0 2026-03-09T20:48:07.778 INFO:tasks.workunit.client.1.vm10.stdout:6/797: fsync d3/d30/d7f/f18 0 2026-03-09T20:48:07.779 INFO:tasks.workunit.client.0.vm07.stdout:2/834: stat d2/db/d49/d7d/la0 0 2026-03-09T20:48:07.779 INFO:tasks.workunit.client.1.vm10.stdout:2/779: chown d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/c97 343305677 1 2026-03-09T20:48:07.784 INFO:tasks.workunit.client.1.vm10.stdout:2/780: stat d5/d18/d27/d89/db6/d41/d77/db3/db5/f3f 0 2026-03-09T20:48:07.786 INFO:tasks.workunit.client.1.vm10.stdout:7/806: dread db/d28/d2b/d36/d3b/f42 [0,4194304] 0 2026-03-09T20:48:07.794 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:07 vm10.local ceph-mon[57011]: Upgrade: Updating mgr.vm10.byqahe 2026-03-09T20:48:07.794 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:07 vm10.local ceph-mon[57011]: Deploying daemon mgr.vm10.byqahe on vm10 2026-03-09T20:48:07.795 INFO:tasks.workunit.client.1.vm10.stdout:3/756: symlink dc/d14/d26/d29/d40/da2/l100 0 2026-03-09T20:48:07.795 INFO:tasks.workunit.client.1.vm10.stdout:5/738: write d2/d39/dbf/d69/d96/fb1 [1798266,119264] 0 2026-03-09T20:48:07.795 INFO:tasks.workunit.client.1.vm10.stdout:3/757: chown dc/f10 5435 1 2026-03-09T20:48:07.796 INFO:tasks.workunit.client.1.vm10.stdout:9/841: sync 2026-03-09T20:48:07.797 INFO:tasks.workunit.client.1.vm10.stdout:9/842: dread d2/d28/d47/d6a/fc0 [0,4194304] 0 2026-03-09T20:48:07.798 INFO:tasks.workunit.client.1.vm10.stdout:9/843: fsync d2/d28/d47/f110 0 2026-03-09T20:48:07.798 INFO:tasks.workunit.client.0.vm07.stdout:7/891: write d3/da/fbb [516789,54283] 0 2026-03-09T20:48:07.802 INFO:tasks.workunit.client.1.vm10.stdout:7/807: truncate db/d28/d2b/d36/d40/fa2 
401806 0 2026-03-09T20:48:07.805 INFO:tasks.workunit.client.0.vm07.stdout:7/892: creat d3/da/d53/df5/f12f x:0 0 0 2026-03-09T20:48:07.809 INFO:tasks.workunit.client.0.vm07.stdout:9/810: dwrite d4/d8/d59/f66 [4194304,4194304] 0 2026-03-09T20:48:07.813 INFO:tasks.workunit.client.0.vm07.stdout:7/893: read - d3/da/db/d32/d3e/dac/ff3 zero size 2026-03-09T20:48:07.813 INFO:tasks.workunit.client.0.vm07.stdout:7/894: chown d3/da4/df2/dfb 1 1 2026-03-09T20:48:07.823 INFO:tasks.workunit.client.1.vm10.stdout:0/764: rename d2/d9/da/d11/dd1/d34/le4 to d2/d4a/d58/d82/d71/d8e/l10b 0 2026-03-09T20:48:07.828 INFO:tasks.workunit.client.1.vm10.stdout:2/781: symlink d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/deb/l102 0 2026-03-09T20:48:07.830 INFO:tasks.workunit.client.0.vm07.stdout:6/840: dwrite d8/d16/d22/d24/f43 [0,4194304] 0 2026-03-09T20:48:07.835 INFO:tasks.workunit.client.0.vm07.stdout:1/861: dread d3/d23/fa8 [0,4194304] 0 2026-03-09T20:48:07.838 INFO:tasks.workunit.client.0.vm07.stdout:9/811: rename d4/d8 to d4/d16/d29/d24/d37/d44/d62/d108/d121 0 2026-03-09T20:48:07.844 INFO:tasks.workunit.client.0.vm07.stdout:7/895: dread d3/da/db/d32/d3e/dac/d1f/d2b/f33 [0,4194304] 0 2026-03-09T20:48:07.844 INFO:tasks.workunit.client.0.vm07.stdout:5/895: truncate d5/df/d13/d3e/de1/fe7 3858389 0 2026-03-09T20:48:07.844 INFO:tasks.workunit.client.0.vm07.stdout:7/896: chown d3/da/db/d32/d3e/c8a 15642 1 2026-03-09T20:48:07.847 INFO:tasks.workunit.client.0.vm07.stdout:0/855: write d1/f48 [2275460,53025] 0 2026-03-09T20:48:07.853 INFO:tasks.workunit.client.1.vm10.stdout:4/744: write d1/d67/f8f [974630,47353] 0 2026-03-09T20:48:07.856 INFO:tasks.workunit.client.0.vm07.stdout:1/862: truncate d3/d14/f17 5176832 0 2026-03-09T20:48:07.857 INFO:tasks.workunit.client.0.vm07.stdout:1/863: dread - d3/d97/da1/dc5/d60/fb5 zero size 2026-03-09T20:48:07.858 INFO:tasks.workunit.client.0.vm07.stdout:7/897: dread d3/f8f [0,4194304] 0 2026-03-09T20:48:07.859 INFO:tasks.workunit.client.0.vm07.stdout:7/898: stat 
d3/da/db/d32/d3e/dac/d1f/d2b/f2c 0 2026-03-09T20:48:07.861 INFO:tasks.workunit.client.1.vm10.stdout:9/844: creat d2/d3/db4/ddb/f118 x:0 0 0 2026-03-09T20:48:07.862 INFO:tasks.workunit.client.0.vm07.stdout:6/841: rename d8/d16/d4b/f9c to d8/d16/d4b/d88/f113 0 2026-03-09T20:48:07.864 INFO:tasks.workunit.client.1.vm10.stdout:3/758: rename dc/d14/d20/d2e/d56/f23 to dc/d14/d90/f101 0 2026-03-09T20:48:07.870 INFO:tasks.workunit.client.0.vm07.stdout:8/772: dwrite d1/dc/d16/d26/fd1 [0,4194304] 0 2026-03-09T20:48:07.872 INFO:tasks.workunit.client.0.vm07.stdout:4/761: write d2/d55/d5d/d3f/f51 [300662,120563] 0 2026-03-09T20:48:07.876 INFO:tasks.workunit.client.1.vm10.stdout:1/780: write d2/da/f4d [259626,56207] 0 2026-03-09T20:48:07.880 INFO:tasks.workunit.client.1.vm10.stdout:8/842: dwrite d0/d92/fcc [0,4194304] 0 2026-03-09T20:48:07.880 INFO:tasks.workunit.client.1.vm10.stdout:8/843: chown d0/f19 2209290 1 2026-03-09T20:48:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:07 vm07.local ceph-mon[49120]: Upgrade: Updating mgr.vm10.byqahe 2026-03-09T20:48:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:07 vm07.local ceph-mon[49120]: Deploying daemon mgr.vm10.byqahe on vm10 2026-03-09T20:48:07.887 INFO:tasks.workunit.client.0.vm07.stdout:1/864: readlink d3/d23/d67/lec 0 2026-03-09T20:48:07.891 INFO:tasks.workunit.client.1.vm10.stdout:9/845: dread d2/d3/d6d/db7/fbb [0,4194304] 0 2026-03-09T20:48:07.893 INFO:tasks.workunit.client.0.vm07.stdout:3/823: write d1/d5/d9/d11/d6d/dd0/d43/fbf [175089,2689] 0 2026-03-09T20:48:07.900 INFO:tasks.workunit.client.1.vm10.stdout:7/808: rename db/l96 to db/d28/d2b/d36/d63/lfb 0 2026-03-09T20:48:07.908 INFO:tasks.workunit.client.0.vm07.stdout:7/899: fsync d3/da/db/d32/d3e/dac/d1f/d2b/d52/f5e 0 2026-03-09T20:48:07.912 INFO:tasks.workunit.client.1.vm10.stdout:3/759: creat dc/d14/f102 x:0 0 0 2026-03-09T20:48:07.912 INFO:tasks.workunit.client.0.vm07.stdout:7/900: chown d3/da/db/d32/d3e/d5c/f129 81619545 1 
2026-03-09T20:48:07.915 INFO:tasks.workunit.client.0.vm07.stdout:2/835: dwrite d2/db/d28/d5c/f89 [0,4194304] 0 2026-03-09T20:48:07.926 INFO:tasks.workunit.client.1.vm10.stdout:6/798: write d3/d30/d7f/d24/f99 [869062,32209] 0 2026-03-09T20:48:07.931 INFO:tasks.workunit.client.1.vm10.stdout:5/739: write d2/f71 [1371318,115367] 0 2026-03-09T20:48:07.936 INFO:tasks.workunit.client.0.vm07.stdout:0/856: truncate d1/d2/f14 2339004 0 2026-03-09T20:48:07.939 INFO:tasks.workunit.client.1.vm10.stdout:2/782: rename d5/d18/d27/d89/cbe to d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/deb/c103 0 2026-03-09T20:48:07.942 INFO:tasks.workunit.client.0.vm07.stdout:1/865: mknod d3/d97/da1/dc5/d90/de8/dba/c11a 0 2026-03-09T20:48:07.944 INFO:tasks.workunit.client.1.vm10.stdout:7/809: creat db/d28/d30/ffc x:0 0 0 2026-03-09T20:48:07.948 INFO:tasks.workunit.client.0.vm07.stdout:3/824: fdatasync d1/d5/d9/d11/d6d/dd0/f30 0 2026-03-09T20:48:07.949 INFO:tasks.workunit.client.0.vm07.stdout:3/825: write d1/d5/d9/d11/d6d/dd0/d43/fbf [1112146,68474] 0 2026-03-09T20:48:07.953 INFO:tasks.workunit.client.1.vm10.stdout:3/760: stat dc/d14/d26/d29/d2a/d76/lc2 0 2026-03-09T20:48:07.956 INFO:tasks.workunit.client.1.vm10.stdout:6/799: mkdir d3/d30/d6a/df5 0 2026-03-09T20:48:07.956 INFO:tasks.workunit.client.1.vm10.stdout:6/800: readlink d3/d30/d7f/l84 0 2026-03-09T20:48:07.958 INFO:tasks.workunit.client.1.vm10.stdout:1/781: symlink d2/lfd 0 2026-03-09T20:48:07.963 INFO:tasks.workunit.client.0.vm07.stdout:2/836: dread - d2/db/d49/f9b zero size 2026-03-09T20:48:07.963 INFO:tasks.workunit.client.1.vm10.stdout:1/782: chown d2/da/d25/l31 117835122 1 2026-03-09T20:48:07.968 INFO:tasks.workunit.client.1.vm10.stdout:9/846: symlink d2/d3/d6d/de8/d10a/l119 0 2026-03-09T20:48:07.974 INFO:tasks.workunit.client.1.vm10.stdout:9/847: dread d2/d3/d6d/d88/fd4 [0,4194304] 0 2026-03-09T20:48:07.976 INFO:tasks.workunit.client.1.vm10.stdout:9/848: dread d2/d33/f7d [0,4194304] 0 2026-03-09T20:48:07.977 
INFO:tasks.workunit.client.1.vm10.stdout:9/849: fdatasync d2/d3/d6d/d88/fd4 0 2026-03-09T20:48:07.979 INFO:tasks.workunit.client.0.vm07.stdout:9/812: dwrite d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/dbc/f100 [0,4194304] 0 2026-03-09T20:48:07.981 INFO:tasks.workunit.client.0.vm07.stdout:6/842: dread - d8/d16/d22/d24/da0/dab/fc9 zero size 2026-03-09T20:48:07.982 INFO:tasks.workunit.client.1.vm10.stdout:8/844: rename d0/d22/d2f/dd0 to d0/d22/d25/d2e/d10f 0 2026-03-09T20:48:07.983 INFO:tasks.workunit.client.0.vm07.stdout:8/773: getdents d1/dc/d6a/df2 0 2026-03-09T20:48:07.983 INFO:tasks.workunit.client.0.vm07.stdout:6/843: chown d8/d16/d22/d9b/da6/ded 22 1 2026-03-09T20:48:07.984 INFO:tasks.workunit.client.0.vm07.stdout:8/774: dread d1/dc/d6a/f62 [0,4194304] 0 2026-03-09T20:48:07.992 INFO:tasks.workunit.client.0.vm07.stdout:8/775: dread d1/dc/d16/fbe [0,4194304] 0 2026-03-09T20:48:07.998 INFO:tasks.workunit.client.0.vm07.stdout:0/857: mkdir d1/dc0/dcc/dd9/d109 0 2026-03-09T20:48:08.000 INFO:tasks.workunit.client.1.vm10.stdout:3/761: creat dc/d14/d27/f103 x:0 0 0 2026-03-09T20:48:08.001 INFO:tasks.workunit.client.1.vm10.stdout:4/745: getdents d1/d47 0 2026-03-09T20:48:08.005 INFO:tasks.workunit.client.1.vm10.stdout:5/740: fsync d2/d39/dbf/d66/f114 0 2026-03-09T20:48:08.012 INFO:tasks.workunit.client.1.vm10.stdout:1/783: symlink d2/da/d25/d46/d51/d7e/lfe 0 2026-03-09T20:48:08.014 INFO:tasks.workunit.client.1.vm10.stdout:0/765: write d2/d9/da/d35/d30/f9a [942882,74301] 0 2026-03-09T20:48:08.016 INFO:tasks.workunit.client.1.vm10.stdout:0/766: chown d2/d4a/d58/d82/d71/d5d/ce5 40498641 1 2026-03-09T20:48:08.018 INFO:tasks.workunit.client.0.vm07.stdout:4/762: dwrite d2/d1f/f25 [0,4194304] 0 2026-03-09T20:48:08.031 INFO:tasks.workunit.client.1.vm10.stdout:2/783: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f4c [820212,38948] 0 2026-03-09T20:48:08.032 INFO:tasks.workunit.client.1.vm10.stdout:2/784: stat d5/d18/d9f/lc0 0 2026-03-09T20:48:08.036 
INFO:tasks.workunit.client.0.vm07.stdout:7/901: symlink d3/da/db/d32/d3e/d5c/dc2/df1/l130 0 2026-03-09T20:48:08.040 INFO:tasks.workunit.client.0.vm07.stdout:5/896: rename d5/df/l29 to d5/d33/l134 0 2026-03-09T20:48:08.044 INFO:tasks.workunit.client.1.vm10.stdout:2/785: dread d5/d18/d9f/fd8 [0,4194304] 0 2026-03-09T20:48:08.053 INFO:tasks.workunit.client.1.vm10.stdout:5/741: symlink d2/d27/d37/d46/d99/l11a 0 2026-03-09T20:48:08.055 INFO:tasks.workunit.client.0.vm07.stdout:8/776: dread - d1/d5d/f7a zero size 2026-03-09T20:48:08.059 INFO:tasks.workunit.client.1.vm10.stdout:1/784: rename d2/da/d25/d46/d51/d5d/d6e/f93 to d2/da/d25/d46/d51/d5d/d6e/d70/db3/fff 0 2026-03-09T20:48:08.066 INFO:tasks.workunit.client.0.vm07.stdout:1/866: mkdir d3/d97/da1/dc5/d90/de8/dc0/d11b 0 2026-03-09T20:48:08.067 INFO:tasks.workunit.client.0.vm07.stdout:3/826: creat d1/d5/d9/d11/df7/f10d x:0 0 0 2026-03-09T20:48:08.071 INFO:tasks.workunit.client.0.vm07.stdout:4/763: fdatasync d2/df/d17/f46 0 2026-03-09T20:48:08.086 INFO:tasks.workunit.client.0.vm07.stdout:2/837: dwrite d2/da7/fbc [0,4194304] 0 2026-03-09T20:48:08.089 INFO:tasks.workunit.client.1.vm10.stdout:7/810: write db/d46/f47 [3058401,101550] 0 2026-03-09T20:48:08.090 INFO:tasks.workunit.client.1.vm10.stdout:9/850: write d2/d3/d6d/db7/fc7 [618198,111243] 0 2026-03-09T20:48:08.090 INFO:tasks.workunit.client.1.vm10.stdout:9/851: readlink d2/d12/lf2 0 2026-03-09T20:48:08.091 INFO:tasks.workunit.client.1.vm10.stdout:5/742: dread d2/d27/d37/f38 [0,4194304] 0 2026-03-09T20:48:08.093 INFO:tasks.workunit.client.1.vm10.stdout:4/746: dwrite d1/d47/f4f [0,4194304] 0 2026-03-09T20:48:08.095 INFO:tasks.workunit.client.0.vm07.stdout:7/902: dwrite d3/da/db/d79/faf [0,4194304] 0 2026-03-09T20:48:08.101 INFO:tasks.workunit.client.0.vm07.stdout:7/903: dwrite d3/f18 [0,4194304] 0 2026-03-09T20:48:08.121 INFO:tasks.workunit.client.1.vm10.stdout:3/762: getdents dc/d14/d26/d29/d2a/ddc 0 2026-03-09T20:48:08.121 
INFO:tasks.workunit.client.0.vm07.stdout:5/897: mkdir d5/df/d13/d6c/db1/dcc/d135 0 2026-03-09T20:48:08.122 INFO:tasks.workunit.client.1.vm10.stdout:3/763: chown dc/d14/d26/dcb 6034518 1 2026-03-09T20:48:08.135 INFO:tasks.workunit.client.0.vm07.stdout:3/827: sync 2026-03-09T20:48:08.136 INFO:tasks.workunit.client.0.vm07.stdout:0/858: creat d1/d2/d4b/d106/f10a x:0 0 0 2026-03-09T20:48:08.136 INFO:tasks.workunit.client.0.vm07.stdout:8/777: creat d1/dc/d16/d31/db4/ff7 x:0 0 0 2026-03-09T20:48:08.140 INFO:tasks.workunit.client.0.vm07.stdout:8/778: dwrite d1/d5d/d6f/f64 [4194304,4194304] 0 2026-03-09T20:48:08.147 INFO:tasks.workunit.client.0.vm07.stdout:6/844: truncate d8/d16/d22/d9b/de4/d85/f83 2459061 0 2026-03-09T20:48:08.159 INFO:tasks.workunit.client.1.vm10.stdout:0/767: mknod d2/d9/d2a/de9/c10c 0 2026-03-09T20:48:08.165 INFO:tasks.workunit.client.0.vm07.stdout:1/867: dread d3/d23/d55/f7b [0,4194304] 0 2026-03-09T20:48:08.166 INFO:tasks.workunit.client.1.vm10.stdout:9/852: fdatasync d2/d12/d5a/fba 0 2026-03-09T20:48:08.167 INFO:tasks.workunit.client.1.vm10.stdout:9/853: chown d2/d28/d47/d50/ld6 1399862277 1 2026-03-09T20:48:08.172 INFO:tasks.workunit.client.0.vm07.stdout:2/838: truncate d2/db/f67 1491343 0 2026-03-09T20:48:08.174 INFO:tasks.workunit.client.1.vm10.stdout:7/811: truncate db/d28/d2b/d36/d3f/f7d 2617675 0 2026-03-09T20:48:08.188 INFO:tasks.workunit.client.1.vm10.stdout:8/845: getdents d0 0 2026-03-09T20:48:08.189 INFO:tasks.workunit.client.0.vm07.stdout:7/904: truncate d3/da4/df2/dfb/f115 224083 0 2026-03-09T20:48:08.190 INFO:tasks.workunit.client.0.vm07.stdout:7/905: stat d3/da/db/d32/d3e/dac/d1f/d50/d110 0 2026-03-09T20:48:08.196 INFO:tasks.workunit.client.1.vm10.stdout:3/764: rename ca to dc/db4/de3/c104 0 2026-03-09T20:48:08.197 INFO:tasks.workunit.client.1.vm10.stdout:2/786: write d5/d5b/f6c [96447,72933] 0 2026-03-09T20:48:08.210 INFO:tasks.workunit.client.1.vm10.stdout:3/765: creat dc/d14/d26/d29/d40/d8c/d9c/f105 x:0 0 0 2026-03-09T20:48:08.211 
INFO:tasks.workunit.client.0.vm07.stdout:9/813: write d4/d11/f105 [4181355,71296] 0 2026-03-09T20:48:08.214 INFO:tasks.workunit.client.1.vm10.stdout:6/801: dwrite d3/da/d11/d31/d47/d87/fd0 [0,4194304] 0 2026-03-09T20:48:08.222 INFO:tasks.workunit.client.1.vm10.stdout:1/785: write d2/da/d25/d3e/fba [875062,127479] 0 2026-03-09T20:48:08.223 INFO:tasks.workunit.client.1.vm10.stdout:5/743: write d2/d39/dbf/d63/fbe [277078,116061] 0 2026-03-09T20:48:08.230 INFO:tasks.workunit.client.1.vm10.stdout:0/768: rename d2/d4a/c83 to d2/d9/da/d48/dac/c10d 0 2026-03-09T20:48:08.232 INFO:tasks.workunit.client.1.vm10.stdout:4/747: dwrite d1/d2/d5c/d64/d61/f68 [0,4194304] 0 2026-03-09T20:48:08.248 INFO:tasks.workunit.client.1.vm10.stdout:1/786: chown d2/da/d25/d46/d51/d5d/cf0 14658995 1 2026-03-09T20:48:08.249 INFO:tasks.workunit.client.1.vm10.stdout:5/744: dread - d2/d27/d37/d46/fba zero size 2026-03-09T20:48:08.252 INFO:tasks.workunit.client.1.vm10.stdout:1/787: dwrite d2/d89/de6/ff2 [0,4194304] 0 2026-03-09T20:48:08.255 INFO:tasks.workunit.client.1.vm10.stdout:9/854: rename d2/d33/d37/fef to d2/d3/d6d/d10c/f11a 0 2026-03-09T20:48:08.272 INFO:tasks.workunit.client.0.vm07.stdout:0/859: dread d1/d1f/d20/f43 [0,4194304] 0 2026-03-09T20:48:08.273 INFO:tasks.workunit.client.1.vm10.stdout:4/748: dread - d1/d2/d5c/d64/d6b/d81/dac/d1c/fdb zero size 2026-03-09T20:48:08.274 INFO:tasks.workunit.client.1.vm10.stdout:4/749: chown d1/d2/f2a 6327483 1 2026-03-09T20:48:08.279 INFO:tasks.workunit.client.0.vm07.stdout:1/868: fsync d3/d23/d67/fdf 0 2026-03-09T20:48:08.285 INFO:tasks.workunit.client.0.vm07.stdout:2/839: dread d2/d11/d56/f98 [0,4194304] 0 2026-03-09T20:48:08.286 INFO:tasks.workunit.client.0.vm07.stdout:2/840: chown d2/d11/f52 2 1 2026-03-09T20:48:08.286 INFO:tasks.workunit.client.0.vm07.stdout:2/841: stat d2/d11/fef 0 2026-03-09T20:48:08.288 INFO:tasks.workunit.client.1.vm10.stdout:1/788: sync 2026-03-09T20:48:08.288 INFO:tasks.workunit.client.1.vm10.stdout:5/745: sync 
2026-03-09T20:48:08.288 INFO:tasks.workunit.client.0.vm07.stdout:5/898: mknod d5/d33/c136 0 2026-03-09T20:48:08.291 INFO:tasks.workunit.client.1.vm10.stdout:7/812: rename db/d28/d2b/d36/cb8 to db/d28/d2b/d36/d63/d6d/dc4/cfd 0 2026-03-09T20:48:08.292 INFO:tasks.workunit.client.0.vm07.stdout:7/906: mkdir d3/da/db/d32/d3e/d11c/d131 0 2026-03-09T20:48:08.297 INFO:tasks.workunit.client.0.vm07.stdout:9/814: symlink d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/da5/db8/dc1/l122 0 2026-03-09T20:48:08.310 INFO:tasks.workunit.client.1.vm10.stdout:0/769: truncate d2/d9/da/d11/f86 101411 0 2026-03-09T20:48:08.314 INFO:tasks.workunit.client.0.vm07.stdout:3/828: creat d1/d5/d9/daf/de3/f10e x:0 0 0 2026-03-09T20:48:08.324 INFO:tasks.workunit.client.1.vm10.stdout:3/766: link dc/d14/d20/d2e/c80 dc/d14/d20/c106 0 2026-03-09T20:48:08.324 INFO:tasks.workunit.client.1.vm10.stdout:6/802: write d3/da/f76 [679243,93596] 0 2026-03-09T20:48:08.325 INFO:tasks.workunit.client.1.vm10.stdout:6/803: fsync d3/d30/d6a/dd6/fe7 0 2026-03-09T20:48:08.327 INFO:tasks.workunit.client.1.vm10.stdout:5/746: chown d2/d39/dbf/d69/d96/cef 19 1 2026-03-09T20:48:08.329 INFO:tasks.workunit.client.1.vm10.stdout:1/789: truncate d2/da/fb1 935201 0 2026-03-09T20:48:08.331 INFO:tasks.workunit.client.1.vm10.stdout:6/804: dwrite d3/d30/d6a/dd6/fe7 [0,4194304] 0 2026-03-09T20:48:08.332 INFO:tasks.workunit.client.0.vm07.stdout:4/764: rename d2/df/l35 to d2/lce 0 2026-03-09T20:48:08.338 INFO:tasks.workunit.client.1.vm10.stdout:4/750: write d1/d2/d5c/fd6 [1519136,22960] 0 2026-03-09T20:48:08.338 INFO:tasks.workunit.client.0.vm07.stdout:6/845: write d8/d16/d22/d24/da0/dab/f7a [35992,3919] 0 2026-03-09T20:48:08.340 INFO:tasks.workunit.client.1.vm10.stdout:8/846: rename d0/d22/d25/d2e/d41/de9/dfc/fd8 to d0/d22/d2c/f110 0 2026-03-09T20:48:08.343 INFO:tasks.workunit.client.1.vm10.stdout:2/787: link d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/fe3 d5/d18/d27/d38/f104 0 2026-03-09T20:48:08.352 
INFO:tasks.workunit.client.0.vm07.stdout:9/815: mkdir d4/d16/d29/d24/d37/d44/d62/d108/d121/db9/d123 0 2026-03-09T20:48:08.356 INFO:tasks.workunit.client.0.vm07.stdout:9/816: dwrite d4/d16/d29/d24/d37/d44/d62/d108/d121/d59/f66 [4194304,4194304] 0 2026-03-09T20:48:08.382 INFO:tasks.workunit.client.1.vm10.stdout:1/790: unlink d2/da/d25/c52 0 2026-03-09T20:48:08.394 INFO:tasks.workunit.client.0.vm07.stdout:2/842: write d2/db/d1c/d8d/fa2 [476411,88585] 0 2026-03-09T20:48:08.395 INFO:tasks.workunit.client.0.vm07.stdout:2/843: chown d2/db/d28/d57/f75 1529 1 2026-03-09T20:48:08.399 INFO:tasks.workunit.client.0.vm07.stdout:0/860: mknod d1/d1f/dc3/dca/c10b 0 2026-03-09T20:48:08.402 INFO:tasks.workunit.client.1.vm10.stdout:4/751: rmdir d1/d2/d5c/d64/d6b/d79 39 2026-03-09T20:48:08.403 INFO:tasks.workunit.client.1.vm10.stdout:0/770: dread d2/d4a/d58/d82/d71/f38 [0,4194304] 0 2026-03-09T20:48:08.414 INFO:tasks.workunit.client.1.vm10.stdout:7/813: rename db/d28/d86 to db/d28/d2b/d36/d63/d6d/dc4/dfe 0 2026-03-09T20:48:08.416 INFO:tasks.workunit.client.1.vm10.stdout:7/814: dread db/d28/d2b/d36/d3b/f42 [0,4194304] 0 2026-03-09T20:48:08.418 INFO:tasks.workunit.client.0.vm07.stdout:5/899: dwrite d5/d33/fb6 [4194304,4194304] 0 2026-03-09T20:48:08.419 INFO:tasks.workunit.client.0.vm07.stdout:5/900: fdatasync d5/df/d13/d6c/f77 0 2026-03-09T20:48:08.423 INFO:tasks.workunit.client.0.vm07.stdout:6/846: chown d8/d5d/d97/dc4/l10e 854910 1 2026-03-09T20:48:08.423 INFO:tasks.workunit.client.0.vm07.stdout:6/847: read - d8/d16/da3/f9f zero size 2026-03-09T20:48:08.432 INFO:tasks.workunit.client.1.vm10.stdout:5/747: dwrite d2/d39/dbf/d63/d95/fd7 [0,4194304] 0 2026-03-09T20:48:08.441 INFO:tasks.workunit.client.0.vm07.stdout:3/829: write d1/d5/d9/d2f/d3d/f75 [2042374,51449] 0 2026-03-09T20:48:08.455 INFO:tasks.workunit.client.1.vm10.stdout:9/855: creat d2/d3/de/d35/f11b x:0 0 0 2026-03-09T20:48:08.466 INFO:tasks.workunit.client.0.vm07.stdout:1/869: unlink d3/d23/f39 0 2026-03-09T20:48:08.470 
INFO:tasks.workunit.client.1.vm10.stdout:2/788: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f5d [0,4194304] 0 2026-03-09T20:48:08.474 INFO:tasks.workunit.client.1.vm10.stdout:3/767: truncate dc/d14/d22/fbf 810910 0 2026-03-09T20:48:08.478 INFO:tasks.workunit.client.1.vm10.stdout:1/791: mknod d2/da/d25/d46/d51/d5d/d6e/c100 0 2026-03-09T20:48:08.485 INFO:tasks.workunit.client.0.vm07.stdout:4/765: write d2/d55/d5d/d3f/f9f [906881,90573] 0 2026-03-09T20:48:08.488 INFO:tasks.workunit.client.0.vm07.stdout:9/817: symlink d4/d16/d29/d24/d7c/l124 0 2026-03-09T20:48:08.489 INFO:tasks.workunit.client.1.vm10.stdout:8/847: write d0/d95/fdc [784240,34761] 0 2026-03-09T20:48:08.496 INFO:tasks.workunit.client.0.vm07.stdout:8/779: getdents d1/dc/d16/dad/d87/d93 0 2026-03-09T20:48:08.499 INFO:tasks.workunit.client.1.vm10.stdout:4/752: rmdir d1/d2/d5c/d64/d6b/d81/dac/d1c/d69 39 2026-03-09T20:48:08.503 INFO:tasks.workunit.client.0.vm07.stdout:0/861: stat d1/d2/dc/d17/l37 0 2026-03-09T20:48:08.503 INFO:tasks.workunit.client.0.vm07.stdout:0/862: readlink d1/d1f/dc2/ldd 0 2026-03-09T20:48:08.505 INFO:tasks.workunit.client.1.vm10.stdout:7/815: symlink db/d21/d95/lff 0 2026-03-09T20:48:08.516 INFO:tasks.workunit.client.1.vm10.stdout:5/748: symlink d2/d39/d4b/d7a/de1/l11b 0 2026-03-09T20:48:08.526 INFO:tasks.workunit.client.0.vm07.stdout:5/901: unlink d5/d69/c78 0 2026-03-09T20:48:08.533 INFO:tasks.workunit.client.1.vm10.stdout:2/789: creat d5/d18/d27/d89/db6/d41/d77/db3/f105 x:0 0 0 2026-03-09T20:48:08.534 INFO:tasks.workunit.client.1.vm10.stdout:2/790: truncate d5/d18/d9f/ffe 47561 0 2026-03-09T20:48:08.560 INFO:tasks.workunit.client.1.vm10.stdout:1/792: read d2/da/d25/d46/fa7 [2090,74336] 0 2026-03-09T20:48:08.574 INFO:tasks.workunit.client.0.vm07.stdout:6/848: write d8/d16/da3/f93 [3656301,103587] 0 2026-03-09T20:48:08.574 INFO:tasks.workunit.client.0.vm07.stdout:3/830: write d1/d5/d9/d2f/d66/fd3 [1264082,44448] 0 2026-03-09T20:48:08.577 
INFO:tasks.workunit.client.1.vm10.stdout:9/856: dwrite d2/d3/d6d/db7/fbb [0,4194304] 0 2026-03-09T20:48:08.578 INFO:tasks.workunit.client.0.vm07.stdout:1/870: write d3/f7d [2340652,34692] 0 2026-03-09T20:48:08.593 INFO:tasks.workunit.client.1.vm10.stdout:6/805: creat d3/ff6 x:0 0 0 2026-03-09T20:48:08.594 INFO:tasks.workunit.client.0.vm07.stdout:1/871: dread d3/d66/f8c [0,4194304] 0 2026-03-09T20:48:08.611 INFO:tasks.workunit.client.0.vm07.stdout:7/907: creat d3/da/db/d32/d3e/dac/d43/d62/f132 x:0 0 0 2026-03-09T20:48:08.612 INFO:tasks.workunit.client.1.vm10.stdout:9/857: dread d2/d3/f5 [0,4194304] 0 2026-03-09T20:48:08.618 INFO:tasks.workunit.client.0.vm07.stdout:1/872: dread d3/d23/d52/f113 [0,4194304] 0 2026-03-09T20:48:08.667 INFO:tasks.workunit.client.0.vm07.stdout:9/818: symlink d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d89/l125 0 2026-03-09T20:48:08.668 INFO:tasks.workunit.client.0.vm07.stdout:1/873: dread d3/d9c/fd2 [0,4194304] 0 2026-03-09T20:48:08.678 INFO:tasks.workunit.client.1.vm10.stdout:4/753: readlink d1/d2/d5c/d64/d61/ld9 0 2026-03-09T20:48:08.686 INFO:tasks.workunit.client.0.vm07.stdout:8/780: fsync d1/dc/d16/f8d 0 2026-03-09T20:48:08.689 INFO:tasks.workunit.client.1.vm10.stdout:0/771: mknod d2/d9/da/d11/dd1/d34/dee/c10e 0 2026-03-09T20:48:08.691 INFO:tasks.workunit.client.1.vm10.stdout:0/772: stat d2/d4a/d58/d82/d60 0 2026-03-09T20:48:08.694 INFO:tasks.workunit.client.1.vm10.stdout:7/816: rename db/d28/d4c/fa1 to db/d28/d2b/d36/d3b/dd5/f100 0 2026-03-09T20:48:08.696 INFO:tasks.workunit.client.1.vm10.stdout:7/817: chown db/d28/d2b/d36/d63/d84 15 1 2026-03-09T20:48:08.706 INFO:tasks.workunit.client.1.vm10.stdout:2/791: fsync d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/fdd 0 2026-03-09T20:48:08.709 INFO:tasks.workunit.client.0.vm07.stdout:3/831: fsync d1/d5/d9/d2f/d34/f5c 0 2026-03-09T20:48:08.709 INFO:tasks.workunit.client.1.vm10.stdout:1/793: read - d2/da/d25/d46/d51/d5d/d6e/d70/db3/fc2 zero size 2026-03-09T20:48:08.729 
INFO:tasks.workunit.client.1.vm10.stdout:5/749: write d2/d39/d4b/d7a/ff0 [142790,59443] 0 2026-03-09T20:48:08.735 INFO:tasks.workunit.client.0.vm07.stdout:0/863: dwrite d1/d1f/fcd [0,4194304] 0 2026-03-09T20:48:08.736 INFO:tasks.workunit.client.0.vm07.stdout:6/849: write d8/d16/d22/d24/da0/dab/fa9 [1180937,55146] 0 2026-03-09T20:48:08.741 INFO:tasks.workunit.client.1.vm10.stdout:9/858: unlink d2/d28/fa5 0 2026-03-09T20:48:08.756 INFO:tasks.workunit.client.1.vm10.stdout:0/773: rename d2/d9/da to d2/d9/db8/d10f 0 2026-03-09T20:48:08.756 INFO:tasks.workunit.client.1.vm10.stdout:0/774: chown d2/d9/db8/d10f/d48/dac/cc0 1601047 1 2026-03-09T20:48:08.763 INFO:tasks.workunit.client.0.vm07.stdout:4/766: symlink d2/d55/d5d/dc2/lcf 0 2026-03-09T20:48:08.769 INFO:tasks.workunit.client.0.vm07.stdout:1/874: unlink d3/d23/d55/c10b 0 2026-03-09T20:48:08.771 INFO:tasks.workunit.client.0.vm07.stdout:1/875: stat d3/d23/d67/d8a/f10c 0 2026-03-09T20:48:08.771 INFO:tasks.workunit.client.1.vm10.stdout:3/768: dread - dc/d14/d20/d21/fdf zero size 2026-03-09T20:48:08.789 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:08 vm10.local ceph-mon[57011]: pgmap v10: 65 pgs: 65 active+clean; 3.3 GiB data, 11 GiB used, 109 GiB / 120 GiB avail; 30 MiB/s rd, 70 MiB/s wr, 208 op/s 2026-03-09T20:48:08.789 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:08 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:08.789 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:08 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:08.789 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:08 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:08.794 INFO:tasks.workunit.client.1.vm10.stdout:1/794: fdatasync d2/da/d25/d3e/dca/fa5 0 
2026-03-09T20:48:08.795 INFO:tasks.workunit.client.0.vm07.stdout:2/844: creat d2/db/d49/f111 x:0 0 0 2026-03-09T20:48:08.796 INFO:tasks.workunit.client.1.vm10.stdout:7/818: dwrite db/d21/d23/f1a [0,4194304] 0 2026-03-09T20:48:08.797 INFO:tasks.workunit.client.1.vm10.stdout:2/792: dwrite d5/d18/f67 [0,4194304] 0 2026-03-09T20:48:08.798 INFO:tasks.workunit.client.1.vm10.stdout:7/819: chown db/d21/d26/f2f 14157200 1 2026-03-09T20:48:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:08 vm07.local ceph-mon[49120]: pgmap v10: 65 pgs: 65 active+clean; 3.3 GiB data, 11 GiB used, 109 GiB / 120 GiB avail; 30 MiB/s rd, 70 MiB/s wr, 208 op/s 2026-03-09T20:48:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:08 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:08 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:08 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:08.903 INFO:tasks.workunit.client.0.vm07.stdout:3/832: fdatasync d1/d5/d9/d11/d6d/dd0/fc1 0 2026-03-09T20:48:08.918 INFO:tasks.workunit.client.0.vm07.stdout:0/864: mkdir d1/d1f/d53/d10c 0 2026-03-09T20:48:08.928 INFO:tasks.workunit.client.0.vm07.stdout:6/850: mkdir d8/db3/d114 0 2026-03-09T20:48:08.938 INFO:tasks.workunit.client.0.vm07.stdout:4/767: truncate d2/fb7 488343 0 2026-03-09T20:48:08.938 INFO:tasks.workunit.client.0.vm07.stdout:9/819: creat d4/d16/d29/d24/d37/d44/d62/d8e/d11a/f126 x:0 0 0 2026-03-09T20:48:08.947 INFO:tasks.workunit.client.0.vm07.stdout:9/820: stat d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d15/fa3 0 2026-03-09T20:48:08.953 INFO:tasks.workunit.client.0.vm07.stdout:1/876: truncate 
d3/d97/da1/dc5/d60/fb5 815325 0 2026-03-09T20:48:08.955 INFO:tasks.workunit.client.1.vm10.stdout:8/848: creat d0/d22/f111 x:0 0 0 2026-03-09T20:48:08.964 INFO:tasks.workunit.client.0.vm07.stdout:2/845: readlink d2/db/d49/d7d/d85/l10e 0 2026-03-09T20:48:08.966 INFO:tasks.workunit.client.0.vm07.stdout:5/902: rename d5/lc to d5/df/l137 0 2026-03-09T20:48:08.968 INFO:tasks.workunit.client.1.vm10.stdout:0/775: rename d2/d9/db8/d10f/d35 to d2/d4a/d58/d82/d71/dca/d110 0 2026-03-09T20:48:08.970 INFO:tasks.workunit.client.0.vm07.stdout:3/833: mknod d1/d5/d9/d2f/d99/dd8/c10f 0 2026-03-09T20:48:08.971 INFO:tasks.workunit.client.0.vm07.stdout:3/834: stat d1/d5/d9/d11/d6d/dd0/f63 0 2026-03-09T20:48:08.972 INFO:tasks.workunit.client.1.vm10.stdout:3/769: creat dc/d14/d22/f107 x:0 0 0 2026-03-09T20:48:08.973 INFO:tasks.workunit.client.0.vm07.stdout:7/908: link d3/d58/l72 d3/da4/l133 0 2026-03-09T20:48:08.974 INFO:tasks.workunit.client.0.vm07.stdout:0/865: mkdir d1/d2/d33/d10d 0 2026-03-09T20:48:08.975 INFO:tasks.workunit.client.0.vm07.stdout:6/851: rmdir d8/d16/d22/d9b/de4 39 2026-03-09T20:48:08.977 INFO:tasks.workunit.client.1.vm10.stdout:1/795: creat d2/d89/f101 x:0 0 0 2026-03-09T20:48:08.978 INFO:tasks.workunit.client.1.vm10.stdout:5/750: dread d2/d27/d37/d46/f94 [4194304,4194304] 0 2026-03-09T20:48:09.000 INFO:tasks.workunit.client.0.vm07.stdout:8/781: dwrite d1/dc/d16/d26/f37 [0,4194304] 0 2026-03-09T20:48:09.016 INFO:tasks.workunit.client.1.vm10.stdout:9/859: dwrite d2/d28/d47/d67/fc6 [0,4194304] 0 2026-03-09T20:48:09.021 INFO:tasks.workunit.client.0.vm07.stdout:4/768: write d2/f3 [1309128,47802] 0 2026-03-09T20:48:09.045 INFO:tasks.workunit.client.1.vm10.stdout:6/806: rmdir d3/da/d11/d89/db9/dd1/dd2/da9/ded 0 2026-03-09T20:48:09.061 INFO:tasks.workunit.client.0.vm07.stdout:3/835: rename d1/l3e to d1/d5/d9/d2f/d3d/dd6/l110 0 2026-03-09T20:48:09.065 INFO:tasks.workunit.client.0.vm07.stdout:1/877: write d3/d97/da1/dc5/d60/f53 [456880,60422] 0 2026-03-09T20:48:09.065 
INFO:tasks.workunit.client.1.vm10.stdout:8/849: write d0/d22/f76 [1614012,30834] 0 2026-03-09T20:48:09.066 INFO:tasks.workunit.client.0.vm07.stdout:1/878: readlink d3/d97/da1/dc5/d90/de8/lfc 0 2026-03-09T20:48:09.066 INFO:tasks.workunit.client.0.vm07.stdout:1/879: chown d3/d97/da1/dc5/d60/l63 4494982 1 2026-03-09T20:48:09.067 INFO:tasks.workunit.client.1.vm10.stdout:2/793: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/fe3 [0,4194304] 0 2026-03-09T20:48:09.084 INFO:tasks.workunit.client.1.vm10.stdout:8/850: stat d0/d22/d25/d6c/d9b/lb4 0 2026-03-09T20:48:09.092 INFO:tasks.workunit.client.0.vm07.stdout:6/852: mknod d8/d16/d61/c115 0 2026-03-09T20:48:09.111 INFO:tasks.workunit.client.1.vm10.stdout:9/860: creat d2/d3/db4/f11c x:0 0 0 2026-03-09T20:48:09.113 INFO:tasks.workunit.client.1.vm10.stdout:1/796: dwrite d2/da/d25/d46/d51/d5d/d6e/d70/f83 [0,4194304] 0 2026-03-09T20:48:09.115 INFO:tasks.workunit.client.1.vm10.stdout:5/751: dwrite d2/d39/dbf/d69/f76 [0,4194304] 0 2026-03-09T20:48:09.124 INFO:tasks.workunit.client.0.vm07.stdout:4/769: creat d2/df/d17/fd0 x:0 0 0 2026-03-09T20:48:09.125 INFO:tasks.workunit.client.1.vm10.stdout:1/797: readlink d2/da/d25/d46/d51/d5d/d6e/ldd 0 2026-03-09T20:48:09.136 INFO:tasks.workunit.client.1.vm10.stdout:6/807: truncate d3/d30/d7f/d36/fd7 786482 0 2026-03-09T20:48:09.140 INFO:tasks.workunit.client.1.vm10.stdout:4/754: getdents d1/dd8 0 2026-03-09T20:48:09.141 INFO:tasks.workunit.client.0.vm07.stdout:2/846: mknod d2/db/d49/c112 0 2026-03-09T20:48:09.157 INFO:tasks.workunit.client.1.vm10.stdout:8/851: mknod d0/d92/de8/c112 0 2026-03-09T20:48:09.158 INFO:tasks.workunit.client.0.vm07.stdout:1/880: symlink d3/d97/da1/dc5/d90/de8/dba/l11c 0 2026-03-09T20:48:09.160 INFO:tasks.workunit.client.0.vm07.stdout:7/909: symlink d3/d58/d77/d10f/l134 0 2026-03-09T20:48:09.179 INFO:tasks.workunit.client.0.vm07.stdout:9/821: truncate d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/dbc/f100 626890 0 2026-03-09T20:48:09.181 
INFO:tasks.workunit.client.1.vm10.stdout:9/861: write d2/d3/db4/ddb/fff [1006767,31574] 0 2026-03-09T20:48:09.182 INFO:tasks.workunit.client.1.vm10.stdout:9/862: readlink d2/d3/d85/l97 0 2026-03-09T20:48:09.183 INFO:tasks.workunit.client.0.vm07.stdout:0/866: dwrite d1/d2/d33/ffb [0,4194304] 0 2026-03-09T20:48:09.189 INFO:tasks.workunit.client.1.vm10.stdout:5/752: chown d2/l18 90881803 1 2026-03-09T20:48:09.190 INFO:tasks.workunit.client.0.vm07.stdout:4/770: truncate d2/f5 2705607 0 2026-03-09T20:48:09.190 INFO:tasks.workunit.client.1.vm10.stdout:2/794: dwrite d5/d18/d27/d89/f9a [0,4194304] 0 2026-03-09T20:48:09.190 INFO:tasks.workunit.client.1.vm10.stdout:5/753: dread - d2/d27/d37/d46/d99/f10d zero size 2026-03-09T20:48:09.190 INFO:tasks.workunit.client.0.vm07.stdout:4/771: read d2/fa [1964765,108736] 0 2026-03-09T20:48:09.196 INFO:tasks.workunit.client.0.vm07.stdout:4/772: dwrite d2/d1f/fc3 [0,4194304] 0 2026-03-09T20:48:09.196 INFO:tasks.workunit.client.0.vm07.stdout:4/773: stat d2/df/f2e 0 2026-03-09T20:48:09.197 INFO:tasks.workunit.client.1.vm10.stdout:1/798: readlink d2/l4e 0 2026-03-09T20:48:09.201 INFO:tasks.workunit.client.1.vm10.stdout:1/799: readlink d2/da/d25/d46/dbe/ld9 0 2026-03-09T20:48:09.212 INFO:tasks.workunit.client.1.vm10.stdout:6/808: creat d3/d30/d7f/d36/ff7 x:0 0 0 2026-03-09T20:48:09.216 INFO:tasks.workunit.client.1.vm10.stdout:0/776: link d2/d4a/d58/d82/d71/dca/d110/f68 d2/d4a/d58/d82/d71/dca/d110/dff/f111 0 2026-03-09T20:48:09.216 INFO:tasks.workunit.client.1.vm10.stdout:0/777: stat d2/d4a/d58/d82/d71/dca/dfe 0 2026-03-09T20:48:09.249 INFO:tasks.workunit.client.1.vm10.stdout:9/863: mknod d2/d3/d6d/d10c/c11d 0 2026-03-09T20:48:09.273 INFO:tasks.workunit.client.1.vm10.stdout:2/795: rmdir d5/d18/d27/d5f 39 2026-03-09T20:48:09.276 INFO:tasks.workunit.client.1.vm10.stdout:7/820: link db/d28/d2b/d36/d63/le6 db/d28/d30/dd8/l101 0 2026-03-09T20:48:09.279 INFO:tasks.workunit.client.0.vm07.stdout:2/847: mknod d2/db/d49/d7d/d85/dd9/c113 0 
2026-03-09T20:48:09.280 INFO:tasks.workunit.client.1.vm10.stdout:0/778: unlink d2/c54 0 2026-03-09T20:48:09.280 INFO:tasks.workunit.client.1.vm10.stdout:6/809: dread - d3/da/d11/d31/d47/d87/fe2 zero size 2026-03-09T20:48:09.281 INFO:tasks.workunit.client.1.vm10.stdout:3/770: getdents dc/db4 0 2026-03-09T20:48:09.283 INFO:tasks.workunit.client.1.vm10.stdout:5/754: write d2/f64 [6225614,43729] 0 2026-03-09T20:48:09.284 INFO:tasks.workunit.client.0.vm07.stdout:8/782: rename d1/dc/c22 to d1/dc/d16/dad/de9/cf8 0 2026-03-09T20:48:09.284 INFO:tasks.workunit.client.1.vm10.stdout:0/779: read - d2/d9/db8/d10f/d11/ffc zero size 2026-03-09T20:48:09.284 INFO:tasks.workunit.client.0.vm07.stdout:1/881: unlink d3/d23/d67/fdf 0 2026-03-09T20:48:09.286 INFO:tasks.workunit.client.0.vm07.stdout:9/822: creat d4/d16/d29/d24/d37/f127 x:0 0 0 2026-03-09T20:48:09.288 INFO:tasks.workunit.client.0.vm07.stdout:7/910: dread d3/da4/df2/dfb/f115 [0,4194304] 0 2026-03-09T20:48:09.294 INFO:tasks.workunit.client.0.vm07.stdout:2/848: symlink d2/d11/ddb/d72/l114 0 2026-03-09T20:48:09.294 INFO:tasks.workunit.client.0.vm07.stdout:4/774: dread d2/d55/d5d/d86/fa6 [0,4194304] 0 2026-03-09T20:48:09.295 INFO:tasks.workunit.client.0.vm07.stdout:2/849: stat d2/d11/ddb/db0/db3/c10c 0 2026-03-09T20:48:09.297 INFO:tasks.workunit.client.0.vm07.stdout:5/903: link d5/d19/f20 d5/d19/f138 0 2026-03-09T20:48:09.299 INFO:tasks.workunit.client.1.vm10.stdout:1/800: truncate d2/f2a 1646455 0 2026-03-09T20:48:09.299 INFO:tasks.workunit.client.1.vm10.stdout:4/755: creat d1/d2/d5c/d64/d6b/ff2 x:0 0 0 2026-03-09T20:48:09.300 INFO:tasks.workunit.client.1.vm10.stdout:6/810: truncate d3/da/f15 4419039 0 2026-03-09T20:48:09.300 INFO:tasks.workunit.client.1.vm10.stdout:8/852: creat d0/d22/d25/d2e/d41/de9/dfc/f113 x:0 0 0 2026-03-09T20:48:09.301 INFO:tasks.workunit.client.1.vm10.stdout:9/864: mkdir d2/d28/d47/d50/dd1/d11e 0 2026-03-09T20:48:09.304 INFO:tasks.workunit.client.1.vm10.stdout:8/853: read d0/d22/f66 [182838,105567] 0 
2026-03-09T20:48:09.304 INFO:tasks.workunit.client.0.vm07.stdout:9/823: readlink d4/d16/l43 0
2026-03-09T20:48:09.311 INFO:tasks.workunit.client.0.vm07.stdout:5/904: dwrite d5/df/d13/d4f/d12c/f11a [0,4194304] 0
2026-03-09T20:48:09.321 INFO:tasks.workunit.client.0.vm07.stdout:4/775: dwrite d2/d1f/fc3 [0,4194304] 0
2026-03-09T20:48:09.328 INFO:tasks.workunit.client.1.vm10.stdout:5/755: fdatasync d2/d27/d37/fb5 0
2026-03-09T20:48:09.328 INFO:tasks.workunit.client.1.vm10.stdout:3/771: creat dc/d9e/f108 x:0 0 0
2026-03-09T20:48:09.338 INFO:tasks.workunit.client.0.vm07.stdout:8/783: mknod d1/dc/d16/d26/de2/cf9 0
2026-03-09T20:48:09.340 INFO:tasks.workunit.client.0.vm07.stdout:9/824: dread - d4/d16/d78/dc4/f10b zero size
2026-03-09T20:48:09.341 INFO:tasks.workunit.client.1.vm10.stdout:9/865: creat d2/da6/f11f x:0 0 0
2026-03-09T20:48:09.344 INFO:tasks.workunit.client.1.vm10.stdout:4/756: truncate d1/d2/d5c/d64/d6b/d81/dac/d39/f97 919053 0
2026-03-09T20:48:09.344 INFO:tasks.workunit.client.0.vm07.stdout:6/853: getdents d8/d16/d4b/d88/dc3/dd5 0
2026-03-09T20:48:09.344 INFO:tasks.workunit.client.1.vm10.stdout:4/757: truncate d1/fe3 17246 0
2026-03-09T20:48:09.345 INFO:tasks.workunit.client.0.vm07.stdout:6/854: stat d8/d5d/d97/da1/f102 0
2026-03-09T20:48:09.347 INFO:tasks.workunit.client.1.vm10.stdout:2/796: link d5/d18/d27/db4/fee d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f106 0
2026-03-09T20:48:09.357 INFO:tasks.workunit.client.0.vm07.stdout:4/776: rename d2/d55/d8b to d2/df/d17/dd1 0
2026-03-09T20:48:09.358 INFO:tasks.workunit.client.0.vm07.stdout:8/784: readlink d1/l9 0
2026-03-09T20:48:09.358 INFO:tasks.workunit.client.0.vm07.stdout:6/855: chown d8/d16/d22/d9b/de4/d85 4063589 1
2026-03-09T20:48:09.358 INFO:tasks.workunit.client.0.vm07.stdout:6/856: chown d8/db3/ld0 2 1
2026-03-09T20:48:09.358 INFO:tasks.workunit.client.1.vm10.stdout:2/797: truncate d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f4c 935419 0
2026-03-09T20:48:09.358 INFO:tasks.workunit.client.1.vm10.stdout:5/756: mknod d2/d27/d75/d81/c11c 0
2026-03-09T20:48:09.358 INFO:tasks.workunit.client.1.vm10.stdout:0/780: symlink d2/d4a/d58/d82/d71/dca/d110/dff/l112 0
2026-03-09T20:48:09.359 INFO:tasks.workunit.client.1.vm10.stdout:0/781: write d2/d9/db8/d10f/d11/dd1/f103 [214481,35037] 0
2026-03-09T20:48:09.359 INFO:tasks.workunit.client.0.vm07.stdout:7/911: sync
2026-03-09T20:48:09.359 INFO:tasks.workunit.client.0.vm07.stdout:2/850: sync
2026-03-09T20:48:09.361 INFO:tasks.workunit.client.0.vm07.stdout:8/785: dread d1/f1d [0,4194304] 0
2026-03-09T20:48:09.361 INFO:tasks.workunit.client.0.vm07.stdout:8/786: readlink d1/dc/d16/dad/ld7 0
2026-03-09T20:48:09.364 INFO:tasks.workunit.client.0.vm07.stdout:7/912: dread d3/f67 [0,4194304] 0
2026-03-09T20:48:09.366 INFO:tasks.workunit.client.0.vm07.stdout:5/905: dread d5/df/d13/d30/fe3 [0,4194304] 0
2026-03-09T20:48:09.368 INFO:tasks.workunit.client.1.vm10.stdout:9/866: truncate d2/d3/f1c 4059953 0
2026-03-09T20:48:09.368 INFO:tasks.workunit.client.0.vm07.stdout:6/857: mknod d8/d5d/c116 0
2026-03-09T20:48:09.369 INFO:tasks.workunit.client.1.vm10.stdout:8/854: unlink d0/d22/d2f/c56 0
2026-03-09T20:48:09.370 INFO:tasks.workunit.client.1.vm10.stdout:4/758: mknod d1/d2/d3/d70/d78/cf3 0
2026-03-09T20:48:09.371 INFO:tasks.workunit.client.0.vm07.stdout:2/851: fsync d2/db/d1c/f2e 0
2026-03-09T20:48:09.376 INFO:tasks.workunit.client.1.vm10.stdout:3/772: mknod dc/d14/d20/d2e/d56/c109 0
2026-03-09T20:48:09.377 INFO:tasks.workunit.client.0.vm07.stdout:2/852: dwrite d2/db/d28/fb8 [0,4194304] 0
2026-03-09T20:48:09.381 INFO:tasks.workunit.client.0.vm07.stdout:4/777: mkdir d2/d55/d5d/d3f/db6/dd2 0
2026-03-09T20:48:09.382 INFO:tasks.workunit.client.1.vm10.stdout:0/782: creat d2/d9/db8/d10f/d11/dd1/db7/dcd/f113 x:0 0 0
2026-03-09T20:48:09.398 INFO:tasks.workunit.client.0.vm07.stdout:9/825: link d4/d16/d29/d9c/lf2 d4/d16/d29/d24/d37/d44/d62/d8e/l128 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.0.vm07.stdout:7/913: mknod d3/da/db/d32/d3e/dac/d1f/d50/d110/c135 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:9/867: unlink d2/d3/fa 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:9/868: fdatasync d2/d3/de/d8f/dbc/f102 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:8/855: unlink d0/d22/d25/d2e/d41/de9/dfc/d78/l48 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:0/783: rename d2/d9/d69/c97 to d2/d4a/d58/d82/d71/dca/c114 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:0/784: stat d2/d9/db8/d10f/d11/d92/fb0 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:4/759: dwrite d1/fe7 [0,4194304] 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:0/785: fdatasync d2/d9/db8/d10f/d48/dac/de8/f102 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:9/869: creat d2/d33/f120 x:0 0 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:4/760: rename d1/d2 to d1/d2/d5c/d64/d61/dea/df4 22
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:0/786: readlink d2/d4a/d58/d82/d93/ldf 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:8/856: mkdir d0/d54/d114 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:3/773: mkdir dc/d14/d26/d10a 0
2026-03-09T20:48:09.399 INFO:tasks.workunit.client.1.vm10.stdout:2/798: creat d5/d18/f107 x:0 0 0
2026-03-09T20:48:09.404 INFO:tasks.workunit.client.1.vm10.stdout:5/757: sync
2026-03-09T20:48:09.404 INFO:tasks.workunit.client.0.vm07.stdout:6/858: sync
2026-03-09T20:48:09.405 INFO:tasks.workunit.client.1.vm10.stdout:8/857: creat d0/f115 x:0 0 0
2026-03-09T20:48:09.405 INFO:tasks.workunit.client.1.vm10.stdout:3/774: mknod dc/d14/d26/d8f/c10b 0
2026-03-09T20:48:09.409 INFO:tasks.workunit.client.0.vm07.stdout:7/914: symlink d3/da/db/d32/d3e/dac/d43/l136 0
2026-03-09T20:48:09.413 INFO:tasks.workunit.client.0.vm07.stdout:7/915: write d3/f3f [560775,76105] 0
2026-03-09T20:48:09.414 INFO:tasks.workunit.client.0.vm07.stdout:7/916: chown d3/da/d53/c54 0 1
2026-03-09T20:48:09.420 INFO:tasks.workunit.client.1.vm10.stdout:5/758: mkdir d2/d39/dbf/d66/d11d 0
2026-03-09T20:48:09.421 INFO:tasks.workunit.client.0.vm07.stdout:2/853: sync
2026-03-09T20:48:09.421 INFO:tasks.workunit.client.0.vm07.stdout:4/778: sync
2026-03-09T20:48:09.421 INFO:tasks.workunit.client.0.vm07.stdout:2/854: chown d2/db/df6/c102 32 1
2026-03-09T20:48:09.428 INFO:tasks.workunit.client.0.vm07.stdout:5/906: dread d5/df/d13/d3e/de1/fe7 [0,4194304] 0
2026-03-09T20:48:09.439 INFO:tasks.workunit.client.0.vm07.stdout:7/917: fsync d3/da/db/d32/d3e/dac/d1f/d2b/d52/f5e 0
2026-03-09T20:48:09.439 INFO:tasks.workunit.client.1.vm10.stdout:9/870: dread d2/d28/d47/d50/f75 [0,4194304] 0
2026-03-09T20:48:09.450 INFO:tasks.workunit.client.1.vm10.stdout:3/775: dread dc/d14/d26/d29/d2a/d76/f97 [0,4194304] 0
2026-03-09T20:48:09.451 INFO:tasks.workunit.client.1.vm10.stdout:3/776: fsync dc/d14/d26/dcb/ff9 0
2026-03-09T20:48:09.451 INFO:tasks.workunit.client.1.vm10.stdout:3/777: dread - dc/db4/fe5 zero size
2026-03-09T20:48:09.452 INFO:tasks.workunit.client.1.vm10.stdout:0/787: creat d2/d4a/f115 x:0 0 0
2026-03-09T20:48:09.455 INFO:tasks.workunit.client.1.vm10.stdout:9/871: mkdir d2/d28/d47/d50/dd1/d121 0
2026-03-09T20:48:09.458 INFO:tasks.workunit.client.0.vm07.stdout:5/907: truncate d5/d19/d73/d97/fdb 337946 0
2026-03-09T20:48:09.462 INFO:tasks.workunit.client.1.vm10.stdout:5/759: dread d2/d58/fb9 [0,4194304] 0
2026-03-09T20:48:09.462 INFO:tasks.workunit.client.1.vm10.stdout:3/778: rename dc/f10 to dc/d14/d26/d29/d40/d8c/d9c/f10c 0
2026-03-09T20:48:09.466 INFO:tasks.workunit.client.1.vm10.stdout:4/761: dread d1/d2/d5c/d64/d6b/d81/dac/d1c/d69/fcb [0,4194304] 0
2026-03-09T20:48:09.467 INFO:tasks.workunit.client.1.vm10.stdout:2/799: dread d5/d18/f24 [0,4194304] 0
2026-03-09T20:48:09.475 INFO:tasks.workunit.client.0.vm07.stdout:0/867: write d1/d1f/d20/f21 [293458,48153] 0
2026-03-09T20:48:09.477 INFO:tasks.workunit.client.1.vm10.stdout:7/821: write db/d28/d2b/d36/d3b/d88/f57 [4798180,114258] 0
2026-03-09T20:48:09.481 INFO:tasks.workunit.client.0.vm07.stdout:1/882: dwrite d3/d14/f6a [0,4194304] 0
2026-03-09T20:48:09.481 INFO:tasks.workunit.client.0.vm07.stdout:3/836: dwrite d1/d5/d9/d11/d6d/dd0/d43/f90 [0,4194304] 0
2026-03-09T20:48:09.484 INFO:tasks.workunit.client.0.vm07.stdout:3/837: dread - d1/d5/d9/daf/d9f/f108 zero size
2026-03-09T20:48:09.485 INFO:tasks.workunit.client.1.vm10.stdout:1/801: dwrite d2/da/d25/d3e/d42/f57 [0,4194304] 0
2026-03-09T20:48:09.495 INFO:tasks.workunit.client.1.vm10.stdout:1/802: readlink d2/da/d25/d3e/dca/lbf 0
2026-03-09T20:48:09.517 INFO:tasks.workunit.client.0.vm07.stdout:8/787: dwrite d1/d5d/d6f/d2f/d4d/d55/fcf [0,4194304] 0
2026-03-09T20:48:09.527 INFO:tasks.workunit.client.1.vm10.stdout:3/779: mkdir dc/d14/d26/d8f/ddd/d10d 0
2026-03-09T20:48:09.530 INFO:tasks.workunit.client.0.vm07.stdout:4/779: dread d2/d55/d5d/d3f/d4a/d4b/d52/f5a [0,4194304] 0
2026-03-09T20:48:09.531 INFO:tasks.workunit.client.1.vm10.stdout:9/872: mknod d2/d12/dad/c122 0
2026-03-09T20:48:09.534 INFO:tasks.workunit.client.0.vm07.stdout:9/826: write d4/f5 [4352165,79091] 0
2026-03-09T20:48:09.538 INFO:tasks.workunit.client.0.vm07.stdout:0/868: mkdir d1/dc0/dcc/d10e 0
2026-03-09T20:48:09.543 INFO:tasks.workunit.client.0.vm07.stdout:1/883: creat d3/d97/da1/dc5/d90/de8/dba/f11d x:0 0 0
2026-03-09T20:48:09.544 INFO:tasks.workunit.client.1.vm10.stdout:7/822: dread db/d28/d2b/d36/d63/d6d/fe8 [0,4194304] 0
2026-03-09T20:48:09.544 INFO:tasks.workunit.client.0.vm07.stdout:1/884: write d3/d23/f58 [1774305,86802] 0
2026-03-09T20:48:09.548 INFO:tasks.workunit.client.0.vm07.stdout:6/859: dwrite d8/d16/d22/d9b/de4/d85/f2f [0,4194304] 0
2026-03-09T20:48:09.550 INFO:tasks.workunit.client.1.vm10.stdout:1/803: fsync d2/f4c 0
2026-03-09T20:48:09.551 INFO:tasks.workunit.client.1.vm10.stdout:0/788: fdatasync d2/d9/db8/d10f/d11/f86 0
2026-03-09T20:48:09.554 INFO:tasks.workunit.client.0.vm07.stdout:2/855: creat d2/db/d49/f115 x:0 0 0
2026-03-09T20:48:09.558 INFO:tasks.workunit.client.1.vm10.stdout:3/780: fdatasync dc/d14/d20/d21/f36 0
2026-03-09T20:48:09.561 INFO:tasks.workunit.client.0.vm07.stdout:9/827: mkdir d4/d16/d29/d24/d37/d44/d62/d108/d121/d59/d129 0
2026-03-09T20:48:09.565 INFO:tasks.workunit.client.1.vm10.stdout:9/873: chown d2/d3/c39 352155 1
2026-03-09T20:48:09.566 INFO:tasks.workunit.client.1.vm10.stdout:9/874: dread - d2/db8/f4d zero size
2026-03-09T20:48:09.570 INFO:tasks.workunit.client.0.vm07.stdout:3/838: unlink d1/fb7 0
2026-03-09T20:48:09.576 INFO:tasks.workunit.client.1.vm10.stdout:7/823: truncate db/d28/d2b/d36/d63/d6d/fa8 19347 0
2026-03-09T20:48:09.587 INFO:tasks.workunit.client.0.vm07.stdout:8/788: rename d1/dc/d16/d26/de2/c99 to d1/d5d/d6f/d2f/d4d/d55/cfa 0
2026-03-09T20:48:09.592 INFO:tasks.workunit.client.1.vm10.stdout:6/811: write d3/da/d11/d26/d5b/f55 [651837,92337] 0
2026-03-09T20:48:09.601 INFO:tasks.workunit.client.0.vm07.stdout:5/908: creat d5/df/d13/d3e/f139 x:0 0 0
2026-03-09T20:48:09.605 INFO:tasks.workunit.client.1.vm10.stdout:1/804: stat d2/c23 0
2026-03-09T20:48:09.607 INFO:tasks.workunit.client.0.vm07.stdout:3/839: mkdir d1/d5/d9/d2f/d34/da5/d111 0
2026-03-09T20:48:09.607 INFO:tasks.workunit.client.1.vm10.stdout:0/789: stat d2/d9/db8/d10f/d11/dd1/db7/dcd/ld3 0
2026-03-09T20:48:09.620 INFO:tasks.workunit.client.0.vm07.stdout:8/789: creat d1/d5d/d6f/d80/ffb x:0 0 0
2026-03-09T20:48:09.623 INFO:tasks.workunit.client.1.vm10.stdout:3/781: mkdir dc/d14/d26/d8f/ddd/d10e 0
2026-03-09T20:48:09.626 INFO:tasks.workunit.client.0.vm07.stdout:0/869: rename d1/d1f/d53/d72/fac to d1/d1f/d30/f10f 0
2026-03-09T20:48:09.627 INFO:tasks.workunit.client.1.vm10.stdout:9/875: mkdir d2/d28/da2/d123 0
2026-03-09T20:48:09.635 INFO:tasks.workunit.client.1.vm10.stdout:8/858: dread d0/d22/fb6 [0,4194304] 0
2026-03-09T20:48:09.636 INFO:tasks.workunit.client.1.vm10.stdout:6/812: truncate d3/d30/d7f/d36/d5c/f5f 3394802 0
2026-03-09T20:48:09.637 INFO:tasks.workunit.client.1.vm10.stdout:6/813: write d3/da/fd [4272319,56064] 0
2026-03-09T20:48:09.637 INFO:tasks.workunit.client.1.vm10.stdout:6/814: write d3/d79/feb [203144,58275] 0
2026-03-09T20:48:09.645 INFO:tasks.workunit.client.0.vm07.stdout:9/828: creat d4/d16/d29/d24/d37/d44/d62/d108/d121/db9/d123/f12a x:0 0 0
2026-03-09T20:48:09.649 INFO:tasks.workunit.client.0.vm07.stdout:3/840: mknod d1/d5/d9/d2f/d3d/d71/d76/db6/c112 0
2026-03-09T20:48:09.655 INFO:tasks.workunit.client.1.vm10.stdout:5/760: getdents d2/d27/d37/d46/d5d/d6d 0
2026-03-09T20:48:09.659 INFO:tasks.workunit.client.0.vm07.stdout:8/790: truncate d1/fb5 2234286 0
2026-03-09T20:48:09.661 INFO:tasks.workunit.client.0.vm07.stdout:7/918: write d3/da/d53/db7/dde/f84 [4747609,130753] 0
2026-03-09T20:48:09.662 INFO:tasks.workunit.client.0.vm07.stdout:4/780: rename d2/d55/d5d/d3f/d4a/d85/fa0 to d2/df/d17/d83/fd3 0
2026-03-09T20:48:09.670 INFO:tasks.workunit.client.0.vm07.stdout:3/841: mkdir d1/d5/d9/d2f/d99/dd8/de0/d113 0
2026-03-09T20:48:09.675 INFO:tasks.workunit.client.1.vm10.stdout:4/762: dwrite d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/f72 [0,4194304] 0
2026-03-09T20:48:09.684 INFO:tasks.workunit.client.0.vm07.stdout:1/885: write d3/d23/d55/f7b [654655,130208] 0
2026-03-09T20:48:09.691 INFO:tasks.workunit.client.1.vm10.stdout:9/876: dread d2/d3/ff6 [0,4194304] 0
2026-03-09T20:48:09.691 INFO:tasks.workunit.client.0.vm07.stdout:2/856: dwrite d2/db/d49/fc6 [0,4194304] 0
2026-03-09T20:48:09.693 INFO:tasks.workunit.client.0.vm07.stdout:9/829: rename d4/d16/d29/d24/d37/d8d/f5b to d4/d16/d29/d24/d37/d44/f12b 0
2026-03-09T20:48:09.701 INFO:tasks.workunit.client.1.vm10.stdout:8/859: truncate d0/d22/d25/f74 1616916 0
2026-03-09T20:48:09.706 INFO:tasks.workunit.client.0.vm07.stdout:7/919: creat d3/da/db/d32/d3e/dac/d1f/d50/d110/f137 x:0 0 0
2026-03-09T20:48:09.706 INFO:tasks.workunit.client.0.vm07.stdout:0/870: mkdir d1/dc0/dcc/d10e/d110 0
2026-03-09T20:48:09.709 INFO:tasks.workunit.client.1.vm10.stdout:2/800: dwrite d5/d18/d27/f2a [0,4194304] 0
2026-03-09T20:48:09.710 INFO:tasks.workunit.client.1.vm10.stdout:6/815: rename d3/da/d11/d89/db9/dd1/dd2/dc3/fca to d3/d30/d7f/d36/d6d/dbe/ddc/ff8 0
2026-03-09T20:48:09.722 INFO:tasks.workunit.client.0.vm07.stdout:2/857: creat d2/d11/ddb/f116 x:0 0 0
2026-03-09T20:48:09.726 INFO:tasks.workunit.client.0.vm07.stdout:6/860: write d8/d16/d22/d9b/de4/d85/f53 [2020418,84957] 0
2026-03-09T20:48:09.730 INFO:tasks.workunit.client.0.vm07.stdout:5/909: write d5/df/d13/d30/fe3 [1680866,116974] 0
2026-03-09T20:48:09.735 INFO:tasks.workunit.client.1.vm10.stdout:4/763: symlink d1/d67/lf5 0
2026-03-09T20:48:09.736 INFO:tasks.workunit.client.1.vm10.stdout:9/877: symlink d2/d3/d6d/db7/l124 0
2026-03-09T20:48:09.740 INFO:tasks.workunit.client.0.vm07.stdout:0/871: dwrite d1/d2/dc/d80/f87 [0,4194304] 0
2026-03-09T20:48:09.741 INFO:tasks.workunit.client.0.vm07.stdout:3/842: dread d1/d5/d9/d11/d6d/dd0/f55 [0,4194304] 0
2026-03-09T20:48:09.742 INFO:tasks.workunit.client.1.vm10.stdout:7/824: dwrite db/d28/f91 [0,4194304] 0
2026-03-09T20:48:09.754 INFO:tasks.workunit.client.0.vm07.stdout:7/920: write d3/d58/d77/fe1 [4235080,23287] 0
2026-03-09T20:48:09.765 INFO:tasks.workunit.client.0.vm07.stdout:8/791: rename d1/lf to d1/dc/dba/lfc 0
2026-03-09T20:48:09.766 INFO:tasks.workunit.client.0.vm07.stdout:8/792: chown d1/d5d/d6f/d2f/c43 0 1
2026-03-09T20:48:09.778 INFO:tasks.workunit.client.1.vm10.stdout:6/816: fsync d3/d30/d7f/d24/f27 0
2026-03-09T20:48:09.781 INFO:tasks.workunit.client.0.vm07.stdout:5/910: mkdir d5/d33/d39/d8d/dab/d11f/d13a 0
2026-03-09T20:48:09.784 INFO:tasks.workunit.client.1.vm10.stdout:2/801: creat d5/d18/d27/d38/d61/dc8/f108 x:0 0 0
2026-03-09T20:48:09.785 INFO:tasks.workunit.client.1.vm10.stdout:2/802: read d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/fe3 [1309548,55167] 0
2026-03-09T20:48:09.790 INFO:tasks.workunit.client.0.vm07.stdout:3/843: truncate d1/d5/d9/d11/d1f/f7f 3889774 0
2026-03-09T20:48:09.791 INFO:tasks.workunit.client.0.vm07.stdout:3/844: readlink d1/d5/d9/d11/d6d/d80/db3/d109/l6b 0
2026-03-09T20:48:09.796 INFO:tasks.workunit.client.0.vm07.stdout:1/886: rename d3/d9c/fcd to d3/d97/da1/dc5/d90/de8/dc0/f11e 0
2026-03-09T20:48:09.799 INFO:tasks.workunit.client.0.vm07.stdout:8/793: truncate d1/dc/d16/d26/f2d 4367142 0
2026-03-09T20:48:09.821 INFO:tasks.workunit.client.0.vm07.stdout:0/872: unlink d1/f3d 0
2026-03-09T20:48:09.843 INFO:tasks.workunit.client.1.vm10.stdout:3/782: creat dc/d14/d26/d29/f10f x:0 0 0
2026-03-09T20:48:09.844 INFO:tasks.workunit.client.0.vm07.stdout:9/830: getdents d4/d16/d29/d24/d37/d8d/dcc 0
2026-03-09T20:48:09.844 INFO:tasks.workunit.client.0.vm07.stdout:8/794: unlink d1/d8f/cf0 0
2026-03-09T20:48:09.845 INFO:tasks.workunit.client.0.vm07.stdout:7/921: rename d3/da/db/d32/d3e/dac/f2a to d3/f138 0
2026-03-09T20:48:09.846 INFO:tasks.workunit.client.1.vm10.stdout:7/825: rename db/d28/d30/c4e to db/d21/d26/c102 0
2026-03-09T20:48:09.849 INFO:tasks.workunit.client.0.vm07.stdout:8/795: mknod d1/d5d/d6f/cfd 0
2026-03-09T20:48:09.850 INFO:tasks.workunit.client.0.vm07.stdout:5/911: rmdir d5/df/d13/d30/d56/d120 0
2026-03-09T20:48:09.854 INFO:tasks.workunit.client.0.vm07.stdout:8/796: mkdir d1/d5d/d6f/d2f/d4d/dfe 0
2026-03-09T20:48:09.854 INFO:tasks.workunit.client.1.vm10.stdout:6/817: creat d3/da/d11/d89/db9/dd1/ff9 x:0 0 0
2026-03-09T20:48:09.855 INFO:tasks.workunit.client.1.vm10.stdout:4/764: dwrite d1/d2/f2d [0,4194304] 0
2026-03-09T20:48:09.856 INFO:tasks.workunit.client.0.vm07.stdout:5/912: truncate d5/df/d13/d3e/d5e/fd5 1540508 0
2026-03-09T20:48:09.856 INFO:tasks.workunit.client.0.vm07.stdout:8/797: creat d1/d5d/d6f/d80/fff x:0 0 0
2026-03-09T20:48:09.856 INFO:tasks.workunit.client.1.vm10.stdout:5/761: rmdir d2/d58/dcf 0
2026-03-09T20:48:09.857 INFO:tasks.workunit.client.1.vm10.stdout:3/783: truncate dc/f11 3903564 0
2026-03-09T20:48:09.858 INFO:tasks.workunit.client.1.vm10.stdout:3/784: truncate dc/d14/f102 521643 0
2026-03-09T20:48:09.879 INFO:tasks.workunit.client.1.vm10.stdout:3/785: chown dc/d14/d26/d29/d2a/d76/fc4 14 1
2026-03-09T20:48:09.879 INFO:tasks.workunit.client.0.vm07.stdout:5/913: fsync d5/df/d13/d6c/fde 0
2026-03-09T20:48:09.879 INFO:tasks.workunit.client.0.vm07.stdout:8/798: stat d1/d5d/d6f/d2f/d4d/dd4/dd9/dee 0
2026-03-09T20:48:09.881 INFO:tasks.workunit.client.0.vm07.stdout:8/799: getdents d1/dc/d16/dad/d87/d93 0
2026-03-09T20:48:09.882 INFO:tasks.workunit.client.0.vm07.stdout:7/922: dread d3/da/db/d32/d3e/d5c/f64 [0,4194304] 0
2026-03-09T20:48:09.890 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:09 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:48:09.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:09 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:48:09.890 INFO:tasks.workunit.client.1.vm10.stdout:3/786: creat dc/d14/d26/d29/d40/da2/f110 x:0 0 0
2026-03-09T20:48:09.890 INFO:tasks.workunit.client.1.vm10.stdout:6/818: link d3/da/d11/d31/ccc d3/da/cfa 0
2026-03-09T20:48:09.891 INFO:tasks.workunit.client.1.vm10.stdout:2/803: link d5/d18/d27/d89/db6/d41/d77/db3/db5/ffd d5/d18/d27/da6/f109 0
2026-03-09T20:48:09.893 INFO:tasks.workunit.client.1.vm10.stdout:8/860: link d0/d22/d25/d2e/d41/d85/fcd d0/d22/d25/d2e/d41/de9/dfc/f116 0
2026-03-09T20:48:09.894 INFO:tasks.workunit.client.1.vm10.stdout:3/787: getdents dc/d14/d26/d29/d40/da8/dde 0
2026-03-09T20:48:09.895 INFO:tasks.workunit.client.1.vm10.stdout:8/861: write d0/f115 [596257,733] 0
2026-03-09T20:48:09.897 INFO:tasks.workunit.client.1.vm10.stdout:2/804: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/f10a x:0 0 0
2026-03-09T20:48:09.898 INFO:tasks.workunit.client.0.vm07.stdout:7/923: dwrite d3/da/db/fe8 [0,4194304] 0
2026-03-09T20:48:09.899 INFO:tasks.workunit.client.1.vm10.stdout:7/826: dread db/d21/fbc [0,4194304] 0
2026-03-09T20:48:09.903 INFO:tasks.workunit.client.1.vm10.stdout:3/788: creat dc/d14/d26/d8f/ddd/d10d/f111 x:0 0 0
2026-03-09T20:48:09.903 INFO:tasks.workunit.client.1.vm10.stdout:8/862: getdents d0/d54/d114 0
2026-03-09T20:48:09.905 INFO:tasks.workunit.client.1.vm10.stdout:3/789: chown dc/d14/d22/ff0 155538946 1
2026-03-09T20:48:09.909 INFO:tasks.workunit.client.0.vm07.stdout:7/924: chown d3/da/db/d32 19391223 1
2026-03-09T20:48:09.916 INFO:tasks.workunit.client.1.vm10.stdout:7/827: fsync db/d28/d2b/d36/d3b/dd5/f100 0
2026-03-09T20:48:09.918 INFO:tasks.workunit.client.0.vm07.stdout:7/925: rename d3/da/db/d32/d3e/c70 to d3/da4/df2/c139 0
2026-03-09T20:48:09.921 INFO:tasks.workunit.client.1.vm10.stdout:2/805: fsync d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f60 0
2026-03-09T20:48:09.921 INFO:tasks.workunit.client.1.vm10.stdout:3/790: mknod dc/d14/d26/d29/d2a/d76/c112 0
2026-03-09T20:48:09.922 INFO:tasks.workunit.client.0.vm07.stdout:7/926: creat d3/da/db/d32/d3e/dac/d43/d62/de0/d125/f13a x:0 0 0
2026-03-09T20:48:09.927 INFO:tasks.workunit.client.0.vm07.stdout:7/927: creat d3/da/db/d32/d3e/dac/f13b x:0 0 0
2026-03-09T20:48:09.929 INFO:tasks.workunit.client.0.vm07.stdout:9/831: sync
2026-03-09T20:48:09.929 INFO:tasks.workunit.client.1.vm10.stdout:3/791: mkdir dc/d14/d20/d21/daf/d113 0
2026-03-09T20:48:09.930 INFO:tasks.workunit.client.1.vm10.stdout:7/828: dwrite db/d28/d2b/d36/d3b/fe5 [0,4194304] 0
2026-03-09T20:48:09.934 INFO:tasks.workunit.client.0.vm07.stdout:9/832: dwrite d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d89/da7/ddd/f113 [0,4194304] 0
2026-03-09T20:48:09.936 INFO:tasks.workunit.client.0.vm07.stdout:9/833: chown d4/d16/d29/d24/d7c/l106 48 1
2026-03-09T20:48:09.937 INFO:tasks.workunit.client.0.vm07.stdout:9/834: chown d4/d16/d29/d9c 405175581 1
2026-03-09T20:48:09.944 INFO:tasks.workunit.client.0.vm07.stdout:9/835: unlink d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/fc2 0
2026-03-09T20:48:09.947 INFO:tasks.workunit.client.1.vm10.stdout:6/819: sync
2026-03-09T20:48:09.947 INFO:tasks.workunit.client.1.vm10.stdout:7/829: mkdir db/d28/d2b/d36/d3b/d88/dbd/d103 0
2026-03-09T20:48:09.960 INFO:tasks.workunit.client.0.vm07.stdout:4/781: write d2/df/d17/f80 [328644,13929] 0
2026-03-09T20:48:09.963 INFO:tasks.workunit.client.0.vm07.stdout:9/836: unlink d4/f5 0
2026-03-09T20:48:09.965 INFO:tasks.workunit.client.1.vm10.stdout:0/790: dwrite d2/d9/db8/d10f/d11/dd1/d34/fc5 [0,4194304] 0
2026-03-09T20:48:09.967 INFO:tasks.workunit.client.1.vm10.stdout:7/830: symlink db/d28/d2b/d36/d63/d6d/l104 0
2026-03-09T20:48:09.973 INFO:tasks.workunit.client.1.vm10.stdout:3/792: creat dc/d14/d26/d29/d40/da8/f114 x:0 0 0
2026-03-09T20:48:09.973 INFO:tasks.workunit.client.0.vm07.stdout:9/837: mknod d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/c12c 0
2026-03-09T20:48:09.973 INFO:tasks.workunit.client.0.vm07.stdout:4/782: link d2/d55/d5d/d3f/f9f d2/d55/d5d/d3f/d4a/d85/fd4 0
2026-03-09T20:48:09.973 INFO:tasks.workunit.client.1.vm10.stdout:0/791: creat d2/d9/db8/d10f/d11/dd1/d34/f116 x:0 0 0
2026-03-09T20:48:09.980 INFO:tasks.workunit.client.1.vm10.stdout:7/831: mknod db/d28/d2b/d36/d63/c105 0
2026-03-09T20:48:09.981 INFO:tasks.workunit.client.1.vm10.stdout:7/832: write db/d28/f41 [502915,53861] 0
2026-03-09T20:48:09.983 INFO:tasks.workunit.client.1.vm10.stdout:6/820: rename d3/d30/d7f/d51/ca0 to d3/da/d11/d89/db9/dd1/dd2/da9/cfb 0
2026-03-09T20:48:10.007 INFO:tasks.workunit.client.0.vm07.stdout:5/914: dread d5/df/d13/d6c/f79 [0,4194304] 0
2026-03-09T20:48:10.009 INFO:tasks.workunit.client.1.vm10.stdout:3/793: rmdir dc/d14/d26/d29/d2a/d55 39
2026-03-09T20:48:10.012 INFO:tasks.workunit.client.1.vm10.stdout:0/792: mkdir d2/d4a/d58/d82/d71/dca/d117 0
2026-03-09T20:48:10.016 INFO:tasks.workunit.client.1.vm10.stdout:1/805: write d2/da/f11 [2459849,45199] 0
2026-03-09T20:48:10.016 INFO:tasks.workunit.client.0.vm07.stdout:9/838: creat d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/d54/d11e/f12d x:0 0 0
2026-03-09T20:48:10.016 INFO:tasks.workunit.client.1.vm10.stdout:7/833: creat db/d21/d26/f106 x:0 0 0
2026-03-09T20:48:10.025 INFO:tasks.workunit.client.0.vm07.stdout:6/861: write d8/d16/da3/f9f [280647,36277] 0
2026-03-09T20:48:10.030 INFO:tasks.workunit.client.0.vm07.stdout:2/858: dwrite d2/db/d28/d90/dd6/f109 [0,4194304] 0
2026-03-09T20:48:10.041 INFO:tasks.workunit.client.0.vm07.stdout:0/873: dwrite d1/d2/f47 [0,4194304] 0
2026-03-09T20:48:10.041 INFO:tasks.workunit.client.0.vm07.stdout:0/874: stat d1/d2/dc/d80/fbe 0
2026-03-09T20:48:10.049 INFO:tasks.workunit.client.1.vm10.stdout:3/794: creat dc/d14/d26/d8f/f115 x:0 0 0
2026-03-09T20:48:10.051 INFO:tasks.workunit.client.0.vm07.stdout:1/887: dwrite d3/d14/d54/f32 [0,4194304] 0
2026-03-09T20:48:10.055 INFO:tasks.workunit.client.1.vm10.stdout:0/793: symlink d2/d9/db8/d10f/d48/l118 0
2026-03-09T20:48:10.056 INFO:tasks.workunit.client.1.vm10.stdout:9/878: dwrite d2/db8/ff8 [0,4194304] 0
2026-03-09T20:48:10.060 INFO:tasks.workunit.client.1.vm10.stdout:0/794: write d2/f99 [3032632,27893] 0
2026-03-09T20:48:10.063 INFO:tasks.workunit.client.1.vm10.stdout:0/795: dread - d2/d4a/d58/d82/d71/fe7 zero size
2026-03-09T20:48:10.068 INFO:tasks.workunit.client.0.vm07.stdout:5/915: mkdir d5/d33/d13b 0
2026-03-09T20:48:10.072 INFO:tasks.workunit.client.1.vm10.stdout:7/834: symlink db/d28/d2b/d36/d63/d6d/dc4/dfe/l107 0
2026-03-09T20:48:10.073 INFO:tasks.workunit.client.0.vm07.stdout:6/862: creat d8/d16/d4b/d88/dc3/dd5/f117 x:0 0 0
2026-03-09T20:48:10.073 INFO:tasks.workunit.client.1.vm10.stdout:7/835: read db/d21/d26/f2f [1347649,130185] 0
2026-03-09T20:48:10.084 INFO:tasks.workunit.client.1.vm10.stdout:9/879: truncate d2/d33/d37/f66 3217380 0
2026-03-09T20:48:10.084 INFO:tasks.workunit.client.0.vm07.stdout:2/859: unlink d2/db/d28/d90/dd6/f109 0
2026-03-09T20:48:10.094 INFO:tasks.workunit.client.0.vm07.stdout:0/875: mknod d1/d1f/d9f/df8/c111 0
2026-03-09T20:48:10.099 INFO:tasks.workunit.client.1.vm10.stdout:4/765: dwrite d1/d2/f2e [0,4194304] 0
2026-03-09T20:48:10.124 INFO:tasks.workunit.client.1.vm10.stdout:5/762: write d2/d27/d37/fa3 [4883600,113345] 0
2026-03-09T20:48:10.129 INFO:tasks.workunit.client.1.vm10.stdout:7/836: symlink db/d21/d26/d72/l108 0
2026-03-09T20:48:10.129 INFO:tasks.workunit.client.1.vm10.stdout:8/863: write d0/d22/d25/d2e/d41/d85/db9/fdd [674401,34043] 0
2026-03-09T20:48:10.130 INFO:tasks.workunit.client.0.vm07.stdout:8/800: dwrite d1/d5d/f7a [0,4194304] 0
2026-03-09T20:48:10.130 INFO:tasks.workunit.client.0.vm07.stdout:1/888: truncate d3/d14/d54/f4b 2829337 0
2026-03-09T20:48:10.130 INFO:tasks.workunit.client.1.vm10.stdout:7/837: write db/d46/f5a [1052549,46042] 0
2026-03-09T20:48:10.132 INFO:tasks.workunit.client.1.vm10.stdout:8/864: write d0/d22/d25/d6c/f68 [478626,78677] 0
2026-03-09T20:48:10.135 INFO:tasks.workunit.client.1.vm10.stdout:9/880: mknod d2/d28/d47/d67/c125 0
2026-03-09T20:48:10.143 INFO:tasks.workunit.client.1.vm10.stdout:2/806: dwrite d5/d5b/fb7 [0,4194304] 0
2026-03-09T20:48:10.160 INFO:tasks.workunit.client.0.vm07.stdout:7/928: dwrite d3/d58/dc1/fc8 [0,4194304] 0
2026-03-09T20:48:10.163 INFO:tasks.workunit.client.0.vm07.stdout:7/929: chown d3/d58/d77/c7e 1009589 1
2026-03-09T20:48:10.175 INFO:tasks.workunit.client.0.vm07.stdout:6/863: read d8/d16/d22/d24/da0/dab/dc1/f110 [553461,15497] 0
2026-03-09T20:48:10.175 INFO:tasks.workunit.client.0.vm07.stdout:2/860: readlink d2/d11/ddb/d6e/dbe/l10d 0
2026-03-09T20:48:10.181 INFO:tasks.workunit.client.1.vm10.stdout:5/763: mknod d2/d39/dbf/d69/de9/dfa/c11e 0
2026-03-09T20:48:10.182 INFO:tasks.workunit.client.0.vm07.stdout:0/876: mknod d1/d2/d33/d35/ddb/c112 0
2026-03-09T20:48:10.192 INFO:tasks.workunit.client.0.vm07.stdout:4/783: write d2/df/d17/f1b [1777230,87644] 0
2026-03-09T20:48:10.205 INFO:tasks.workunit.client.0.vm07.stdout:9/839: write d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d15/f18 [4482088,19258] 0
2026-03-09T20:48:10.208 INFO:tasks.workunit.client.0.vm07.stdout:3/845: dwrite d1/d5/d9/d11/d1f/f7f [0,4194304] 0
2026-03-09T20:48:10.227 INFO:tasks.workunit.client.1.vm10.stdout:7/838: truncate db/d28/d2b/d36/d63/d6d/fe8 143497 0
2026-03-09T20:48:10.228 INFO:tasks.workunit.client.1.vm10.stdout:7/839: write db/d21/d26/f106 [701740,92500] 0
2026-03-09T20:48:10.232 INFO:tasks.workunit.client.1.vm10.stdout:2/807: rename d5/d18/d1b/f70 to d5/d18/d27/d89/db6/d41/f10b 0
2026-03-09T20:48:10.241 INFO:tasks.workunit.client.1.vm10.stdout:4/766: symlink d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/lf6 0
2026-03-09T20:48:10.242 INFO:tasks.workunit.client.1.vm10.stdout:2/808: write d5/d5b/fb7 [452961,75603] 0
2026-03-09T20:48:10.246 INFO:tasks.workunit.client.1.vm10.stdout:1/806: getdents d2/da/d25/d46/d80/da0/d92/db5 0
2026-03-09T20:48:10.252 INFO:tasks.workunit.client.1.vm10.stdout:5/764: creat d2/d39/d4b/d7a/dd9/f11f x:0 0 0
2026-03-09T20:48:10.254 INFO:tasks.workunit.client.1.vm10.stdout:3/795: getdents dc/d14/d20/d21/d3b 0
2026-03-09T20:48:10.254 INFO:tasks.workunit.client.1.vm10.stdout:8/865: creat d0/d54/ded/f117 x:0 0 0
2026-03-09T20:48:10.263 INFO:tasks.workunit.client.1.vm10.stdout:0/796: getdents d2/d9/db8/d10f/d11/dd1/d34/dee 0
2026-03-09T20:48:10.265 INFO:tasks.workunit.client.1.vm10.stdout:0/797: read - d2/d9/d2a/fdc zero size
2026-03-09T20:48:10.286 INFO:tasks.workunit.client.1.vm10.stdout:3/796: dread - dc/d14/d26/d29/fd2 zero size
2026-03-09T20:48:10.300 INFO:tasks.workunit.client.1.vm10.stdout:2/809: dread d5/d18/d1b/d22/f6d [0,4194304] 0
2026-03-09T20:48:10.300 INFO:tasks.workunit.client.1.vm10.stdout:5/765: symlink d2/d39/d4b/de0/d105/l120 0
2026-03-09T20:48:10.303 INFO:tasks.workunit.client.1.vm10.stdout:2/810: stat d5/d18/d27/f8c 0
2026-03-09T20:48:10.304 INFO:tasks.workunit.client.1.vm10.stdout:2/811: fdatasync d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/f10a 0
2026-03-09T20:48:10.307 INFO:tasks.workunit.client.1.vm10.stdout:3/797: sync
2026-03-09T20:48:10.307 INFO:tasks.workunit.client.1.vm10.stdout:3/798: read - dc/db4/fca zero size
2026-03-09T20:48:10.307 INFO:tasks.workunit.client.1.vm10.stdout:2/812: chown d5/d18/d27/da6/fac 122 1
2026-03-09T20:48:10.312 INFO:tasks.workunit.client.1.vm10.stdout:2/813: truncate d5/d18/d27/d38/d61/dc8/f108 1038730 0
2026-03-09T20:48:10.312 INFO:tasks.workunit.client.1.vm10.stdout:2/814: chown d5/f59 15840670 1
2026-03-09T20:48:10.333 INFO:tasks.workunit.client.1.vm10.stdout:3/799: creat dc/d14/d27/f116 x:0 0 0
2026-03-09T20:48:10.333 INFO:tasks.workunit.client.0.vm07.stdout:5/916: dwrite d5/ff0 [0,4194304] 0
2026-03-09T20:48:10.347 INFO:tasks.workunit.client.0.vm07.stdout:2/861: creat d2/db/df6/f117 x:0 0 0
2026-03-09T20:48:10.348 INFO:tasks.workunit.client.1.vm10.stdout:1/807: getdents d2/d89 0
2026-03-09T20:48:10.348 INFO:tasks.workunit.client.1.vm10.stdout:0/798: getdents d2/d9/db8/d10f/d48/dac 0
2026-03-09T20:48:10.350 INFO:tasks.workunit.client.1.vm10.stdout:6/821: dread d3/f21 [0,4194304] 0
2026-03-09T20:48:10.350 INFO:tasks.workunit.client.1.vm10.stdout:0/799: stat d2/d4a/d58/d82/d60/d98 0
2026-03-09T20:48:10.354 INFO:tasks.workunit.client.0.vm07.stdout:0/877: fsync d1/d2/d33/f7e 0
2026-03-09T20:48:10.361 INFO:tasks.workunit.client.1.vm10.stdout:2/815: rename d5/d18/d9f/lc0 to d5/d18/d27/d5f/l10c 0
2026-03-09T20:48:10.364 INFO:tasks.workunit.client.1.vm10.stdout:1/808: stat d2/da/dbc/dea 0
2026-03-09T20:48:10.365 INFO:tasks.workunit.client.0.vm07.stdout:8/801: truncate d1/dc/d16/d26/d94/fe4 138040 0
2026-03-09T20:48:10.368 INFO:tasks.workunit.client.0.vm07.stdout:9/840: creat d4/d16/d29/d24/d37/d44/f12e x:0 0 0
2026-03-09T20:48:10.376 INFO:tasks.workunit.client.0.vm07.stdout:3/846: fdatasync d1/d5/d9/d11/d6d/dd0/f55 0
2026-03-09T20:48:10.384 INFO:tasks.workunit.client.1.vm10.stdout:6/822: dread f1 [0,4194304] 0
2026-03-09T20:48:10.388 INFO:tasks.workunit.client.0.vm07.stdout:1/889: write d3/d9c/f105 [355840,26020] 0
2026-03-09T20:48:10.390 INFO:tasks.workunit.client.0.vm07.stdout:1/890: chown d3/f82 3 1
2026-03-09T20:48:10.399 INFO:tasks.workunit.client.1.vm10.stdout:1/809: creat d2/da/d25/d46/d51/d5d/d6e/d70/f102 x:0 0 0
2026-03-09T20:48:10.400 INFO:tasks.workunit.client.1.vm10.stdout:1/810: chown d2/da/d25/d46/d51/c5b 1837050 1
2026-03-09T20:48:10.404 INFO:tasks.workunit.client.1.vm10.stdout:6/823: rmdir d3/d30/d7f/d24/d39/d9e 39
2026-03-09T20:48:10.415 INFO:tasks.workunit.client.1.vm10.stdout:9/881: write d2/d3/f2f [174674,99010] 0
2026-03-09T20:48:10.415 INFO:tasks.workunit.client.1.vm10.stdout:9/882: dread - d2/d3/de/d35/f106 zero size
2026-03-09T20:48:10.416 INFO:tasks.workunit.client.1.vm10.stdout:9/883: chown d2/db8/l87 61140834 1
2026-03-09T20:48:10.421 INFO:tasks.workunit.client.1.vm10.stdout:7/840: write db/d28/d2b/d36/d3f/f7b [91195,111623] 0
2026-03-09T20:48:10.422 INFO:tasks.workunit.client.0.vm07.stdout:6/864: creat d8/d16/d22/d24/da0/dab/d40/d105/f118 x:0 0 0
2026-03-09T20:48:10.423 INFO:tasks.workunit.client.1.vm10.stdout:4/767: dwrite d1/d2/d5c/d64/d6b/d81/dac/f29 [0,4194304] 0
2026-03-09T20:48:10.428 INFO:tasks.workunit.client.1.vm10.stdout:4/768: fsync d1/d47/f4f 0
2026-03-09T20:48:10.443 INFO:tasks.workunit.client.1.vm10.stdout:1/811: read d2/da/d25/d3e/dca/da2/fe1 [453079,74210] 0
2026-03-09T20:48:10.448 INFO:tasks.workunit.client.1.vm10.stdout:8/866: truncate d0/d92/fcc 1552426 0
2026-03-09T20:48:10.452 INFO:tasks.workunit.client.1.vm10.stdout:9/884: mknod d2/d28/d47/d50/dd1/c126 0
2026-03-09T20:48:10.452 INFO:tasks.workunit.client.0.vm07.stdout:9/841: creat d4/d16/d78/f12f x:0 0 0
2026-03-09T20:48:10.452 INFO:tasks.workunit.client.0.vm07.stdout:3/847: mknod d1/d5/d9/d11/d6d/dd0/c114 0
2026-03-09T20:48:10.453 INFO:tasks.workunit.client.0.vm07.stdout:1/891: creat d3/d23/d109/f11f x:0 0 0
2026-03-09T20:48:10.454 INFO:tasks.workunit.client.0.vm07.stdout:3/848: write d1/d5/d9/d11/d1f/f7f [2076638,82984] 0
2026-03-09T20:48:10.455 INFO:tasks.workunit.client.0.vm07.stdout:3/849: write d1/d5/d9/d2f/d66/fd3 [2077350,63127] 0
2026-03-09T20:48:10.464 INFO:tasks.workunit.client.1.vm10.stdout:8/867: mkdir d0/d22/d2f/d118 0
2026-03-09T20:48:10.466 INFO:tasks.workunit.client.1.vm10.stdout:5/766: dwrite f1 [0,4194304] 0
2026-03-09T20:48:10.469 INFO:tasks.workunit.client.0.vm07.stdout:7/930: rename d3/da/d53/db7/dde/d96/l12c to d3/da/db/d79/l13c 0
2026-03-09T20:48:10.474 INFO:tasks.workunit.client.1.vm10.stdout:0/800: dwrite d2/d9/f20 [0,4194304] 0
2026-03-09T20:48:10.474 INFO:tasks.workunit.client.1.vm10.stdout:7/841: mknod db/c109 0
2026-03-09T20:48:10.474 INFO:tasks.workunit.client.1.vm10.stdout:9/885: dread - d2/d3/de/d8f/ffb zero size
2026-03-09T20:48:10.474 INFO:tasks.workunit.client.1.vm10.stdout:6/824: rename d3/d30/d7f/d36/d6d/dbe to d3/da/d11/dfc 0
2026-03-09T20:48:10.474 INFO:tasks.workunit.client.1.vm10.stdout:3/800: dread dc/db4/fe2 [0,4194304] 0
2026-03-09T20:48:10.475 INFO:tasks.workunit.client.0.vm07.stdout:7/931: stat d3/da/db/d32/d3e/d5c/dc2/cc4 0
2026-03-09T20:48:10.478 INFO:tasks.workunit.client.0.vm07.stdout:3/850: read d1/d5/d9/d2f/d34/d46/d5d/fb8 [1931500,86105] 0
2026-03-09T20:48:10.485 INFO:tasks.workunit.client.0.vm07.stdout:6/865: sync
2026-03-09T20:48:10.488 INFO:tasks.workunit.client.1.vm10.stdout:5/767: truncate d2/d39/d4b/d7a/fc0 4405570 0
2026-03-09T20:48:10.488 INFO:tasks.workunit.client.0.vm07.stdout:3/851: unlink d1/d5/d9/d2f/d34/d46/d5d/c56 0
2026-03-09T20:48:10.488 INFO:tasks.workunit.client.1.vm10.stdout:1/812: link d2/f3c d2/da/d25/d46/d51/f103 0
2026-03-09T20:48:10.489 INFO:tasks.workunit.client.1.vm10.stdout:4/769: creat d1/d2/d5c/d64/d6b/d81/dac/d1b/ff7 x:0 0 0
2026-03-09T20:48:10.494 INFO:tasks.workunit.client.0.vm07.stdout:1/892: creat d3/d14/f120 x:0 0 0
2026-03-09T20:48:10.495 INFO:tasks.workunit.client.0.vm07.stdout:1/893: chown d3/d23/l40 699 1
2026-03-09T20:48:10.495 INFO:tasks.workunit.client.0.vm07.stdout:6/866: creat d8/d16/d4b/d88/d99/f119 x:0 0 0
2026-03-09T20:48:10.496 INFO:tasks.workunit.client.0.vm07.stdout:1/894: readlink d3/d97/da1/dc5/d60/l5b 0
2026-03-09T20:48:10.506 INFO:tasks.workunit.client.1.vm10.stdout:7/842: read - db/d28/d30/fe7 zero size
2026-03-09T20:48:10.506 INFO:tasks.workunit.client.1.vm10.stdout:4/770: dwrite d1/d2/d5c/d64/d6b/d81/fca [0,4194304] 0
2026-03-09T20:48:10.506 INFO:tasks.workunit.client.1.vm10.stdout:8/868: symlink d0/d92/de8/d64/l119 0
2026-03-09T20:48:10.507 INFO:tasks.workunit.client.0.vm07.stdout:4/784: rename d2/f3 to d2/d55/d5d/d3f/d4a/fd5 0
2026-03-09T20:48:10.510 INFO:tasks.workunit.client.1.vm10.stdout:0/801: symlink d2/d9/l119 0
2026-03-09T20:48:10.511 INFO:tasks.workunit.client.0.vm07.stdout:9/842: dread d4/d16/d29/f4a [0,4194304] 0
2026-03-09T20:48:10.514 INFO:tasks.workunit.client.1.vm10.stdout:0/802: dread - d2/d9/db8/f100 zero size
2026-03-09T20:48:10.516 INFO:tasks.workunit.client.0.vm07.stdout:1/895: symlink d3/d97/da1/dc5/d60/l121 0
2026-03-09T20:48:10.518 INFO:tasks.workunit.client.1.vm10.stdout:1/813: creat d2/da/d25/d3e/dca/da2/f104 x:0 0 0
2026-03-09T20:48:10.520 INFO:tasks.workunit.client.1.vm10.stdout:9/886: dread d2/d33/f3f [0,4194304] 0
2026-03-09T20:48:10.530 INFO:tasks.workunit.client.0.vm07.stdout:1/896: sync
2026-03-09T20:48:10.540 INFO:tasks.workunit.client.1.vm10.stdout:5/768: rename d2/d39/dbf/d63/d95/le4 to d2/d39/dbf/da9/l121 0
2026-03-09T20:48:10.541 INFO:tasks.workunit.client.0.vm07.stdout:7/932: link d3/da/db/d32/d3e/dac/d1f/d2b/d52/f66 d3/da/d53/db7/dde/d96/d112/f13d 0
2026-03-09T20:48:10.545 INFO:tasks.workunit.client.0.vm07.stdout:6/867: mkdir d8/db3/d114/d11a 0
2026-03-09T20:48:10.554 INFO:tasks.workunit.client.0.vm07.stdout:4/785: chown d2/d55/d5d/d3f/d4a/d7d/c8f 489762 1
2026-03-09T20:48:10.555 INFO:tasks.workunit.client.1.vm10.stdout:3/801: creat dc/d14/d26/d29/d40/da2/dee/f117 x:0 0 0
2026-03-09T20:48:10.555 INFO:tasks.workunit.client.1.vm10.stdout:7/843: symlink db/d28/d2b/d36/d63/l10a 0
2026-03-09T20:48:10.556 INFO:tasks.workunit.client.1.vm10.stdout:8/869: dread - d0/d22/d25/d2e/d41/de9/dfc/f116 zero size
2026-03-09T20:48:10.557 INFO:tasks.workunit.client.1.vm10.stdout:5/769: dread d2/d39/dbf/d63/d95/fd7 [0,4194304] 0
2026-03-09T20:48:10.557 INFO:tasks.workunit.client.0.vm07.stdout:9/843: mknod d4/d16/d29/d24/d37/d44/d62/d108/d121/db9/c130 0
2026-03-09T20:48:10.568 INFO:tasks.workunit.client.0.vm07.stdout:7/933: mkdir d3/da/db/d13e 0
2026-03-09T20:48:10.569 INFO:tasks.workunit.client.0.vm07.stdout:9/844: truncate d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/d54/d11e/f12d 171088 0
2026-03-09T20:48:10.575 INFO:tasks.workunit.client.0.vm07.stdout:6/868: mknod d8/d16/d22/db1/c11b 0
2026-03-09T20:48:10.576 INFO:tasks.workunit.client.1.vm10.stdout:2/816: write d5/d18/f1a [4119237,31140] 0
2026-03-09T20:48:10.577 INFO:tasks.workunit.client.1.vm10.stdout:6/825: dread d3/f52 [0,4194304] 0
2026-03-09T20:48:10.587 INFO:tasks.workunit.client.1.vm10.stdout:8/870: mknod d0/d92/c11a 0
2026-03-09T20:48:10.587 INFO:tasks.workunit.client.1.vm10.stdout:3/802: creat dc/d14/d20/d2e/f118 x:0 0 0
2026-03-09T20:48:10.590 INFO:tasks.workunit.client.0.vm07.stdout:7/934: mkdir d3/da/db/d32/d3e/dac/d1f/d50/d13f 0
2026-03-09T20:48:10.595 INFO:tasks.workunit.client.1.vm10.stdout:4/771: truncate d1/d2/d5c/d64/d6b/d81/f8a 287434 0
2026-03-09T20:48:10.600 INFO:tasks.workunit.client.0.vm07.stdout:6/869: dread d8/d16/d22/ff1 [0,4194304] 0
2026-03-09T20:48:10.604 INFO:tasks.workunit.client.0.vm07.stdout:2/862: rename d2/db/d49/d7d/la0 to d2/d11/d56/l118 0
2026-03-09T20:48:10.610 INFO:tasks.workunit.client.0.vm07.stdout:4/786: creat d2/d55/d5d/d3f/d4a/dbc/fd6 x:0 0 0
2026-03-09T20:48:10.610 INFO:tasks.workunit.client.0.vm07.stdout:9/845: truncate d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/dbc/fe8 466000 0
2026-03-09T20:48:10.617 INFO:tasks.workunit.client.1.vm10.stdout:2/817: truncate d5/d18/d27/d38/d61/dc8/fd1 590217 0
2026-03-09T20:48:10.646 INFO:tasks.workunit.client.1.vm10.stdout:0/803: write
d2/d9/db8/d10f/d48/dac/fc4 [192221,83597] 0 2026-03-09T20:48:10.651 INFO:tasks.workunit.client.0.vm07.stdout:8/802: rename d1/d5d/d6f/d2f/c43 to d1/dc/d16/d26/de2/df5/c100 0 2026-03-09T20:48:10.652 INFO:tasks.workunit.client.0.vm07.stdout:1/897: write d3/d97/da1/dc5/d60/f7c [649605,95709] 0 2026-03-09T20:48:10.658 INFO:tasks.workunit.client.1.vm10.stdout:9/887: dwrite d2/d3/d6d/f96 [0,4194304] 0 2026-03-09T20:48:10.659 INFO:tasks.workunit.client.1.vm10.stdout:1/814: truncate d2/da/d25/d3e/d55/faf 1470155 0 2026-03-09T20:48:10.666 INFO:tasks.workunit.client.1.vm10.stdout:5/770: dwrite d2/d27/d75/d81/fd0 [0,4194304] 0 2026-03-09T20:48:10.667 INFO:tasks.workunit.client.1.vm10.stdout:5/771: chown d2/l7f 128404494 1 2026-03-09T20:48:10.668 INFO:tasks.workunit.client.0.vm07.stdout:7/935: dwrite d3/da/db/d32/d3e/dac/ff4 [0,4194304] 0 2026-03-09T20:48:10.680 INFO:tasks.workunit.client.1.vm10.stdout:5/772: sync 2026-03-09T20:48:10.688 INFO:tasks.workunit.client.1.vm10.stdout:3/803: unlink dc/d14/d20/d21/fd6 0 2026-03-09T20:48:10.688 INFO:tasks.workunit.client.0.vm07.stdout:5/917: rename d5/d19/d73/d9c/d11e to d5/df/d13/d3e/d47/d13c 0 2026-03-09T20:48:10.688 INFO:tasks.workunit.client.1.vm10.stdout:3/804: chown dc/d14/d26/d29/d40/da2/de0 212 1 2026-03-09T20:48:10.691 INFO:tasks.workunit.client.0.vm07.stdout:5/918: read - d5/d19/d73/d9c/d10c/f129 zero size 2026-03-09T20:48:10.719 INFO:tasks.workunit.client.0.vm07.stdout:9/846: fsync d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/dbc/f100 0 2026-03-09T20:48:10.723 INFO:tasks.workunit.client.1.vm10.stdout:7/844: truncate db/d46/f5a 2221193 0 2026-03-09T20:48:10.737 INFO:tasks.workunit.client.1.vm10.stdout:4/772: dwrite d1/d2/d5c/d64/d6b/d81/fc8 [0,4194304] 0 2026-03-09T20:48:10.745 INFO:tasks.workunit.client.0.vm07.stdout:7/936: fdatasync d3/da/db/d32/d3e/dac/d1f/d2b/d52/fc0 0 2026-03-09T20:48:10.761 INFO:tasks.workunit.client.1.vm10.stdout:3/805: creat dc/d14/d26/d29/d40/d8c/d9c/f119 x:0 0 0 2026-03-09T20:48:10.764 
INFO:tasks.workunit.client.0.vm07.stdout:0/878: rename d1/f48 to d1/d2/d98/de8/f113 0 2026-03-09T20:48:10.768 INFO:tasks.workunit.client.1.vm10.stdout:7/845: chown db/d46/f85 77150516 1 2026-03-09T20:48:10.769 INFO:tasks.workunit.client.0.vm07.stdout:2/863: creat d2/db/d49/f119 x:0 0 0 2026-03-09T20:48:10.769 INFO:tasks.workunit.client.0.vm07.stdout:6/870: getdents d8/d16/d22/d24 0 2026-03-09T20:48:10.770 INFO:tasks.workunit.client.0.vm07.stdout:9/847: read - d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/fb3 zero size 2026-03-09T20:48:10.775 INFO:tasks.workunit.client.0.vm07.stdout:3/852: rename d1/d5/d9/d2f/d34/d46/d5d/ff1 to d1/d5/d9/d2f/d3d/d71/d76/f115 0 2026-03-09T20:48:10.780 INFO:tasks.workunit.client.0.vm07.stdout:0/879: rmdir d1/d2/dc/d17/da6/db9 39 2026-03-09T20:48:10.781 INFO:tasks.workunit.client.0.vm07.stdout:9/848: dread d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/d54/d11e/f12d [0,4194304] 0 2026-03-09T20:48:10.786 INFO:tasks.workunit.client.0.vm07.stdout:3/853: dread d1/d5/d9/d2f/d34/d46/f8a [0,4194304] 0 2026-03-09T20:48:10.790 INFO:tasks.workunit.client.0.vm07.stdout:6/871: mknod d8/d16/d22/d24/da0/dab/d40/d105/c11c 0 2026-03-09T20:48:10.796 INFO:tasks.workunit.client.0.vm07.stdout:2/864: dwrite d2/db/d28/d57/f75 [0,4194304] 0 2026-03-09T20:48:10.799 INFO:tasks.workunit.client.0.vm07.stdout:2/865: readlink d2/d11/ddb/d6e/lc5 0 2026-03-09T20:48:10.807 INFO:tasks.workunit.client.0.vm07.stdout:7/937: rename d3/da4/l111 to d3/da/db/d32/d3e/d5c/dc2/l140 0 2026-03-09T20:48:10.818 INFO:tasks.workunit.client.0.vm07.stdout:9/849: creat d4/d16/d29/d24/d37/d44/d62/d108/f131 x:0 0 0 2026-03-09T20:48:10.825 INFO:tasks.workunit.client.1.vm10.stdout:6/826: write d3/f96 [130703,107295] 0 2026-03-09T20:48:10.828 INFO:tasks.workunit.client.0.vm07.stdout:3/854: dread d1/d5/d9/d2f/d66/dc0/fde [0,4194304] 0 2026-03-09T20:48:10.838 INFO:tasks.workunit.client.0.vm07.stdout:4/787: truncate d2/d55/d5d/d3f/f51 661696 0 2026-03-09T20:48:10.842 
INFO:tasks.workunit.client.0.vm07.stdout:2/866: creat d2/db/d1c/d8d/f11a x:0 0 0 2026-03-09T20:48:10.851 INFO:tasks.workunit.client.1.vm10.stdout:2/818: write d5/d18/d27/d89/db6/d41/d77/db3/db5/fc9 [3124266,26234] 0 2026-03-09T20:48:10.852 INFO:tasks.workunit.client.0.vm07.stdout:8/803: write d1/dc/f75 [673839,66626] 0 2026-03-09T20:48:10.855 INFO:tasks.workunit.client.1.vm10.stdout:0/804: dwrite d2/d4a/d58/d82/d71/fb2 [0,4194304] 0 2026-03-09T20:48:10.862 INFO:tasks.workunit.client.0.vm07.stdout:5/919: dwrite d5/df/d13/d3e/d5e/f7c [4194304,4194304] 0 2026-03-09T20:48:10.870 INFO:tasks.workunit.client.0.vm07.stdout:1/898: dwrite d3/d14/d54/fea [0,4194304] 0 2026-03-09T20:48:10.875 INFO:tasks.workunit.client.0.vm07.stdout:4/788: fsync d2/d55/d5d/d3f/fa3 0 2026-03-09T20:48:10.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:10 vm07.local ceph-mon[49120]: pgmap v11: 65 pgs: 65 active+clean; 3.3 GiB data, 11 GiB used, 109 GiB / 120 GiB avail; 30 MiB/s rd, 70 MiB/s wr, 208 op/s 2026-03-09T20:48:10.924 INFO:tasks.workunit.client.1.vm10.stdout:1/815: fdatasync d2/da/fb1 0 2026-03-09T20:48:10.925 INFO:tasks.workunit.client.0.vm07.stdout:5/920: mknod d5/df/d13/d6c/db1/dcc/c13d 0 2026-03-09T20:48:10.930 INFO:tasks.workunit.client.0.vm07.stdout:1/899: fdatasync d3/d14/f4d 0 2026-03-09T20:48:10.932 INFO:tasks.workunit.client.0.vm07.stdout:1/900: chown d3/d23/d109 812160 1 2026-03-09T20:48:10.938 INFO:tasks.workunit.client.0.vm07.stdout:8/804: rename d1/dc/c5e to d1/dc/c101 0 2026-03-09T20:48:10.939 INFO:tasks.workunit.client.0.vm07.stdout:1/901: chown d3/d23/cef 1 1 2026-03-09T20:48:10.939 INFO:tasks.workunit.client.1.vm10.stdout:8/871: getdents d0/d22/d25/d6c 0 2026-03-09T20:48:10.940 INFO:tasks.workunit.client.1.vm10.stdout:8/872: readlink d0/d92/de8/l69 0 2026-03-09T20:48:10.942 INFO:tasks.workunit.client.0.vm07.stdout:8/805: mknod d1/dc/d6a/c102 0 2026-03-09T20:48:10.956 INFO:tasks.workunit.client.1.vm10.stdout:7/846: symlink db/d28/d2b/d36/d40/d8a/dd4/l10b 0 
2026-03-09T20:48:10.956 INFO:tasks.workunit.client.1.vm10.stdout:3/806: creat dc/d14/d26/d29/d40/da8/f11a x:0 0 0 2026-03-09T20:48:10.956 INFO:tasks.workunit.client.1.vm10.stdout:1/816: rmdir d2/da/d25/d46/d51/d5d/d6e/d70/db3/dd4/df1/df8 0 2026-03-09T20:48:10.956 INFO:tasks.workunit.client.0.vm07.stdout:1/902: write d3/d14/d54/f32 [4350032,58357] 0 2026-03-09T20:48:10.956 INFO:tasks.workunit.client.0.vm07.stdout:8/806: chown d1/dc/d6a/c102 74819 1 2026-03-09T20:48:10.956 INFO:tasks.workunit.client.0.vm07.stdout:8/807: chown d1/db0/fe0 389441 1 2026-03-09T20:48:10.956 INFO:tasks.workunit.client.0.vm07.stdout:1/903: rmdir d3/d23/d52 39 2026-03-09T20:48:10.956 INFO:tasks.workunit.client.0.vm07.stdout:1/904: chown d3/d14/d54/d3e/f75 5096 1 2026-03-09T20:48:10.956 INFO:tasks.workunit.client.0.vm07.stdout:8/808: rename d1/d5d/d6f/d2f/lbb to d1/d5d/d6f/d80/l103 0 2026-03-09T20:48:10.957 INFO:tasks.workunit.client.1.vm10.stdout:6/827: getdents d3/d30/d7f/d24/d39/d9e 0 2026-03-09T20:48:10.957 INFO:tasks.workunit.client.0.vm07.stdout:1/905: creat d3/d14/d54/d9b/f122 x:0 0 0 2026-03-09T20:48:10.961 INFO:tasks.workunit.client.0.vm07.stdout:1/906: symlink d3/d9c/l123 0 2026-03-09T20:48:10.961 INFO:tasks.workunit.client.0.vm07.stdout:8/809: creat d1/f104 x:0 0 0 2026-03-09T20:48:10.962 INFO:tasks.workunit.client.0.vm07.stdout:1/907: chown d3/d97/c9a 1434584380 1 2026-03-09T20:48:10.963 INFO:tasks.workunit.client.0.vm07.stdout:1/908: fdatasync d3/d9c/f105 0 2026-03-09T20:48:10.963 INFO:tasks.workunit.client.0.vm07.stdout:5/921: sync 2026-03-09T20:48:10.964 INFO:tasks.workunit.client.0.vm07.stdout:1/909: chown d3/d9c/f105 455996512 1 2026-03-09T20:48:10.974 INFO:tasks.workunit.client.1.vm10.stdout:6/828: creat d3/d30/d6a/df5/ffd x:0 0 0 2026-03-09T20:48:10.979 INFO:tasks.workunit.client.1.vm10.stdout:2/819: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/f69 [0,4194304] 0 2026-03-09T20:48:10.983 INFO:tasks.workunit.client.0.vm07.stdout:8/810: creat d1/dc/d16/d26/de2/f105 x:0 0 0 
2026-03-09T20:48:10.983 INFO:tasks.workunit.client.1.vm10.stdout:6/829: creat d3/da/d11/d89/db9/dd1/dd2/ffe x:0 0 0 2026-03-09T20:48:10.983 INFO:tasks.workunit.client.1.vm10.stdout:6/830: chown d3/d30/d6a/df5 5740 1 2026-03-09T20:48:10.987 INFO:tasks.workunit.client.1.vm10.stdout:2/820: read d5/d18/d27/d38/d61/dc8/ddb/dea/fef [9189,5797] 0 2026-03-09T20:48:10.995 INFO:tasks.workunit.client.1.vm10.stdout:2/821: read d5/d18/d27/d89/db6/d41/d77/db3/db5/fc9 [992199,11515] 0 2026-03-09T20:48:10.996 INFO:tasks.workunit.client.0.vm07.stdout:4/789: dread d2/d55/d5d/d3f/d4a/f5e [0,4194304] 0 2026-03-09T20:48:10.996 INFO:tasks.workunit.client.0.vm07.stdout:1/910: fsync d3/d23/d67/fc4 0 2026-03-09T20:48:11.001 INFO:tasks.workunit.client.1.vm10.stdout:6/831: stat d3/da/d11/d26/d5b/l80 0 2026-03-09T20:48:11.014 INFO:tasks.workunit.client.1.vm10.stdout:6/832: sync 2026-03-09T20:48:11.015 INFO:tasks.workunit.client.0.vm07.stdout:7/938: write d3/da/f38 [110515,126124] 0 2026-03-09T20:48:11.016 INFO:tasks.workunit.client.0.vm07.stdout:9/850: write d4/d16/d29/d24/f8c [2171541,62677] 0 2026-03-09T20:48:11.018 INFO:tasks.workunit.client.0.vm07.stdout:6/872: dwrite d8/db3/fd4 [0,4194304] 0 2026-03-09T20:48:11.022 INFO:tasks.workunit.client.1.vm10.stdout:9/888: truncate d2/d3/f6c 42898 0 2026-03-09T20:48:11.026 INFO:tasks.workunit.client.0.vm07.stdout:3/855: dwrite d1/d5/d9/fa1 [0,4194304] 0 2026-03-09T20:48:11.028 INFO:tasks.workunit.client.0.vm07.stdout:0/880: write d1/d1f/d30/f8e [808413,90740] 0 2026-03-09T20:48:11.032 INFO:tasks.workunit.client.1.vm10.stdout:5/773: dwrite d2/d39/d4b/d7a/fc0 [0,4194304] 0 2026-03-09T20:48:11.035 INFO:tasks.workunit.client.0.vm07.stdout:2/867: dwrite d2/db/d28/d57/f68 [4194304,4194304] 0 2026-03-09T20:48:11.039 INFO:tasks.workunit.client.0.vm07.stdout:4/790: creat d2/d1f/fd7 x:0 0 0 2026-03-09T20:48:11.040 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:10 vm10.local ceph-mon[57011]: pgmap v11: 65 pgs: 65 active+clean; 3.3 GiB data, 11 GiB 
used, 109 GiB / 120 GiB avail; 30 MiB/s rd, 70 MiB/s wr, 208 op/s 2026-03-09T20:48:11.042 INFO:tasks.workunit.client.1.vm10.stdout:0/805: write d2/d9/db8/d10f/d11/f15 [2767687,59377] 0 2026-03-09T20:48:11.047 INFO:tasks.workunit.client.1.vm10.stdout:6/833: chown d3/da/d11/d31/lab 4 1 2026-03-09T20:48:11.050 INFO:tasks.workunit.client.0.vm07.stdout:9/851: dwrite d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/f11d [4194304,4194304] 0 2026-03-09T20:48:11.062 INFO:tasks.workunit.client.0.vm07.stdout:6/873: symlink d8/d16/da3/l11d 0 2026-03-09T20:48:11.062 INFO:tasks.workunit.client.0.vm07.stdout:1/911: dread d3/d97/da1/fbb [0,4194304] 0 2026-03-09T20:48:11.062 INFO:tasks.workunit.client.1.vm10.stdout:1/817: write d2/f59 [2552756,44323] 0 2026-03-09T20:48:11.062 INFO:tasks.workunit.client.1.vm10.stdout:3/807: stat dc/d14/d26/d8f/fb8 0 2026-03-09T20:48:11.062 INFO:tasks.workunit.client.1.vm10.stdout:8/873: dwrite d0/d22/d25/d2e/f79 [0,4194304] 0 2026-03-09T20:48:11.062 INFO:tasks.workunit.client.0.vm07.stdout:5/922: write d5/df/d13/d6c/fde [623035,73252] 0 2026-03-09T20:48:11.064 INFO:tasks.workunit.client.1.vm10.stdout:7/847: dwrite db/d28/d2b/d36/d63/d84/fc1 [0,4194304] 0 2026-03-09T20:48:11.066 INFO:tasks.workunit.client.1.vm10.stdout:0/806: symlink d2/d4a/d58/d82/d71/dca/dfe/l11a 0 2026-03-09T20:48:11.074 INFO:tasks.workunit.client.0.vm07.stdout:4/791: sync 2026-03-09T20:48:11.074 INFO:tasks.workunit.client.0.vm07.stdout:4/792: chown d2/d55/f71 3 1 2026-03-09T20:48:11.075 INFO:tasks.workunit.client.1.vm10.stdout:6/834: sync 2026-03-09T20:48:11.078 INFO:tasks.workunit.client.0.vm07.stdout:2/868: rename d2/db/d1c/l5f to d2/db/d28/d90/dd6/l11b 0 2026-03-09T20:48:11.078 INFO:tasks.workunit.client.1.vm10.stdout:6/835: stat d3/da/d11/d26/d5b/f74 0 2026-03-09T20:48:11.079 INFO:tasks.workunit.client.1.vm10.stdout:1/818: rmdir d2/da/d25/d3e/d55 39 2026-03-09T20:48:11.080 INFO:tasks.workunit.client.0.vm07.stdout:8/811: link d1/f104 d1/d3b/f106 0 2026-03-09T20:48:11.085 
INFO:tasks.workunit.client.0.vm07.stdout:9/852: fsync d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/da5/ff5 0 2026-03-09T20:48:11.088 INFO:tasks.workunit.client.1.vm10.stdout:3/808: mkdir dc/d14/d26/dcb/d11b 0 2026-03-09T20:48:11.088 INFO:tasks.workunit.client.1.vm10.stdout:8/874: truncate d0/d22/d25/d2e/d41/d85/db9/dc6/fc9 4256534 0 2026-03-09T20:48:11.090 INFO:tasks.workunit.client.0.vm07.stdout:5/923: truncate d5/d50/f52 4055980 0 2026-03-09T20:48:11.090 INFO:tasks.workunit.client.0.vm07.stdout:5/924: stat d5/df/faa 0 2026-03-09T20:48:11.091 INFO:tasks.workunit.client.1.vm10.stdout:1/819: mkdir d2/da/d25/d46/d80/da0/d92/db5/dc7/d105 0 2026-03-09T20:48:11.096 INFO:tasks.workunit.client.0.vm07.stdout:4/793: creat d2/df/d17/dd1/fd8 x:0 0 0 2026-03-09T20:48:11.101 INFO:tasks.workunit.client.0.vm07.stdout:2/869: truncate d2/d11/ddb/d6e/f95 1719364 0 2026-03-09T20:48:11.111 INFO:tasks.workunit.client.0.vm07.stdout:8/812: creat d1/dc/d16/d31/db4/f107 x:0 0 0 2026-03-09T20:48:11.111 INFO:tasks.workunit.client.1.vm10.stdout:3/809: rename dc/d14/d20/d21/d3b/c54 to dc/d14/d20/d21/c11c 0 2026-03-09T20:48:11.120 INFO:tasks.workunit.client.1.vm10.stdout:4/773: dread d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f4c [4194304,4194304] 0 2026-03-09T20:48:11.120 INFO:tasks.workunit.client.1.vm10.stdout:7/848: link db/d28/d2b/d36/d63/d6d/l104 db/d28/d4c/l10c 0 2026-03-09T20:48:11.121 INFO:tasks.workunit.client.0.vm07.stdout:9/853: rmdir d4/d16/d29/d24/d37/d44/d62/d108/d121/db9 39 2026-03-09T20:48:11.121 INFO:tasks.workunit.client.0.vm07.stdout:6/874: mknod d8/db3/d114/d11a/c11e 0 2026-03-09T20:48:11.127 INFO:tasks.workunit.client.1.vm10.stdout:8/875: mknod d0/d95/c11b 0 2026-03-09T20:48:11.128 INFO:tasks.workunit.client.0.vm07.stdout:0/881: creat d1/d2/d33/d35/f114 x:0 0 0 2026-03-09T20:48:11.135 INFO:tasks.workunit.client.0.vm07.stdout:5/925: creat d5/df/d13/d6c/db1/d124/f13e x:0 0 0 2026-03-09T20:48:11.137 INFO:tasks.workunit.client.1.vm10.stdout:8/876: sync 2026-03-09T20:48:11.137 
INFO:tasks.workunit.client.1.vm10.stdout:0/807: getdents d2 0 2026-03-09T20:48:11.137 INFO:tasks.workunit.client.1.vm10.stdout:2/822: write d5/d18/d27/d38/f104 [4344212,86336] 0 2026-03-09T20:48:11.138 INFO:tasks.workunit.client.1.vm10.stdout:2/823: chown d5/d18/d27/d89/db6/d41/d77/f85 40113976 1 2026-03-09T20:48:11.143 INFO:tasks.workunit.client.1.vm10.stdout:4/774: mkdir d1/d47/df8 0 2026-03-09T20:48:11.144 INFO:tasks.workunit.client.0.vm07.stdout:3/856: rename d1/f78 to d1/d5/d9/d11/d60/f116 0 2026-03-09T20:48:11.144 INFO:tasks.workunit.client.0.vm07.stdout:9/854: rename d4 to d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/d132 22 2026-03-09T20:48:11.146 INFO:tasks.workunit.client.0.vm07.stdout:9/855: chown d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d89/da7 0 1 2026-03-09T20:48:11.146 INFO:tasks.workunit.client.0.vm07.stdout:9/856: chown d4 605160550 1 2026-03-09T20:48:11.148 INFO:tasks.workunit.client.1.vm10.stdout:1/820: truncate d2/da/d25/d3e/f58 1071727 0 2026-03-09T20:48:11.153 INFO:tasks.workunit.client.0.vm07.stdout:6/875: symlink d8/d50/l11f 0 2026-03-09T20:48:11.157 INFO:tasks.workunit.client.1.vm10.stdout:8/877: truncate d0/d22/d25/f34 4555096 0 2026-03-09T20:48:11.158 INFO:tasks.workunit.client.1.vm10.stdout:8/878: chown d0/d22/d2f/d9d 1093 1 2026-03-09T20:48:11.159 INFO:tasks.workunit.client.0.vm07.stdout:1/912: link d3/d97/c9a d3/d97/da1/c124 0 2026-03-09T20:48:11.168 INFO:tasks.workunit.client.0.vm07.stdout:7/939: write d3/da/f3b [182730,66403] 0 2026-03-09T20:48:11.168 INFO:tasks.workunit.client.1.vm10.stdout:1/821: mkdir d2/da/d25/d46/d8c/d106 0 2026-03-09T20:48:11.168 INFO:tasks.workunit.client.1.vm10.stdout:9/889: write d2/d28/d47/d6a/f7f [1815320,90499] 0 2026-03-09T20:48:11.168 INFO:tasks.workunit.client.1.vm10.stdout:1/822: fdatasync d2/d89/f101 0 2026-03-09T20:48:11.174 INFO:tasks.workunit.client.1.vm10.stdout:5/774: fsync d2/d1b/f41 0 2026-03-09T20:48:11.177 INFO:tasks.workunit.client.0.vm07.stdout:5/926: dread d5/df/f34 [4194304,4194304] 0 
2026-03-09T20:48:11.186 INFO:tasks.workunit.client.1.vm10.stdout:6/836: write d3/d30/d7f/d36/d5c/f5f [158131,114004] 0 2026-03-09T20:48:11.186 INFO:tasks.workunit.client.1.vm10.stdout:4/775: creat d1/d2/d5c/d64/d6b/d81/dac/d1c/d69/dbd/ff9 x:0 0 0 2026-03-09T20:48:11.186 INFO:tasks.workunit.client.1.vm10.stdout:2/824: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/f53 [0,4194304] 0 2026-03-09T20:48:11.187 INFO:tasks.workunit.client.1.vm10.stdout:9/890: fdatasync d2/d3/de/f80 0 2026-03-09T20:48:11.188 INFO:tasks.workunit.client.1.vm10.stdout:4/776: chown d1/d2/d5c/d64/d6b/d81/dac/d39/cbc 66 1 2026-03-09T20:48:11.198 INFO:tasks.workunit.client.1.vm10.stdout:1/823: creat d2/da/d25/d3e/dca/da2/dd5/f107 x:0 0 0 2026-03-09T20:48:11.198 INFO:tasks.workunit.client.1.vm10.stdout:1/824: chown d2/da/dbc 648935 1 2026-03-09T20:48:11.205 INFO:tasks.workunit.client.1.vm10.stdout:7/849: getdents db/d28/d2b/d36/d3b/dd5 0 2026-03-09T20:48:11.205 INFO:tasks.workunit.client.1.vm10.stdout:2/825: fdatasync d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/f53 0 2026-03-09T20:48:11.213 INFO:tasks.workunit.client.1.vm10.stdout:4/777: mkdir d1/d2/d3/d54/daa/dfa 0 2026-03-09T20:48:11.224 INFO:tasks.workunit.client.1.vm10.stdout:7/850: rmdir db/d28/d2b/d36/d63/d6d 39 2026-03-09T20:48:11.225 INFO:tasks.workunit.client.1.vm10.stdout:7/851: chown db/d46 0 1 2026-03-09T20:48:11.225 INFO:tasks.workunit.client.1.vm10.stdout:4/778: dwrite d1/d2/f2d [0,4194304] 0 2026-03-09T20:48:11.226 INFO:tasks.workunit.client.1.vm10.stdout:7/852: stat db/d46/dab/db5 0 2026-03-09T20:48:11.230 INFO:tasks.workunit.client.1.vm10.stdout:7/853: readlink db/d28/d2b/dd0/ldb 0 2026-03-09T20:48:11.235 INFO:tasks.workunit.client.1.vm10.stdout:2/826: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/d93/da5/dda/f10d x:0 0 0 2026-03-09T20:48:11.242 INFO:tasks.workunit.client.1.vm10.stdout:1/825: creat d2/da/d25/d46/dbe/dfc/f108 x:0 0 0 2026-03-09T20:48:11.243 INFO:tasks.workunit.client.1.vm10.stdout:4/779: stat 
d1/d2/d5c/d64/d61/ced 0 2026-03-09T20:48:11.248 INFO:tasks.workunit.client.1.vm10.stdout:2/827: symlink d5/d18/d27/d89/db6/d41/d77/db3/db5/l10e 0 2026-03-09T20:48:11.251 INFO:tasks.workunit.client.1.vm10.stdout:1/826: creat d2/d89/f109 x:0 0 0 2026-03-09T20:48:11.259 INFO:tasks.workunit.client.1.vm10.stdout:4/780: dwrite d1/d2/d3/d70/d78/d86/fde [0,4194304] 0 2026-03-09T20:48:11.260 INFO:tasks.workunit.client.1.vm10.stdout:2/828: dwrite d5/f7 [0,4194304] 0 2026-03-09T20:48:11.267 INFO:tasks.workunit.client.0.vm07.stdout:4/794: mkdir d2/d55/d5d/d93/dbe/dd9 0 2026-03-09T20:48:11.267 INFO:tasks.workunit.client.1.vm10.stdout:1/827: dread d2/da/d25/d3e/d42/f57 [0,4194304] 0 2026-03-09T20:48:11.273 INFO:tasks.workunit.client.1.vm10.stdout:6/837: dread d3/da/d11/d26/f2a [0,4194304] 0 2026-03-09T20:48:11.274 INFO:tasks.workunit.client.0.vm07.stdout:3/857: truncate d1/d5/f25 978592 0 2026-03-09T20:48:11.275 INFO:tasks.workunit.client.0.vm07.stdout:3/858: chown d1/d5/d9/d11/d6d/d80/db3/d109 102524812 1 2026-03-09T20:48:11.279 INFO:tasks.workunit.client.1.vm10.stdout:3/810: write dc/d14/d26/d29/d2a/f66 [7693780,90463] 0 2026-03-09T20:48:11.287 INFO:tasks.workunit.client.1.vm10.stdout:2/829: creat d5/d18/d27/d89/f10f x:0 0 0 2026-03-09T20:48:11.291 INFO:tasks.workunit.client.1.vm10.stdout:4/781: truncate d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/f36 220284 0 2026-03-09T20:48:11.292 INFO:tasks.workunit.client.0.vm07.stdout:8/813: rmdir d1/d5d/d6f/d2f/d4d/d55 39 2026-03-09T20:48:11.292 INFO:tasks.workunit.client.1.vm10.stdout:2/830: write d5/d18/d27/d38/f100 [18218,48177] 0 2026-03-09T20:48:11.292 INFO:tasks.workunit.client.1.vm10.stdout:5/775: dwrite d2/f3e [0,4194304] 0 2026-03-09T20:48:11.294 INFO:tasks.workunit.client.1.vm10.stdout:8/879: dwrite d0/d22/d25/d2e/d41/de9/dfc/f102 [0,4194304] 0 2026-03-09T20:48:11.298 INFO:tasks.workunit.client.1.vm10.stdout:1/828: rename d2/da/d25/f6c to d2/da/d25/d46/d51/f10a 0 2026-03-09T20:48:11.300 INFO:tasks.workunit.client.0.vm07.stdout:6/876: 
mknod d8/d16/d61/c120 0 2026-03-09T20:48:11.305 INFO:tasks.workunit.client.1.vm10.stdout:9/891: write d2/d3/de/d8f/fe4 [2720048,111484] 0 2026-03-09T20:48:11.308 INFO:tasks.workunit.client.1.vm10.stdout:1/829: sync 2026-03-09T20:48:11.312 INFO:tasks.workunit.client.0.vm07.stdout:1/913: unlink d3/d97/da1/dc5/d60/ca6 0 2026-03-09T20:48:11.313 INFO:tasks.workunit.client.1.vm10.stdout:6/838: creat d3/da/d11/d89/fff x:0 0 0 2026-03-09T20:48:11.316 INFO:tasks.workunit.client.0.vm07.stdout:7/940: read - d3/da/db/d32/d3e/d11c/f121 zero size 2026-03-09T20:48:11.322 INFO:tasks.workunit.client.1.vm10.stdout:7/854: dwrite db/d28/d2b/d36/d3b/fd7 [0,4194304] 0 2026-03-09T20:48:11.335 INFO:tasks.workunit.client.0.vm07.stdout:9/857: write d4/d16/fd0 [717967,74980] 0 2026-03-09T20:48:11.344 INFO:tasks.workunit.client.1.vm10.stdout:2/831: fdatasync d5/d18/d27/d5f/fad 0 2026-03-09T20:48:11.351 INFO:tasks.workunit.client.0.vm07.stdout:3/859: mknod d1/d5/d9/d2f/d86/c117 0 2026-03-09T20:48:11.356 INFO:tasks.workunit.client.1.vm10.stdout:5/776: mkdir d2/d39/dbf/d69/de9/dfa/d122 0 2026-03-09T20:48:11.357 INFO:tasks.workunit.client.1.vm10.stdout:5/777: readlink d2/d39/dbf/d69/d96/l117 0 2026-03-09T20:48:11.357 INFO:tasks.workunit.client.1.vm10.stdout:8/880: truncate d0/d22/d25/d40/fd3 1261557 0 2026-03-09T20:48:11.359 INFO:tasks.workunit.client.1.vm10.stdout:9/892: mkdir d2/d28/da2/d127 0 2026-03-09T20:48:11.360 INFO:tasks.workunit.client.1.vm10.stdout:3/811: dread dc/d14/d20/d21/f96 [0,4194304] 0 2026-03-09T20:48:11.366 INFO:tasks.workunit.client.1.vm10.stdout:1/830: creat d2/da/d25/d46/d80/f10b x:0 0 0 2026-03-09T20:48:11.370 INFO:tasks.workunit.client.0.vm07.stdout:8/814: unlink d1/d5d/d6f/f64 0 2026-03-09T20:48:11.375 INFO:tasks.workunit.client.1.vm10.stdout:1/831: dread d2/da/d25/d3e/d42/f57 [0,4194304] 0 2026-03-09T20:48:11.375 INFO:tasks.workunit.client.0.vm07.stdout:6/877: dread d8/d16/f17 [0,4194304] 0 2026-03-09T20:48:11.376 INFO:tasks.workunit.client.0.vm07.stdout:8/815: dread 
d1/d5d/f7a [0,4194304] 0 2026-03-09T20:48:11.379 INFO:tasks.workunit.client.0.vm07.stdout:0/882: link d1/ce6 d1/d2/d4b/d106/c115 0 2026-03-09T20:48:11.386 INFO:tasks.workunit.client.1.vm10.stdout:0/808: dread d2/d4a/d58/d82/f5c [0,4194304] 0 2026-03-09T20:48:11.396 INFO:tasks.workunit.client.1.vm10.stdout:7/855: truncate db/d28/d2b/d36/d3f/fcb 1072951 0 2026-03-09T20:48:11.397 INFO:tasks.workunit.client.1.vm10.stdout:4/782: write d1/d2/d5c/d64/d6b/d81/da9/fc7 [3676303,12727] 0 2026-03-09T20:48:11.398 INFO:tasks.workunit.client.1.vm10.stdout:4/783: write d1/d2/d5c/d64/d6b/ff2 [873991,116156] 0 2026-03-09T20:48:11.408 INFO:tasks.workunit.client.0.vm07.stdout:9/858: mknod d4/d16/d29/c133 0 2026-03-09T20:48:11.416 INFO:tasks.workunit.client.0.vm07.stdout:4/795: fsync d2/d1f/f9c 0 2026-03-09T20:48:11.426 INFO:tasks.workunit.client.1.vm10.stdout:8/881: mkdir d0/d22/d25/d2e/d41/d85/d8b/d11c 0 2026-03-09T20:48:11.437 INFO:tasks.workunit.client.1.vm10.stdout:9/893: mkdir d2/d28/d47/d50/dd1/d128 0 2026-03-09T20:48:11.438 INFO:tasks.workunit.client.0.vm07.stdout:5/927: truncate d5/ff0 3665859 0 2026-03-09T20:48:11.444 INFO:tasks.workunit.client.0.vm07.stdout:2/870: getdents d2/db/d49 0 2026-03-09T20:48:11.448 INFO:tasks.workunit.client.1.vm10.stdout:2/832: write d5/f15 [1975464,94933] 0 2026-03-09T20:48:11.449 INFO:tasks.workunit.client.1.vm10.stdout:2/833: chown d5/d18/d1b/f26 3043478 1 2026-03-09T20:48:11.463 INFO:tasks.workunit.client.1.vm10.stdout:1/832: dread - d2/da/d25/d3e/dca/fa5 zero size 2026-03-09T20:48:11.463 INFO:tasks.workunit.client.0.vm07.stdout:3/860: write d1/d5/d9/f1b [2012573,21161] 0 2026-03-09T20:48:11.474 INFO:tasks.workunit.client.1.vm10.stdout:0/809: mknod d2/d4a/d58/d82/d60/c11b 0 2026-03-09T20:48:11.477 INFO:tasks.workunit.client.0.vm07.stdout:6/878: read d8/d5d/fe6 [2478730,60591] 0 2026-03-09T20:48:11.478 INFO:tasks.workunit.client.1.vm10.stdout:6/839: read d3/d30/d7f/d36/d5c/f5f [2333588,110140] 0 2026-03-09T20:48:11.483 
INFO:tasks.workunit.client.0.vm07.stdout:8/816: rename d1/dc/d16/d26/f37 to d1/dc/dba/f108 0 2026-03-09T20:48:11.486 INFO:tasks.workunit.client.0.vm07.stdout:8/817: fsync d1/d5d/ff6 0 2026-03-09T20:48:11.488 INFO:tasks.workunit.client.1.vm10.stdout:5/778: dread d2/f71 [0,4194304] 0 2026-03-09T20:48:11.504 INFO:tasks.workunit.client.1.vm10.stdout:3/812: dread dc/d14/d26/d29/f60 [4194304,4194304] 0 2026-03-09T20:48:11.613 INFO:tasks.workunit.client.0.vm07.stdout:0/883: fsync d1/d2/dc/d17/da6/ff1 0 2026-03-09T20:48:11.635 INFO:tasks.workunit.client.0.vm07.stdout:7/941: symlink d3/da/db/d32/d3e/dac/d43/l141 0 2026-03-09T20:48:11.640 INFO:tasks.workunit.client.0.vm07.stdout:9/859: fdatasync d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/dbc/f100 0 2026-03-09T20:48:11.642 INFO:tasks.workunit.client.1.vm10.stdout:8/882: truncate d0/d92/de8/d64/db5/fdb 725044 0 2026-03-09T20:48:11.643 INFO:tasks.workunit.client.0.vm07.stdout:4/796: mkdir d2/d55/d5d/d3f/d4a/d85/dda 0 2026-03-09T20:48:11.645 INFO:tasks.workunit.client.1.vm10.stdout:2/834: mknod d5/d18/d27/d89/db6/d41/d77/db3/db5/c110 0 2026-03-09T20:48:11.662 INFO:tasks.workunit.client.1.vm10.stdout:1/833: fdatasync d2/da/d25/d3e/f69 0 2026-03-09T20:48:11.663 INFO:tasks.workunit.client.0.vm07.stdout:2/871: chown d2/ff 70532 1 2026-03-09T20:48:11.668 INFO:tasks.workunit.client.0.vm07.stdout:3/861: creat d1/d5/d9/d2f/d99/dd8/f118 x:0 0 0 2026-03-09T20:48:11.673 INFO:tasks.workunit.client.0.vm07.stdout:3/862: dwrite d1/d5/d9/daf/de3/f10e [0,4194304] 0 2026-03-09T20:48:11.675 INFO:tasks.workunit.client.1.vm10.stdout:3/813: unlink dc/d14/d26/d37/f3e 0 2026-03-09T20:48:11.692 INFO:tasks.workunit.client.1.vm10.stdout:4/784: rename d1/d2/d5c/d64/d61 to d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/d4a/d9b/de2/dfb 0 2026-03-09T20:48:11.693 INFO:tasks.workunit.client.1.vm10.stdout:4/785: write d1/d67/f8f [1375499,44990] 0 2026-03-09T20:48:11.696 INFO:tasks.workunit.client.1.vm10.stdout:8/883: fdatasync d0/d22/f29 0 2026-03-09T20:48:11.722 
INFO:tasks.workunit.client.1.vm10.stdout:5/779: creat d2/d39/dbf/dcc/f123 x:0 0 0 2026-03-09T20:48:11.722 INFO:tasks.workunit.client.1.vm10.stdout:5/780: stat d2/fd6 0 2026-03-09T20:48:11.735 INFO:tasks.workunit.client.1.vm10.stdout:3/814: unlink dc/d14/d27/f103 0 2026-03-09T20:48:11.737 INFO:tasks.workunit.client.1.vm10.stdout:3/815: write dc/d14/d26/d29/ff7 [1274822,91736] 0 2026-03-09T20:48:11.745 INFO:tasks.workunit.client.0.vm07.stdout:2/872: dread - d2/db/d28/d57/ff3 zero size 2026-03-09T20:48:11.754 INFO:tasks.workunit.client.1.vm10.stdout:8/884: mkdir d0/d22/d25/d40/d86/d91/d11d 0 2026-03-09T20:48:11.763 INFO:tasks.workunit.client.1.vm10.stdout:5/781: dread d2/d27/d37/d46/d5d/fe2 [0,4194304] 0 2026-03-09T20:48:11.767 INFO:tasks.workunit.client.0.vm07.stdout:8/818: mknod d1/d5d/c109 0 2026-03-09T20:48:11.767 INFO:tasks.workunit.client.0.vm07.stdout:7/942: mkdir d3/da/db/d32/d3e/dac/d1f/d50/d13f/d142 0 2026-03-09T20:48:11.799 INFO:tasks.workunit.client.0.vm07.stdout:2/873: mkdir d2/db/d1c/d4a/d88/d11c 0 2026-03-09T20:48:11.805 INFO:tasks.workunit.client.0.vm07.stdout:7/943: symlink d3/da/db/d32/l143 0 2026-03-09T20:48:11.805 INFO:tasks.workunit.client.1.vm10.stdout:7/856: write db/d28/f4f [1610848,76324] 0 2026-03-09T20:48:11.807 INFO:tasks.workunit.client.0.vm07.stdout:7/944: chown d3/da/db/d32/d3e/d5c/f11b 2 1 2026-03-09T20:48:11.810 INFO:tasks.workunit.client.0.vm07.stdout:2/874: fdatasync d2/db/d49/d7d/fe5 0 2026-03-09T20:48:11.822 INFO:tasks.workunit.client.0.vm07.stdout:8/819: mkdir d1/d5d/d10a 0 2026-03-09T20:48:11.840 INFO:tasks.workunit.client.0.vm07.stdout:4/797: mknod d2/d55/d5d/d3f/d4a/d4b/cdb 0 2026-03-09T20:48:11.840 INFO:tasks.workunit.client.0.vm07.stdout:4/798: stat d2/df/d59/l9a 0 2026-03-09T20:48:11.857 INFO:tasks.workunit.client.0.vm07.stdout:7/945: mkdir d3/da4/df2/d113/d144 0 2026-03-09T20:48:11.862 INFO:tasks.workunit.client.0.vm07.stdout:7/946: rename d3/da/db/d32/d3e to d3/da/db/d32/d3e/dac/d1f/d50/d13f/d145 22 2026-03-09T20:48:11.877 
INFO:tasks.workunit.client.1.vm10.stdout:2/835: dread d5/d18/d27/f74 [0,4194304] 0 2026-03-09T20:48:11.892 INFO:tasks.workunit.client.0.vm07.stdout:7/947: creat d3/da/db/d32/d3e/dac/d1f/d50/d110/f146 x:0 0 0 2026-03-09T20:48:11.901 INFO:tasks.workunit.client.0.vm07.stdout:4/799: read d2/f9 [1391353,95804] 0 2026-03-09T20:48:11.918 INFO:tasks.workunit.client.0.vm07.stdout:7/948: truncate d3/da/db/d32/d3e/dac/d1f/d2b/f2c 2216678 0 2026-03-09T20:48:11.929 INFO:tasks.workunit.client.0.vm07.stdout:4/800: fsync d2/d55/d5d/d3f/fa7 0 2026-03-09T20:48:11.929 INFO:tasks.workunit.client.0.vm07.stdout:7/949: creat d3/da/db/d32/d3e/dac/d1f/d50/d110/f147 x:0 0 0 2026-03-09T20:48:11.944 INFO:tasks.workunit.client.0.vm07.stdout:7/950: creat d3/da/db/d32/d3e/dac/d43/d62/db1/f148 x:0 0 0 2026-03-09T20:48:11.950 INFO:tasks.workunit.client.0.vm07.stdout:1/914: dwrite d3/d23/d67/f69 [0,4194304] 0 2026-03-09T20:48:11.955 INFO:tasks.workunit.client.0.vm07.stdout:7/951: symlink d3/da/db/d32/d3e/dac/d43/d62/de0/l149 0 2026-03-09T20:48:11.955 INFO:tasks.workunit.client.0.vm07.stdout:4/801: creat d2/d55/d5d/d86/db9/fdc x:0 0 0 2026-03-09T20:48:11.956 INFO:tasks.workunit.client.0.vm07.stdout:1/915: write d3/d97/da1/dc5/d60/f53 [1719437,28141] 0 2026-03-09T20:48:11.963 INFO:tasks.workunit.client.0.vm07.stdout:7/952: mknod d3/da/db/d79/dc3/c14a 0 2026-03-09T20:48:11.966 INFO:tasks.workunit.client.0.vm07.stdout:5/928: dwrite d5/df/d13/f2a [0,4194304] 0 2026-03-09T20:48:11.972 INFO:tasks.workunit.client.1.vm10.stdout:5/782: creat d2/d27/d37/f124 x:0 0 0 2026-03-09T20:48:11.979 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:11 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:11.979 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:11 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:11.979 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:11 
vm10.local ceph-mon[57011]: pgmap v12: 65 pgs: 65 active+clean; 3.6 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 51 MiB/s rd, 114 MiB/s wr, 326 op/s 2026-03-09T20:48:11.979 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:11 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:11.979 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:11 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:11.995 INFO:tasks.workunit.client.0.vm07.stdout:1/916: dread d3/d23/f58 [0,4194304] 0 2026-03-09T20:48:11.996 INFO:tasks.workunit.client.1.vm10.stdout:0/810: write d2/d4a/d58/d82/d71/fe7 [629640,100453] 0 2026-03-09T20:48:11.998 INFO:tasks.workunit.client.1.vm10.stdout:2/836: symlink d5/d5b/l111 0 2026-03-09T20:48:12.001 INFO:tasks.workunit.client.0.vm07.stdout:5/929: unlink d5/d33/d39/d8d/dab/f60 0 2026-03-09T20:48:12.001 INFO:tasks.workunit.client.0.vm07.stdout:4/802: dread d2/df/d59/d8a/fc0 [0,4194304] 0 2026-03-09T20:48:12.006 INFO:tasks.workunit.client.1.vm10.stdout:3/816: symlink dc/d14/d20/l11d 0 2026-03-09T20:48:12.012 INFO:tasks.workunit.client.0.vm07.stdout:1/917: symlink d3/d97/da1/dd7/l125 0 2026-03-09T20:48:12.012 INFO:tasks.workunit.client.0.vm07.stdout:1/918: stat d3/d97/da1/dc5/d90/de8/c85 0 2026-03-09T20:48:12.014 INFO:tasks.workunit.client.0.vm07.stdout:6/879: symlink d8/d16/d22/d24/da0/l121 0 2026-03-09T20:48:12.015 INFO:tasks.workunit.client.0.vm07.stdout:0/884: write d1/d2/d33/d35/f59 [2705696,1852] 0 2026-03-09T20:48:12.027 INFO:tasks.workunit.client.1.vm10.stdout:5/783: symlink d2/d39/dbf/d66/l125 0 2026-03-09T20:48:12.031 INFO:tasks.workunit.client.0.vm07.stdout:4/803: symlink d2/d55/ldd 0 2026-03-09T20:48:12.033 INFO:tasks.workunit.client.0.vm07.stdout:3/863: write d1/d5/d9/d2f/d34/f68 [2580018,70432] 0 2026-03-09T20:48:12.033 INFO:tasks.workunit.client.1.vm10.stdout:6/840: dwrite d3/da/d11/d31/d47/d87/fd9 [0,4194304] 0 
2026-03-09T20:48:12.035 INFO:tasks.workunit.client.0.vm07.stdout:1/919: dread d3/d23/d67/f92 [4194304,4194304] 0 2026-03-09T20:48:12.050 INFO:tasks.workunit.client.1.vm10.stdout:2/837: dread d5/d18/d27/da6/f109 [0,4194304] 0 2026-03-09T20:48:12.050 INFO:tasks.workunit.client.1.vm10.stdout:7/857: rmdir db/d28/d4c/d6e/dbb 0 2026-03-09T20:48:12.051 INFO:tasks.workunit.client.0.vm07.stdout:3/864: read d1/d5/d9/d11/d1f/f27 [1978227,96267] 0 2026-03-09T20:48:12.054 INFO:tasks.workunit.client.0.vm07.stdout:6/880: chown d8/d16/dbb/cd9 84879365 1 2026-03-09T20:48:12.054 INFO:tasks.workunit.client.0.vm07.stdout:0/885: fsync d1/d1f/f63 0 2026-03-09T20:48:12.054 INFO:tasks.workunit.client.0.vm07.stdout:4/804: write d2/d55/f71 [599362,63389] 0 2026-03-09T20:48:12.057 INFO:tasks.workunit.client.1.vm10.stdout:0/811: mknod d2/d9/db8/d10f/d11/dd1/c11c 0 2026-03-09T20:48:12.058 INFO:tasks.workunit.client.1.vm10.stdout:6/841: read d3/f40 [189080,125972] 0 2026-03-09T20:48:12.058 INFO:tasks.workunit.client.1.vm10.stdout:0/812: chown d2/d9/db8/d10f/d11/dd1/d34/dee 1086266 1 2026-03-09T20:48:12.070 INFO:tasks.workunit.client.1.vm10.stdout:3/817: fsync dc/fbb 0 2026-03-09T20:48:12.078 INFO:tasks.workunit.client.1.vm10.stdout:1/834: creat d2/da/f10c x:0 0 0 2026-03-09T20:48:12.079 INFO:tasks.workunit.client.1.vm10.stdout:1/835: chown d2/da/d25/d46/d51/l54 7569 1 2026-03-09T20:48:12.080 INFO:tasks.workunit.client.1.vm10.stdout:1/836: dread - d2/da/d25/d3e/dca/da2/f104 zero size 2026-03-09T20:48:12.081 INFO:tasks.workunit.client.1.vm10.stdout:2/838: mkdir d5/d18/d27/db4/d112 0 2026-03-09T20:48:12.082 INFO:tasks.workunit.client.1.vm10.stdout:1/837: chown d2/da/d25/d46/d51/d5d/d6e/c100 27199 1 2026-03-09T20:48:12.083 INFO:tasks.workunit.client.0.vm07.stdout:1/920: creat d3/d14/d54/d3e/f126 x:0 0 0 2026-03-09T20:48:12.085 INFO:tasks.workunit.client.1.vm10.stdout:1/838: chown d2/da/d25/d46/d51/d5d/d6e/d70 25486 1 2026-03-09T20:48:12.085 INFO:tasks.workunit.client.1.vm10.stdout:1/839: fsync 
d2/da/f4d 0 2026-03-09T20:48:12.088 INFO:tasks.workunit.client.1.vm10.stdout:9/894: rename d2/ccb to d2/d3/c129 0 2026-03-09T20:48:12.088 INFO:tasks.workunit.client.1.vm10.stdout:9/895: readlink d2/d12/lf2 0 2026-03-09T20:48:12.092 INFO:tasks.workunit.client.0.vm07.stdout:2/875: write d2/dc8/f101 [1763187,44094] 0 2026-03-09T20:48:12.092 INFO:tasks.workunit.client.0.vm07.stdout:7/953: getdents d3/da 0 2026-03-09T20:48:12.096 INFO:tasks.workunit.client.0.vm07.stdout:3/865: dread d1/d5/d9/daf/d9f/fcb [0,4194304] 0 2026-03-09T20:48:12.102 INFO:tasks.workunit.client.0.vm07.stdout:8/820: dwrite d1/dc/d16/f4b [0,4194304] 0 2026-03-09T20:48:12.120 INFO:tasks.workunit.client.1.vm10.stdout:3/818: truncate dc/d14/d26/fd8 3035211 0 2026-03-09T20:48:12.122 INFO:tasks.workunit.client.0.vm07.stdout:4/805: symlink d2/d55/d5d/d3f/d4a/dbc/lde 0 2026-03-09T20:48:12.123 INFO:tasks.workunit.client.0.vm07.stdout:4/806: chown d2/c11 146238 1 2026-03-09T20:48:12.124 INFO:tasks.workunit.client.1.vm10.stdout:2/839: mknod d5/d18/d27/da6/c113 0 2026-03-09T20:48:12.131 INFO:tasks.workunit.client.0.vm07.stdout:2/876: creat d2/db/d49/d7d/d85/dde/f11d x:0 0 0 2026-03-09T20:48:12.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:11 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:12.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:11 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:12.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:11 vm07.local ceph-mon[49120]: pgmap v12: 65 pgs: 65 active+clean; 3.6 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 51 MiB/s rd, 114 MiB/s wr, 326 op/s 2026-03-09T20:48:12.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:11 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:12.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:48:11 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:12.134 INFO:tasks.workunit.client.0.vm07.stdout:0/886: mkdir d1/d2/d98/daf/d116 0 2026-03-09T20:48:12.137 INFO:tasks.workunit.client.0.vm07.stdout:7/954: creat d3/da/db/d32/d3e/dac/d43/d62/de0/f14b x:0 0 0 2026-03-09T20:48:12.138 INFO:tasks.workunit.client.0.vm07.stdout:7/955: chown d3/da/db/d32/d126 26184 1 2026-03-09T20:48:12.154 INFO:tasks.workunit.client.0.vm07.stdout:3/866: chown d1/d5/d9/d11/d60/df3/lfb 6953265 1 2026-03-09T20:48:12.154 INFO:tasks.workunit.client.1.vm10.stdout:9/896: mkdir d2/d33/dcf/d12a 0 2026-03-09T20:48:12.154 INFO:tasks.workunit.client.1.vm10.stdout:9/897: write d2/d3/d6d/db7/fc7 [1289987,129019] 0 2026-03-09T20:48:12.154 INFO:tasks.workunit.client.1.vm10.stdout:9/898: write d2/da6/f11f [983023,103893] 0 2026-03-09T20:48:12.154 INFO:tasks.workunit.client.1.vm10.stdout:9/899: chown d2/db8/l10f 63319 1 2026-03-09T20:48:12.161 INFO:tasks.workunit.client.0.vm07.stdout:4/807: rename d2/df/d17/f6a to d2/df/d17/fdf 0 2026-03-09T20:48:12.195 INFO:tasks.workunit.client.0.vm07.stdout:9/860: link d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/f28 d4/d16/d29/d24/d37/d44/f134 0 2026-03-09T20:48:12.195 INFO:tasks.workunit.client.0.vm07.stdout:2/877: read - d2/d11/ddb/d6e/dbe/d96/fd7 zero size 2026-03-09T20:48:12.195 INFO:tasks.workunit.client.0.vm07.stdout:0/887: fsync d1/d1f/d9f/fa7 0 2026-03-09T20:48:12.195 INFO:tasks.workunit.client.0.vm07.stdout:3/867: dread d1/d5/d9/d2f/d34/d46/d5d/fb8 [0,4194304] 0 2026-03-09T20:48:12.195 INFO:tasks.workunit.client.0.vm07.stdout:5/930: dwrite d5/df/faa [0,4194304] 0 2026-03-09T20:48:12.195 INFO:tasks.workunit.client.0.vm07.stdout:5/931: stat d5/d69/l7f 0 2026-03-09T20:48:12.195 INFO:tasks.workunit.client.1.vm10.stdout:8/885: rename d0/d22/d2c/f110 to d0/d22/d25/d2e/d41/d85/db9/f11e 0 2026-03-09T20:48:12.195 INFO:tasks.workunit.client.1.vm10.stdout:0/813: link d2/d4a/d58/d82/d71/d5d/f76 
d2/d9/d69/d80/f11d 0 2026-03-09T20:48:12.196 INFO:tasks.workunit.client.1.vm10.stdout:4/786: dwrite d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/f36 [0,4194304] 0 2026-03-09T20:48:12.196 INFO:tasks.workunit.client.1.vm10.stdout:5/784: write d2/d80/ffd [729240,16046] 0 2026-03-09T20:48:12.196 INFO:tasks.workunit.client.1.vm10.stdout:9/900: mknod d2/d3/de/c12b 0 2026-03-09T20:48:12.196 INFO:tasks.workunit.client.1.vm10.stdout:4/787: readlink d1/d2/d5c/lad 0 2026-03-09T20:48:12.196 INFO:tasks.workunit.client.1.vm10.stdout:8/886: creat d0/d22/d25/d2e/d41/d85/db9/d10d/f11f x:0 0 0 2026-03-09T20:48:12.196 INFO:tasks.workunit.client.1.vm10.stdout:9/901: dread - d2/d12/d5a/fe9 zero size 2026-03-09T20:48:12.196 INFO:tasks.workunit.client.1.vm10.stdout:7/858: dwrite db/d21/d23/f1e [0,4194304] 0 2026-03-09T20:48:12.196 INFO:tasks.workunit.client.1.vm10.stdout:5/785: mkdir d2/d58/d6c/d126 0 2026-03-09T20:48:12.209 INFO:tasks.workunit.client.1.vm10.stdout:8/887: truncate d0/d92/de8/d64/db5/fef 193600 0 2026-03-09T20:48:12.209 INFO:tasks.workunit.client.1.vm10.stdout:9/902: truncate d2/d33/f77 2591086 0 2026-03-09T20:48:12.210 INFO:tasks.workunit.client.1.vm10.stdout:6/842: write d3/da/d11/d26/f8f [4140764,102675] 0 2026-03-09T20:48:12.211 INFO:tasks.workunit.client.1.vm10.stdout:6/843: chown d3/d30/d7f/d36/d5c/daa 13 1 2026-03-09T20:48:12.211 INFO:tasks.workunit.client.1.vm10.stdout:6/844: chown d3/d30/d7f/d36/c61 16296 1 2026-03-09T20:48:12.212 INFO:tasks.workunit.client.1.vm10.stdout:6/845: stat d3/da/d11/d89/db9/dd1/dd2/l73 0 2026-03-09T20:48:12.214 INFO:tasks.workunit.client.1.vm10.stdout:7/859: mkdir db/d21/d95/d10d 0 2026-03-09T20:48:12.215 INFO:tasks.workunit.client.1.vm10.stdout:6/846: chown d3/da/d11/d31/c3b 753842 1 2026-03-09T20:48:12.216 INFO:tasks.workunit.client.0.vm07.stdout:8/821: rmdir d1/d5d/d6f/d2f/d4d/d63 39 2026-03-09T20:48:12.221 INFO:tasks.workunit.client.1.vm10.stdout:3/819: sync 2026-03-09T20:48:12.224 INFO:tasks.workunit.client.0.vm07.stdout:4/808: creat 
d2/d55/d5d/d3f/d4a/d7d/fe0 x:0 0 0 2026-03-09T20:48:12.226 INFO:tasks.workunit.client.1.vm10.stdout:8/888: mknod d0/d92/de8/d64/d7f/c120 0 2026-03-09T20:48:12.228 INFO:tasks.workunit.client.0.vm07.stdout:2/878: creat d2/db/d28/d5c/dc7/f11e x:0 0 0 2026-03-09T20:48:12.232 INFO:tasks.workunit.client.1.vm10.stdout:8/889: readlink d0/d92/de8/d64/l8a 0 2026-03-09T20:48:12.235 INFO:tasks.workunit.client.0.vm07.stdout:3/868: rmdir d1 39 2026-03-09T20:48:12.235 INFO:tasks.workunit.client.0.vm07.stdout:4/809: mkdir d2/d1f/de1 0 2026-03-09T20:48:12.236 INFO:tasks.workunit.client.1.vm10.stdout:6/847: fdatasync d3/d30/d7f/d4a/f4b 0 2026-03-09T20:48:12.237 INFO:tasks.workunit.client.0.vm07.stdout:9/861: dread d4/d16/d29/d24/d37/d44/d62/d8e/fe0 [0,4194304] 0 2026-03-09T20:48:12.237 INFO:tasks.workunit.client.1.vm10.stdout:3/820: sync 2026-03-09T20:48:12.239 INFO:tasks.workunit.client.0.vm07.stdout:3/869: fsync d1/d5/d9/d11/d6d/dd0/d95/f10a 0 2026-03-09T20:48:12.242 INFO:tasks.workunit.client.1.vm10.stdout:0/814: getdents d2/d4a/d79 0 2026-03-09T20:48:12.243 INFO:tasks.workunit.client.0.vm07.stdout:5/932: dread d5/df/d13/d4f/ff4 [0,4194304] 0 2026-03-09T20:48:12.243 INFO:tasks.workunit.client.0.vm07.stdout:5/933: dread - d5/df/d13/d4f/fe9 zero size 2026-03-09T20:48:12.243 INFO:tasks.workunit.client.0.vm07.stdout:1/921: sync 2026-03-09T20:48:12.246 INFO:tasks.workunit.client.1.vm10.stdout:6/848: mknod d3/da/d11/d89/db9/dd1/dd2/c100 0 2026-03-09T20:48:12.247 INFO:tasks.workunit.client.0.vm07.stdout:0/888: sync 2026-03-09T20:48:12.247 INFO:tasks.workunit.client.0.vm07.stdout:8/822: sync 2026-03-09T20:48:12.247 INFO:tasks.workunit.client.1.vm10.stdout:6/849: fsync d3/d30/d33/f35 0 2026-03-09T20:48:12.256 INFO:tasks.workunit.client.0.vm07.stdout:1/922: dwrite d3/d9c/f105 [0,4194304] 0 2026-03-09T20:48:12.259 INFO:tasks.workunit.client.0.vm07.stdout:1/923: chown d3/d97/da1/dc5/d60 27 1 2026-03-09T20:48:12.259 INFO:tasks.workunit.client.0.vm07.stdout:3/870: truncate d1/d5/f25 1527931 0 
2026-03-09T20:48:12.259 INFO:tasks.workunit.client.0.vm07.stdout:5/934: truncate d5/df/d13/d6c/f99 5074571 0 2026-03-09T20:48:12.262 INFO:tasks.workunit.client.1.vm10.stdout:0/815: getdents d2/d4a 0 2026-03-09T20:48:12.264 INFO:tasks.workunit.client.0.vm07.stdout:8/823: creat d1/dc/d16/d31/db4/de6/f10b x:0 0 0 2026-03-09T20:48:12.264 INFO:tasks.workunit.client.0.vm07.stdout:1/924: symlink d3/d14/d54/d3e/d101/l127 0 2026-03-09T20:48:12.264 INFO:tasks.workunit.client.0.vm07.stdout:5/935: symlink d5/df/d13/d6c/l13f 0 2026-03-09T20:48:12.266 INFO:tasks.workunit.client.0.vm07.stdout:8/824: chown d1/dc/d16/d31/l41 73613 1 2026-03-09T20:48:12.268 INFO:tasks.workunit.client.1.vm10.stdout:0/816: unlink d2/d9/db8/d10f/d11/dd1/c11c 0 2026-03-09T20:48:12.268 INFO:tasks.workunit.client.0.vm07.stdout:3/871: symlink d1/l119 0 2026-03-09T20:48:12.269 INFO:tasks.workunit.client.0.vm07.stdout:3/872: stat d1/d5/d9/d2f/d34/d46/d5d/l37 0 2026-03-09T20:48:12.271 INFO:tasks.workunit.client.0.vm07.stdout:1/925: rmdir d3/d97/da1/ddd 39 2026-03-09T20:48:12.272 INFO:tasks.workunit.client.0.vm07.stdout:1/926: write d3/d14/d54/d9b/fd9 [1525521,127446] 0 2026-03-09T20:48:12.275 INFO:tasks.workunit.client.1.vm10.stdout:0/817: creat d2/d4a/d58/df6/f11e x:0 0 0 2026-03-09T20:48:12.278 INFO:tasks.workunit.client.0.vm07.stdout:1/927: fsync d3/d14/d54/f13 0 2026-03-09T20:48:12.278 INFO:tasks.workunit.client.0.vm07.stdout:1/928: chown d3/d14/d54/d3e/f75 0 1 2026-03-09T20:48:12.280 INFO:tasks.workunit.client.0.vm07.stdout:8/825: unlink d1/d5d/d6f/d2f/d4d/lc5 0 2026-03-09T20:48:12.284 INFO:tasks.workunit.client.0.vm07.stdout:5/936: truncate d5/df/d13/d30/d56/f72 327364 0 2026-03-09T20:48:12.287 INFO:tasks.workunit.client.0.vm07.stdout:1/929: rename d3/d23/d67/c87 to d3/d97/da1/dd7/dfe/c128 0 2026-03-09T20:48:12.288 INFO:tasks.workunit.client.0.vm07.stdout:3/873: creat d1/d5/f11a x:0 0 0 2026-03-09T20:48:12.290 INFO:tasks.workunit.client.0.vm07.stdout:3/874: truncate d1/d5/d9/d2f/d66/fd3 2193926 0 
2026-03-09T20:48:12.291 INFO:tasks.workunit.client.0.vm07.stdout:1/930: creat d3/d97/da1/dc5/d60/d9f/f129 x:0 0 0 2026-03-09T20:48:12.294 INFO:tasks.workunit.client.0.vm07.stdout:3/875: fdatasync d1/d5/d9/d2f/d3d/d71/fc3 0 2026-03-09T20:48:12.295 INFO:tasks.workunit.client.0.vm07.stdout:8/826: sync 2026-03-09T20:48:12.296 INFO:tasks.workunit.client.0.vm07.stdout:8/827: write d1/dc/d16/d26/de2/f105 [972194,130549] 0 2026-03-09T20:48:12.298 INFO:tasks.workunit.client.0.vm07.stdout:8/828: chown d1/d5d/d6f/d80/cab 481046173 1 2026-03-09T20:48:12.299 INFO:tasks.workunit.client.0.vm07.stdout:5/937: getdents d5/df/d13/d3e/de1 0 2026-03-09T20:48:12.299 INFO:tasks.workunit.client.0.vm07.stdout:1/931: dread d3/d97/da1/dc5/f99 [4194304,4194304] 0 2026-03-09T20:48:12.300 INFO:tasks.workunit.client.0.vm07.stdout:8/829: read d1/f85 [2856246,127669] 0 2026-03-09T20:48:12.301 INFO:tasks.workunit.client.0.vm07.stdout:8/830: write d1/dc/d16/d26/de2/f105 [1566546,57402] 0 2026-03-09T20:48:12.303 INFO:tasks.workunit.client.0.vm07.stdout:5/938: sync 2026-03-09T20:48:12.304 INFO:tasks.workunit.client.0.vm07.stdout:5/939: read - d5/df/d13/d4f/fe9 zero size 2026-03-09T20:48:12.307 INFO:tasks.workunit.client.0.vm07.stdout:8/831: mkdir d1/dc/d16/dad/d87/d10c 0 2026-03-09T20:48:12.316 INFO:tasks.workunit.client.0.vm07.stdout:3/876: rename d1/d5/d9/d11/d6d/dd0/c9c to d1/d5/c11b 0 2026-03-09T20:48:12.317 INFO:tasks.workunit.client.0.vm07.stdout:1/932: link d3/d97/da1/dc5/d90/de8/c108 d3/d97/da1/dab/de2/c12a 0 2026-03-09T20:48:12.318 INFO:tasks.workunit.client.0.vm07.stdout:3/877: write d1/d5/d9/daf/d9f/f108 [583196,124703] 0 2026-03-09T20:48:12.320 INFO:tasks.workunit.client.0.vm07.stdout:1/933: creat d3/d23/d67/f12b x:0 0 0 2026-03-09T20:48:12.322 INFO:tasks.workunit.client.0.vm07.stdout:3/878: mknod d1/d5/d9/c11c 0 2026-03-09T20:48:12.325 INFO:tasks.workunit.client.0.vm07.stdout:1/934: dread d3/d14/d54/fcc [0,4194304] 0 2026-03-09T20:48:12.325 INFO:tasks.workunit.client.0.vm07.stdout:1/935: 
fsync d3/d23/d109/f11f 0 2026-03-09T20:48:12.328 INFO:tasks.workunit.client.0.vm07.stdout:3/879: creat d1/d5/d9/d2f/d34/da5/dda/f11d x:0 0 0 2026-03-09T20:48:12.330 INFO:tasks.workunit.client.0.vm07.stdout:1/936: rename d3/d23/d55 to d3/d97/da1/dc5/d90/de8/dba/d12c 0 2026-03-09T20:48:12.331 INFO:tasks.workunit.client.0.vm07.stdout:1/937: chown d3/d97/da1/dab/de2/ff8 37083 1 2026-03-09T20:48:12.335 INFO:tasks.workunit.client.0.vm07.stdout:6/881: dwrite d8/d16/d61/f7c [0,4194304] 0 2026-03-09T20:48:12.339 INFO:tasks.workunit.client.0.vm07.stdout:6/882: dread d8/db3/fd4 [0,4194304] 0 2026-03-09T20:48:12.345 INFO:tasks.workunit.client.0.vm07.stdout:6/883: sync 2026-03-09T20:48:12.349 INFO:tasks.workunit.client.1.vm10.stdout:1/840: write d2/da/f88 [3865236,124685] 0 2026-03-09T20:48:12.355 INFO:tasks.workunit.client.0.vm07.stdout:6/884: dread d8/d16/d22/d9b/de4/f91 [0,4194304] 0 2026-03-09T20:48:12.362 INFO:tasks.workunit.client.1.vm10.stdout:2/840: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f92 [906121,13983] 0 2026-03-09T20:48:12.362 INFO:tasks.workunit.client.1.vm10.stdout:1/841: mknod d2/da/d25/d46/d51/d5d/d6e/c10d 0 2026-03-09T20:48:12.367 INFO:tasks.workunit.client.0.vm07.stdout:6/885: truncate d8/d26/d7d/dc8/ff4 3955355 0 2026-03-09T20:48:12.367 INFO:tasks.workunit.client.0.vm07.stdout:6/886: readlink d8/l1b 0 2026-03-09T20:48:12.369 INFO:tasks.workunit.client.1.vm10.stdout:1/842: symlink d2/da/d25/d3e/d42/l10e 0 2026-03-09T20:48:12.370 INFO:tasks.workunit.client.1.vm10.stdout:1/843: chown d2/da/d25/d3e/d42/fe5 7407047 1 2026-03-09T20:48:12.375 INFO:tasks.workunit.client.1.vm10.stdout:2/841: getdents d5/d18/d27/d38/dcf 0 2026-03-09T20:48:12.376 INFO:tasks.workunit.client.1.vm10.stdout:4/788: write d1/d2/d3/d54/f7f [167324,57844] 0 2026-03-09T20:48:12.377 INFO:tasks.workunit.client.1.vm10.stdout:2/842: symlink d5/d18/l114 0 2026-03-09T20:48:12.378 INFO:tasks.workunit.client.1.vm10.stdout:9/903: truncate d2/d33/f77 3576702 0 2026-03-09T20:48:12.379 
INFO:tasks.workunit.client.1.vm10.stdout:2/843: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f4c [977910,110360] 0 2026-03-09T20:48:12.383 INFO:tasks.workunit.client.1.vm10.stdout:9/904: sync 2026-03-09T20:48:12.383 INFO:tasks.workunit.client.0.vm07.stdout:7/956: write d3/f8f [475707,120809] 0 2026-03-09T20:48:12.383 INFO:tasks.workunit.client.1.vm10.stdout:9/905: readlink d2/d3/d85/l97 0 2026-03-09T20:48:12.384 INFO:tasks.workunit.client.0.vm07.stdout:7/957: stat d3/da/d53/db7/dde/dc5/c124 0 2026-03-09T20:48:12.385 INFO:tasks.workunit.client.1.vm10.stdout:9/906: write d2/db8/ff8 [1912485,70708] 0 2026-03-09T20:48:12.385 INFO:tasks.workunit.client.1.vm10.stdout:9/907: chown d2/db8/l10f 23683618 1 2026-03-09T20:48:12.386 INFO:tasks.workunit.client.1.vm10.stdout:2/844: symlink d5/d18/d27/d38/d61/dc8/ddb/dea/l115 0 2026-03-09T20:48:12.388 INFO:tasks.workunit.client.0.vm07.stdout:7/958: symlink d3/da/db/d32/d126/l14c 0 2026-03-09T20:48:12.389 INFO:tasks.workunit.client.1.vm10.stdout:9/908: mkdir d2/da6/d12c 0 2026-03-09T20:48:12.390 INFO:tasks.workunit.client.1.vm10.stdout:5/786: dwrite d2/d80/fb2 [0,4194304] 0 2026-03-09T20:48:12.394 INFO:tasks.workunit.client.1.vm10.stdout:2/845: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/deb/f116 x:0 0 0 2026-03-09T20:48:12.397 INFO:tasks.workunit.client.1.vm10.stdout:7/860: dwrite db/d28/d2b/d36/d3b/fcf [0,4194304] 0 2026-03-09T20:48:12.406 INFO:tasks.workunit.client.0.vm07.stdout:2/879: write d2/d11/ddb/d6e/dbe/fe8 [3768865,85361] 0 2026-03-09T20:48:12.407 INFO:tasks.workunit.client.1.vm10.stdout:9/909: truncate d2/d3/de/d8f/fbf 2687034 0 2026-03-09T20:48:12.414 INFO:tasks.workunit.client.0.vm07.stdout:4/810: write d2/f19 [90767,17855] 0 2026-03-09T20:48:12.416 INFO:tasks.workunit.client.1.vm10.stdout:5/787: unlink d2/d39/dbf/d66/l118 0 2026-03-09T20:48:12.418 INFO:tasks.workunit.client.0.vm07.stdout:2/880: mknod d2/db/d49/d7d/d85/c11f 0 2026-03-09T20:48:12.421 INFO:tasks.workunit.client.1.vm10.stdout:7/861: unlink 
db/d28/d30/c5b 0 2026-03-09T20:48:12.422 INFO:tasks.workunit.client.1.vm10.stdout:7/862: dread - db/d28/d30/ffc zero size 2026-03-09T20:48:12.424 INFO:tasks.workunit.client.0.vm07.stdout:9/862: dwrite d4/d11/d2a/f39 [0,4194304] 0 2026-03-09T20:48:12.427 INFO:tasks.workunit.client.1.vm10.stdout:8/890: truncate d0/d22/d25/d6c/f68 673408 0 2026-03-09T20:48:12.427 INFO:tasks.workunit.client.1.vm10.stdout:3/821: write dc/d14/d26/d29/fd2 [194221,22257] 0 2026-03-09T20:48:12.427 INFO:tasks.workunit.client.1.vm10.stdout:5/788: dread d2/d1b/d54/d78/de6/ff7 [0,4194304] 0 2026-03-09T20:48:12.431 INFO:tasks.workunit.client.0.vm07.stdout:4/811: dwrite d2/f7 [0,4194304] 0 2026-03-09T20:48:12.433 INFO:tasks.workunit.client.1.vm10.stdout:6/850: write d3/d30/d7f/d4a/f9a [1357796,81845] 0 2026-03-09T20:48:12.434 INFO:tasks.workunit.client.1.vm10.stdout:6/851: write d3/da/d11/dfc/fe8 [2179775,13595] 0 2026-03-09T20:48:12.440 INFO:tasks.workunit.client.0.vm07.stdout:0/889: dwrite d1/d1f/d53/f79 [0,4194304] 0 2026-03-09T20:48:12.442 INFO:tasks.workunit.client.1.vm10.stdout:0/818: dwrite d2/d4a/d58/d82/d71/dca/d110/d30/f56 [0,4194304] 0 2026-03-09T20:48:12.442 INFO:tasks.workunit.client.0.vm07.stdout:0/890: chown d1/dc0/dcc 94805764 1 2026-03-09T20:48:12.444 INFO:tasks.workunit.client.1.vm10.stdout:9/910: fsync d2/d3/de/f42 0 2026-03-09T20:48:12.447 INFO:tasks.workunit.client.1.vm10.stdout:9/911: write d2/d3/d6d/db7/fbb [4296008,108645] 0 2026-03-09T20:48:12.447 INFO:tasks.workunit.client.1.vm10.stdout:9/912: chown d2/d28/d47/d50 19 1 2026-03-09T20:48:12.454 INFO:tasks.workunit.client.0.vm07.stdout:2/881: truncate d2/db/d49/d7d/fff 130251 0 2026-03-09T20:48:12.456 INFO:tasks.workunit.client.0.vm07.stdout:9/863: read d4/d11/f88 [3150386,10747] 0 2026-03-09T20:48:12.456 INFO:tasks.workunit.client.1.vm10.stdout:7/863: rmdir db/d28 39 2026-03-09T20:48:12.456 INFO:tasks.workunit.client.0.vm07.stdout:8/832: rmdir d1 39 2026-03-09T20:48:12.456 INFO:tasks.workunit.client.0.vm07.stdout:9/864: 
stat d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d15 0 2026-03-09T20:48:12.462 INFO:tasks.workunit.client.1.vm10.stdout:3/822: write dc/fb9 [3973511,126581] 0 2026-03-09T20:48:12.462 INFO:tasks.workunit.client.0.vm07.stdout:5/940: dwrite d5/df/d13/d3e/d5e/fc6 [0,4194304] 0 2026-03-09T20:48:12.464 INFO:tasks.workunit.client.1.vm10.stdout:8/891: unlink d0/d22/f29 0 2026-03-09T20:48:12.465 INFO:tasks.workunit.client.0.vm07.stdout:4/812: mknod d2/d55/d5d/d3f/d4a/d7d/ce2 0 2026-03-09T20:48:12.469 INFO:tasks.workunit.client.0.vm07.stdout:3/880: write d1/d5/d9/d2f/d34/fe9 [555762,80192] 0 2026-03-09T20:48:12.476 INFO:tasks.workunit.client.0.vm07.stdout:1/938: dwrite d3/d14/d54/d3e/fff [0,4194304] 0 2026-03-09T20:48:12.491 INFO:tasks.workunit.client.0.vm07.stdout:0/891: unlink d1/d2/d33/fb5 0 2026-03-09T20:48:12.494 INFO:tasks.workunit.client.1.vm10.stdout:1/844: write d2/da/dbc/fee [37001,87954] 0 2026-03-09T20:48:12.514 INFO:tasks.workunit.client.1.vm10.stdout:0/819: chown d2/d4a/d58/d82/d71/d5d/f76 514494019 1 2026-03-09T20:48:12.514 INFO:tasks.workunit.client.0.vm07.stdout:2/882: dread - d2/db/d1c/f104 zero size 2026-03-09T20:48:12.514 INFO:tasks.workunit.client.0.vm07.stdout:2/883: write d2/dc8/f101 [1200396,129722] 0 2026-03-09T20:48:12.514 INFO:tasks.workunit.client.0.vm07.stdout:6/887: dwrite d8/d16/d22/d24/da0/dab/dc1/fee [0,4194304] 0 2026-03-09T20:48:12.514 INFO:tasks.workunit.client.0.vm07.stdout:2/884: dwrite d2/db/d28/d57/f75 [0,4194304] 0 2026-03-09T20:48:12.514 INFO:tasks.workunit.client.0.vm07.stdout:5/941: stat d5/df/d13/d3e/l6d 0 2026-03-09T20:48:12.520 INFO:tasks.workunit.client.0.vm07.stdout:6/888: dread d8/d16/d22/d24/da0/dab/dc1/f110 [0,4194304] 0 2026-03-09T20:48:12.526 INFO:tasks.workunit.client.0.vm07.stdout:3/881: mkdir d1/d5/d9/d2f/d3d/dd6/d11e 0 2026-03-09T20:48:12.527 INFO:tasks.workunit.client.0.vm07.stdout:7/959: dwrite d3/d58/d82/f109 [0,4194304] 0 2026-03-09T20:48:12.530 INFO:tasks.workunit.client.1.vm10.stdout:3/823: creat dc/d14/d26/d8f/f11e 
x:0 0 0 2026-03-09T20:48:12.531 INFO:tasks.workunit.client.1.vm10.stdout:8/892: symlink d0/d22/d25/d2e/d41/l121 0 2026-03-09T20:48:12.534 INFO:tasks.workunit.client.1.vm10.stdout:5/789: symlink d2/d39/dbf/d69/d109/l127 0 2026-03-09T20:48:12.534 INFO:tasks.workunit.client.1.vm10.stdout:5/790: truncate d2/f71 1677870 0 2026-03-09T20:48:12.548 INFO:tasks.workunit.client.1.vm10.stdout:6/852: mkdir d3/da/d11/d31/d101 0 2026-03-09T20:48:12.549 INFO:tasks.workunit.client.1.vm10.stdout:4/789: dread d1/d2/d5c/d64/d6b/d81/da9/fb8 [0,4194304] 0 2026-03-09T20:48:12.549 INFO:tasks.workunit.client.1.vm10.stdout:6/853: chown d3/da/f1b 966183707 1 2026-03-09T20:48:12.568 INFO:tasks.workunit.client.0.vm07.stdout:5/942: creat d5/df/d13/d4f/d12c/f140 x:0 0 0 2026-03-09T20:48:12.576 INFO:tasks.workunit.client.0.vm07.stdout:4/813: creat d2/d55/dab/fe3 x:0 0 0 2026-03-09T20:48:12.576 INFO:tasks.workunit.client.0.vm07.stdout:4/814: write d2/d55/d5d/d86/db9/fdc [428919,66340] 0 2026-03-09T20:48:12.576 INFO:tasks.workunit.client.0.vm07.stdout:4/815: stat d2/d55/d5d/d3f/d4a/dbc/fd6 0 2026-03-09T20:48:12.576 INFO:tasks.workunit.client.1.vm10.stdout:8/893: rename d0/d54/ded to d0/d22/d25/d6c/d122 0 2026-03-09T20:48:12.576 INFO:tasks.workunit.client.1.vm10.stdout:5/791: mkdir d2/d39/d4b/d7a/dd9/d128 0 2026-03-09T20:48:12.576 INFO:tasks.workunit.client.1.vm10.stdout:6/854: creat d3/d30/d6a/f102 x:0 0 0 2026-03-09T20:48:12.576 INFO:tasks.workunit.client.1.vm10.stdout:5/792: dread d2/d39/d4b/d7a/fed [0,4194304] 0 2026-03-09T20:48:12.579 INFO:tasks.workunit.client.0.vm07.stdout:3/882: mknod d1/d5/d9/d2f/d3d/d71/c11f 0 2026-03-09T20:48:12.581 INFO:tasks.workunit.client.1.vm10.stdout:1/845: getdents d2/da/d25/d46/d8c/d106 0 2026-03-09T20:48:12.585 INFO:tasks.workunit.client.0.vm07.stdout:7/960: rename d3/da/db/d32/d3e/d5c/d122 to d3/da/db/d32/d3e/dac/d43/d62/db1/d14d 0 2026-03-09T20:48:12.589 INFO:tasks.workunit.client.1.vm10.stdout:0/820: symlink d2/d9/db8/d10f/d11/l11f 0 2026-03-09T20:48:12.593 
INFO:tasks.workunit.client.0.vm07.stdout:0/892: write d1/d2/dc/fde [125886,24461] 0 2026-03-09T20:48:12.597 INFO:tasks.workunit.client.1.vm10.stdout:3/824: mkdir dc/d14/d26/dcb/d11b/d11f 0 2026-03-09T20:48:12.599 INFO:tasks.workunit.client.1.vm10.stdout:6/855: dread d3/d30/f75 [0,4194304] 0 2026-03-09T20:48:12.609 INFO:tasks.workunit.client.1.vm10.stdout:4/790: symlink d1/d2/d5c/d64/d6b/lfc 0 2026-03-09T20:48:12.609 INFO:tasks.workunit.client.1.vm10.stdout:1/846: mkdir d2/da/d25/d46/d80/da0/d92/db5/d10f 0 2026-03-09T20:48:12.609 INFO:tasks.workunit.client.0.vm07.stdout:5/943: creat d5/df/d13/d3e/d47/d13c/f141 x:0 0 0 2026-03-09T20:48:12.609 INFO:tasks.workunit.client.0.vm07.stdout:5/944: chown d5/l2d 7 1 2026-03-09T20:48:12.609 INFO:tasks.workunit.client.0.vm07.stdout:6/889: mknod d8/d16/d22/d24/da0/dab/dc1/dcc/c122 0 2026-03-09T20:48:12.609 INFO:tasks.workunit.client.0.vm07.stdout:9/865: getdents d4/d16/d29/d24/d37/d44/d62/d108/d121/d59/de4 0 2026-03-09T20:48:12.612 INFO:tasks.workunit.client.1.vm10.stdout:6/856: rmdir d3/d30/d6a/dd6 39 2026-03-09T20:48:12.613 INFO:tasks.workunit.client.0.vm07.stdout:3/883: symlink d1/d5/d9/l120 0 2026-03-09T20:48:12.615 INFO:tasks.workunit.client.0.vm07.stdout:0/893: symlink d1/d2/dc/d17/da6/l117 0 2026-03-09T20:48:12.618 INFO:tasks.workunit.client.1.vm10.stdout:4/791: creat d1/d2/d3/d70/d99/ffd x:0 0 0 2026-03-09T20:48:12.618 INFO:tasks.workunit.client.1.vm10.stdout:1/847: chown d2/da/d25/d46/d51/c91 324 1 2026-03-09T20:48:12.618 INFO:tasks.workunit.client.0.vm07.stdout:6/890: unlink d8/d26/c6b 0 2026-03-09T20:48:12.618 INFO:tasks.workunit.client.0.vm07.stdout:3/884: mknod d1/d5/d9/d11/d60/c121 0 2026-03-09T20:48:12.622 INFO:tasks.workunit.client.0.vm07.stdout:7/961: creat d3/da/d53/db7/dde/f14e x:0 0 0 2026-03-09T20:48:12.622 INFO:tasks.workunit.client.0.vm07.stdout:7/962: chown d3/da/f38 47 1 2026-03-09T20:48:12.623 INFO:tasks.workunit.client.1.vm10.stdout:3/825: truncate dc/db4/fca 1019030 0 2026-03-09T20:48:12.624 
INFO:tasks.workunit.client.1.vm10.stdout:3/826: chown dc/d14/d20/d2e/f38 1 1 2026-03-09T20:48:12.626 INFO:tasks.workunit.client.0.vm07.stdout:9/866: unlink d4/d16/d29/d24/d37/d44/d62/d108/d121/d59/ced 0 2026-03-09T20:48:12.627 INFO:tasks.workunit.client.0.vm07.stdout:5/945: link d5/d33/l89 d5/df/d13/d30/l142 0 2026-03-09T20:48:12.631 INFO:tasks.workunit.client.1.vm10.stdout:4/792: mknod d1/d2/d5c/d64/d6b/d81/dac/d1c/d2b/d4a/d9b/cfe 0 2026-03-09T20:48:12.631 INFO:tasks.workunit.client.0.vm07.stdout:5/946: dread - d5/df/d13/d3e/f139 zero size 2026-03-09T20:48:12.631 INFO:tasks.workunit.client.0.vm07.stdout:5/947: dread - d5/df/fd6 zero size 2026-03-09T20:48:12.631 INFO:tasks.workunit.client.0.vm07.stdout:5/948: readlink d5/df/d13/d30/d56/l116 0 2026-03-09T20:48:12.631 INFO:tasks.workunit.client.0.vm07.stdout:6/891: unlink d8/d16/d22/d9b/fdb 0 2026-03-09T20:48:12.632 INFO:tasks.workunit.client.0.vm07.stdout:4/816: sync 2026-03-09T20:48:12.635 INFO:tasks.workunit.client.0.vm07.stdout:3/885: sync 2026-03-09T20:48:12.636 INFO:tasks.workunit.client.0.vm07.stdout:0/894: dread d1/d1f/d53/d72/f9b [0,4194304] 0 2026-03-09T20:48:12.636 INFO:tasks.workunit.client.0.vm07.stdout:3/886: chown d1/d5/d9/d11/df7/f10d 28 1 2026-03-09T20:48:12.638 INFO:tasks.workunit.client.1.vm10.stdout:4/793: readlink d1/l15 0 2026-03-09T20:48:12.639 INFO:tasks.workunit.client.0.vm07.stdout:6/892: creat d8/d16/d22/d24/da0/dab/d40/d105/f123 x:0 0 0 2026-03-09T20:48:12.642 INFO:tasks.workunit.client.1.vm10.stdout:1/848: fsync d2/da/d25/d3e/d55/f9a 0 2026-03-09T20:48:12.643 INFO:tasks.workunit.client.0.vm07.stdout:4/817: mkdir d2/df/d59/d8a/de4 0 2026-03-09T20:48:12.651 INFO:tasks.workunit.client.1.vm10.stdout:1/849: symlink d2/da/d25/d3e/dca/da2/dd5/l110 0 2026-03-09T20:48:12.653 INFO:tasks.workunit.client.1.vm10.stdout:3/827: rmdir dc/d14/d26/d29/d40/da8/dde 0 2026-03-09T20:48:12.653 INFO:tasks.workunit.client.1.vm10.stdout:3/828: chown dc/d14/d20/d21/daf 35 1 2026-03-09T20:48:12.655 
INFO:tasks.workunit.client.0.vm07.stdout:0/895: rename d1/d82 to d1/d2/dc/d80/d118 0 2026-03-09T20:48:12.655 INFO:tasks.workunit.client.0.vm07.stdout:3/887: symlink d1/d5/d9/d2f/d34/da5/dda/l122 0 2026-03-09T20:48:12.657 INFO:tasks.workunit.client.1.vm10.stdout:1/850: fdatasync d2/da/d25/d3e/d42/f86 0 2026-03-09T20:48:12.657 INFO:tasks.workunit.client.0.vm07.stdout:6/893: mkdir d8/d16/d22/d24/da0/dab/dc1/d124 0 2026-03-09T20:48:12.659 INFO:tasks.workunit.client.1.vm10.stdout:3/829: unlink dc/d14/d26/d29/d2a/d76/c112 0 2026-03-09T20:48:12.659 INFO:tasks.workunit.client.1.vm10.stdout:3/830: chown dc/d14/df1 1298 1 2026-03-09T20:48:12.659 INFO:tasks.workunit.client.0.vm07.stdout:4/818: creat d2/d55/fe5 x:0 0 0 2026-03-09T20:48:12.666 INFO:tasks.workunit.client.1.vm10.stdout:2/846: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f84 [0,4194304] 0 2026-03-09T20:48:12.669 INFO:tasks.workunit.client.0.vm07.stdout:1/939: write d3/d23/d67/d8a/ff4 [2280894,115036] 0 2026-03-09T20:48:12.670 INFO:tasks.workunit.client.0.vm07.stdout:1/940: write d3/d14/d54/d3e/fff [3867803,121406] 0 2026-03-09T20:48:12.672 INFO:tasks.workunit.client.0.vm07.stdout:8/833: write d1/dc/d16/f4a [3958715,93113] 0 2026-03-09T20:48:12.674 INFO:tasks.workunit.client.1.vm10.stdout:9/913: write d2/d28/d47/d67/f101 [333672,63757] 0 2026-03-09T20:48:12.676 INFO:tasks.workunit.client.0.vm07.stdout:2/885: dwrite d2/db/d1c/f104 [0,4194304] 0 2026-03-09T20:48:12.679 INFO:tasks.workunit.client.1.vm10.stdout:9/914: dwrite d2/d12/d5a/fea [4194304,4194304] 0 2026-03-09T20:48:12.680 INFO:tasks.workunit.client.0.vm07.stdout:7/963: rmdir d3/da/db/d32/d3e/d11c/d131 0 2026-03-09T20:48:12.683 INFO:tasks.workunit.client.0.vm07.stdout:0/896: creat d1/d1f/dc2/f119 x:0 0 0 2026-03-09T20:48:12.688 INFO:tasks.workunit.client.0.vm07.stdout:6/894: truncate d8/d26/d7d/dfd/f106 808032 0 2026-03-09T20:48:12.689 INFO:tasks.workunit.client.0.vm07.stdout:6/895: chown d8/f8d 2 1 2026-03-09T20:48:12.697 
INFO:tasks.workunit.client.1.vm10.stdout:1/851: dread d2/da/f50 [0,4194304] 0 2026-03-09T20:48:12.697 INFO:tasks.workunit.client.1.vm10.stdout:0/821: write d2/d9/db8/d10f/d11/d92/fb0 [4991221,71320] 0 2026-03-09T20:48:12.697 INFO:tasks.workunit.client.0.vm07.stdout:0/897: sync 2026-03-09T20:48:12.700 INFO:tasks.workunit.client.1.vm10.stdout:5/793: dwrite d2/d27/d37/fae [0,4194304] 0 2026-03-09T20:48:12.708 INFO:tasks.workunit.client.1.vm10.stdout:7/864: write db/d28/d2b/d36/d63/d6d/fe8 [398945,59909] 0 2026-03-09T20:48:12.715 INFO:tasks.workunit.client.0.vm07.stdout:6/896: creat d8/d50/f125 x:0 0 0 2026-03-09T20:48:12.715 INFO:tasks.workunit.client.0.vm07.stdout:0/898: fsync d1/d2/dc/d17/ff2 0 2026-03-09T20:48:12.718 INFO:tasks.workunit.client.1.vm10.stdout:3/831: creat dc/d14/d20/d21/daf/d113/f120 x:0 0 0 2026-03-09T20:48:12.720 INFO:tasks.workunit.client.1.vm10.stdout:8/894: write d0/d22/d25/d2e/fd2 [755513,64771] 0 2026-03-09T20:48:12.720 INFO:tasks.workunit.client.1.vm10.stdout:1/852: symlink d2/da/d25/d46/d51/d7e/l111 0 2026-03-09T20:48:12.720 INFO:tasks.workunit.client.1.vm10.stdout:6/857: write d3/d30/d7f/f18 [864932,128699] 0 2026-03-09T20:48:12.721 INFO:tasks.workunit.client.1.vm10.stdout:5/794: creat d2/d39/dbf/d66/f129 x:0 0 0 2026-03-09T20:48:12.721 INFO:tasks.workunit.client.1.vm10.stdout:0/822: rmdir d2/d4a/d58/d82/d93 39 2026-03-09T20:48:12.721 INFO:tasks.workunit.client.1.vm10.stdout:1/853: stat d2/da/d25/d3e/dca/da2/db9 0 2026-03-09T20:48:12.722 INFO:tasks.workunit.client.1.vm10.stdout:4/794: rename d1/d2/d5c/d64/d6b/d81/dac/d1c to d1/d2/d3/d70/d99/dc9/dff 0 2026-03-09T20:48:12.723 INFO:tasks.workunit.client.1.vm10.stdout:5/795: write d2/d80/fb2 [4148799,26810] 0 2026-03-09T20:48:12.723 INFO:tasks.workunit.client.1.vm10.stdout:1/854: readlink d2/da/d25/l31 0 2026-03-09T20:48:12.728 INFO:tasks.workunit.client.0.vm07.stdout:1/941: creat d3/d23/d52/da7/f12d x:0 0 0 2026-03-09T20:48:12.729 INFO:tasks.workunit.client.0.vm07.stdout:5/949: write 
d5/df/d13/d4f/d101/d10b/ffe [346117,119863] 0 2026-03-09T20:48:12.732 INFO:tasks.workunit.client.1.vm10.stdout:9/915: creat d2/d3/f12d x:0 0 0 2026-03-09T20:48:12.735 INFO:tasks.workunit.client.1.vm10.stdout:6/858: creat d3/d30/d7f/d51/f103 x:0 0 0 2026-03-09T20:48:12.735 INFO:tasks.workunit.client.0.vm07.stdout:8/834: mkdir d1/d5d/d6f/d2f/d4d/d55/d10d 0 2026-03-09T20:48:12.735 INFO:tasks.workunit.client.0.vm07.stdout:2/886: mkdir d2/db/d28/d120 0 2026-03-09T20:48:12.737 INFO:tasks.workunit.client.1.vm10.stdout:4/795: sync 2026-03-09T20:48:12.738 INFO:tasks.workunit.client.1.vm10.stdout:4/796: chown d1/d2/d3/d70/d99/dc9/dff/fdb 1 1 2026-03-09T20:48:12.739 INFO:tasks.workunit.client.1.vm10.stdout:2/847: rename d5/d18/d27/d89/db6/d41/f4b to d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/f117 0 2026-03-09T20:48:12.741 INFO:tasks.workunit.client.0.vm07.stdout:8/835: creat d1/d8f/f10e x:0 0 0 2026-03-09T20:48:12.741 INFO:tasks.workunit.client.1.vm10.stdout:0/823: creat d2/d9/d69/f120 x:0 0 0 2026-03-09T20:48:12.745 INFO:tasks.workunit.client.1.vm10.stdout:9/916: dread d2/d3/de/d35/f38 [0,4194304] 0 2026-03-09T20:48:12.745 INFO:tasks.workunit.client.0.vm07.stdout:2/887: mknod d2/d11/ddb/db0/c121 0 2026-03-09T20:48:12.745 INFO:tasks.workunit.client.0.vm07.stdout:6/897: mknod d8/d5d/d97/dc4/de3/c126 0 2026-03-09T20:48:12.747 INFO:tasks.workunit.client.1.vm10.stdout:2/848: sync 2026-03-09T20:48:12.751 INFO:tasks.workunit.client.0.vm07.stdout:9/867: truncate d4/d16/d29/d24/d37/d44/d62/d108/d121/d59/f66 5890231 0 2026-03-09T20:48:12.758 INFO:tasks.workunit.client.0.vm07.stdout:0/899: rename d1/d2/dc/d17/da6 to d1/d1f/d53/d72/d11a 0 2026-03-09T20:48:12.765 INFO:tasks.workunit.client.0.vm07.stdout:6/898: mknod d8/d16/d22/d24/da0/dab/c127 0 2026-03-09T20:48:12.766 INFO:tasks.workunit.client.0.vm07.stdout:6/899: write d8/d16/d22/d24/da0/dab/d40/d69/f78 [130508,25311] 0 2026-03-09T20:48:12.769 INFO:tasks.workunit.client.0.vm07.stdout:9/868: mknod 
d4/d16/d29/d24/d37/d44/d62/d108/d121/c135 0 2026-03-09T20:48:12.771 INFO:tasks.workunit.client.0.vm07.stdout:6/900: unlink d8/d16/d22/d24/da0/dab/fc9 0 2026-03-09T20:48:12.773 INFO:tasks.workunit.client.0.vm07.stdout:9/869: mkdir d4/d136 0 2026-03-09T20:48:12.775 INFO:tasks.workunit.client.0.vm07.stdout:0/900: dwrite d1/d2/dc/f10 [0,4194304] 0 2026-03-09T20:48:12.786 INFO:tasks.workunit.client.0.vm07.stdout:0/901: chown d1/d2/de7 0 1 2026-03-09T20:48:12.787 INFO:tasks.workunit.client.0.vm07.stdout:0/902: dread d1/d2/dc/fde [0,4194304] 0 2026-03-09T20:48:12.796 INFO:tasks.workunit.client.1.vm10.stdout:3/832: symlink dc/d14/d26/dcb/d11b/d11f/l121 0 2026-03-09T20:48:12.797 INFO:tasks.workunit.client.1.vm10.stdout:8/895: read d0/dd1/fdf [4986173,46040] 0 2026-03-09T20:48:12.800 INFO:tasks.workunit.client.1.vm10.stdout:1/855: fsync d2/da/f34 0 2026-03-09T20:48:12.801 INFO:tasks.workunit.client.1.vm10.stdout:2/849: symlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/l118 0 2026-03-09T20:48:12.802 INFO:tasks.workunit.client.1.vm10.stdout:4/797: mkdir d1/d2/d5c/d64/d6b/d81/dac/d100 0 2026-03-09T20:48:12.802 INFO:tasks.workunit.client.1.vm10.stdout:1/856: readlink d2/da/d25/d46/d80/da0/d92/db5/dc7/lda 0 2026-03-09T20:48:12.803 INFO:tasks.workunit.client.1.vm10.stdout:6/859: link d3/d30/d7f/d24/d39/d9e/ce9 d3/d30/d6a/df5/c104 0 2026-03-09T20:48:12.804 INFO:tasks.workunit.client.1.vm10.stdout:0/824: mkdir d2/d4a/d58/d82/d93/db1/d121 0 2026-03-09T20:48:12.806 INFO:tasks.workunit.client.1.vm10.stdout:4/798: mkdir d1/d2/d3/d70/d99/dc9/d101 0 2026-03-09T20:48:12.807 INFO:tasks.workunit.client.1.vm10.stdout:6/860: dread - d3/d30/d7f/d51/f94 zero size 2026-03-09T20:48:12.809 INFO:tasks.workunit.client.1.vm10.stdout:1/857: symlink d2/da/d25/d46/d80/da0/d92/db5/dc7/d105/l112 0 2026-03-09T20:48:12.813 INFO:tasks.workunit.client.1.vm10.stdout:2/850: getdents d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/d93/da5 0 2026-03-09T20:48:12.816 
INFO:tasks.workunit.client.1.vm10.stdout:0/825: dwrite d2/d9/db8/d10f/d48/dac/de8/f102 [0,4194304] 0 2026-03-09T20:48:12.816 INFO:tasks.workunit.client.1.vm10.stdout:2/851: fdatasync d5/d18/d27/d89/db6/d41/d77/db3/db5/fc9 0 2026-03-09T20:48:12.825 INFO:tasks.workunit.client.1.vm10.stdout:2/852: creat d5/d18/d27/db4/f119 x:0 0 0 2026-03-09T20:48:12.826 INFO:tasks.workunit.client.1.vm10.stdout:0/826: fsync d2/d4a/d58/d82/d71/d5d/f5f 0 2026-03-09T20:48:12.826 INFO:tasks.workunit.client.1.vm10.stdout:4/799: dread d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/fc3 [0,4194304] 0 2026-03-09T20:48:12.827 INFO:tasks.workunit.client.1.vm10.stdout:1/858: creat d2/da/d25/d3e/d55/dc9/f113 x:0 0 0 2026-03-09T20:48:12.836 INFO:tasks.workunit.client.1.vm10.stdout:4/800: fsync d1/d2/d3/d70/d99/dc9/dff/d2b/f36 0 2026-03-09T20:48:12.841 INFO:tasks.workunit.client.1.vm10.stdout:0/827: rmdir d2/d4a/d58/d82/d71/dca/d110/d30 39 2026-03-09T20:48:12.842 INFO:tasks.workunit.client.1.vm10.stdout:0/828: chown d2/d9/d69/d80 1213 1 2026-03-09T20:48:12.843 INFO:tasks.workunit.client.1.vm10.stdout:2/853: symlink d5/d18/d27/d38/l11a 0 2026-03-09T20:48:12.844 INFO:tasks.workunit.client.1.vm10.stdout:0/829: chown d2/d9/f12 611857 1 2026-03-09T20:48:12.844 INFO:tasks.workunit.client.1.vm10.stdout:4/801: fsync d1/d2/d3/d70/d99/dc9/dff/d2b/fcc 0 2026-03-09T20:48:12.847 INFO:tasks.workunit.client.1.vm10.stdout:2/854: sync 2026-03-09T20:48:12.847 INFO:tasks.workunit.client.1.vm10.stdout:0/830: rename d2/d9/db8/d10f/cd9 to d2/d4a/d58/c122 0 2026-03-09T20:48:12.850 INFO:tasks.workunit.client.1.vm10.stdout:4/802: rename d1/d2/d5c/d64/d6b/d81/dac/d39/c63 to d1/dd8/c102 0 2026-03-09T20:48:12.851 INFO:tasks.workunit.client.1.vm10.stdout:2/855: truncate d5/d18/d27/d38/d61/f81 755942 0 2026-03-09T20:48:12.852 INFO:tasks.workunit.client.0.vm07.stdout:9/870: dread d4/d16/d29/f6e [0,4194304] 0 2026-03-09T20:48:12.853 INFO:tasks.workunit.client.1.vm10.stdout:2/856: chown d5/d18/d27/d38/d61 0 1 2026-03-09T20:48:12.855 
INFO:tasks.workunit.client.0.vm07.stdout:9/871: symlink d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/l137 0 2026-03-09T20:48:12.855 INFO:tasks.workunit.client.1.vm10.stdout:4/803: symlink d1/d2/d5c/d64/d6b/d81/da9/l103 0 2026-03-09T20:48:12.855 INFO:tasks.workunit.client.0.vm07.stdout:9/872: chown d4/d16/d29/c133 1013 1 2026-03-09T20:48:12.857 INFO:tasks.workunit.client.0.vm07.stdout:9/873: mknod d4/d16/d29/d24/d37/d44/d62/d108/d121/d59/c138 0 2026-03-09T20:48:12.859 INFO:tasks.workunit.client.0.vm07.stdout:9/874: read d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/fd9 [90361,47095] 0 2026-03-09T20:48:12.861 INFO:tasks.workunit.client.1.vm10.stdout:2/857: rmdir d5/d18/d27/d89/db6/d41/dfb 0 2026-03-09T20:48:12.863 INFO:tasks.workunit.client.1.vm10.stdout:0/831: dwrite d2/d9/f61 [0,4194304] 0 2026-03-09T20:48:12.866 INFO:tasks.workunit.client.1.vm10.stdout:2/858: sync 2026-03-09T20:48:12.867 INFO:tasks.workunit.client.1.vm10.stdout:4/804: dread d1/d2/d5c/d64/d6b/d81/fca [0,4194304] 0 2026-03-09T20:48:12.873 INFO:tasks.workunit.client.1.vm10.stdout:4/805: read d1/d2/d3/f18 [1152913,61547] 0 2026-03-09T20:48:12.875 INFO:tasks.workunit.client.1.vm10.stdout:4/806: fdatasync d1/fe 0 2026-03-09T20:48:12.875 INFO:tasks.workunit.client.1.vm10.stdout:2/859: dwrite d5/d5b/f6c [0,4194304] 0 2026-03-09T20:48:12.880 INFO:tasks.workunit.client.1.vm10.stdout:2/860: fdatasync f1 0 2026-03-09T20:48:12.885 INFO:tasks.workunit.client.1.vm10.stdout:2/861: read d5/d18/f63 [10734,129425] 0 2026-03-09T20:48:12.891 INFO:tasks.workunit.client.1.vm10.stdout:2/862: unlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/f53 0 2026-03-09T20:48:12.956 INFO:tasks.workunit.client.0.vm07.stdout:3/888: dwrite d1/d5/d9/d11/d6d/dd0/f1a [4194304,4194304] 0 2026-03-09T20:48:12.959 INFO:tasks.workunit.client.0.vm07.stdout:3/889: chown d1/d5/d9/d11/d6d/dd0/d95/ddb/lf5 1830 1 2026-03-09T20:48:12.961 INFO:tasks.workunit.client.0.vm07.stdout:4/819: dwrite d2/d55/d5d/d3f/d4a/d4b/d52/f5a [0,4194304] 0 
2026-03-09T20:48:12.965 INFO:tasks.workunit.client.0.vm07.stdout:1/942: write d3/d14/f30 [800694,93406] 0 2026-03-09T20:48:12.965 INFO:tasks.workunit.client.0.vm07.stdout:5/950: write d5/d69/fd4 [77031,107096] 0 2026-03-09T20:48:12.965 INFO:tasks.workunit.client.0.vm07.stdout:7/964: write d3/da/db/f1e [2557383,28462] 0 2026-03-09T20:48:12.966 INFO:tasks.workunit.client.1.vm10.stdout:5/796: write d2/d39/dbf/d66/fc7 [1218175,113348] 0 2026-03-09T20:48:12.967 INFO:tasks.workunit.client.1.vm10.stdout:5/797: chown d2/d27/d37/d46/d99/l11a 554 1 2026-03-09T20:48:12.968 INFO:tasks.workunit.client.1.vm10.stdout:7/865: dwrite db/d28/d2b/d36/d3b/dd5/ff2 [0,4194304] 0 2026-03-09T20:48:12.976 INFO:tasks.workunit.client.0.vm07.stdout:8/836: write d1/d5d/d6f/d2f/f51 [1749794,40377] 0 2026-03-09T20:48:12.978 INFO:tasks.workunit.client.0.vm07.stdout:4/820: rmdir d2/d55/d5d/d3f 39 2026-03-09T20:48:12.978 INFO:tasks.workunit.client.0.vm07.stdout:4/821: fdatasync d2/d1f/fc3 0 2026-03-09T20:48:12.983 INFO:tasks.workunit.client.0.vm07.stdout:1/943: fsync d3/f7d 0 2026-03-09T20:48:12.984 INFO:tasks.workunit.client.0.vm07.stdout:2/888: write d2/db/d49/d7d/fe5 [1045677,83718] 0 2026-03-09T20:48:12.985 INFO:tasks.workunit.client.0.vm07.stdout:5/951: symlink d5/d19/d73/dbc/d10f/l143 0 2026-03-09T20:48:12.989 INFO:tasks.workunit.client.1.vm10.stdout:7/866: mkdir db/d28/d10e 0 2026-03-09T20:48:13.004 INFO:tasks.workunit.client.0.vm07.stdout:1/944: symlink d3/d97/da1/dc5/d60/l12e 0 2026-03-09T20:48:13.009 INFO:tasks.workunit.client.0.vm07.stdout:6/901: write d8/d16/d4b/d88/d99/fe9 [39124,130410] 0 2026-03-09T20:48:13.011 INFO:tasks.workunit.client.0.vm07.stdout:1/945: mknod d3/d97/da1/dc5/d90/c12f 0 2026-03-09T20:48:13.016 INFO:tasks.workunit.client.0.vm07.stdout:6/902: rmdir d8/d26 39 2026-03-09T20:48:13.017 INFO:tasks.workunit.client.0.vm07.stdout:1/946: creat d3/d9c/f130 x:0 0 0 2026-03-09T20:48:13.019 INFO:tasks.workunit.client.0.vm07.stdout:7/965: dread d3/da/db/d32/d3e/dac/d1f/faa 
[0,4194304] 0 2026-03-09T20:48:13.021 INFO:tasks.workunit.client.0.vm07.stdout:0/903: dwrite d1/d1f/d30/f10f [0,4194304] 0 2026-03-09T20:48:13.022 INFO:tasks.workunit.client.0.vm07.stdout:2/889: link d2/d11/ddb/d6e/dbe/l10d d2/d11/ddb/l122 0 2026-03-09T20:48:13.024 INFO:tasks.workunit.client.0.vm07.stdout:6/903: rmdir d8/d5d/d97 39 2026-03-09T20:48:13.027 INFO:tasks.workunit.client.0.vm07.stdout:1/947: symlink d3/d23/d52/da7/l131 0 2026-03-09T20:48:13.030 INFO:tasks.workunit.client.0.vm07.stdout:7/966: creat d3/da/db/d32/d3e/d5c/dc2/f14f x:0 0 0 2026-03-09T20:48:13.031 INFO:tasks.workunit.client.0.vm07.stdout:7/967: dread - d3/da/db/d32/d3e/dac/d43/d62/de0/d125/f13a zero size 2026-03-09T20:48:13.033 INFO:tasks.workunit.client.0.vm07.stdout:2/890: chown d2/c8 2022477 1 2026-03-09T20:48:13.037 INFO:tasks.workunit.client.0.vm07.stdout:5/952: getdents d5/d33/d39/d8d/dab 0 2026-03-09T20:48:13.043 INFO:tasks.workunit.client.0.vm07.stdout:6/904: truncate d8/d16/d4b/d88/dc3/dd5/fd3 1097407 0 2026-03-09T20:48:13.044 INFO:tasks.workunit.client.0.vm07.stdout:1/948: rmdir d3/d14/d54/d9b 39 2026-03-09T20:48:13.046 INFO:tasks.workunit.client.1.vm10.stdout:9/917: write d2/d3/d6d/ff3 [2717166,64624] 0 2026-03-09T20:48:13.046 INFO:tasks.workunit.client.1.vm10.stdout:3/833: write dc/d14/d20/d21/d3b/f6d [483241,25478] 0 2026-03-09T20:48:13.047 INFO:tasks.workunit.client.1.vm10.stdout:8/896: write d0/d22/d25/d6c/fb1 [5619488,19860] 0 2026-03-09T20:48:13.048 INFO:tasks.workunit.client.1.vm10.stdout:6/861: write d3/d30/d7f/d36/d5c/fa5 [437168,17572] 0 2026-03-09T20:48:13.049 INFO:tasks.workunit.client.0.vm07.stdout:2/891: creat d2/d11/ddb/d72/d82/f123 x:0 0 0 2026-03-09T20:48:13.049 INFO:tasks.workunit.client.1.vm10.stdout:6/862: dread - d3/da/d11/d26/d5b/fe3 zero size 2026-03-09T20:48:13.054 INFO:tasks.workunit.client.1.vm10.stdout:9/918: unlink d2/d33/dcf/c114 0 2026-03-09T20:48:13.058 INFO:tasks.workunit.client.0.vm07.stdout:1/949: fsync d3/d97/da1/dc5/d90/de8/f102 0 
2026-03-09T20:48:13.058 INFO:tasks.workunit.client.0.vm07.stdout:7/968: mknod d3/da/db/d13e/c150 0 2026-03-09T20:48:13.058 INFO:tasks.workunit.client.1.vm10.stdout:6/863: dread d3/da/d11/d31/d47/d87/fd9 [0,4194304] 0 2026-03-09T20:48:13.058 INFO:tasks.workunit.client.1.vm10.stdout:1/859: write d2/da/f35 [1490798,61071] 0 2026-03-09T20:48:13.058 INFO:tasks.workunit.client.1.vm10.stdout:1/860: stat d2/da/d25/d3e/d42 0 2026-03-09T20:48:13.062 INFO:tasks.workunit.client.1.vm10.stdout:9/919: dwrite d2/d3/db4/f11c [0,4194304] 0 2026-03-09T20:48:13.069 INFO:tasks.workunit.client.1.vm10.stdout:0/832: write d2/d4a/d58/d82/d93/fe3 [1737734,84363] 0 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.1.vm10.stdout:6/864: mkdir d3/da/d11/d89/d105 0 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.0.vm07.stdout:2/892: unlink d2/d11/ddb/d6e/dbe/lf7 0 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.0.vm07.stdout:2/893: chown d2/db/f41 5 1 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.0.vm07.stdout:2/894: write d2/db/d1c/d8d/f11a [115297,48594] 0 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.0.vm07.stdout:9/875: truncate d4/d16/d29/f4a 2292212 0 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.0.vm07.stdout:0/904: getdents d1/d1f 0 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.0.vm07.stdout:0/905: stat d1/d1f/dc2/f119 0 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.0.vm07.stdout:0/906: readlink d1/d2/dc/d80/ld0 0 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.0.vm07.stdout:2/895: creat d2/db/d1c/f124 x:0 0 0 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.1.vm10.stdout:9/920: dwrite d2/d3/d6d/f96 [0,4194304] 0 2026-03-09T20:48:13.075 INFO:tasks.workunit.client.1.vm10.stdout:4/807: dwrite d1/d47/db9/fd5 [0,4194304] 0 2026-03-09T20:48:13.080 INFO:tasks.workunit.client.1.vm10.stdout:6/865: rmdir d3/da/d11/d31/d47 39 2026-03-09T20:48:13.080 INFO:tasks.workunit.client.0.vm07.stdout:1/950: mkdir d3/d66/d132 0 2026-03-09T20:48:13.081 
INFO:tasks.workunit.client.0.vm07.stdout:1/951: readlink d3/d14/d54/l26 0 2026-03-09T20:48:13.082 INFO:tasks.workunit.client.0.vm07.stdout:1/952: dread d3/d14/d54/fcc [0,4194304] 0 2026-03-09T20:48:13.084 INFO:tasks.workunit.client.1.vm10.stdout:4/808: creat d1/dd8/f104 x:0 0 0 2026-03-09T20:48:13.084 INFO:tasks.workunit.client.0.vm07.stdout:0/907: mknod d1/dc0/dcc/d10e/c11b 0 2026-03-09T20:48:13.086 INFO:tasks.workunit.client.1.vm10.stdout:1/861: rename d2/da/d25/d46/d51/d5d/d6e/l8f to d2/da/d25/d3e/l114 0 2026-03-09T20:48:13.090 INFO:tasks.workunit.client.1.vm10.stdout:6/866: creat d3/d9c/f106 x:0 0 0 2026-03-09T20:48:13.091 INFO:tasks.workunit.client.0.vm07.stdout:6/905: getdents d8/d16/d22/d24/da0/dab/d40 0 2026-03-09T20:48:13.102 INFO:tasks.workunit.client.1.vm10.stdout:4/809: truncate d1/d2/f60 3563409 0 2026-03-09T20:48:13.104 INFO:tasks.workunit.client.0.vm07.stdout:1/953: creat d3/d97/da1/dd7/f133 x:0 0 0 2026-03-09T20:48:13.104 INFO:tasks.workunit.client.1.vm10.stdout:4/810: readlink d1/d2/d3/d70/d78/la3 0 2026-03-09T20:48:13.105 INFO:tasks.workunit.client.0.vm07.stdout:8/837: dread d1/f13 [0,4194304] 0 2026-03-09T20:48:13.110 INFO:tasks.workunit.client.1.vm10.stdout:9/921: getdents d2/d33 0 2026-03-09T20:48:13.115 INFO:tasks.workunit.client.1.vm10.stdout:4/811: creat d1/d47/df8/f105 x:0 0 0 2026-03-09T20:48:13.118 INFO:tasks.workunit.client.1.vm10.stdout:9/922: getdents d2/db8 0 2026-03-09T20:48:13.118 INFO:tasks.workunit.client.1.vm10.stdout:6/867: dread d3/d30/d7f/d36/f4f [4194304,4194304] 0 2026-03-09T20:48:13.122 INFO:tasks.workunit.client.1.vm10.stdout:9/923: rmdir d2/d28/d47/d6a 39 2026-03-09T20:48:13.122 INFO:tasks.workunit.client.1.vm10.stdout:6/868: truncate d3/da/f42 969929 0 2026-03-09T20:48:13.126 INFO:tasks.workunit.client.1.vm10.stdout:6/869: creat d3/d30/d7f/d24/d39/d9e/f107 x:0 0 0 2026-03-09T20:48:13.130 INFO:tasks.workunit.client.1.vm10.stdout:4/812: dread d1/d2/d5c/fd6 [0,4194304] 0 2026-03-09T20:48:13.131 
INFO:tasks.workunit.client.1.vm10.stdout:4/813: stat d1/d2/d3/d54/dd7 0 2026-03-09T20:48:13.131 INFO:tasks.workunit.client.1.vm10.stdout:9/924: link d2/d3/d6d/de8/c103 d2/d3/c12e 0 2026-03-09T20:48:13.132 INFO:tasks.workunit.client.1.vm10.stdout:9/925: stat d2/d33/d37/c5f 0 2026-03-09T20:48:13.140 INFO:tasks.workunit.client.1.vm10.stdout:9/926: read - d2/d3/d6d/db7/f116 zero size 2026-03-09T20:48:13.140 INFO:tasks.workunit.client.1.vm10.stdout:4/814: mkdir d1/d2/d3/d70/d99/dc9/d101/d106 0 2026-03-09T20:48:13.143 INFO:tasks.workunit.client.1.vm10.stdout:9/927: getdents d2/d28/d47/d50/dd1/d11e 0 2026-03-09T20:48:13.149 INFO:tasks.workunit.client.1.vm10.stdout:9/928: mkdir d2/d28/d47/d50/dd1/d11e/d12f 0 2026-03-09T20:48:13.149 INFO:tasks.workunit.client.1.vm10.stdout:4/815: read d1/d2/d3/d70/fe9 [1178433,28361] 0 2026-03-09T20:48:13.150 INFO:tasks.workunit.client.0.vm07.stdout:8/838: sync 2026-03-09T20:48:13.153 INFO:tasks.workunit.client.1.vm10.stdout:9/929: mknod d2/d28/da2/c130 0 2026-03-09T20:48:13.154 INFO:tasks.workunit.client.1.vm10.stdout:9/930: write d2/d33/f120 [902811,47146] 0 2026-03-09T20:48:13.171 INFO:tasks.workunit.client.0.vm07.stdout:8/839: rename d1/dc/d6a/df2 to d1/d5d/d6f/d80/d10f 0 2026-03-09T20:48:13.173 INFO:tasks.workunit.client.0.vm07.stdout:8/840: fdatasync d1/d5d/d6f/d2f/f9f 0 2026-03-09T20:48:13.174 INFO:tasks.workunit.client.0.vm07.stdout:8/841: stat d1/d5d/d6f/d2f/d4d/dd4/dd9 0 2026-03-09T20:48:13.175 INFO:tasks.workunit.client.0.vm07.stdout:8/842: chown d1/d5d/d6f/d2f/d4d/d63/f77 7513722 1 2026-03-09T20:48:13.177 INFO:tasks.workunit.client.0.vm07.stdout:8/843: unlink d1/d5d/d6f/cb2 0 2026-03-09T20:48:13.178 INFO:tasks.workunit.client.0.vm07.stdout:8/844: truncate d1/dc/d6a/f62 1524006 0 2026-03-09T20:48:13.212 INFO:tasks.workunit.client.1.vm10.stdout:2/863: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/ddc/ff6 [962546,86270] 0 2026-03-09T20:48:13.213 INFO:tasks.workunit.client.1.vm10.stdout:7/867: write db/d28/d2b/f51 
[1802014,14256] 0 2026-03-09T20:48:13.214 INFO:tasks.workunit.client.0.vm07.stdout:3/890: dwrite d1/d5/d9/d11/d6d/fee [8388608,4194304] 0 2026-03-09T20:48:13.215 INFO:tasks.workunit.client.1.vm10.stdout:5/798: dwrite d2/fd6 [0,4194304] 0 2026-03-09T20:48:13.217 INFO:tasks.workunit.client.0.vm07.stdout:4/822: dwrite d2/d55/d5d/d3f/fa3 [0,4194304] 0 2026-03-09T20:48:13.234 INFO:tasks.workunit.client.1.vm10.stdout:3/834: write dc/d14/d22/fbf [1322889,70272] 0 2026-03-09T20:48:13.234 INFO:tasks.workunit.client.1.vm10.stdout:8/897: write d0/d22/d25/d2e/d41/d85/db9/f11e [784177,97694] 0 2026-03-09T20:48:13.235 INFO:tasks.workunit.client.0.vm07.stdout:5/953: write d5/df/d13/f41 [4623786,63420] 0 2026-03-09T20:48:13.236 INFO:tasks.workunit.client.0.vm07.stdout:7/969: write d3/da/db/d32/d3e/dac/d1f/f9e [362341,104754] 0 2026-03-09T20:48:13.241 INFO:tasks.workunit.client.1.vm10.stdout:0/833: dwrite d2/d4a/d58/d82/d71/d5d/f8c [0,4194304] 0 2026-03-09T20:48:13.242 INFO:tasks.workunit.client.0.vm07.stdout:9/876: dwrite d4/d16/d29/d24/d37/d44/d62/d108/d121/d59/de4/f110 [0,4194304] 0 2026-03-09T20:48:13.247 INFO:tasks.workunit.client.1.vm10.stdout:0/834: dwrite d2/d4a/f115 [0,4194304] 0 2026-03-09T20:48:13.252 INFO:tasks.workunit.client.0.vm07.stdout:3/891: dread d1/d5/d9/d2f/d3d/d71/fc3 [0,4194304] 0 2026-03-09T20:48:13.253 INFO:tasks.workunit.client.0.vm07.stdout:4/823: rmdir d2/d55/d5d/d93 39 2026-03-09T20:48:13.258 INFO:tasks.workunit.client.1.vm10.stdout:5/799: dread d2/d39/dbf/d63/fcd [0,4194304] 0 2026-03-09T20:48:13.259 INFO:tasks.workunit.client.0.vm07.stdout:2/896: dwrite d2/db/d28/d90/fd5 [0,4194304] 0 2026-03-09T20:48:13.271 INFO:tasks.workunit.client.0.vm07.stdout:7/970: sync 2026-03-09T20:48:13.278 INFO:tasks.workunit.client.1.vm10.stdout:8/898: mkdir d0/d22/d2f/d9d/d123 0 2026-03-09T20:48:13.278 INFO:tasks.workunit.client.1.vm10.stdout:1/862: write d2/da/f50 [9100986,50168] 0 2026-03-09T20:48:13.279 INFO:tasks.workunit.client.1.vm10.stdout:3/835: rename 
dc/d14/d26/c94 to dc/d14/d26/dcb/d11b/c122 0 2026-03-09T20:48:13.279 INFO:tasks.workunit.client.0.vm07.stdout:0/908: write d1/d2/dc/d17/f3c [3470297,29747] 0 2026-03-09T20:48:13.279 INFO:tasks.workunit.client.1.vm10.stdout:8/899: readlink d0/d22/d25/d2e/d41/l45 0 2026-03-09T20:48:13.280 INFO:tasks.workunit.client.0.vm07.stdout:0/909: write d1/d1f/d53/f79 [1618299,97129] 0 2026-03-09T20:48:13.281 INFO:tasks.workunit.client.1.vm10.stdout:3/836: truncate dc/d14/d26/d8f/f11e 274192 0 2026-03-09T20:48:13.281 INFO:tasks.workunit.client.0.vm07.stdout:9/877: symlink d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/dbb/db6/l139 0 2026-03-09T20:48:13.282 INFO:tasks.workunit.client.0.vm07.stdout:1/954: write d3/d97/da1/dc5/d90/dd3/ff9 [1223705,80523] 0 2026-03-09T20:48:13.289 INFO:tasks.workunit.client.1.vm10.stdout:2/864: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/fb2 [0,4194304] 0 2026-03-09T20:48:13.290 INFO:tasks.workunit.client.1.vm10.stdout:2/865: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/fd2 [3290881,120954] 0 2026-03-09T20:48:13.292 INFO:tasks.workunit.client.1.vm10.stdout:2/866: dread d5/d18/d27/da6/f109 [0,4194304] 0 2026-03-09T20:48:13.301 INFO:tasks.workunit.client.0.vm07.stdout:6/906: write d8/d26/d7d/dc8/ff4 [1536461,41133] 0 2026-03-09T20:48:13.308 INFO:tasks.workunit.client.0.vm07.stdout:3/892: chown d1/c6c 0 1 2026-03-09T20:48:13.310 INFO:tasks.workunit.client.1.vm10.stdout:6/870: dwrite d3/d30/f91 [0,4194304] 0 2026-03-09T20:48:13.310 INFO:tasks.workunit.client.1.vm10.stdout:0/835: symlink d2/d4a/d79/l123 0 2026-03-09T20:48:13.310 INFO:tasks.workunit.client.0.vm07.stdout:4/824: rmdir d2/d55/d5d/d3f/db6 39 2026-03-09T20:48:13.314 INFO:tasks.workunit.client.0.vm07.stdout:2/897: fdatasync d2/db/d1c/d4a/d88/f7f 0 2026-03-09T20:48:13.318 INFO:tasks.workunit.client.1.vm10.stdout:1/863: creat d2/da/d25/d46/d51/d5d/da6/f115 x:0 0 0 2026-03-09T20:48:13.318 INFO:tasks.workunit.client.0.vm07.stdout:7/971: rename d3/da/db/d32/d3e/dac/d43/d62/db1/d14d to 
d3/da/db/d32/d3e/dac/d1f/d50/d151 0 2026-03-09T20:48:13.318 INFO:tasks.workunit.client.1.vm10.stdout:4/816: write d1/d2/d5c/d64/f83 [427875,29008] 0 2026-03-09T20:48:13.319 INFO:tasks.workunit.client.1.vm10.stdout:9/931: write d2/d12/f69 [1625657,77097] 0 2026-03-09T20:48:13.328 INFO:tasks.workunit.client.0.vm07.stdout:9/878: symlink d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/da5/db8/dc1/l13a 0 2026-03-09T20:48:13.328 INFO:tasks.workunit.client.0.vm07.stdout:9/879: dread - d4/d16/ff9 zero size 2026-03-09T20:48:13.328 INFO:tasks.workunit.client.1.vm10.stdout:2/867: sync 2026-03-09T20:48:13.332 INFO:tasks.workunit.client.1.vm10.stdout:4/817: dread d1/d2/d5c/d64/d6b/d81/dac/f29 [0,4194304] 0 2026-03-09T20:48:13.332 INFO:tasks.workunit.client.1.vm10.stdout:6/871: fsync d3/da/fd 0 2026-03-09T20:48:13.333 INFO:tasks.workunit.client.0.vm07.stdout:4/825: write d2/d1f/f2c [117406,119553] 0 2026-03-09T20:48:13.333 INFO:tasks.workunit.client.0.vm07.stdout:4/826: chown d2/c48 2 1 2026-03-09T20:48:13.335 INFO:tasks.workunit.client.1.vm10.stdout:1/864: read d2/fd2 [277258,80761] 0 2026-03-09T20:48:13.345 INFO:tasks.workunit.client.1.vm10.stdout:8/900: rename d0/d92/de8/d64/db5/l100 to d0/d22/d25/d40/d86/l124 0 2026-03-09T20:48:13.345 INFO:tasks.workunit.client.0.vm07.stdout:2/898: write d2/db/d49/d7d/d85/fea [786118,19597] 0 2026-03-09T20:48:13.345 INFO:tasks.workunit.client.0.vm07.stdout:2/899: chown d2/db/d28/d5c/f10b 862425 1 2026-03-09T20:48:13.345 INFO:tasks.workunit.client.0.vm07.stdout:7/972: truncate d3/da/db/d32/d3e/dac/d1f/d2b/f33 1865201 0 2026-03-09T20:48:13.345 INFO:tasks.workunit.client.0.vm07.stdout:7/973: readlink d3/da/db/d32/d3e/l51 0 2026-03-09T20:48:13.345 INFO:tasks.workunit.client.0.vm07.stdout:1/955: truncate d3/d97/da1/dc5/d60/fb5 684991 0 2026-03-09T20:48:13.355 INFO:tasks.workunit.client.0.vm07.stdout:9/880: fsync d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/dbb/db6/fc6 0 2026-03-09T20:48:13.358 INFO:tasks.workunit.client.1.vm10.stdout:0/836: mknod 
d2/d9/c124 0 2026-03-09T20:48:13.358 INFO:tasks.workunit.client.1.vm10.stdout:2/868: chown d5/d18/d27/d5f/l10c 692172263 1 2026-03-09T20:48:13.359 INFO:tasks.workunit.client.1.vm10.stdout:2/869: readlink d5/d18/la3 0 2026-03-09T20:48:13.359 INFO:tasks.workunit.client.0.vm07.stdout:4/827: rename d2/df/d59/c88 to d2/df/d59/d8a/d9d/ce6 0 2026-03-09T20:48:13.361 INFO:tasks.workunit.client.1.vm10.stdout:4/818: chown d1/dd8/c102 12130902 1 2026-03-09T20:48:13.362 INFO:tasks.workunit.client.1.vm10.stdout:1/865: creat d2/da/d25/d46/d51/d5d/da6/f116 x:0 0 0 2026-03-09T20:48:13.366 INFO:tasks.workunit.client.0.vm07.stdout:6/907: link d8/d16/d4b/f95 d8/d26/d7d/dc8/f128 0 2026-03-09T20:48:13.366 INFO:tasks.workunit.client.0.vm07.stdout:2/900: symlink d2/d11/l125 0 2026-03-09T20:48:13.366 INFO:tasks.workunit.client.1.vm10.stdout:0/837: creat d2/d9/db8/d10f/d11/dd1/d34/f125 x:0 0 0 2026-03-09T20:48:13.366 INFO:tasks.workunit.client.1.vm10.stdout:2/870: rmdir d5/d18/d27/d5f 39 2026-03-09T20:48:13.366 INFO:tasks.workunit.client.0.vm07.stdout:0/910: getdents d1/d2/dc/d17 0 2026-03-09T20:48:13.369 INFO:tasks.workunit.client.0.vm07.stdout:0/911: dwrite d1/d2/dc/f10 [0,4194304] 0 2026-03-09T20:48:13.370 INFO:tasks.workunit.client.0.vm07.stdout:9/881: getdents d4/d16/d29/d24/d37/d44/d62/d108/d121/dbf 0 2026-03-09T20:48:13.370 INFO:tasks.workunit.client.0.vm07.stdout:2/901: symlink d2/db/d1c/l126 0 2026-03-09T20:48:13.371 INFO:tasks.workunit.client.1.vm10.stdout:1/866: mknod d2/da/d25/d46/d80/da0/d92/c117 0 2026-03-09T20:48:13.372 INFO:tasks.workunit.client.0.vm07.stdout:1/956: sync 2026-03-09T20:48:13.373 INFO:tasks.workunit.client.1.vm10.stdout:1/867: write d2/da/d25/d46/dbe/dfc/f108 [1008736,91329] 0 2026-03-09T20:48:13.382 INFO:tasks.workunit.client.1.vm10.stdout:0/838: creat d2/d4a/d58/d82/d71/f126 x:0 0 0 2026-03-09T20:48:13.385 INFO:tasks.workunit.client.1.vm10.stdout:0/839: read d2/d4a/d58/d82/d71/d5d/fdd [1513179,60435] 0 2026-03-09T20:48:13.391 
INFO:tasks.workunit.client.0.vm07.stdout:9/882: rename d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/da5/ld5 to d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d89/da7/l13b 0 2026-03-09T20:48:13.392 INFO:tasks.workunit.client.1.vm10.stdout:4/819: link d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/de2/dfb/ced d1/d2/d3/d70/d99/dc9/d101/c107 0 2026-03-09T20:48:13.393 INFO:tasks.workunit.client.1.vm10.stdout:0/840: mknod d2/d4a/c127 0 2026-03-09T20:48:13.394 INFO:tasks.workunit.client.1.vm10.stdout:1/868: creat d2/da/d25/d46/d51/d5d/d6e/dd0/f118 x:0 0 0 2026-03-09T20:48:13.394 INFO:tasks.workunit.client.0.vm07.stdout:9/883: dread - d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/da5/f104 zero size 2026-03-09T20:48:13.396 INFO:tasks.workunit.client.1.vm10.stdout:1/869: mknod d2/da/d25/d46/d80/da0/d92/db5/dc7/d105/c119 0 2026-03-09T20:48:13.397 INFO:tasks.workunit.client.0.vm07.stdout:9/884: rmdir d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/dbb/db6 39 2026-03-09T20:48:13.400 INFO:tasks.workunit.client.0.vm07.stdout:9/885: rename d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d89/f9e to d4/d16/d29/d9c/f13c 0 2026-03-09T20:48:13.402 INFO:tasks.workunit.client.1.vm10.stdout:4/820: rename d1/d2/d5c/d64/f83 to d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f108 0 2026-03-09T20:48:13.405 INFO:tasks.workunit.client.1.vm10.stdout:4/821: mkdir d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/d109 0 2026-03-09T20:48:13.410 INFO:tasks.workunit.client.0.vm07.stdout:9/886: dread d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/dbb/fad [0,4194304] 0 2026-03-09T20:48:13.418 INFO:tasks.workunit.client.0.vm07.stdout:8/845: dwrite d1/dc/d16/dad/fc7 [0,4194304] 0 2026-03-09T20:48:13.420 INFO:tasks.workunit.client.0.vm07.stdout:8/846: stat d1/d5d/d6f/d2f/d4d/d55/f8e 0 2026-03-09T20:48:13.427 INFO:tasks.workunit.client.0.vm07.stdout:5/954: dwrite d5/d33/d39/d8d/f8e [0,4194304] 0 2026-03-09T20:48:13.428 INFO:tasks.workunit.client.0.vm07.stdout:5/955: chown d5/d33/d39/d8d/l114 89378 1 2026-03-09T20:48:13.437 INFO:tasks.workunit.client.1.vm10.stdout:7/868: 
dwrite f5 [0,4194304] 0 2026-03-09T20:48:13.441 INFO:tasks.workunit.client.1.vm10.stdout:5/800: dwrite d2/d39/d4b/f60 [0,4194304] 0 2026-03-09T20:48:13.449 INFO:tasks.workunit.client.1.vm10.stdout:3/837: write dc/d14/d26/d29/d2a/fa4 [511080,11517] 0 2026-03-09T20:48:13.450 INFO:tasks.workunit.client.1.vm10.stdout:3/838: fdatasync dc/d14/d26/d8f/ddd/d10d/f111 0 2026-03-09T20:48:13.450 INFO:tasks.workunit.client.0.vm07.stdout:3/893: write d1/d5/d9/d2f/d99/fa6 [1750919,11444] 0 2026-03-09T20:48:13.455 INFO:tasks.workunit.client.1.vm10.stdout:7/869: rename db/d28/d2b/d36/d63/d6d/dc4/dfe to db/d46/dab/d10f 0 2026-03-09T20:48:13.464 INFO:tasks.workunit.client.1.vm10.stdout:9/932: write d2/d3/d6d/db7/f116 [650640,82910] 0 2026-03-09T20:48:13.465 INFO:tasks.workunit.client.1.vm10.stdout:6/872: write d3/d30/d33/f3a [1817993,61020] 0 2026-03-09T20:48:13.465 INFO:tasks.workunit.client.1.vm10.stdout:3/839: symlink dc/d14/d26/d29/d93/l123 0 2026-03-09T20:48:13.468 INFO:tasks.workunit.client.1.vm10.stdout:7/870: mknod db/d28/d2b/c110 0 2026-03-09T20:48:13.470 INFO:tasks.workunit.client.0.vm07.stdout:7/974: dwrite d3/da/fbb [0,4194304] 0 2026-03-09T20:48:13.476 INFO:tasks.workunit.client.0.vm07.stdout:5/956: rename d5/d33/d39/d8d/dd7 to d5/d33/d39/d8d/dab/d11f/d13a/d144 0 2026-03-09T20:48:13.477 INFO:tasks.workunit.client.0.vm07.stdout:6/908: write d8/d26/f4d [2717949,96388] 0 2026-03-09T20:48:13.482 INFO:tasks.workunit.client.0.vm07.stdout:0/912: write d1/f11 [2640671,559] 0 2026-03-09T20:48:13.483 INFO:tasks.workunit.client.1.vm10.stdout:9/933: truncate d2/d3/de/d35/f107 644692 0 2026-03-09T20:48:13.483 INFO:tasks.workunit.client.0.vm07.stdout:2/902: write d2/db/d28/d90/fa3 [4761465,94280] 0 2026-03-09T20:48:13.488 INFO:tasks.workunit.client.1.vm10.stdout:8/901: dwrite d0/d22/d25/f34 [0,4194304] 0 2026-03-09T20:48:13.488 INFO:tasks.workunit.client.1.vm10.stdout:2/871: dwrite d5/d18/d27/d38/ff7 [0,4194304] 0 2026-03-09T20:48:13.488 INFO:tasks.workunit.client.0.vm07.stdout:1/957: 
dwrite d3/d66/f76 [0,4194304] 0 2026-03-09T20:48:13.494 INFO:tasks.workunit.client.0.vm07.stdout:8/847: mknod d1/dc/d16/d26/de2/c110 0 2026-03-09T20:48:13.504 INFO:tasks.workunit.client.1.vm10.stdout:9/934: sync 2026-03-09T20:48:13.506 INFO:tasks.workunit.client.1.vm10.stdout:6/873: dread - d3/d30/d7f/d24/fd3 zero size 2026-03-09T20:48:13.507 INFO:tasks.workunit.client.1.vm10.stdout:7/871: rename db/d28/d4c/d6e/l7a to db/d28/d2b/d36/d40/d8a/dd4/l111 0 2026-03-09T20:48:13.509 INFO:tasks.workunit.client.1.vm10.stdout:1/870: write d2/da/d25/d3e/dca/da2/dd5/ff6 [2345289,1360] 0 2026-03-09T20:48:13.510 INFO:tasks.workunit.client.1.vm10.stdout:8/902: read d0/d22/d25/f2b [1242175,108255] 0 2026-03-09T20:48:13.512 INFO:tasks.workunit.client.1.vm10.stdout:2/872: symlink d5/d18/d27/d89/l11b 0 2026-03-09T20:48:13.512 INFO:tasks.workunit.client.0.vm07.stdout:5/957: fsync d5/d50/f61 0 2026-03-09T20:48:13.515 INFO:tasks.workunit.client.1.vm10.stdout:9/935: symlink d2/d33/dcf/l131 0 2026-03-09T20:48:13.519 INFO:tasks.workunit.client.1.vm10.stdout:1/871: dwrite d2/da/f50 [4194304,4194304] 0 2026-03-09T20:48:13.521 INFO:tasks.workunit.client.0.vm07.stdout:2/903: creat d2/db/d49/d7d/d85/dde/f127 x:0 0 0 2026-03-09T20:48:13.521 INFO:tasks.workunit.client.1.vm10.stdout:1/872: stat d2/da/d25/d46/d80/da0/d92/fe7 0 2026-03-09T20:48:13.524 INFO:tasks.workunit.client.0.vm07.stdout:3/894: dread d1/d5/d9/d2f/d86/fbb [0,4194304] 0 2026-03-09T20:48:13.526 INFO:tasks.workunit.client.1.vm10.stdout:7/872: creat db/d28/d4c/d6e/f112 x:0 0 0 2026-03-09T20:48:13.530 INFO:tasks.workunit.client.0.vm07.stdout:1/958: creat d3/dc6/f134 x:0 0 0 2026-03-09T20:48:13.531 INFO:tasks.workunit.client.1.vm10.stdout:0/841: dwrite d2/d4a/d58/df6/f11e [0,4194304] 0 2026-03-09T20:48:13.538 INFO:tasks.workunit.client.1.vm10.stdout:1/873: mkdir d2/da/d25/d46/d51/d5d/d6e/d70/db3/dd4/d11a 0 2026-03-09T20:48:13.540 INFO:tasks.workunit.client.1.vm10.stdout:6/874: mknod d3/d30/d7f/d36/d5c/dad/de5/c108 0 
2026-03-09T20:48:13.543 INFO:tasks.workunit.client.0.vm07.stdout:0/913: symlink d1/d1f/d53/d72/d9a/l11c 0 2026-03-09T20:48:13.546 INFO:tasks.workunit.client.0.vm07.stdout:5/958: dread d5/df/d13/f3d [0,4194304] 0 2026-03-09T20:48:13.547 INFO:tasks.workunit.client.1.vm10.stdout:0/842: sync 2026-03-09T20:48:13.548 INFO:tasks.workunit.client.0.vm07.stdout:2/904: truncate d2/f33 317634 0 2026-03-09T20:48:13.549 INFO:tasks.workunit.client.1.vm10.stdout:0/843: fsync d2/d9/db8/db4/fce 0 2026-03-09T20:48:13.549 INFO:tasks.workunit.client.1.vm10.stdout:0/844: write d2/d9/db8/d10f/d11/dd1/f103 [1178467,15747] 0 2026-03-09T20:48:13.549 INFO:tasks.workunit.client.1.vm10.stdout:0/845: write d2/d9/db8/d10f/d11/dd1/f103 [1213222,20854] 0 2026-03-09T20:48:13.554 INFO:tasks.workunit.client.0.vm07.stdout:3/895: dwrite d1/d5/d9/d2f/d34/d46/d5d/fb8 [0,4194304] 0 2026-03-09T20:48:13.555 INFO:tasks.workunit.client.1.vm10.stdout:3/840: getdents dc/d14/d22 0 2026-03-09T20:48:13.556 INFO:tasks.workunit.client.1.vm10.stdout:3/841: chown dc/d14/d26/d29/d40/da8/c86 3 1 2026-03-09T20:48:13.558 INFO:tasks.workunit.client.0.vm07.stdout:1/959: creat d3/d97/da1/dc5/d60/f135 x:0 0 0 2026-03-09T20:48:13.560 INFO:tasks.workunit.client.0.vm07.stdout:5/959: dread d5/f25 [0,4194304] 0 2026-03-09T20:48:13.562 INFO:tasks.workunit.client.1.vm10.stdout:6/875: dread - d3/da/d11/d89/db9/dd1/dd2/da9/fea zero size 2026-03-09T20:48:13.563 INFO:tasks.workunit.client.1.vm10.stdout:6/876: truncate d3/d30/d7f/d4a/f9a 4973785 0 2026-03-09T20:48:13.565 INFO:tasks.workunit.client.0.vm07.stdout:5/960: creat d5/d33/d39/f145 x:0 0 0 2026-03-09T20:48:13.566 INFO:tasks.workunit.client.1.vm10.stdout:7/873: symlink db/d28/d10e/l113 0 2026-03-09T20:48:13.568 INFO:tasks.workunit.client.1.vm10.stdout:0/846: fsync d2/d9/db8/d10f/d11/dd1/db7/dcd/d63/fad 0 2026-03-09T20:48:13.569 INFO:tasks.workunit.client.1.vm10.stdout:2/873: dread d5/d18/f90 [0,4194304] 0 2026-03-09T20:48:13.569 INFO:tasks.workunit.client.1.vm10.stdout:0/847: stat 
d2/d9/db8/d10f/d48/dac/fc4 0 2026-03-09T20:48:13.570 INFO:tasks.workunit.client.1.vm10.stdout:9/936: link d2/d3/d6d/d88/fd4 d2/d3/d85/df7/f132 0 2026-03-09T20:48:13.570 INFO:tasks.workunit.client.0.vm07.stdout:9/887: write d4/f10 [2346792,42134] 0 2026-03-09T20:48:13.571 INFO:tasks.workunit.client.0.vm07.stdout:5/961: unlink d5/df/d13/d6c/db1/f11b 0 2026-03-09T20:48:13.572 INFO:tasks.workunit.client.1.vm10.stdout:9/937: write d2/d3/d6d/ff3 [1350237,42226] 0 2026-03-09T20:48:13.573 INFO:tasks.workunit.client.0.vm07.stdout:0/914: link d1/d1f/d53/d72/d11a/fae d1/d1f/f11d 0 2026-03-09T20:48:13.574 INFO:tasks.workunit.client.0.vm07.stdout:3/896: creat d1/d5/d9/d11/f123 x:0 0 0 2026-03-09T20:48:13.575 INFO:tasks.workunit.client.0.vm07.stdout:3/897: dread - d1/d5/d9/d2f/d34/da5/dda/f11d zero size 2026-03-09T20:48:13.576 INFO:tasks.workunit.client.1.vm10.stdout:4/822: dwrite d1/d2/d3/d70/fe9 [0,4194304] 0 2026-03-09T20:48:13.577 INFO:tasks.workunit.client.0.vm07.stdout:9/888: unlink d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/dee/cf7 0 2026-03-09T20:48:13.578 INFO:tasks.workunit.client.1.vm10.stdout:3/842: truncate f6 2548987 0 2026-03-09T20:48:13.585 INFO:tasks.workunit.client.1.vm10.stdout:3/843: sync 2026-03-09T20:48:13.585 INFO:tasks.workunit.client.1.vm10.stdout:1/874: creat d2/da/d25/d3e/d55/f11b x:0 0 0 2026-03-09T20:48:13.587 INFO:tasks.workunit.client.1.vm10.stdout:9/938: mkdir d2/d28/da2/ded/d133 0 2026-03-09T20:48:13.587 INFO:tasks.workunit.client.0.vm07.stdout:0/915: fsync d1/d2/dc/fd6 0 2026-03-09T20:48:13.590 INFO:tasks.workunit.client.0.vm07.stdout:9/889: dread - d4/d16/d29/d24/d37/d44/d62/d108/d121/db9/d123/f12a zero size 2026-03-09T20:48:13.592 INFO:tasks.workunit.client.1.vm10.stdout:2/874: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/fc9 [0,4194304] 0 2026-03-09T20:48:13.592 INFO:tasks.workunit.client.1.vm10.stdout:4/823: rmdir d1/d2/d3/d70/d99/dc9/dff 39 2026-03-09T20:48:13.593 INFO:tasks.workunit.client.1.vm10.stdout:1/875: rmdir d2 39 
2026-03-09T20:48:13.594 INFO:tasks.workunit.client.1.vm10.stdout:3/844: symlink dc/db4/l124 0 2026-03-09T20:48:13.598 INFO:tasks.workunit.client.1.vm10.stdout:6/877: link d3/da/d11/d89/db9/dd1/dd2/c100 d3/d30/d7f/d36/d6d/d8c/c109 0 2026-03-09T20:48:13.598 INFO:tasks.workunit.client.1.vm10.stdout:6/878: chown d3/d30/d7f/d24/d39/d9e/fe4 302087263 1 2026-03-09T20:48:13.598 INFO:tasks.workunit.client.1.vm10.stdout:7/874: creat db/f114 x:0 0 0 2026-03-09T20:48:13.598 INFO:tasks.workunit.client.1.vm10.stdout:2/875: mkdir d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/ddc/d11c 0 2026-03-09T20:48:13.599 INFO:tasks.workunit.client.1.vm10.stdout:9/939: sync 2026-03-09T20:48:13.599 INFO:tasks.workunit.client.1.vm10.stdout:4/824: sync 2026-03-09T20:48:13.603 INFO:tasks.workunit.client.1.vm10.stdout:3/845: truncate dc/d14/d26/f45 1752934 0 2026-03-09T20:48:13.604 INFO:tasks.workunit.client.1.vm10.stdout:6/879: chown d3/da/d11/d26/d5b/fbd 0 1 2026-03-09T20:48:13.606 INFO:tasks.workunit.client.1.vm10.stdout:2/876: unlink d5/d18/d27/d89/la9 0 2026-03-09T20:48:13.607 INFO:tasks.workunit.client.1.vm10.stdout:2/877: stat d5/d18/f90 0 2026-03-09T20:48:13.608 INFO:tasks.workunit.client.1.vm10.stdout:6/880: truncate d3/da/d11/d89/fb0 3924056 0 2026-03-09T20:48:13.610 INFO:tasks.workunit.client.1.vm10.stdout:2/878: mknod d5/c11d 0 2026-03-09T20:48:13.614 INFO:tasks.workunit.client.1.vm10.stdout:3/846: link dc/d14/d26/d8f/l95 dc/d14/d26/d29/d40/da8/dc3/l125 0 2026-03-09T20:48:13.614 INFO:tasks.workunit.client.1.vm10.stdout:9/940: creat d2/d3/f134 x:0 0 0 2026-03-09T20:48:13.615 INFO:tasks.workunit.client.1.vm10.stdout:4/825: symlink d1/d2/d3/d70/d99/dc9/dff/d2b/l10a 0 2026-03-09T20:48:13.616 INFO:tasks.workunit.client.1.vm10.stdout:3/847: dread - dc/d14/d26/d8f/f115 zero size 2026-03-09T20:48:13.617 INFO:tasks.workunit.client.0.vm07.stdout:9/890: dread d4/d16/d29/d24/d37/d44/d62/d108/d121/f1c [0,4194304] 0 2026-03-09T20:48:13.618 INFO:tasks.workunit.client.1.vm10.stdout:8/903: dread 
d0/d22/d25/d40/f5e [0,4194304] 0 2026-03-09T20:48:13.620 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:13 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:13.620 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:13 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:13.621 INFO:tasks.workunit.client.0.vm07.stdout:9/891: creat d4/d16/d29/d24/d37/d44/d62/d108/d121/d59/d129/f13d x:0 0 0 2026-03-09T20:48:13.621 INFO:tasks.workunit.client.1.vm10.stdout:9/941: creat d2/d33/dcf/f135 x:0 0 0 2026-03-09T20:48:13.621 INFO:tasks.workunit.client.1.vm10.stdout:2/879: getdents d5/d18/d27/d38/d61/dc8/ddb/dea/dfc 0 2026-03-09T20:48:13.624 INFO:tasks.workunit.client.1.vm10.stdout:4/826: dread - d1/d2/d5c/d64/d6b/d81/dac/fdd zero size 2026-03-09T20:48:13.626 INFO:tasks.workunit.client.1.vm10.stdout:9/942: fdatasync d2/d3/de/d8f/fb5 0 2026-03-09T20:48:13.630 INFO:tasks.workunit.client.1.vm10.stdout:9/943: read d2/d33/f3f [540423,40587] 0 2026-03-09T20:48:13.631 INFO:tasks.workunit.client.0.vm07.stdout:9/892: dread d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/f86 [0,4194304] 0 2026-03-09T20:48:13.633 INFO:tasks.workunit.client.1.vm10.stdout:9/944: sync 2026-03-09T20:48:13.633 INFO:tasks.workunit.client.1.vm10.stdout:4/827: mknod d1/d2/d5c/d64/d6b/c10b 0 2026-03-09T20:48:13.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:13 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:13.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:13 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:13.634 INFO:tasks.workunit.client.0.vm07.stdout:9/893: rename d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/c12c to d4/d16/d29/d24/d37/d8d/c13e 0 2026-03-09T20:48:13.637 INFO:tasks.workunit.client.0.vm07.stdout:9/894: creat 
d4/d16/d29/d24/d37/d44/d62/d8e/dd4/d11c/f13f x:0 0 0 2026-03-09T20:48:13.639 INFO:tasks.workunit.client.0.vm07.stdout:9/895: symlink d4/d16/d29/d24/d37/d8d/l140 0 2026-03-09T20:48:13.640 INFO:tasks.workunit.client.1.vm10.stdout:2/880: rename d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/lf5 to d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/deb/l11e 0 2026-03-09T20:48:13.640 INFO:tasks.workunit.client.1.vm10.stdout:4/828: creat d1/d2/d3/d54/daa/f10c x:0 0 0 2026-03-09T20:48:13.640 INFO:tasks.workunit.client.1.vm10.stdout:9/945: symlink d2/d28/d47/d50/dd1/l136 0 2026-03-09T20:48:13.642 INFO:tasks.workunit.client.1.vm10.stdout:4/829: chown d1/d2/d3/d70/d99/dc9/dff/d2b 1 1 2026-03-09T20:48:13.643 INFO:tasks.workunit.client.1.vm10.stdout:2/881: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/d93/da5/dda/f10d [681832,38082] 0 2026-03-09T20:48:13.644 INFO:tasks.workunit.client.1.vm10.stdout:8/904: dread d0/d22/d2c/f32 [0,4194304] 0 2026-03-09T20:48:13.648 INFO:tasks.workunit.client.1.vm10.stdout:2/882: read d5/d18/d27/d89/db6/d41/d77/db3/db5/f69 [1392002,61661] 0 2026-03-09T20:48:13.651 INFO:tasks.workunit.client.1.vm10.stdout:9/946: mkdir d2/d28/d47/d50/dd1/d11e/d137 0 2026-03-09T20:48:13.651 INFO:tasks.workunit.client.1.vm10.stdout:8/905: truncate d0/d22/fb6 1072670 0 2026-03-09T20:48:13.652 INFO:tasks.workunit.client.1.vm10.stdout:2/883: symlink d5/d18/d27/d89/db6/d41/d77/db3/db5/l11f 0 2026-03-09T20:48:13.653 INFO:tasks.workunit.client.1.vm10.stdout:9/947: chown d2/d12/f31 25161 1 2026-03-09T20:48:13.654 INFO:tasks.workunit.client.1.vm10.stdout:8/906: sync 2026-03-09T20:48:13.655 INFO:tasks.workunit.client.1.vm10.stdout:8/907: readlink d0/d22/d25/d6c/d9b/lb4 0 2026-03-09T20:48:13.658 INFO:tasks.workunit.client.1.vm10.stdout:8/908: dread d0/d22/d25/f34 [0,4194304] 0 2026-03-09T20:48:13.663 INFO:tasks.workunit.client.1.vm10.stdout:4/830: getdents d1/dd8 0 2026-03-09T20:48:13.663 INFO:tasks.workunit.client.1.vm10.stdout:8/909: creat d0/d22/d25/d2e/d41/de9/dfc/d63/f125 
x:0 0 0 2026-03-09T20:48:13.664 INFO:tasks.workunit.client.1.vm10.stdout:8/910: chown d0/d22/d2c/f32 52942055 1 2026-03-09T20:48:13.665 INFO:tasks.workunit.client.1.vm10.stdout:4/831: fdatasync d1/d2/d5c/d64/d6b/d81/da9/fa4 0 2026-03-09T20:48:13.666 INFO:tasks.workunit.client.1.vm10.stdout:9/948: rmdir d2/d28/d47/d50/dd1/d121 0 2026-03-09T20:48:13.667 INFO:tasks.workunit.client.1.vm10.stdout:8/911: truncate d0/f11 2802740 0 2026-03-09T20:48:13.668 INFO:tasks.workunit.client.1.vm10.stdout:4/832: symlink d1/d2/d5c/d64/d6b/d79/l10d 0 2026-03-09T20:48:13.675 INFO:tasks.workunit.client.1.vm10.stdout:8/912: rename d0/d22/d25/d2e/cfd to d0/d22/d2f/d9d/d123/c126 0 2026-03-09T20:48:13.682 INFO:tasks.workunit.client.1.vm10.stdout:4/833: dwrite d1/d67/f8f [0,4194304] 0 2026-03-09T20:48:13.682 INFO:tasks.workunit.client.1.vm10.stdout:8/913: unlink d0/d54/fa4 0 2026-03-09T20:48:13.682 INFO:tasks.workunit.client.1.vm10.stdout:4/834: mkdir d1/d2/d3/d70/d99/dc9/dff/d69/d10e 0 2026-03-09T20:48:13.682 INFO:tasks.workunit.client.1.vm10.stdout:8/914: mknod d0/d92/de8/d64/d7f/c127 0 2026-03-09T20:48:13.683 INFO:tasks.workunit.client.1.vm10.stdout:4/835: creat d1/d2/d3/d70/d99/dc9/dff/d69/dbd/f10f x:0 0 0 2026-03-09T20:48:13.693 INFO:tasks.workunit.client.1.vm10.stdout:4/836: getdents d1/d2 0 2026-03-09T20:48:13.703 INFO:tasks.workunit.client.1.vm10.stdout:4/837: creat d1/d2/d3/d54/daa/dfa/f110 x:0 0 0 2026-03-09T20:48:13.709 INFO:tasks.workunit.client.1.vm10.stdout:4/838: fdatasync d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f75 0 2026-03-09T20:48:13.717 INFO:tasks.workunit.client.1.vm10.stdout:5/801: write d2/d39/dbf/d69/d96/fc6 [1174147,11910] 0 2026-03-09T20:48:13.723 INFO:tasks.workunit.client.1.vm10.stdout:4/839: sync 2026-03-09T20:48:13.724 INFO:tasks.workunit.client.1.vm10.stdout:4/840: creat d1/d2/d3/d54/f111 x:0 0 0 2026-03-09T20:48:13.725 INFO:tasks.workunit.client.1.vm10.stdout:4/841: fdatasync d1/d2/d5c/fd6 0 2026-03-09T20:48:13.729 INFO:tasks.workunit.client.1.vm10.stdout:5/802: 
sync 2026-03-09T20:48:13.729 INFO:tasks.workunit.client.1.vm10.stdout:4/842: rmdir d1/d2/d3/d54/dd7 0 2026-03-09T20:48:13.736 INFO:tasks.workunit.client.1.vm10.stdout:5/803: creat d2/d27/d37/d46/d99/f12a x:0 0 0 2026-03-09T20:48:13.736 INFO:tasks.workunit.client.1.vm10.stdout:4/843: dwrite d1/dd8/f104 [0,4194304] 0 2026-03-09T20:48:13.749 INFO:tasks.workunit.client.1.vm10.stdout:4/844: rename d1/d47/df8 to d1/d2/d5c/d64/d112 0 2026-03-09T20:48:13.760 INFO:tasks.workunit.client.1.vm10.stdout:4/845: rename d1/d2/d3/d70/fe9 to d1/d2/d3/d70/d99/dc9/f113 0 2026-03-09T20:48:13.767 INFO:tasks.workunit.client.0.vm07.stdout:6/909: dwrite d8/d16/d4b/d88/f7f [0,4194304] 0 2026-03-09T20:48:13.773 INFO:tasks.workunit.client.0.vm07.stdout:8/848: dwrite d1/dc/d16/dad/d87/f97 [4194304,4194304] 0 2026-03-09T20:48:13.776 INFO:tasks.workunit.client.0.vm07.stdout:7/975: truncate d3/f138 2666025 0 2026-03-09T20:48:13.784 INFO:tasks.workunit.client.1.vm10.stdout:3/848: dread dc/d14/d20/d2e/f32 [0,4194304] 0 2026-03-09T20:48:13.794 INFO:tasks.workunit.client.0.vm07.stdout:1/960: rmdir d3/d97/da1/dc5/d60 39 2026-03-09T20:48:13.796 INFO:tasks.workunit.client.0.vm07.stdout:1/961: dread d3/d14/d54/fa2 [0,4194304] 0 2026-03-09T20:48:13.800 INFO:tasks.workunit.client.0.vm07.stdout:6/910: unlink d8/d16/da3/cf5 0 2026-03-09T20:48:13.801 INFO:tasks.workunit.client.0.vm07.stdout:6/911: dread - d8/d16/d22/d9b/de4/fd6 zero size 2026-03-09T20:48:13.803 INFO:tasks.workunit.client.1.vm10.stdout:3/849: creat dc/d14/d26/dcb/d11b/d11f/f126 x:0 0 0 2026-03-09T20:48:13.804 INFO:tasks.workunit.client.0.vm07.stdout:2/905: write d2/db/d1c/d4a/d88/fc9 [559343,50351] 0 2026-03-09T20:48:13.807 INFO:tasks.workunit.client.0.vm07.stdout:4/828: dread d2/d1f/fc3 [0,4194304] 0 2026-03-09T20:48:13.807 INFO:tasks.workunit.client.0.vm07.stdout:4/829: chown d2/d55/d5d/d3f/d4a/d4b/d52/f5a 6922574 1 2026-03-09T20:48:13.808 INFO:tasks.workunit.client.0.vm07.stdout:4/830: dread - d2/d55/d5d/d3f/d4a/d7d/fe0 zero size 
2026-03-09T20:48:13.809 INFO:tasks.workunit.client.0.vm07.stdout:4/831: readlink d2/d55/d5d/d3f/d4a/d4b/d52/d5c/l6c 0 2026-03-09T20:48:13.810 INFO:tasks.workunit.client.0.vm07.stdout:4/832: dread - d2/d55/d5d/d3f/d4a/d7d/fe0 zero size 2026-03-09T20:48:13.811 INFO:tasks.workunit.client.0.vm07.stdout:6/912: read d8/d16/d22/d24/da0/dab/f7a [263745,32854] 0 2026-03-09T20:48:13.813 INFO:tasks.workunit.client.0.vm07.stdout:8/849: truncate d1/db0/fe0 109083 0 2026-03-09T20:48:13.815 INFO:tasks.workunit.client.0.vm07.stdout:6/913: dwrite d8/d16/f23 [0,4194304] 0 2026-03-09T20:48:13.821 INFO:tasks.workunit.client.1.vm10.stdout:0/848: truncate d2/d9/db8/db4/fce 2389447 0 2026-03-09T20:48:13.822 INFO:tasks.workunit.client.1.vm10.stdout:6/881: dread d3/da/d11/d31/d47/d87/fd0 [0,4194304] 0 2026-03-09T20:48:13.825 INFO:tasks.workunit.client.1.vm10.stdout:6/882: dwrite d3/d30/d7f/d4a/f9a [0,4194304] 0 2026-03-09T20:48:13.826 INFO:tasks.workunit.client.0.vm07.stdout:1/962: truncate d3/d66/d86/f10e 588845 0 2026-03-09T20:48:13.827 INFO:tasks.workunit.client.1.vm10.stdout:6/883: chown d3/d30/d7f/fcd 1124 1 2026-03-09T20:48:13.827 INFO:tasks.workunit.client.1.vm10.stdout:6/884: truncate d3/da/d11/dfc/fe8 3196619 0 2026-03-09T20:48:13.833 INFO:tasks.workunit.client.1.vm10.stdout:6/885: dread - d3/da/d11/d26/dcf/fe0 zero size 2026-03-09T20:48:13.835 INFO:tasks.workunit.client.0.vm07.stdout:4/833: symlink d2/df/d17/le7 0 2026-03-09T20:48:13.838 INFO:tasks.workunit.client.0.vm07.stdout:8/850: symlink d1/d5d/d6f/d80/d10f/l111 0 2026-03-09T20:48:13.839 INFO:tasks.workunit.client.1.vm10.stdout:4/846: link d1/d2/d3/d70/d78/caf d1/d47/c114 0 2026-03-09T20:48:13.863 INFO:tasks.workunit.client.1.vm10.stdout:0/849: symlink d2/d9/db8/d10f/d11/dd1/db7/l128 0 2026-03-09T20:48:13.863 INFO:tasks.workunit.client.0.vm07.stdout:5/962: dwrite d5/d19/f20 [0,4194304] 0 2026-03-09T20:48:13.863 INFO:tasks.workunit.client.0.vm07.stdout:3/898: dwrite d1/d5/d9/fe [4194304,4194304] 0 2026-03-09T20:48:13.863 
INFO:tasks.workunit.client.0.vm07.stdout:3/899: readlink d1/d5/d9/d11/d6d/d80/db3/d109/ldc 0 2026-03-09T20:48:13.863 INFO:tasks.workunit.client.0.vm07.stdout:0/916: write d1/d2/dc/f40 [1532258,81262] 0 2026-03-09T20:48:13.866 INFO:tasks.workunit.client.0.vm07.stdout:1/963: chown d3/d14/c70 1884 1 2026-03-09T20:48:13.869 INFO:tasks.workunit.client.1.vm10.stdout:1/876: dwrite d2/da/ff5 [0,4194304] 0 2026-03-09T20:48:13.870 INFO:tasks.workunit.client.1.vm10.stdout:3/850: truncate dc/f11 3739502 0 2026-03-09T20:48:13.871 INFO:tasks.workunit.client.1.vm10.stdout:1/877: stat d2/d89/f101 0 2026-03-09T20:48:13.872 INFO:tasks.workunit.client.1.vm10.stdout:7/875: dwrite db/d28/d2b/d36/f1c [0,4194304] 0 2026-03-09T20:48:13.874 INFO:tasks.workunit.client.0.vm07.stdout:4/834: chown d2/df/l32 201131 1 2026-03-09T20:48:13.877 INFO:tasks.workunit.client.0.vm07.stdout:7/976: link d3/da/d53/db7/dde/dc5/fec d3/da4/f152 0 2026-03-09T20:48:13.878 INFO:tasks.workunit.client.0.vm07.stdout:5/963: chown d5/d19/d73/d97/fdb 161011264 1 2026-03-09T20:48:13.879 INFO:tasks.workunit.client.0.vm07.stdout:5/964: chown d5/d69 13421 1 2026-03-09T20:48:13.879 INFO:tasks.workunit.client.0.vm07.stdout:5/965: chown d5/d33/d39/f145 4 1 2026-03-09T20:48:13.880 INFO:tasks.workunit.client.0.vm07.stdout:5/966: chown d5/df/d13/d30/c83 1867 1 2026-03-09T20:48:13.883 INFO:tasks.workunit.client.0.vm07.stdout:3/900: mknod d1/d5/d9/d2f/d34/d9e/c124 0 2026-03-09T20:48:13.886 INFO:tasks.workunit.client.0.vm07.stdout:9/896: write d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/dbb/fad [713241,24216] 0 2026-03-09T20:48:13.888 INFO:tasks.workunit.client.1.vm10.stdout:6/886: mknod d3/c10a 0 2026-03-09T20:48:13.889 INFO:tasks.workunit.client.1.vm10.stdout:2/884: write d5/d18/d27/f8c [7005,70713] 0 2026-03-09T20:48:13.890 INFO:tasks.workunit.client.0.vm07.stdout:1/964: read d3/d23/d52/f73 [1634750,56813] 0 2026-03-09T20:48:13.896 INFO:tasks.workunit.client.0.vm07.stdout:8/851: mkdir d1/d5d/d112 0 2026-03-09T20:48:13.899 
INFO:tasks.workunit.client.0.vm07.stdout:5/967: fsync d5/df/d13/d3e/de1/fe7 0 2026-03-09T20:48:13.907 INFO:tasks.workunit.client.1.vm10.stdout:9/949: dread d2/d3/de/f84 [0,4194304] 0 2026-03-09T20:48:13.907 INFO:tasks.workunit.client.1.vm10.stdout:6/887: creat d3/d30/d7f/d4a/f10b x:0 0 0 2026-03-09T20:48:13.907 INFO:tasks.workunit.client.1.vm10.stdout:8/915: write d0/dd1/fdf [4707737,72398] 0 2026-03-09T20:48:13.907 INFO:tasks.workunit.client.0.vm07.stdout:3/901: chown d1/d5/d9/daf/c87 30534226 1 2026-03-09T20:48:13.907 INFO:tasks.workunit.client.0.vm07.stdout:9/897: rmdir d4/d16/d29 39 2026-03-09T20:48:13.907 INFO:tasks.workunit.client.0.vm07.stdout:2/906: getdents d2/db/d28/d57/df8 0 2026-03-09T20:48:13.907 INFO:tasks.workunit.client.0.vm07.stdout:2/907: write d2/dc8/f101 [1324741,34695] 0 2026-03-09T20:48:13.907 INFO:tasks.workunit.client.0.vm07.stdout:6/914: sync 2026-03-09T20:48:13.922 INFO:tasks.workunit.client.0.vm07.stdout:1/965: truncate d3/d23/d67/f103 266555 0 2026-03-09T20:48:13.922 INFO:tasks.workunit.client.0.vm07.stdout:1/966: dread - d3/d23/d67/f12b zero size 2026-03-09T20:48:13.924 INFO:tasks.workunit.client.0.vm07.stdout:8/852: creat d1/d5d/d6f/d2f/d4d/d55/f113 x:0 0 0 2026-03-09T20:48:13.925 INFO:tasks.workunit.client.1.vm10.stdout:1/878: creat d2/da/f11c x:0 0 0 2026-03-09T20:48:13.926 INFO:tasks.workunit.client.0.vm07.stdout:7/977: truncate d3/da/db/d32/d3e/dac/d1f/d2b/f33 310140 0 2026-03-09T20:48:13.927 INFO:tasks.workunit.client.1.vm10.stdout:6/888: truncate d3/da/d11/d31/fd5 3303697 0 2026-03-09T20:48:13.928 INFO:tasks.workunit.client.0.vm07.stdout:5/968: mknod d5/d33/db2/c146 0 2026-03-09T20:48:13.929 INFO:tasks.workunit.client.1.vm10.stdout:3/851: getdents dc/d14/d20/d21/daf 0 2026-03-09T20:48:13.934 INFO:tasks.workunit.client.0.vm07.stdout:3/902: dread d1/d5/d9/d2f/d34/f4b [0,4194304] 0 2026-03-09T20:48:13.936 INFO:tasks.workunit.client.1.vm10.stdout:2/885: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d94/f9b [0,4194304] 0 
2026-03-09T20:48:13.937 INFO:tasks.workunit.client.1.vm10.stdout:6/889: dread d3/d30/d7f/d36/d5c/fa5 [0,4194304] 0 2026-03-09T20:48:13.937 INFO:tasks.workunit.client.0.vm07.stdout:0/917: fsync d1/d1f/d53/fb8 0 2026-03-09T20:48:13.937 INFO:tasks.workunit.client.0.vm07.stdout:9/898: rmdir d4/d11/d23 39 2026-03-09T20:48:13.944 INFO:tasks.workunit.client.0.vm07.stdout:6/915: truncate d8/d16/d22/d24/da0/dab/d40/f107 5228418 0 2026-03-09T20:48:13.947 INFO:tasks.workunit.client.1.vm10.stdout:2/886: unlink d5/d18/l54 0 2026-03-09T20:48:13.947 INFO:tasks.workunit.client.0.vm07.stdout:8/853: creat d1/dc/d16/d26/de2/dc1/f114 x:0 0 0 2026-03-09T20:48:13.948 INFO:tasks.workunit.client.1.vm10.stdout:6/890: rename d3/da/d11/d89/db9/dd1/dd2/d60/f77 to d3/da/f10c 0 2026-03-09T20:48:13.949 INFO:tasks.workunit.client.1.vm10.stdout:6/891: chown d3/da/d11/dfc/fe8 8 1 2026-03-09T20:48:13.950 INFO:tasks.workunit.client.1.vm10.stdout:6/892: truncate d3/d30/d7f/d36/d5c/f5f 3799784 0 2026-03-09T20:48:13.951 INFO:tasks.workunit.client.1.vm10.stdout:8/916: link d0/d92/de8/d64/d7f/fc2 d0/d22/d25/d8f/f128 0 2026-03-09T20:48:13.952 INFO:tasks.workunit.client.0.vm07.stdout:0/918: creat d1/dc0/dcc/dd9/f11e x:0 0 0 2026-03-09T20:48:13.965 INFO:tasks.workunit.client.1.vm10.stdout:6/893: fdatasync d3/da/d11/d89/db9/dd1/dd2/da9/fea 0 2026-03-09T20:48:13.967 INFO:tasks.workunit.client.1.vm10.stdout:8/917: fdatasync d0/dd1/fdf 0 2026-03-09T20:48:13.968 INFO:tasks.workunit.client.0.vm07.stdout:3/903: dread d1/d5/d9/d2f/d3d/d71/dcc/f104 [0,4194304] 0 2026-03-09T20:48:13.968 INFO:tasks.workunit.client.0.vm07.stdout:8/854: stat d1/d5d/d6f/d2f/d4d/f73 0 2026-03-09T20:48:13.969 INFO:tasks.workunit.client.0.vm07.stdout:3/904: chown d1/d5/d9/d11/d6d/dd0/d43 428 1 2026-03-09T20:48:13.972 INFO:tasks.workunit.client.0.vm07.stdout:9/899: dread d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/fb7 [0,4194304] 0 2026-03-09T20:48:13.975 INFO:tasks.workunit.client.1.vm10.stdout:3/852: getdents dc/d14/d26/d29/d2a 0 
2026-03-09T20:48:13.983 INFO:tasks.workunit.client.0.vm07.stdout:8/855: truncate d1/dc/d16/dad/d87/d93/fef 744827 0 2026-03-09T20:48:13.983 INFO:tasks.workunit.client.0.vm07.stdout:7/978: link d3/da/d53/lc6 d3/d58/d77/de3/l153 0 2026-03-09T20:48:13.983 INFO:tasks.workunit.client.1.vm10.stdout:3/853: dwrite dc/d14/d20/d2e/f118 [0,4194304] 0 2026-03-09T20:48:13.983 INFO:tasks.workunit.client.1.vm10.stdout:6/894: mkdir d3/da/d11/d89/db9/dd1/dd2/d10d 0 2026-03-09T20:48:13.983 INFO:tasks.workunit.client.1.vm10.stdout:2/887: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f120 x:0 0 0 2026-03-09T20:48:13.983 INFO:tasks.workunit.client.1.vm10.stdout:6/895: write d3/da/d11/d89/db9/dd1/ff9 [355933,70473] 0 2026-03-09T20:48:13.983 INFO:tasks.workunit.client.1.vm10.stdout:8/918: mkdir d0/d22/d2f/d9d/d123/d129 0 2026-03-09T20:48:13.983 INFO:tasks.workunit.client.0.vm07.stdout:6/916: link d8/d26/d7d/cda d8/d16/d4b/c129 0 2026-03-09T20:48:13.985 INFO:tasks.workunit.client.0.vm07.stdout:7/979: rmdir d3/da/db/d32/d3e/dac/d1f/d50/d110 39 2026-03-09T20:48:13.987 INFO:tasks.workunit.client.0.vm07.stdout:6/917: creat d8/d16/da3/d9a/f12a x:0 0 0 2026-03-09T20:48:13.987 INFO:tasks.workunit.client.1.vm10.stdout:3/854: write dc/d9e/f108 [193425,4089] 0 2026-03-09T20:48:13.988 INFO:tasks.workunit.client.1.vm10.stdout:3/855: dread - dc/d14/d20/d21/daf/d113/f120 zero size 2026-03-09T20:48:13.988 INFO:tasks.workunit.client.0.vm07.stdout:3/905: rename d1/d5/d9/f1c to d1/d5/d9/d11/d6d/f125 0 2026-03-09T20:48:13.990 INFO:tasks.workunit.client.1.vm10.stdout:3/856: dread - dc/d14/d26/d29/d40/d8c/d9c/f119 zero size 2026-03-09T20:48:13.993 INFO:tasks.workunit.client.1.vm10.stdout:2/888: fsync d5/d18/d27/d89/db6/d41/de4/ff9 0 2026-03-09T20:48:13.993 INFO:tasks.workunit.client.1.vm10.stdout:5/804: dwrite d2/d58/fb9 [0,4194304] 0 2026-03-09T20:48:13.994 INFO:tasks.workunit.client.1.vm10.stdout:8/919: sync 2026-03-09T20:48:14.019 INFO:tasks.workunit.client.0.vm07.stdout:7/980: dwrite 
d3/da/db/d32/d3e/dac/d1f/d50/d110/f137 [0,4194304] 0 2026-03-09T20:48:14.044 INFO:tasks.workunit.client.0.vm07.stdout:1/967: stat d3/d97/da1/dc5/d60/fb5 0 2026-03-09T20:48:14.048 INFO:tasks.workunit.client.0.vm07.stdout:3/906: dwrite d1/d5/d9/d2f/d3d/d71/d76/f115 [0,4194304] 0 2026-03-09T20:48:14.058 INFO:tasks.workunit.client.1.vm10.stdout:3/857: rmdir dc/d14/d20 39 2026-03-09T20:48:14.059 INFO:tasks.workunit.client.1.vm10.stdout:3/858: stat dc/d14/d26/d8f/ddd 0 2026-03-09T20:48:14.067 INFO:tasks.workunit.client.0.vm07.stdout:3/907: symlink d1/d5/d9/d11/d1f/l126 0 2026-03-09T20:48:14.074 INFO:tasks.workunit.client.0.vm07.stdout:3/908: chown d1/d5/d9/daf 3154396 1 2026-03-09T20:48:14.074 INFO:tasks.workunit.client.0.vm07.stdout:7/981: mkdir d3/da/d53/db7/dde/d96/d112/d154 0 2026-03-09T20:48:14.074 INFO:tasks.workunit.client.0.vm07.stdout:6/918: creat d8/d16/d22/f12b x:0 0 0 2026-03-09T20:48:14.074 INFO:tasks.workunit.client.1.vm10.stdout:4/847: truncate d1/d2/d5c/d64/d6b/d81/fca 3537535 0 2026-03-09T20:48:14.077 INFO:tasks.workunit.client.0.vm07.stdout:4/835: write d2/d55/d5d/d3f/f9f [229922,107639] 0 2026-03-09T20:48:14.079 INFO:tasks.workunit.client.1.vm10.stdout:7/876: write db/d46/f85 [567874,122231] 0 2026-03-09T20:48:14.085 INFO:tasks.workunit.client.1.vm10.stdout:0/850: dwrite d2/d4a/fcf [0,4194304] 0 2026-03-09T20:48:14.087 INFO:tasks.workunit.client.0.vm07.stdout:4/836: mknod d2/df/d59/d8a/ce8 0 2026-03-09T20:48:14.087 INFO:tasks.workunit.client.0.vm07.stdout:4/837: chown d2/d55/dab 46493108 1 2026-03-09T20:48:14.088 INFO:tasks.workunit.client.0.vm07.stdout:4/838: fdatasync d2/d55/f71 0 2026-03-09T20:48:14.091 INFO:tasks.workunit.client.1.vm10.stdout:2/889: creat d5/d18/d1b/f121 x:0 0 0 2026-03-09T20:48:14.096 INFO:tasks.workunit.client.1.vm10.stdout:4/848: rename d1/l15 to d1/d2/d3/d70/d78/l115 0 2026-03-09T20:48:14.097 INFO:tasks.workunit.client.1.vm10.stdout:4/849: chown d1/d2/d3/d70/d99/dc9/dff/d2b/d4a 72 1 2026-03-09T20:48:14.098 
INFO:tasks.workunit.client.1.vm10.stdout:4/850: read d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f75 [634722,90604] 0 2026-03-09T20:48:14.100 INFO:tasks.workunit.client.1.vm10.stdout:1/879: write d2/da/d25/f48 [12523667,113215] 0 2026-03-09T20:48:14.102 INFO:tasks.workunit.client.1.vm10.stdout:1/880: chown d2/da/d25/d3e/d42/f63 7 1 2026-03-09T20:48:14.103 INFO:tasks.workunit.client.1.vm10.stdout:9/950: dwrite d2/d28/d47/d6a/fc0 [0,4194304] 0 2026-03-09T20:48:14.117 INFO:tasks.workunit.client.0.vm07.stdout:2/908: dwrite d2/db/d1c/f93 [0,4194304] 0 2026-03-09T20:48:14.133 INFO:tasks.workunit.client.0.vm07.stdout:6/919: rename d8/d16/d4b/d88/dc3/dd5 to d8/d5d/d97/d12c 0 2026-03-09T20:48:14.150 INFO:tasks.workunit.client.0.vm07.stdout:5/969: write d5/df/d13/fef [42949,118445] 0 2026-03-09T20:48:14.154 INFO:tasks.workunit.client.1.vm10.stdout:0/851: creat d2/d9/db8/db4/f129 x:0 0 0 2026-03-09T20:48:14.158 INFO:tasks.workunit.client.0.vm07.stdout:9/900: write d4/d16/d29/fab [761713,123644] 0 2026-03-09T20:48:14.161 INFO:tasks.workunit.client.0.vm07.stdout:4/839: dread d2/f69 [0,4194304] 0 2026-03-09T20:48:14.167 INFO:tasks.workunit.client.1.vm10.stdout:1/881: chown d2/da/d25/d3e/f41 15 1 2026-03-09T20:48:14.169 INFO:tasks.workunit.client.1.vm10.stdout:4/851: dwrite d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/de2/dfb/f85 [4194304,4194304] 0 2026-03-09T20:48:14.172 INFO:tasks.workunit.client.1.vm10.stdout:7/877: getdents db/d21/d95/d10d 0 2026-03-09T20:48:14.173 INFO:tasks.workunit.client.0.vm07.stdout:7/982: getdents d3/da/db/d32/d3e/dac/d43/d62/de0 0 2026-03-09T20:48:14.174 INFO:tasks.workunit.client.1.vm10.stdout:7/878: chown db/d28/d2b/d36/d3b/d88 406400 1 2026-03-09T20:48:14.214 INFO:tasks.workunit.client.1.vm10.stdout:4/852: chown d1/d2/d5c/d64/d6b/d81/dac/d39/l95 1038931559 1 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.1.vm10.stdout:7/879: creat db/d28/d2b/d36/d40/d8a/dd4/f115 x:0 0 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.1.vm10.stdout:0/852: rename 
d2/d4a/d58/d82/d71/d8e/c107 to d2/d4a/d58/d82/d71/dca/d110/c12a 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.1.vm10.stdout:0/853: readlink d2/d9/l119 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.1.vm10.stdout:2/890: link d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/fab d5/d18/d1b/f122 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.1.vm10.stdout:2/891: read d5/d18/d27/d89/db6/d41/d77/db3/db5/ffd [530032,41331] 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.1.vm10.stdout:9/951: link d2/d28/d47/d50/l113 d2/d3/d6d/db7/l138 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.1.vm10.stdout:4/853: mknod d1/d2/d5c/d64/d6b/d81/dac/c116 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.1.vm10.stdout:0/854: creat d2/d9/db8/d10f/d11/d92/dc1/f12b x:0 0 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.1.vm10.stdout:2/892: truncate d5/d18/d1b/f26 6269619 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.0.vm07.stdout:4/840: fsync d2/d55/d5d/d3f/fa7 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.0.vm07.stdout:7/983: symlink d3/da/db/d32/d126/l155 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.0.vm07.stdout:7/984: fsync d3/da/db/d32/f102 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.0.vm07.stdout:7/985: dread - d3/da/db/d32/d3e/dac/f13b zero size 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.0.vm07.stdout:7/986: fdatasync d3/da/f38 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.0.vm07.stdout:4/841: unlink d2/d55/d5d/d3f/d4a/d7d/fe0 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.0.vm07.stdout:4/842: dread d2/f7 [0,4194304] 0 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.0.vm07.stdout:4/843: chown d2/d55/d5d/d3f/f68 1121766 1 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.0.vm07.stdout:7/987: dread - d3/d58/d77/f11a zero size 2026-03-09T20:48:14.217 INFO:tasks.workunit.client.0.vm07.stdout:4/844: creat d2/d55/d5d/d3f/d4a/d4b/d52/d5c/d90/fe9 x:0 0 0 2026-03-09T20:48:14.217 
INFO:tasks.workunit.client.0.vm07.stdout:7/988: read - d3/da/d53/db7/dde/d96/f12e zero size 2026-03-09T20:48:14.218 INFO:tasks.workunit.client.0.vm07.stdout:4/845: dread d2/d55/d5d/d3f/d4a/f99 [0,4194304] 0 2026-03-09T20:48:14.218 INFO:tasks.workunit.client.1.vm10.stdout:9/952: symlink d2/d3/d6d/l139 0 2026-03-09T20:48:14.218 INFO:tasks.workunit.client.1.vm10.stdout:9/953: readlink d2/d3/db4/lf4 0 2026-03-09T20:48:14.218 INFO:tasks.workunit.client.1.vm10.stdout:9/954: stat d2/d12/d5a/c6e 0 2026-03-09T20:48:14.220 INFO:tasks.workunit.client.1.vm10.stdout:0/855: creat d2/d9/db8/d10f/d11/d92/dc1/f12c x:0 0 0 2026-03-09T20:48:14.223 INFO:tasks.workunit.client.1.vm10.stdout:2/893: truncate d5/f9e 455811 0 2026-03-09T20:48:14.224 INFO:tasks.workunit.client.0.vm07.stdout:6/920: sync 2026-03-09T20:48:14.225 INFO:tasks.workunit.client.0.vm07.stdout:6/921: stat d8/d16/d22/d9b/de4/c63 0 2026-03-09T20:48:14.229 INFO:tasks.workunit.client.0.vm07.stdout:4/846: dread d2/d55/d5d/d3f/d4a/d4b/d52/f9e [0,4194304] 0 2026-03-09T20:48:14.232 INFO:tasks.workunit.client.0.vm07.stdout:6/922: unlink d8/d16/d4b/d88/d99/lf0 0 2026-03-09T20:48:14.236 INFO:tasks.workunit.client.0.vm07.stdout:6/923: symlink d8/db3/d114/l12d 0 2026-03-09T20:48:14.239 INFO:tasks.workunit.client.0.vm07.stdout:4/847: mknod d2/d55/d5d/dcb/cea 0 2026-03-09T20:48:14.243 INFO:tasks.workunit.client.0.vm07.stdout:6/924: fdatasync d8/d16/d22/d24/da0/dab/dc1/f110 0 2026-03-09T20:48:14.243 INFO:tasks.workunit.client.0.vm07.stdout:4/848: readlink d2/d55/d5d/d3f/d4a/d4b/d52/d5c/lbd 0 2026-03-09T20:48:14.243 INFO:tasks.workunit.client.0.vm07.stdout:4/849: readlink d2/df/l79 0 2026-03-09T20:48:14.245 INFO:tasks.workunit.client.1.vm10.stdout:0/856: rmdir d2/d9/db8/d10f/d11/dd1/db7/dcd/de0 0 2026-03-09T20:48:14.249 INFO:tasks.workunit.client.0.vm07.stdout:6/925: creat d8/d16/d22/d24/da0/dab/d40/d69/dfb/f12e x:0 0 0 2026-03-09T20:48:14.250 INFO:tasks.workunit.client.0.vm07.stdout:6/926: chown d8/d16/d22/d24/da0/dab/d40/fe7 1 1 
2026-03-09T20:48:14.251 INFO:tasks.workunit.client.1.vm10.stdout:2/894: rmdir d5/d18/d27/d38/d61/dc8/ddb/dea/dfc 0 2026-03-09T20:48:14.251 INFO:tasks.workunit.client.1.vm10.stdout:2/895: stat d5/d18/d27/db4 0 2026-03-09T20:48:14.251 INFO:tasks.workunit.client.1.vm10.stdout:2/896: chown d5/d18/d27/d89/ca8 105179015 1 2026-03-09T20:48:14.254 INFO:tasks.workunit.client.0.vm07.stdout:6/927: creat d8/d16/d22/d24/da0/dab/dc1/d124/f12f x:0 0 0 2026-03-09T20:48:14.255 INFO:tasks.workunit.client.1.vm10.stdout:2/897: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d91/f123 x:0 0 0 2026-03-09T20:48:14.256 INFO:tasks.workunit.client.1.vm10.stdout:2/898: readlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/l4d 0 2026-03-09T20:48:14.257 INFO:tasks.workunit.client.0.vm07.stdout:4/850: getdents d2/d55 0 2026-03-09T20:48:14.262 INFO:tasks.workunit.client.1.vm10.stdout:2/899: rename d5/d18/d27/d38/lbd to d5/d18/d27/d38/l124 0 2026-03-09T20:48:14.263 INFO:tasks.workunit.client.0.vm07.stdout:6/928: rmdir d8/d16/d22/d9b/da6/ded 0 2026-03-09T20:48:14.267 INFO:tasks.workunit.client.0.vm07.stdout:0/919: write d1/d1f/d20/fbd [624417,92231] 0 2026-03-09T20:48:14.267 INFO:tasks.workunit.client.0.vm07.stdout:0/920: dread - d1/d1f/d53/fb8 zero size 2026-03-09T20:48:14.276 INFO:tasks.workunit.client.1.vm10.stdout:4/854: dread d1/d2/d5c/fd4 [0,4194304] 0 2026-03-09T20:48:14.284 INFO:tasks.workunit.client.0.vm07.stdout:6/929: creat d8/d16/d4b/d88/dc3/f130 x:0 0 0 2026-03-09T20:48:14.286 INFO:tasks.workunit.client.0.vm07.stdout:8/856: write d1/d3b/f9a [1633154,426] 0 2026-03-09T20:48:14.290 INFO:tasks.workunit.client.0.vm07.stdout:0/921: unlink d1/d2/d33/l49 0 2026-03-09T20:48:14.296 INFO:tasks.workunit.client.1.vm10.stdout:2/900: unlink d5/d18/d1b/f122 0 2026-03-09T20:48:14.296 INFO:tasks.workunit.client.0.vm07.stdout:4/851: link d2/df/d59/f7c d2/d55/feb 0 2026-03-09T20:48:14.297 INFO:tasks.workunit.client.0.vm07.stdout:0/922: dwrite d1/d2/d33/d35/f114 [0,4194304] 0 2026-03-09T20:48:14.305 
INFO:tasks.workunit.client.0.vm07.stdout:6/930: creat d8/d16/da3/f131 x:0 0 0 2026-03-09T20:48:14.314 INFO:tasks.workunit.client.1.vm10.stdout:8/920: dwrite d0/d22/d25/d2e/d41/f67 [0,4194304] 0 2026-03-09T20:48:14.315 INFO:tasks.workunit.client.0.vm07.stdout:1/968: write d3/d97/da1/dc5/d90/de8/dba/d12c/f77 [1050182,3979] 0 2026-03-09T20:48:14.317 INFO:tasks.workunit.client.1.vm10.stdout:6/896: write d3/da/d11/d26/d5b/f48 [3066509,69617] 0 2026-03-09T20:48:14.317 INFO:tasks.workunit.client.1.vm10.stdout:5/805: write d2/d39/f103 [2819413,108129] 0 2026-03-09T20:48:14.324 INFO:tasks.workunit.client.0.vm07.stdout:3/909: dwrite d1/d5/d9/d2f/d66/dc0/fde [0,4194304] 0 2026-03-09T20:48:14.332 INFO:tasks.workunit.client.1.vm10.stdout:8/921: sync 2026-03-09T20:48:14.335 INFO:tasks.workunit.client.1.vm10.stdout:5/806: dread d2/f71 [0,4194304] 0 2026-03-09T20:48:14.351 INFO:tasks.workunit.client.0.vm07.stdout:1/969: dread d3/d23/d52/f113 [0,4194304] 0 2026-03-09T20:48:14.358 INFO:tasks.workunit.client.0.vm07.stdout:2/909: write d2/db/d28/f32 [250760,39383] 0 2026-03-09T20:48:14.360 INFO:tasks.workunit.client.1.vm10.stdout:8/922: dread d0/d22/d25/f2b [0,4194304] 0 2026-03-09T20:48:14.362 INFO:tasks.workunit.client.1.vm10.stdout:4/855: rmdir d1/d2/d5c/d64/d6b/d81/dac/d100 0 2026-03-09T20:48:14.363 INFO:tasks.workunit.client.1.vm10.stdout:4/856: dread - d1/d2/d3/d54/daa/dfa/f110 zero size 2026-03-09T20:48:14.363 INFO:tasks.workunit.client.0.vm07.stdout:7/989: fsync d3/f138 0 2026-03-09T20:48:14.365 INFO:tasks.workunit.client.1.vm10.stdout:1/882: write d2/da/d25/d46/d51/d5d/d6e/d70/db3/fc2 [875727,18415] 0 2026-03-09T20:48:14.368 INFO:tasks.workunit.client.1.vm10.stdout:6/897: dread d3/d30/d7f/d24/d39/ff1 [0,4194304] 0 2026-03-09T20:48:14.373 INFO:tasks.workunit.client.0.vm07.stdout:6/931: creat d8/d5d/d97/dc4/f132 x:0 0 0 2026-03-09T20:48:14.374 INFO:tasks.workunit.client.0.vm07.stdout:5/970: dwrite d5/d19/d73/fa3 [0,4194304] 0 2026-03-09T20:48:14.377 
INFO:tasks.workunit.client.1.vm10.stdout:5/807: mkdir d2/d27/d37/dc8/d12b 0 2026-03-09T20:48:14.379 INFO:tasks.workunit.client.0.vm07.stdout:8/857: fdatasync d1/dc/d16/f8d 0 2026-03-09T20:48:14.379 INFO:tasks.workunit.client.0.vm07.stdout:9/901: dwrite d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d89/f93 [0,4194304] 0 2026-03-09T20:48:14.386 INFO:tasks.workunit.client.1.vm10.stdout:4/857: creat d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/f117 x:0 0 0 2026-03-09T20:48:14.390 INFO:tasks.workunit.client.1.vm10.stdout:7/880: dwrite db/d1f/f2a [0,4194304] 0 2026-03-09T20:48:14.407 INFO:tasks.workunit.client.0.vm07.stdout:3/910: dread d1/d5/d9/d2f/d34/f5c [0,4194304] 0 2026-03-09T20:48:14.408 INFO:tasks.workunit.client.1.vm10.stdout:9/955: write d2/d3/f2e [17760,112005] 0 2026-03-09T20:48:14.409 INFO:tasks.workunit.client.1.vm10.stdout:9/956: chown d2/d3/d6d/db7/f116 126621645 1 2026-03-09T20:48:14.413 INFO:tasks.workunit.client.0.vm07.stdout:2/910: mkdir d2/db/d1c/d128 0 2026-03-09T20:48:14.415 INFO:tasks.workunit.client.0.vm07.stdout:7/990: unlink d3/da/db/d32/d3e/led 0 2026-03-09T20:48:14.426 INFO:tasks.workunit.client.1.vm10.stdout:8/923: symlink d0/d22/d25/d2e/d41/de9/dfc/l12a 0 2026-03-09T20:48:14.428 INFO:tasks.workunit.client.0.vm07.stdout:0/923: rename d1/d1f/d30/f8e to d1/d2/d4b/f11f 0 2026-03-09T20:48:14.429 INFO:tasks.workunit.client.1.vm10.stdout:0/857: write d2/d4a/d58/d82/d60/fd8 [7626669,37476] 0 2026-03-09T20:48:14.430 INFO:tasks.workunit.client.1.vm10.stdout:8/924: dwrite d0/d22/d25/d6c/d122/f117 [0,4194304] 0 2026-03-09T20:48:14.432 INFO:tasks.workunit.client.1.vm10.stdout:8/925: dread d0/d22/d25/d6c/fb8 [0,4194304] 0 2026-03-09T20:48:14.436 INFO:tasks.workunit.client.1.vm10.stdout:4/858: creat d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/f118 x:0 0 0 2026-03-09T20:48:14.437 INFO:tasks.workunit.client.1.vm10.stdout:4/859: chown d1/d2/d5c/d64/d6b/ff2 245 1 2026-03-09T20:48:14.438 INFO:tasks.workunit.client.1.vm10.stdout:4/860: truncate d1/dd8/f104 4954776 0 
2026-03-09T20:48:14.440 INFO:tasks.workunit.client.0.vm07.stdout:5/971: truncate d5/df/f34 1061467 0 2026-03-09T20:48:14.441 INFO:tasks.workunit.client.1.vm10.stdout:3/859: write dc/d14/d26/d29/d40/f49 [697079,113097] 0 2026-03-09T20:48:14.444 INFO:tasks.workunit.client.1.vm10.stdout:3/860: read dc/d14/f102 [60591,70120] 0 2026-03-09T20:48:14.445 INFO:tasks.workunit.client.1.vm10.stdout:3/861: chown dc/d14/d26/d29/d40/d8c/d9c/f105 23955 1 2026-03-09T20:48:14.448 INFO:tasks.workunit.client.0.vm07.stdout:8/858: chown d1/dc/c81 3 1 2026-03-09T20:48:14.450 INFO:tasks.workunit.client.0.vm07.stdout:8/859: chown d1/ld0 16404930 1 2026-03-09T20:48:14.452 INFO:tasks.workunit.client.0.vm07.stdout:4/852: dwrite d2/df/d17/fd0 [0,4194304] 0 2026-03-09T20:48:14.454 INFO:tasks.workunit.client.1.vm10.stdout:6/898: mkdir d3/d30/d7f/d24/d39/d10e 0 2026-03-09T20:48:14.456 INFO:tasks.workunit.client.0.vm07.stdout:8/860: fdatasync d1/d5d/d6f/d2f/f51 0 2026-03-09T20:48:14.456 INFO:tasks.workunit.client.0.vm07.stdout:4/853: chown d2/df/d59/f60 891993 1 2026-03-09T20:48:14.459 INFO:tasks.workunit.client.1.vm10.stdout:2/901: getdents d5/d5b 0 2026-03-09T20:48:14.460 INFO:tasks.workunit.client.0.vm07.stdout:7/991: fsync d3/da/db/d32/d3e/dac/d1f/d2b/d52/f73 0 2026-03-09T20:48:14.471 INFO:tasks.workunit.client.0.vm07.stdout:1/970: dwrite d3/d23/f37 [4194304,4194304] 0 2026-03-09T20:48:14.475 INFO:tasks.workunit.client.0.vm07.stdout:0/924: creat d1/d2/d33/f120 x:0 0 0 2026-03-09T20:48:14.476 INFO:tasks.workunit.client.0.vm07.stdout:2/911: rename d2/d11/ddb/d6e/dbe to d2/d11/ddb/d6e/dda/d129 0 2026-03-09T20:48:14.480 INFO:tasks.workunit.client.1.vm10.stdout:8/926: fdatasync d0/d92/de8/fa5 0 2026-03-09T20:48:14.488 INFO:tasks.workunit.client.1.vm10.stdout:4/861: creat d1/d2/d5c/d64/d6b/d81/f119 x:0 0 0 2026-03-09T20:48:14.489 INFO:tasks.workunit.client.1.vm10.stdout:4/862: chown d1/d2/d3/d54 0 1 2026-03-09T20:48:14.489 INFO:tasks.workunit.client.1.vm10.stdout:4/863: write 
d1/d2/d5c/d64/d6b/d81/fc8 [4112284,68397] 0 2026-03-09T20:48:14.496 INFO:tasks.workunit.client.0.vm07.stdout:9/902: mkdir d4/d11/d141 0 2026-03-09T20:48:14.499 INFO:tasks.workunit.client.0.vm07.stdout:8/861: unlink d1/d5d/d6f/cfd 0 2026-03-09T20:48:14.499 INFO:tasks.workunit.client.1.vm10.stdout:1/883: dwrite d2/f2a [0,4194304] 0 2026-03-09T20:48:14.499 INFO:tasks.workunit.client.1.vm10.stdout:5/808: dwrite d2/d39/d4b/f97 [0,4194304] 0 2026-03-09T20:48:14.499 INFO:tasks.workunit.client.1.vm10.stdout:5/809: truncate d2/d27/d37/d46/d99/f12a 601229 0 2026-03-09T20:48:14.500 INFO:tasks.workunit.client.1.vm10.stdout:5/810: truncate d2/d80/ffd 820139 0 2026-03-09T20:48:14.501 INFO:tasks.workunit.client.0.vm07.stdout:6/932: dwrite d8/d50/fe8 [0,4194304] 0 2026-03-09T20:48:14.511 INFO:tasks.workunit.client.0.vm07.stdout:4/854: mkdir d2/df/d59/d8a/dec 0 2026-03-09T20:48:14.512 INFO:tasks.workunit.client.1.vm10.stdout:3/862: mknod dc/d14/d26/d8f/ddd/d10d/c127 0 2026-03-09T20:48:14.519 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: pgmap v13: 65 pgs: 65 active+clean; 3.6 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 32 MiB/s rd, 78 MiB/s wr, 208 op/s 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth 
get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm07.xjrvch"}]: dispatch 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm07.xjrvch"}]': finished 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm10.byqahe"}]: dispatch 2026-03-09T20:48:14.521 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm10.byqahe"}]': finished 2026-03-09T20:48:14.521 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:14 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T20:48:14.521 INFO:tasks.workunit.client.0.vm07.stdout:7/992: read - d3/da/db/d32/d3e/dac/d43/d62/f132 zero size 2026-03-09T20:48:14.524 INFO:tasks.workunit.client.1.vm10.stdout:2/902: creat d5/d18/d27/db8/f125 x:0 0 0 2026-03-09T20:48:14.532 INFO:tasks.workunit.client.1.vm10.stdout:9/957: dwrite d2/d33/fb3 [0,4194304] 0 2026-03-09T20:48:14.535 INFO:tasks.workunit.client.1.vm10.stdout:9/958: read d2/d3/d6d/ff3 [650856,116150] 0 2026-03-09T20:48:14.536 INFO:tasks.workunit.client.1.vm10.stdout:4/864: readlink d1/d2/d5c/d64/d6b/d81/dac/la8 0 2026-03-09T20:48:14.539 INFO:tasks.workunit.client.0.vm07.stdout:2/912: dread - d2/db/d28/fcb zero size 2026-03-09T20:48:14.543 INFO:tasks.workunit.client.0.vm07.stdout:5/972: truncate d5/df/d13/f38 931062 0 2026-03-09T20:48:14.543 INFO:tasks.workunit.client.0.vm07.stdout:3/911: write d1/d5/d9/d11/d6d/dd0/f63 [2631630,48111] 0 2026-03-09T20:48:14.543 INFO:tasks.workunit.client.0.vm07.stdout:2/913: chown d2/db/d28/d57/de1 1144 1 2026-03-09T20:48:14.551 INFO:tasks.workunit.client.0.vm07.stdout:8/862: mkdir d1/dc/dba/d115 0 2026-03-09T20:48:14.567 INFO:tasks.workunit.client.1.vm10.stdout:1/884: read - d2/da/d25/d46/d51/fd6 zero size 2026-03-09T20:48:14.567 INFO:tasks.workunit.client.1.vm10.stdout:1/885: readlink d2/da/d25/d3e/d42/l77 0 2026-03-09T20:48:14.567 INFO:tasks.workunit.client.1.vm10.stdout:7/881: rename db/d46/d89/dbf/d78/lba to db/d28/d2b/d36/l116 0 2026-03-09T20:48:14.567 INFO:tasks.workunit.client.0.vm07.stdout:5/973: dread d5/d19/d73/d9c/fcb [0,4194304] 0 2026-03-09T20:48:14.567 INFO:tasks.workunit.client.0.vm07.stdout:9/903: dread d4/d16/d29/f64 [0,4194304] 0 2026-03-09T20:48:14.567 INFO:tasks.workunit.client.0.vm07.stdout:4/855: chown d2/d55/d5d/d3f/fa7 1 1 2026-03-09T20:48:14.568 
INFO:tasks.workunit.client.0.vm07.stdout:6/933: dread d8/d16/d22/d9b/de4/d85/f4a [4194304,4194304] 0 2026-03-09T20:48:14.571 INFO:tasks.workunit.client.1.vm10.stdout:5/811: unlink d2/d1b/f41 0 2026-03-09T20:48:14.571 INFO:tasks.workunit.client.1.vm10.stdout:5/812: stat d2/d39/dbf/d63/fa2 0 2026-03-09T20:48:14.576 INFO:tasks.workunit.client.0.vm07.stdout:0/925: rename d1/d2/c55 to d1/d2/d98/de8/c121 0 2026-03-09T20:48:14.589 INFO:tasks.workunit.client.1.vm10.stdout:8/927: symlink d0/d22/d25/d2e/d41/de9/dfc/l12b 0 2026-03-09T20:48:14.589 INFO:tasks.workunit.client.0.vm07.stdout:5/974: mkdir d5/d33/d75/d147 0 2026-03-09T20:48:14.593 INFO:tasks.workunit.client.0.vm07.stdout:7/993: sync 2026-03-09T20:48:14.593 INFO:tasks.workunit.client.0.vm07.stdout:8/863: sync 2026-03-09T20:48:14.594 INFO:tasks.workunit.client.0.vm07.stdout:7/994: dread - d3/da/db/d32/d3e/dac/fb9 zero size 2026-03-09T20:48:14.601 INFO:tasks.workunit.client.0.vm07.stdout:2/914: dread d2/d11/ddb/d6e/dda/d129/d96/fc3 [0,4194304] 0 2026-03-09T20:48:14.607 INFO:tasks.workunit.client.0.vm07.stdout:6/934: creat d8/d5d/d97/f133 x:0 0 0 2026-03-09T20:48:14.609 INFO:tasks.workunit.client.1.vm10.stdout:4/865: creat d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/f11a x:0 0 0 2026-03-09T20:48:14.614 INFO:tasks.workunit.client.1.vm10.stdout:1/886: symlink d2/da/d25/d46/d51/d5d/da6/l11d 0 2026-03-09T20:48:14.620 INFO:tasks.workunit.client.1.vm10.stdout:0/858: write d2/d4a/d58/d82/d71/d5d/f67 [1690552,118207] 0 2026-03-09T20:48:14.627 INFO:tasks.workunit.client.1.vm10.stdout:6/899: dwrite d3/da/d11/f17 [0,4194304] 0 2026-03-09T20:48:14.628 INFO:tasks.workunit.client.0.vm07.stdout:1/971: write d3/d97/da1/dc5/d90/de8/f102 [462071,15414] 0 2026-03-09T20:48:14.635 INFO:tasks.workunit.client.0.vm07.stdout:1/972: write d3/d97/da1/dc5/d90/de8/dba/d12c/f77 [2951052,110942] 0 2026-03-09T20:48:14.639 INFO:tasks.workunit.client.0.vm07.stdout:2/915: dread d2/db/d28/f34 [0,4194304] 0 2026-03-09T20:48:14.654 
INFO:tasks.workunit.client.1.vm10.stdout:3/863: rename dc/d14/d26/d29/d40/da2/dee to dc/d14/d26/d29/d40/da8/dc3/d128 0 2026-03-09T20:48:14.658 INFO:tasks.workunit.client.1.vm10.stdout:7/882: creat db/d46/d89/dbf/f117 x:0 0 0 2026-03-09T20:48:14.661 INFO:tasks.workunit.client.0.vm07.stdout:5/975: creat d5/df/d13/d3e/de1/f148 x:0 0 0 2026-03-09T20:48:14.665 INFO:tasks.workunit.client.0.vm07.stdout:9/904: write d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/fb7 [1020964,80602] 0 2026-03-09T20:48:14.668 INFO:tasks.workunit.client.1.vm10.stdout:9/959: write d2/d28/f51 [7679829,80953] 0 2026-03-09T20:48:14.669 INFO:tasks.workunit.client.1.vm10.stdout:2/903: mknod d5/d18/d27/db4/d112/c126 0 2026-03-09T20:48:14.670 INFO:tasks.workunit.client.0.vm07.stdout:4/856: write d2/d55/d5d/d3f/d4a/d4b/f7a [2574329,114228] 0 2026-03-09T20:48:14.681 INFO:tasks.workunit.client.0.vm07.stdout:7/995: rename d3/da4/df2/f127 to d3/da/d53/df5/f156 0 2026-03-09T20:48:14.681 INFO:tasks.workunit.client.0.vm07.stdout:3/912: truncate d1/d5/d9/d2f/d66/fd3 1567402 0 2026-03-09T20:48:14.682 INFO:tasks.workunit.client.1.vm10.stdout:9/960: sync 2026-03-09T20:48:14.687 INFO:tasks.workunit.client.0.vm07.stdout:4/857: dwrite d2/d55/d5d/d3f/d4a/d4b/f7a [0,4194304] 0 2026-03-09T20:48:14.687 INFO:tasks.workunit.client.0.vm07.stdout:0/926: read - d1/d1f/dc2/ff0 zero size 2026-03-09T20:48:14.692 INFO:tasks.workunit.client.1.vm10.stdout:1/887: mkdir d2/da/d25/d46/d51/d5d/da6/d11e 0 2026-03-09T20:48:14.712 INFO:tasks.workunit.client.1.vm10.stdout:8/928: write d0/d22/d25/d6c/f82 [4112023,13044] 0 2026-03-09T20:48:14.713 INFO:tasks.workunit.client.0.vm07.stdout:8/864: dwrite d1/dc/d16/d26/f36 [0,4194304] 0 2026-03-09T20:48:14.722 INFO:tasks.workunit.client.1.vm10.stdout:6/900: write d3/f40 [173849,114819] 0 2026-03-09T20:48:14.724 INFO:tasks.workunit.client.0.vm07.stdout:8/865: dwrite d1/d5d/ff6 [0,4194304] 0 2026-03-09T20:48:14.727 INFO:tasks.workunit.client.0.vm07.stdout:8/866: stat d1/dc/d16/d26/de2/cf1 0 
2026-03-09T20:48:14.732 INFO:tasks.workunit.client.0.vm07.stdout:6/935: dwrite d8/db3/fd4 [0,4194304] 0 2026-03-09T20:48:14.754 INFO:tasks.workunit.client.0.vm07.stdout:7/996: mknod d3/da4/df2/c157 0 2026-03-09T20:48:14.759 INFO:tasks.workunit.client.1.vm10.stdout:5/813: creat d2/d39/dbf/d69/de9/dfa/d115/f12c x:0 0 0 2026-03-09T20:48:14.759 INFO:tasks.workunit.client.1.vm10.stdout:2/904: dread - d5/d18/d27/db4/fee zero size 2026-03-09T20:48:14.759 INFO:tasks.workunit.client.1.vm10.stdout:4/866: mknod d1/d2/d5c/d64/c11b 0 2026-03-09T20:48:14.772 INFO:tasks.workunit.client.1.vm10.stdout:0/859: mkdir d2/d9/d12d 0 2026-03-09T20:48:14.772 INFO:tasks.workunit.client.1.vm10.stdout:0/860: chown d2/f99 360187178 1 2026-03-09T20:48:14.775 INFO:tasks.workunit.client.0.vm07.stdout:4/858: write d2/d55/fe5 [413197,102355] 0 2026-03-09T20:48:14.776 INFO:tasks.workunit.client.0.vm07.stdout:5/976: dread d5/df/f4a [0,4194304] 0 2026-03-09T20:48:14.780 INFO:tasks.workunit.client.0.vm07.stdout:3/913: dread d1/d5/d9/d11/d6d/dd0/f7b [0,4194304] 0 2026-03-09T20:48:14.784 INFO:tasks.workunit.client.0.vm07.stdout:1/973: mkdir d3/d136 0 2026-03-09T20:48:14.794 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: pgmap v13: 65 pgs: 65 active+clean; 3.6 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 32 MiB/s rd, 78 MiB/s wr, 208 op/s 2026-03-09T20:48:14.794 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:14.795 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm07.xjrvch"}]: dispatch 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm07.xjrvch"}]': finished 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm10.byqahe"}]: dispatch 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 
192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm10.byqahe"}]': finished 2026-03-09T20:48:14.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:14 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T20:48:14.812 INFO:tasks.workunit.client.1.vm10.stdout:7/883: write db/d28/d2b/d36/f55 [467091,48638] 0 2026-03-09T20:48:14.812 INFO:tasks.workunit.client.0.vm07.stdout:8/867: creat d1/dc/d16/f116 x:0 0 0 2026-03-09T20:48:14.813 INFO:tasks.workunit.client.0.vm07.stdout:8/868: readlink d1/d5d/d6f/lb9 0 2026-03-09T20:48:14.824 INFO:tasks.workunit.client.0.vm07.stdout:7/997: dread - d3/d58/d82/f10b zero size 2026-03-09T20:48:14.824 INFO:tasks.workunit.client.0.vm07.stdout:0/927: mknod d1/d1f/d53/d72/d11a/db9/c122 0 2026-03-09T20:48:14.825 INFO:tasks.workunit.client.0.vm07.stdout:7/998: chown d3/da/db/d32/d3e/dac/d1f/d2b/d52/fc0 497 1 2026-03-09T20:48:14.844 INFO:tasks.workunit.client.0.vm07.stdout:4/859: mknod d2/d55/d5d/d3f/d4a/d7d/ced 0 2026-03-09T20:48:14.845 INFO:tasks.workunit.client.0.vm07.stdout:4/860: readlink d2/d1f/l29 0 2026-03-09T20:48:14.849 INFO:tasks.workunit.client.0.vm07.stdout:5/977: symlink d5/df/d13/d6c/db1/l149 0 2026-03-09T20:48:14.851 INFO:tasks.workunit.client.0.vm07.stdout:5/978: fsync d5/df/d13/d4f/d12c/f11a 0 2026-03-09T20:48:14.872 INFO:tasks.workunit.client.0.vm07.stdout:3/914: write d1/d5/d9/f33 [4061686,26015] 0 2026-03-09T20:48:14.877 INFO:tasks.workunit.client.0.vm07.stdout:6/936: creat d8/d16/d22/d9b/de4/d85/df8/f134 x:0 0 0 2026-03-09T20:48:14.877 INFO:tasks.workunit.client.0.vm07.stdout:8/869: dwrite d1/f25 [0,4194304] 0 2026-03-09T20:48:14.879 INFO:tasks.workunit.client.0.vm07.stdout:7/999: readlink d3/d58/l72 0 2026-03-09T20:48:14.884 INFO:tasks.workunit.client.0.vm07.stdout:4/861: creat d2/df/d17/fee x:0 0 0 2026-03-09T20:48:14.887 
INFO:tasks.workunit.client.0.vm07.stdout:8/870: write d1/dc/d16/d26/de2/dc1/f114 [800382,5562] 0 2026-03-09T20:48:14.888 INFO:tasks.workunit.client.0.vm07.stdout:8/871: fsync d1/dc/d16/f4a 0 2026-03-09T20:48:14.898 INFO:tasks.workunit.client.0.vm07.stdout:2/916: getdents d2/db/d1c/d8d 0 2026-03-09T20:48:14.898 INFO:tasks.workunit.client.0.vm07.stdout:8/872: dwrite d1/d8f/f10e [0,4194304] 0 2026-03-09T20:48:14.903 INFO:tasks.workunit.client.0.vm07.stdout:9/905: getdents d4/d16/d29 0 2026-03-09T20:48:14.907 INFO:tasks.workunit.client.0.vm07.stdout:3/915: chown d1/d5/d9/d11/d60/df3/lf4 835868252 1 2026-03-09T20:48:14.910 INFO:tasks.workunit.client.0.vm07.stdout:5/979: write d5/df/d13/d6c/f79 [310621,75967] 0 2026-03-09T20:48:14.914 INFO:tasks.workunit.client.1.vm10.stdout:5/814: rmdir d2 39 2026-03-09T20:48:14.916 INFO:tasks.workunit.client.1.vm10.stdout:4/867: creat d1/d2/d5c/d64/d6b/d81/f11c x:0 0 0 2026-03-09T20:48:14.926 INFO:tasks.workunit.client.1.vm10.stdout:2/905: dread - d5/d18/d27/d89/db6/dd3/fd9 zero size 2026-03-09T20:48:14.930 INFO:tasks.workunit.client.0.vm07.stdout:1/974: link d3/d97/da1/dc5/l9d d3/d97/l137 0 2026-03-09T20:48:14.932 INFO:tasks.workunit.client.1.vm10.stdout:0/861: dread d2/d4a/f5a [0,4194304] 0 2026-03-09T20:48:14.937 INFO:tasks.workunit.client.1.vm10.stdout:8/929: mkdir d0/d54/d114/d12c 0 2026-03-09T20:48:14.937 INFO:tasks.workunit.client.1.vm10.stdout:0/862: sync 2026-03-09T20:48:14.940 INFO:tasks.workunit.client.1.vm10.stdout:8/930: dread d0/d22/d25/f3b [0,4194304] 0 2026-03-09T20:48:14.941 INFO:tasks.workunit.client.0.vm07.stdout:9/906: creat d4/d16/d29/d24/d7c/f142 x:0 0 0 2026-03-09T20:48:14.943 INFO:tasks.workunit.client.0.vm07.stdout:3/916: truncate d1/d5/d9/d11/d6d/dd0/d43/fe5 294067 0 2026-03-09T20:48:14.944 INFO:tasks.workunit.client.0.vm07.stdout:9/907: read d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d89/da7/ddd/f113 [2334664,38469] 0 2026-03-09T20:48:14.946 INFO:tasks.workunit.client.1.vm10.stdout:1/888: dwrite 
d2/da/d25/d46/d80/fbd [0,4194304] 0 2026-03-09T20:48:14.955 INFO:tasks.workunit.client.1.vm10.stdout:9/961: link d2/d33/f77 d2/d28/da2/ded/f13a 0 2026-03-09T20:48:14.956 INFO:tasks.workunit.client.0.vm07.stdout:2/917: write d2/db/d1c/d8d/ffa [152110,8344] 0 2026-03-09T20:48:14.962 INFO:tasks.workunit.client.0.vm07.stdout:4/862: creat d2/d55/dab/fef x:0 0 0 2026-03-09T20:48:14.964 INFO:tasks.workunit.client.0.vm07.stdout:1/975: mknod d3/d97/da1/dc5/d90/c138 0 2026-03-09T20:48:14.964 INFO:tasks.workunit.client.1.vm10.stdout:7/884: dwrite db/d21/fb2 [0,4194304] 0 2026-03-09T20:48:14.966 INFO:tasks.workunit.client.1.vm10.stdout:3/864: link dc/d14/d26/d29/f30 dc/d14/d26/d8f/ddd/d10e/f129 0 2026-03-09T20:48:14.970 INFO:tasks.workunit.client.1.vm10.stdout:0/863: rmdir d2/d9/db8/d10f/d11 39 2026-03-09T20:48:14.973 INFO:tasks.workunit.client.0.vm07.stdout:5/980: mknod d5/df/c14a 0 2026-03-09T20:48:14.979 INFO:tasks.workunit.client.1.vm10.stdout:1/889: creat d2/da/d25/d3e/d55/dc9/f11f x:0 0 0 2026-03-09T20:48:14.981 INFO:tasks.workunit.client.0.vm07.stdout:9/908: fdatasync d4/d16/d78/f9a 0 2026-03-09T20:48:14.984 INFO:tasks.workunit.client.0.vm07.stdout:2/918: dread d2/db/d49/d7d/fff [0,4194304] 0 2026-03-09T20:48:14.987 INFO:tasks.workunit.client.0.vm07.stdout:2/919: read d2/db/d28/fb8 [2821764,18819] 0 2026-03-09T20:48:14.990 INFO:tasks.workunit.client.0.vm07.stdout:0/928: link d1/d1f/d30/lfa d1/d2/d33/l123 0 2026-03-09T20:48:14.993 INFO:tasks.workunit.client.0.vm07.stdout:0/929: read d1/d1f/d30/f10f [2916668,67748] 0 2026-03-09T20:48:14.999 INFO:tasks.workunit.client.0.vm07.stdout:0/930: chown d1/d2/d33/d35/l93 61971708 1 2026-03-09T20:48:15.004 INFO:tasks.workunit.client.0.vm07.stdout:1/976: creat d3/d97/da1/dc5/d60/d9f/f139 x:0 0 0 2026-03-09T20:48:15.006 INFO:tasks.workunit.client.1.vm10.stdout:2/906: mkdir d5/d18/d27/d38/d127 0 2026-03-09T20:48:15.007 INFO:tasks.workunit.client.0.vm07.stdout:4/863: write d2/f4c [1658125,31000] 0 2026-03-09T20:48:15.007 
INFO:tasks.workunit.client.1.vm10.stdout:9/962: write d2/d12/d5a/fe9 [1016719,68109] 0 2026-03-09T20:48:15.010 INFO:tasks.workunit.client.1.vm10.stdout:9/963: sync 2026-03-09T20:48:15.014 INFO:tasks.workunit.client.0.vm07.stdout:8/873: rename d1/f33 to d1/d3b/f117 0 2026-03-09T20:48:15.016 INFO:tasks.workunit.client.1.vm10.stdout:7/885: creat db/d28/d10e/f118 x:0 0 0 2026-03-09T20:48:15.016 INFO:tasks.workunit.client.0.vm07.stdout:8/874: chown d1/l9 262531755 1 2026-03-09T20:48:15.017 INFO:tasks.workunit.client.1.vm10.stdout:3/865: creat dc/d14/d22/f12a x:0 0 0 2026-03-09T20:48:15.020 INFO:tasks.workunit.client.1.vm10.stdout:6/901: getdents d3/d30/d6a 0 2026-03-09T20:48:15.027 INFO:tasks.workunit.client.0.vm07.stdout:6/937: getdents d8/d16/d22/d9b/de4 0 2026-03-09T20:48:15.031 INFO:tasks.workunit.client.1.vm10.stdout:5/815: getdents d2/d27/d37/d46/d5d/d6d 0 2026-03-09T20:48:15.034 INFO:tasks.workunit.client.0.vm07.stdout:3/917: write d1/d5/d9/d2f/d66/fd3 [1370388,67548] 0 2026-03-09T20:48:15.036 INFO:tasks.workunit.client.0.vm07.stdout:2/920: symlink d2/d11/ddb/d72/l12a 0 2026-03-09T20:48:15.039 INFO:tasks.workunit.client.1.vm10.stdout:4/868: rename d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/de2/dfb/ce1 to d1/d2/d5c/d64/d6b/d81/dac/c11d 0 2026-03-09T20:48:15.041 INFO:tasks.workunit.client.0.vm07.stdout:9/909: dwrite d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/f42 [0,4194304] 0 2026-03-09T20:48:15.047 INFO:tasks.workunit.client.1.vm10.stdout:0/864: mknod d2/d9/d69/d80/de1/c12e 0 2026-03-09T20:48:15.047 INFO:tasks.workunit.client.0.vm07.stdout:9/910: chown d4/d16/d29/d24/d37/d44/d62/d108/d121/db9 393 1 2026-03-09T20:48:15.053 INFO:tasks.workunit.client.0.vm07.stdout:1/977: creat d3/d23/d109/f13a x:0 0 0 2026-03-09T20:48:15.055 INFO:tasks.workunit.client.0.vm07.stdout:4/864: creat d2/d55/dab/ff0 x:0 0 0 2026-03-09T20:48:15.064 INFO:tasks.workunit.client.0.vm07.stdout:8/875: mknod d1/dc/d16/d31/c118 0 2026-03-09T20:48:15.070 INFO:tasks.workunit.client.1.vm10.stdout:5/816: rmdir 
d2/d39/d4b/d7a/de1 39 2026-03-09T20:48:15.071 INFO:tasks.workunit.client.1.vm10.stdout:2/907: truncate d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/f117 5166638 0 2026-03-09T20:48:15.075 INFO:tasks.workunit.client.0.vm07.stdout:6/938: truncate d8/d16/d22/d24/da0/dab/d40/d69/f9e 1753689 0 2026-03-09T20:48:15.076 INFO:tasks.workunit.client.1.vm10.stdout:7/886: write f3 [1123442,13680] 0 2026-03-09T20:48:15.080 INFO:tasks.workunit.client.1.vm10.stdout:0/865: truncate d2/d4a/d58/d82/d71/d5d/f5f 1399126 0 2026-03-09T20:48:15.084 INFO:tasks.workunit.client.0.vm07.stdout:2/921: creat d2/d11/ddb/d6e/f12b x:0 0 0 2026-03-09T20:48:15.084 INFO:tasks.workunit.client.1.vm10.stdout:6/902: dwrite d3/f52 [0,4194304] 0 2026-03-09T20:48:15.085 INFO:tasks.workunit.client.0.vm07.stdout:2/922: truncate d2/db/d28/d90/fd5 4595397 0 2026-03-09T20:48:15.085 INFO:tasks.workunit.client.1.vm10.stdout:8/931: getdents d0/dd1/df1 0 2026-03-09T20:48:15.086 INFO:tasks.workunit.client.0.vm07.stdout:9/911: mknod d4/c143 0 2026-03-09T20:48:15.089 INFO:tasks.workunit.client.1.vm10.stdout:6/903: chown d3/d30/d7f/d24/d39/f6c 32202247 1 2026-03-09T20:48:15.090 INFO:tasks.workunit.client.1.vm10.stdout:4/869: dwrite d1/d2/d5c/d64/d6b/d81/dac/d39/f6e [0,4194304] 0 2026-03-09T20:48:15.095 INFO:tasks.workunit.client.1.vm10.stdout:1/890: rename d2/da/ld7 to d2/da/l120 0 2026-03-09T20:48:15.099 INFO:tasks.workunit.client.1.vm10.stdout:4/870: dwrite d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/f118 [0,4194304] 0 2026-03-09T20:48:15.100 INFO:tasks.workunit.client.1.vm10.stdout:4/871: chown d1/d2/d5c/l74 4479580 1 2026-03-09T20:48:15.102 INFO:tasks.workunit.client.1.vm10.stdout:6/904: sync 2026-03-09T20:48:15.108 INFO:tasks.workunit.client.1.vm10.stdout:6/905: dwrite d3/da/d11/d26/d5b/f55 [0,4194304] 0 2026-03-09T20:48:15.125 INFO:tasks.workunit.client.1.vm10.stdout:3/866: dwrite dc/d14/d26/d29/f70 [0,4194304] 0 2026-03-09T20:48:15.126 INFO:tasks.workunit.client.1.vm10.stdout:5/817: mkdir d2/d39/dbf/da9/d12d 0 
2026-03-09T20:48:15.133 INFO:tasks.workunit.client.0.vm07.stdout:5/981: creat d5/d33/d39/d8d/f14b x:0 0 0 2026-03-09T20:48:15.136 INFO:tasks.workunit.client.0.vm07.stdout:5/982: dread - d5/d33/d39/f145 zero size 2026-03-09T20:48:15.137 INFO:tasks.workunit.client.0.vm07.stdout:5/983: chown d5/df/d13/d4f/d101/d10b/f117 402 1 2026-03-09T20:48:15.142 INFO:tasks.workunit.client.0.vm07.stdout:8/876: mknod d1/dc/d16/dad/d87/c119 0 2026-03-09T20:48:15.142 INFO:tasks.workunit.client.0.vm07.stdout:9/912: sync 2026-03-09T20:48:15.144 INFO:tasks.workunit.client.1.vm10.stdout:7/887: mknod db/d46/d89/c119 0 2026-03-09T20:48:15.160 INFO:tasks.workunit.client.0.vm07.stdout:2/923: mkdir d2/db/d49/d7d/d85/d12c 0 2026-03-09T20:48:15.160 INFO:tasks.workunit.client.0.vm07.stdout:9/913: sync 2026-03-09T20:48:15.168 INFO:tasks.workunit.client.1.vm10.stdout:8/932: dread d0/d92/de8/d64/db5/fef [0,4194304] 0 2026-03-09T20:48:15.168 INFO:tasks.workunit.client.1.vm10.stdout:8/933: write d0/d22/f35 [1985122,79751] 0 2026-03-09T20:48:15.172 INFO:tasks.workunit.client.1.vm10.stdout:1/891: creat d2/da/d25/d46/d80/da0/d92/db5/f121 x:0 0 0 2026-03-09T20:48:15.180 INFO:tasks.workunit.client.1.vm10.stdout:6/906: creat d3/d30/d33/f10f x:0 0 0 2026-03-09T20:48:15.184 INFO:tasks.workunit.client.1.vm10.stdout:3/867: rmdir dc/d14/d26 39 2026-03-09T20:48:15.185 INFO:tasks.workunit.client.1.vm10.stdout:9/964: getdents d2/d12 0 2026-03-09T20:48:15.193 INFO:tasks.workunit.client.1.vm10.stdout:0/866: dread - d2/d9/db8/d10f/d11/dd1/db7/dcd/d63/ff8 zero size 2026-03-09T20:48:15.194 INFO:tasks.workunit.client.1.vm10.stdout:0/867: write d2/d4a/d58/d82/d93/fe3 [581861,70532] 0 2026-03-09T20:48:15.199 INFO:tasks.workunit.client.0.vm07.stdout:8/877: dread d1/dc/d16/d26/fd1 [0,4194304] 0 2026-03-09T20:48:15.215 INFO:tasks.workunit.client.1.vm10.stdout:1/892: unlink d2/da/dbc/fee 0 2026-03-09T20:48:15.219 INFO:tasks.workunit.client.1.vm10.stdout:4/872: creat d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/de2/dfb/dea/f11e x:0 0 0 
2026-03-09T20:48:15.220 INFO:tasks.workunit.client.1.vm10.stdout:0/868: dread d2/d9/f20 [0,4194304] 0
2026-03-09T20:48:15.224 INFO:tasks.workunit.client.1.vm10.stdout:0/869: sync
2026-03-09T20:48:15.225 INFO:tasks.workunit.client.0.vm07.stdout:4/865: write d2/df/d17/fdf [3562481,83235] 0
2026-03-09T20:48:15.229 INFO:tasks.workunit.client.0.vm07.stdout:6/939: dwrite d8/d16/d22/d24/da0/dab/f81 [0,4194304] 0
2026-03-09T20:48:15.230 INFO:tasks.workunit.client.0.vm07.stdout:6/940: readlink d8/d16/d22/d24/l2e 0
2026-03-09T20:48:15.238 INFO:tasks.workunit.client.1.vm10.stdout:5/818: dwrite d2/f23 [0,4194304] 0
2026-03-09T20:48:15.238 INFO:tasks.workunit.client.0.vm07.stdout:5/984: dwrite d5/d19/f12f [0,4194304] 0
2026-03-09T20:48:15.242 INFO:tasks.workunit.client.1.vm10.stdout:2/908: link d5/d18/d27/da6/fac d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d94/f128 0
2026-03-09T20:48:15.242 INFO:tasks.workunit.client.1.vm10.stdout:8/934: dwrite d0/d22/d25/d8f/fa2 [4194304,4194304] 0
2026-03-09T20:48:15.244 INFO:tasks.workunit.client.1.vm10.stdout:6/907: write d3/da/d11/d89/db9/dd1/dd2/d60/fbc [1136211,8371] 0
2026-03-09T20:48:15.244 INFO:tasks.workunit.client.1.vm10.stdout:3/868: chown dc/d14/d26/d8f/ddd/d10d/c127 9959130 1
2026-03-09T20:48:15.249 INFO:tasks.workunit.client.1.vm10.stdout:6/908: write d3/da/d11/dfc/fe8 [78903,35526] 0
2026-03-09T20:48:15.251 INFO:tasks.workunit.client.1.vm10.stdout:6/909: chown d3/d30/d7f/d36/d5c/daa/fae 452967 1
2026-03-09T20:48:15.253 INFO:tasks.workunit.client.1.vm10.stdout:3/869: dwrite dc/d14/d27/f116 [0,4194304] 0
2026-03-09T20:48:15.255 INFO:tasks.workunit.client.1.vm10.stdout:6/910: chown d3/d30/d7f/d24/d39/f88 8 1
2026-03-09T20:48:15.255 INFO:tasks.workunit.client.1.vm10.stdout:3/870: stat dc/d14/d26/d29/d40/f49 0
2026-03-09T20:48:15.267 INFO:tasks.workunit.client.1.vm10.stdout:6/911: read d3/da/d11/dfc/ddc/ff8 [125172,9974] 0
2026-03-09T20:48:15.268 INFO:tasks.workunit.client.1.vm10.stdout:6/912: truncate d3/da/fd 4433141 0
2026-03-09T20:48:15.270 INFO:tasks.workunit.client.1.vm10.stdout:1/893: symlink d2/da/d25/d46/d51/d5d/d6e/d70/db3/l122 0
2026-03-09T20:48:15.271 INFO:tasks.workunit.client.1.vm10.stdout:0/870: mknod d2/d4a/d58/d82/d71/dca/c12f 0
2026-03-09T20:48:15.271 INFO:tasks.workunit.client.1.vm10.stdout:1/894: chown d2/da/d25/d46/fa7 212195242 1
2026-03-09T20:48:15.282 INFO:tasks.workunit.client.0.vm07.stdout:2/924: mkdir d2/da7/db4/d12d 0
2026-03-09T20:48:15.284 INFO:tasks.workunit.client.0.vm07.stdout:9/914: stat d4/d16/d29/d24/d37/d44/d62/d108/d121/f34 0
2026-03-09T20:48:15.285 INFO:tasks.workunit.client.1.vm10.stdout:2/909: creat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/f129 x:0 0 0
2026-03-09T20:48:15.294 INFO:tasks.workunit.client.1.vm10.stdout:8/935: mknod d0/d54/dec/c12d 0
2026-03-09T20:48:15.297 INFO:tasks.workunit.client.0.vm07.stdout:1/978: link d3/d97/da1/dc5/d60/l63 d3/d97/da1/dc5/d60/d9f/l13b 0
2026-03-09T20:48:15.297 INFO:tasks.workunit.client.0.vm07.stdout:1/979: chown d3/d97/da1/dc5/d90/dd3/ff9 352833565 1
2026-03-09T20:48:15.297 INFO:tasks.workunit.client.0.vm07.stdout:1/980: readlink d3/d97/da1/dc5/d90/lc2 0
2026-03-09T20:48:15.297 INFO:tasks.workunit.client.0.vm07.stdout:1/981: stat d3/d23/d67/d8a/f10c 0
2026-03-09T20:48:15.302 INFO:tasks.workunit.client.1.vm10.stdout:9/965: fdatasync d2/d3/d85/df7/f132 0
2026-03-09T20:48:15.306 INFO:tasks.workunit.client.1.vm10.stdout:7/888: creat db/d28/d2b/d36/d63/f11a x:0 0 0
2026-03-09T20:48:15.306 INFO:tasks.workunit.client.1.vm10.stdout:7/889: write f3 [1710418,64439] 0
2026-03-09T20:48:15.307 INFO:tasks.workunit.client.1.vm10.stdout:7/890: read - db/d46/d89/dbf/f117 zero size
2026-03-09T20:48:15.314 INFO:tasks.workunit.client.0.vm07.stdout:8/878: rmdir d1/d5d/d6f/d80 39
2026-03-09T20:48:15.325 INFO:tasks.workunit.client.1.vm10.stdout:3/871: dread - dc/d14/d20/d2e/fb2 zero size
2026-03-09T20:48:15.325 INFO:tasks.workunit.client.1.vm10.stdout:6/913: mknod d3/da/d11/d89/db9/dd1/dd2/dc3/c110 0
2026-03-09T20:48:15.325 INFO:tasks.workunit.client.0.vm07.stdout:4/866: read - d2/d1f/fca zero size
2026-03-09T20:48:15.325 INFO:tasks.workunit.client.0.vm07.stdout:6/941: mkdir d8/d16/d22/d9b/de4/d135 0
2026-03-09T20:48:15.325 INFO:tasks.workunit.client.0.vm07.stdout:6/942: write d8/d16/da3/f131 [44664,28005] 0
2026-03-09T20:48:15.330 INFO:tasks.workunit.client.0.vm07.stdout:5/985: rename d5/df/d13/d30/f100 to d5/d33/d39/d8d/dab/d11f/f14c 0
2026-03-09T20:48:15.331 INFO:tasks.workunit.client.0.vm07.stdout:5/986: readlink d5/df/ldd 0
2026-03-09T20:48:15.334 INFO:tasks.workunit.client.0.vm07.stdout:3/918: getdents d1/d5/d9/d2f/d3d/d71 0
2026-03-09T20:48:15.334 INFO:tasks.workunit.client.0.vm07.stdout:3/919: chown d1/d5/d9/d2f/d3d/d71/d76/db6 767307 1
2026-03-09T20:48:15.336 INFO:tasks.workunit.client.1.vm10.stdout:1/895: creat d2/d89/f123 x:0 0 0
2026-03-09T20:48:15.344 INFO:tasks.workunit.client.0.vm07.stdout:0/931: link d1/d1f/dc3/feb d1/d1f/d30/f124 0
2026-03-09T20:48:15.348 INFO:tasks.workunit.client.1.vm10.stdout:5/819: creat d2/d39/dbf/d66/d11d/f12e x:0 0 0
2026-03-09T20:48:15.352 INFO:tasks.workunit.client.1.vm10.stdout:2/910: unlink d5/d5b/lc4 0
2026-03-09T20:48:15.354 INFO:tasks.workunit.client.0.vm07.stdout:4/867: readlink d2/d55/lc6 0
2026-03-09T20:48:15.354 INFO:tasks.workunit.client.0.vm07.stdout:4/868: stat d2/df/d17/dd1/l8d 0
2026-03-09T20:48:15.355 INFO:tasks.workunit.client.1.vm10.stdout:8/936: dread - d0/d22/d2c/fbc zero size
2026-03-09T20:48:15.356 INFO:tasks.workunit.client.1.vm10.stdout:2/911: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/f129 [0,4194304] 0
2026-03-09T20:48:15.358 INFO:tasks.workunit.client.1.vm10.stdout:2/912: write d5/d18/d9f/fd8 [1121697,54794] 0
2026-03-09T20:48:15.359 INFO:tasks.workunit.client.0.vm07.stdout:5/987: readlink d5/df/d13/l32 0
2026-03-09T20:48:15.359 INFO:tasks.workunit.client.1.vm10.stdout:9/966: symlink d2/d3/de/d8f/l13b 0
2026-03-09T20:48:15.359 INFO:tasks.workunit.client.1.vm10.stdout:2/913: readlink d5/d18/d27/d38/d61/ld0 0
2026-03-09T20:48:15.359 INFO:tasks.workunit.client.0.vm07.stdout:5/988: fdatasync d5/d19/f138 0
2026-03-09T20:48:15.361 INFO:tasks.workunit.client.0.vm07.stdout:0/932: sync
2026-03-09T20:48:15.362 INFO:tasks.workunit.client.0.vm07.stdout:0/933: readlink d1/l1e 0
2026-03-09T20:48:15.389 INFO:tasks.workunit.client.1.vm10.stdout:3/872: mknod dc/d14/d20/d21/d3b/c12b 0
2026-03-09T20:48:15.391 INFO:tasks.workunit.client.0.vm07.stdout:9/915: write d4/d16/d78/f9a [774224,109578] 0
2026-03-09T20:48:15.397 INFO:tasks.workunit.client.1.vm10.stdout:6/914: creat d3/d30/d7f/d4a/f111 x:0 0 0
2026-03-09T20:48:15.401 INFO:tasks.workunit.client.1.vm10.stdout:7/891: write db/d46/f9f [21018,74123] 0
2026-03-09T20:48:15.402 INFO:tasks.workunit.client.0.vm07.stdout:8/879: mkdir d1/d5d/d6f/d2f/d4d/dfe/d11a 0
2026-03-09T20:48:15.412 INFO:tasks.workunit.client.0.vm07.stdout:2/925: rename d2/db/d28/d57/de1/fe2 to d2/db/d49/d7d/d85/dd9/f12e 0
2026-03-09T20:48:15.414 INFO:tasks.workunit.client.1.vm10.stdout:1/896: creat d2/da/d25/d46/d51/d5d/d6e/f124 x:0 0 0
2026-03-09T20:48:15.416 INFO:tasks.workunit.client.0.vm07.stdout:9/916: dread d4/d16/d78/f9a [0,4194304] 0
2026-03-09T20:48:15.417 INFO:tasks.workunit.client.1.vm10.stdout:5/820: rmdir d2/d39/dbf/d69/de9 39
2026-03-09T20:48:15.419 INFO:tasks.workunit.client.0.vm07.stdout:5/989: mkdir d5/d19/d73/d9c/d10c/d14d 0
2026-03-09T20:48:15.423 INFO:tasks.workunit.client.0.vm07.stdout:0/934: mknod d1/dc0/c125 0
2026-03-09T20:48:15.424 INFO:tasks.workunit.client.1.vm10.stdout:0/871: write d2/d4a/d58/d82/d71/dca/d110/d30/f9a [168889,107404] 0
2026-03-09T20:48:15.434 INFO:tasks.workunit.client.0.vm07.stdout:9/917: truncate d4/d16/d29/d24/d37/d44/f12e 848174 0
2026-03-09T20:48:15.439 INFO:tasks.workunit.client.1.vm10.stdout:2/914: mknod d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/c12a 0
2026-03-09T20:48:15.440 INFO:tasks.workunit.client.0.vm07.stdout:6/943: rename d8/d16/da3/d9a to d8/d16/d22/d9b/da6/d136 0
2026-03-09T20:48:15.447 INFO:tasks.workunit.client.1.vm10.stdout:7/892: truncate db/d21/f81 607624 0
2026-03-09T20:48:15.447 INFO:tasks.workunit.client.0.vm07.stdout:5/990: rmdir d5/df/d13/d4f/d12c 39
2026-03-09T20:48:15.449 INFO:tasks.workunit.client.1.vm10.stdout:4/873: getdents d1/d2/d3/d70/d99 0
2026-03-09T20:48:15.449 INFO:tasks.workunit.client.0.vm07.stdout:1/982: getdents d3/d9c 0
2026-03-09T20:48:15.453 INFO:tasks.workunit.client.0.vm07.stdout:4/869: mknod d2/d55/d5d/d3f/db6/dd2/cf1 0
2026-03-09T20:48:15.453 INFO:tasks.workunit.client.0.vm07.stdout:2/926: dwrite d2/db/d28/f32 [4194304,4194304] 0
2026-03-09T20:48:15.459 INFO:tasks.workunit.client.0.vm07.stdout:4/870: chown d2/d55/d5d/d3f/db6/dd2/cf1 7810581 1
2026-03-09T20:48:15.468 INFO:tasks.workunit.client.1.vm10.stdout:8/937: fsync d0/d22/d25/d2e/fe1 0
2026-03-09T20:48:15.476 INFO:tasks.workunit.client.0.vm07.stdout:6/944: mknod d8/d16/d22/d24/da0/c137 0
2026-03-09T20:48:15.476 INFO:tasks.workunit.client.0.vm07.stdout:9/918: dwrite d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/ff [0,4194304] 0
2026-03-09T20:48:15.476 INFO:tasks.workunit.client.0.vm07.stdout:3/920: rename d1/d5/d9/d2f/d99/dd8/de0/c10b to d1/d5/d9/d11/d6d/dd0/d95/ddb/c127 0
2026-03-09T20:48:15.477 INFO:tasks.workunit.client.1.vm10.stdout:0/872: mknod d2/d9/d69/d80/c130 0
2026-03-09T20:48:15.477 INFO:tasks.workunit.client.1.vm10.stdout:9/967: symlink d2/d28/d47/d50/dd1/d11e/d12f/l13c 0
2026-03-09T20:48:15.477 INFO:tasks.workunit.client.1.vm10.stdout:9/968: write d2/d3/d6d/db7/f116 [756701,111822] 0
2026-03-09T20:48:15.477 INFO:tasks.workunit.client.1.vm10.stdout:2/915: truncate d5/d18/f67 2052881 0
2026-03-09T20:48:15.477 INFO:tasks.workunit.client.1.vm10.stdout:3/873: mknod dc/d14/df1/c12c 0
2026-03-09T20:48:15.477 INFO:tasks.workunit.client.1.vm10.stdout:6/915: symlink d3/d30/d7f/l112 0
2026-03-09T20:48:15.479 INFO:tasks.workunit.client.0.vm07.stdout:4/871: sync
2026-03-09T20:48:15.480 INFO:tasks.workunit.client.0.vm07.stdout:5/991: symlink d5/df/l14e 0
2026-03-09T20:48:15.481 INFO:tasks.workunit.client.0.vm07.stdout:5/992: chown d5/l2d 273 1
2026-03-09T20:48:15.483 INFO:tasks.workunit.client.0.vm07.stdout:0/935: rename d1/d2/d4b/f11f to d1/d2/d98/de8/f126 0
2026-03-09T20:48:15.507 INFO:tasks.workunit.client.1.vm10.stdout:9/969: rmdir d2/d3/d6d/d10c 39
2026-03-09T20:48:15.507 INFO:tasks.workunit.client.1.vm10.stdout:2/916: symlink d5/d18/d27/d38/d61/dc8/ddb/l12b 0
2026-03-09T20:48:15.507 INFO:tasks.workunit.client.1.vm10.stdout:3/874: mkdir dc/d14/d26/d29/d40/d8c/d9c/d12d 0
2026-03-09T20:48:15.508 INFO:tasks.workunit.client.1.vm10.stdout:6/916: rmdir d3/d30/d7f/d36/d6d/d8c 39
2026-03-09T20:48:15.508 INFO:tasks.workunit.client.0.vm07.stdout:3/921: symlink d1/d5/d9/d2f/d3d/dd6/l128 0
2026-03-09T20:48:15.509 INFO:tasks.workunit.client.1.vm10.stdout:1/897: link d2/da/d25/d3e/lef d2/da/d25/d46/d51/d5d/d6e/d70/db3/dd4/df1/l125 0
2026-03-09T20:48:15.509 INFO:tasks.workunit.client.1.vm10.stdout:5/821: rename d2/d27/d37/l4f to d2/d27/d37/d46/l12f 0
2026-03-09T20:48:15.509 INFO:tasks.workunit.client.1.vm10.stdout:8/938: symlink d0/d22/d2f/d9d/d123/d129/l12e 0
2026-03-09T20:48:15.511 INFO:tasks.workunit.client.1.vm10.stdout:2/917: truncate d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/faf 1258566 0
2026-03-09T20:48:15.512 INFO:tasks.workunit.client.1.vm10.stdout:2/918: chown d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80 105434 1
2026-03-09T20:48:15.512 INFO:tasks.workunit.client.0.vm07.stdout:1/983: link d3/d23/l4e d3/d97/da1/dc5/d90/de8/dc0/l13c 0
2026-03-09T20:48:15.515 INFO:tasks.workunit.client.0.vm07.stdout:1/984: truncate d3/d14/d54/fdc 2449768 0
2026-03-09T20:48:15.515 INFO:tasks.workunit.client.0.vm07.stdout:8/880: write d1/d3b/f3e [199218,17297] 0
2026-03-09T20:48:15.520 INFO:tasks.workunit.client.1.vm10.stdout:0/873: dread d2/d9/db8/d10f/d11/f1f [0,4194304] 0
2026-03-09T20:48:15.521 INFO:tasks.workunit.client.1.vm10.stdout:0/874: write d2/d9/db8/d10f/d48/dac/de8/f102 [2077654,73299] 0
2026-03-09T20:48:15.522 INFO:tasks.workunit.client.1.vm10.stdout:0/875: chown d2/d9/db8/d10f/d11/c1e 1264 1
2026-03-09T20:48:15.529 INFO:tasks.workunit.client.0.vm07.stdout:2/927: rename d2/f4 to d2/db/d1c/f12f 0
2026-03-09T20:48:15.533 INFO:tasks.workunit.client.1.vm10.stdout:9/970: symlink d2/d28/d47/d50/dab/l13d 0
2026-03-09T20:48:15.577 INFO:tasks.workunit.client.0.vm07.stdout:4/872: fdatasync d2/d55/d5d/d3f/d4a/dbc/fcd 0
2026-03-09T20:48:15.582 INFO:tasks.workunit.client.1.vm10.stdout:6/917: dread d3/d30/d6a/dd6/fe7 [0,4194304] 0
2026-03-09T20:48:15.586 INFO:tasks.workunit.client.1.vm10.stdout:7/893: write db/d28/d2b/d36/d40/f48 [118183,46882] 0
2026-03-09T20:48:15.593 INFO:tasks.workunit.client.1.vm10.stdout:1/898: mkdir d2/da/d25/d46/d51/d5d/da6/d126 0
2026-03-09T20:48:15.597 INFO:tasks.workunit.client.0.vm07.stdout:9/919: write d4/d16/f41 [2634528,27376] 0
2026-03-09T20:48:15.605 INFO:tasks.workunit.client.1.vm10.stdout:4/874: rename d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/de2/dfb/f85 to d1/dd8/de8/f11f 0
2026-03-09T20:48:15.607 INFO:tasks.workunit.client.0.vm07.stdout:9/920: sync
2026-03-09T20:48:15.618 INFO:tasks.workunit.client.1.vm10.stdout:2/919: symlink d5/d18/d27/d89/db6/d41/de4/l12c 0
2026-03-09T20:48:15.621 INFO:tasks.workunit.client.0.vm07.stdout:2/928: creat d2/db/d28/d57/df8/f130 x:0 0 0
2026-03-09T20:48:15.631 INFO:tasks.workunit.client.0.vm07.stdout:3/922: mkdir d1/d5/d129 0
2026-03-09T20:48:15.633 INFO:tasks.workunit.client.0.vm07.stdout:3/923: chown d1/d5/d9/d11/d6d/d80/db3/d109/fd1 1028923 1
2026-03-09T20:48:15.639 INFO:tasks.workunit.client.1.vm10.stdout:6/918: creat d3/da/d11/dfc/ddc/f113 x:0 0 0
2026-03-09T20:48:15.642 INFO:tasks.workunit.client.0.vm07.stdout:8/881: mkdir d1/d3b/d11b 0
2026-03-09T20:48:15.643 INFO:tasks.workunit.client.0.vm07.stdout:8/882: chown d1/d5d/d6f/d2f/d53/la8 1760 1
2026-03-09T20:48:15.644 INFO:tasks.workunit.client.0.vm07.stdout:5/993: write d5/d19/f10e [128538,79227] 0
2026-03-09T20:48:15.649 INFO:tasks.workunit.client.0.vm07.stdout:1/985: symlink d3/d66/d132/l13d 0
2026-03-09T20:48:15.652 INFO:tasks.workunit.client.1.vm10.stdout:6/919: dread d3/d30/d7f/d36/f4f [0,4194304] 0
2026-03-09T20:48:15.652 INFO:tasks.workunit.client.1.vm10.stdout:6/920: write d3/f40 [384631,75874] 0
2026-03-09T20:48:15.657 INFO:tasks.workunit.client.0.vm07.stdout:0/936: dwrite d1/fa1 [0,4194304] 0
2026-03-09T20:48:15.659 INFO:tasks.workunit.client.0.vm07.stdout:9/921: mknod d4/d16/d29/d24/d37/d44/d62/d108/d121/d59/d129/c144 0
2026-03-09T20:48:15.664 INFO:tasks.workunit.client.0.vm07.stdout:9/922: dread d4/d16/d29/d24/d37/d44/f12e [0,4194304] 0
2026-03-09T20:48:15.667 INFO:tasks.workunit.client.1.vm10.stdout:1/899: creat d2/da/d25/d46/ddb/f127 x:0 0 0
2026-03-09T20:48:15.668 INFO:tasks.workunit.client.1.vm10.stdout:3/875: dwrite dc/d14/d20/d2e/d56/f82 [0,4194304] 0
2026-03-09T20:48:15.670 INFO:tasks.workunit.client.1.vm10.stdout:8/939: write d0/d22/d2c/fbc [528716,64023] 0
2026-03-09T20:48:15.673 INFO:tasks.workunit.client.0.vm07.stdout:2/929: dread d2/db/f67 [0,4194304] 0
2026-03-09T20:48:15.676 INFO:tasks.workunit.client.1.vm10.stdout:0/876: rename d2/d9/db8/d10f/d11/ffc to d2/d4a/d58/d82/d93/f131 0
2026-03-09T20:48:15.683 INFO:tasks.workunit.client.1.vm10.stdout:4/875: symlink d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/l120 0
2026-03-09T20:48:15.685 INFO:tasks.workunit.client.0.vm07.stdout:0/937: dread - d1/d1f/d9f/fa4 zero size
2026-03-09T20:48:15.686 INFO:tasks.workunit.client.1.vm10.stdout:9/971: dwrite d2/d12/d5a/f82 [0,4194304] 0
2026-03-09T20:48:15.687 INFO:tasks.workunit.client.0.vm07.stdout:9/923: truncate d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/fd9 38719 0
2026-03-09T20:48:15.692 INFO:tasks.workunit.client.1.vm10.stdout:6/921: creat d3/d79/f114 x:0 0 0
2026-03-09T20:48:15.693 INFO:tasks.workunit.client.1.vm10.stdout:6/922: stat d3/d30/d7f/d36/d6d 0
2026-03-09T20:48:15.693 INFO:tasks.workunit.client.1.vm10.stdout:7/894: symlink db/d28/d2b/d36/d63/l11b 0
2026-03-09T20:48:15.696 INFO:tasks.workunit.client.0.vm07.stdout:8/883: symlink d1/d5d/d6f/l11c 0
2026-03-09T20:48:15.698 INFO:tasks.workunit.client.0.vm07.stdout:0/938: creat d1/df6/f127 x:0 0 0
2026-03-09T20:48:15.702 INFO:tasks.workunit.client.1.vm10.stdout:0/877: creat d2/d9/db8/d10f/d11/dd1/db7/dcd/d63/f132 x:0 0 0
2026-03-09T20:48:15.710 INFO:tasks.workunit.client.0.vm07.stdout:6/945: rename d8/d16/d22/d9b/de4/d85/c45 to d8/d5d/c138 0
2026-03-09T20:48:15.720 INFO:tasks.workunit.client.0.vm07.stdout:8/884: mkdir d1/d8f/d9d/d11d 0
2026-03-09T20:48:15.724 INFO:tasks.workunit.client.0.vm07.stdout:8/885: dread d1/f25 [0,4194304] 0
2026-03-09T20:48:15.740 INFO:tasks.workunit.client.0.vm07.stdout:5/994: creat d5/d19/f14f x:0 0 0
2026-03-09T20:48:15.740 INFO:tasks.workunit.client.0.vm07.stdout:0/939: truncate d1/fc7 800802 0
2026-03-09T20:48:15.740 INFO:tasks.workunit.client.0.vm07.stdout:6/946: fdatasync d8/d16/d22/d24/da0/dab/d40/fe7 0
2026-03-09T20:48:15.740 INFO:tasks.workunit.client.0.vm07.stdout:6/947: read - d8/d50/f125 zero size
2026-03-09T20:48:15.740 INFO:tasks.workunit.client.1.vm10.stdout:7/895: creat db/d28/d2b/d36/d63/d6d/f11c x:0 0 0
2026-03-09T20:48:15.740 INFO:tasks.workunit.client.1.vm10.stdout:7/896: write db/d21/d23/f1e [1141558,24945] 0
2026-03-09T20:48:15.740 INFO:tasks.workunit.client.1.vm10.stdout:1/900: mkdir d2/da/d25/d46/d51/d5d/d128 0
2026-03-09T20:48:15.740 INFO:tasks.workunit.client.1.vm10.stdout:1/901: dread d2/da/d25/d46/fa7 [0,4194304] 0
2026-03-09T20:48:15.740 INFO:tasks.workunit.client.1.vm10.stdout:8/940: rename d0/l88 to d0/d22/d25/d2e/d41/d85/db9/d10d/l12f 0
2026-03-09T20:48:15.740 INFO:tasks.workunit.client.0.vm07.stdout:3/924: getdents d1/d5/d9/d2f/d3d/dd6 0
2026-03-09T20:48:15.743 INFO:tasks.workunit.client.1.vm10.stdout:5/822: getdents d2/d39/d4b/d7a/dd9 0
2026-03-09T20:48:15.746 INFO:tasks.workunit.client.0.vm07.stdout:4/873: rename d2/d55/d5d/dc2 to d2/d55/d5d/d3f/d4a/d4b/d52/dba/df2 0
2026-03-09T20:48:15.762 INFO:tasks.workunit.client.1.vm10.stdout:2/920: write d5/d18/f107 [949938,85729] 0
2026-03-09T20:48:15.765 INFO:tasks.workunit.client.0.vm07.stdout:2/930: write d2/d11/ddb/d72/fa1 [1341130,114826] 0
2026-03-09T20:48:15.767 INFO:tasks.workunit.client.0.vm07.stdout:2/931: dread d2/db/d28/f34 [0,4194304] 0
2026-03-09T20:48:15.772 INFO:tasks.workunit.client.0.vm07.stdout:9/924: write f2 [3445248,63024] 0
2026-03-09T20:48:15.778 INFO:tasks.workunit.client.0.vm07.stdout:8/886: write d1/d8f/fec [104656,13258] 0
2026-03-09T20:48:15.778 INFO:tasks.workunit.client.1.vm10.stdout:4/876: dwrite d1/d2/d5c/d64/d6b/d81/dac/fdd [0,4194304] 0
2026-03-09T20:48:15.778 INFO:tasks.workunit.client.1.vm10.stdout:9/972: dwrite d2/d12/f31 [4194304,4194304] 0
2026-03-09T20:48:15.779 INFO:tasks.workunit.client.1.vm10.stdout:9/973: dread - d2/d3/d85/ffa zero size
2026-03-09T20:48:15.782 INFO:tasks.workunit.client.1.vm10.stdout:7/897: stat db/d28/d2b/d36/d40/d8a/dd4/l111 0
2026-03-09T20:48:15.782 INFO:tasks.workunit.client.1.vm10.stdout:7/898: chown db/d28/d30 14 1
2026-03-09T20:48:15.784 INFO:tasks.workunit.client.1.vm10.stdout:7/899: truncate db/d28/d2b/d36/d40/d8a/dd4/f115 210575 0
2026-03-09T20:48:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:15 vm10.local ceph-mon[57011]: Upgrade: Setting container_image for all mgr
2026-03-09T20:48:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:15 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:15 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T20:48:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:15 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T20:48:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:15 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:15.790 INFO:tasks.workunit.client.0.vm07.stdout:1/986: rename d3/d23/cef to d3/d97/da1/ddd/c13e 0
2026-03-09T20:48:15.792 INFO:tasks.workunit.client.1.vm10.stdout:1/902: rename d2/da/d25/d3e/d42/f63 to d2/d89/f129 0
2026-03-09T20:48:15.792 INFO:tasks.workunit.client.0.vm07.stdout:4/874: readlink d2/d55/d5d/l98 0
2026-03-09T20:48:15.793 INFO:tasks.workunit.client.1.vm10.stdout:1/903: dread - d2/da/d25/d46/d51/d5d/da6/f115 zero size
2026-03-09T20:48:15.804 INFO:tasks.workunit.client.0.vm07.stdout:3/925: dread d1/d5/d9/d11/f73 [0,4194304] 0
2026-03-09T20:48:15.806 INFO:tasks.workunit.client.0.vm07.stdout:5/995: mknod d5/df/d13/d4f/d12c/c150 0
2026-03-09T20:48:15.809 INFO:tasks.workunit.client.0.vm07.stdout:5/996: dread d5/d19/f12f [0,4194304] 0
2026-03-09T20:48:15.814 INFO:tasks.workunit.client.1.vm10.stdout:6/923: truncate d3/d30/d7f/d4a/f9a 2302019 0
2026-03-09T20:48:15.814 INFO:tasks.workunit.client.1.vm10.stdout:6/924: readlink d3/d30/d7f/l20 0
2026-03-09T20:48:15.816 INFO:tasks.workunit.client.0.vm07.stdout:0/940: fsync d1/d1f/fcd 0
2026-03-09T20:48:15.818 INFO:tasks.workunit.client.0.vm07.stdout:2/932: creat d2/d11/ddb/d6e/dda/d129/d96/f131 x:0 0 0
2026-03-09T20:48:15.827 INFO:tasks.workunit.client.0.vm07.stdout:8/887: fdatasync d1/dc/fd 0
2026-03-09T20:48:15.831 INFO:tasks.workunit.client.0.vm07.stdout:1/987: dread - d3/d23/d67/d8a/f10c zero size
2026-03-09T20:48:15.832 INFO:tasks.workunit.client.0.vm07.stdout:1/988: chown d3/d14/d54/l31 809411634 1
2026-03-09T20:48:15.833 INFO:tasks.workunit.client.0.vm07.stdout:4/875: symlink d2/df/d17/lf3 0
2026-03-09T20:48:15.840 INFO:tasks.workunit.client.0.vm07.stdout:5/997: chown d5/d19/d73/d97/c106 4592 1
2026-03-09T20:48:15.858 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:15 vm07.local ceph-mon[49120]: Upgrade: Setting container_image for all mgr
2026-03-09T20:48:15.858 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:15 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:15.858 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:15 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T20:48:15.858 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:15 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T20:48:15.858 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:15 vm07.local ceph-mon[49120]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:15.870 INFO:tasks.workunit.client.0.vm07.stdout:2/933: fdatasync d2/db/d49/f9b 0
2026-03-09T20:48:15.876 INFO:tasks.workunit.client.1.vm10.stdout:5/823: dread d2/d39/d4b/d7a/ffc [0,4194304] 0
2026-03-09T20:48:15.876 INFO:tasks.workunit.client.0.vm07.stdout:5/998: sync
2026-03-09T20:48:15.880 INFO:tasks.workunit.client.0.vm07.stdout:6/948: link d8/c11 d8/d26/d7d/dc8/c139 0
2026-03-09T20:48:15.886 INFO:tasks.workunit.client.0.vm07.stdout:0/941: write d1/f57 [2949239,129916] 0
2026-03-09T20:48:15.890 INFO:tasks.workunit.client.0.vm07.stdout:8/888: dwrite d1/dc/d16/d31/f52 [4194304,4194304] 0
2026-03-09T20:48:15.898 INFO:tasks.workunit.client.0.vm07.stdout:1/989: chown d3/l1c 937839707 1
2026-03-09T20:48:15.898 INFO:tasks.workunit.client.0.vm07.stdout:1/990: chown d3/d23/f49 2425228 1
2026-03-09T20:48:15.899 INFO:tasks.workunit.client.0.vm07.stdout:1/991: chown d3/dc6/f107 770 1
2026-03-09T20:48:15.902 INFO:tasks.workunit.client.0.vm07.stdout:3/926: creat d1/dcf/f12a x:0 0 0
2026-03-09T20:48:15.905 INFO:tasks.workunit.client.0.vm07.stdout:2/934: rmdir d2/db/d28/d5c/dc7 39
2026-03-09T20:48:15.907 INFO:tasks.workunit.client.0.vm07.stdout:9/925: creat d4/d16/f145 x:0 0 0
2026-03-09T20:48:15.918 INFO:tasks.workunit.client.0.vm07.stdout:5/999: read d5/df/d13/d6c/fc9 [2592802,27096] 0
2026-03-09T20:48:15.921 INFO:tasks.workunit.client.1.vm10.stdout:2/921: write d5/d18/d27/d89/db6/d41/d77/db3/f105 [543168,118960] 0
2026-03-09T20:48:15.924 INFO:tasks.workunit.client.1.vm10.stdout:0/878: dwrite d2/d4a/f7b [0,4194304] 0
2026-03-09T20:48:15.924 INFO:tasks.workunit.client.1.vm10.stdout:9/974: unlink d2/d3/d6d/db7/fbb 0
2026-03-09T20:48:15.924 INFO:tasks.workunit.client.0.vm07.stdout:8/889: chown d1/d3b/l79 284 1
2026-03-09T20:48:15.925 INFO:tasks.workunit.client.1.vm10.stdout:7/900: mknod db/d28/d2b/d36/d3b/dd5/c11d 0
2026-03-09T20:48:15.926 INFO:tasks.workunit.client.1.vm10.stdout:7/901: stat db/f7c 0
2026-03-09T20:48:15.926 INFO:tasks.workunit.client.1.vm10.stdout:3/876: getdents dc/d14/d20/d21/d3b 0
2026-03-09T20:48:15.927 INFO:tasks.workunit.client.0.vm07.stdout:3/927: symlink d1/d5/d9/d2f/d3d/d71/dcc/l12b 0
2026-03-09T20:48:15.928 INFO:tasks.workunit.client.1.vm10.stdout:3/877: readlink dc/d14/d26/d29/d2a/d76/l9d 0
2026-03-09T20:48:15.930 INFO:tasks.workunit.client.1.vm10.stdout:3/878: chown dc/d14/d20/d21/l2c 9 1
2026-03-09T20:48:15.930 INFO:tasks.workunit.client.1.vm10.stdout:6/925: truncate d3/f21 2957977 0
2026-03-09T20:48:15.931 INFO:tasks.workunit.client.1.vm10.stdout:2/922: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/f129 [0,4194304] 0
2026-03-09T20:48:15.931 INFO:tasks.workunit.client.0.vm07.stdout:2/935: symlink d2/db/d1c/l132 0
2026-03-09T20:48:15.932 INFO:tasks.workunit.client.0.vm07.stdout:2/936: read d2/ff [614803,66270] 0
2026-03-09T20:48:15.939 INFO:tasks.workunit.client.1.vm10.stdout:2/923: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f120 [756924,74346] 0
2026-03-09T20:48:15.948 INFO:tasks.workunit.client.0.vm07.stdout:9/926: mkdir d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/da5/db8/dc1/d146 0
2026-03-09T20:48:15.957 INFO:tasks.workunit.client.0.vm07.stdout:9/927: readlink d4/d16/d29/d24/d37/d44/d62/d108/d121/l11f 0
2026-03-09T20:48:15.958 INFO:tasks.workunit.client.0.vm07.stdout:0/942: fsync d1/d2/f1b 0
2026-03-09T20:48:15.958 INFO:tasks.workunit.client.1.vm10.stdout:6/926: sync
2026-03-09T20:48:15.958 INFO:tasks.workunit.client.1.vm10.stdout:0/879: sync
2026-03-09T20:48:15.959 INFO:tasks.workunit.client.0.vm07.stdout:8/890: creat d1/dc/d16/dad/d87/dd3/f11e x:0 0 0
2026-03-09T20:48:15.960 INFO:tasks.workunit.client.1.vm10.stdout:6/927: write d3/d30/d7f/d51/f103 [1041275,115208] 0
2026-03-09T20:48:15.964 INFO:tasks.workunit.client.0.vm07.stdout:4/876: rename d2/f69 to d2/df/ff4 0
2026-03-09T20:48:15.964 INFO:tasks.workunit.client.1.vm10.stdout:5/824: dread d2/d1b/f2f [0,4194304] 0
2026-03-09T20:48:15.965 INFO:tasks.workunit.client.0.vm07.stdout:2/937: creat d2/dc8/f133 x:0 0 0
2026-03-09T20:48:15.966 INFO:tasks.workunit.client.0.vm07.stdout:9/928: creat d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/dee/f147 x:0 0 0
2026-03-09T20:48:15.971 INFO:tasks.workunit.client.1.vm10.stdout:7/902: creat db/d28/d2b/d36/d3b/d88/f11e x:0 0 0
2026-03-09T20:48:15.973 INFO:tasks.workunit.client.1.vm10.stdout:1/904: truncate d2/da/d25/d3e/d55/faf 2165758 0
2026-03-09T20:48:15.973 INFO:tasks.workunit.client.1.vm10.stdout:2/924: mknod d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d94/c12d 0
2026-03-09T20:48:15.973 INFO:tasks.workunit.client.1.vm10.stdout:1/905: stat d2/da/d25/d46/d51/c91 0
2026-03-09T20:48:15.975 INFO:tasks.workunit.client.0.vm07.stdout:3/928: read d1/d5/d9/d11/d6d/d80/db3/d109/f61 [1420767,77311] 0
2026-03-09T20:48:15.976 INFO:tasks.workunit.client.0.vm07.stdout:6/949: rename d8/d5d/d97/d12c to d8/d16/d4b/d88/d13a 0
2026-03-09T20:48:15.986 INFO:tasks.workunit.client.1.vm10.stdout:4/877: dwrite d1/d2/f43 [0,4194304] 0
2026-03-09T20:48:16.000 INFO:tasks.workunit.client.0.vm07.stdout:2/938: dread d2/d11/ddb/f7e [0,4194304] 0
2026-03-09T20:48:16.003 INFO:tasks.workunit.client.1.vm10.stdout:3/879: dread dc/d14/d26/f34 [0,4194304] 0
2026-03-09T20:48:16.012 INFO:tasks.workunit.client.0.vm07.stdout:6/950: dread d8/d16/d22/d9b/fc6 [0,4194304] 0
2026-03-09T20:48:16.012 INFO:tasks.workunit.client.0.vm07.stdout:0/943: write d1/f3b [1163810,128689] 0
2026-03-09T20:48:16.012 INFO:tasks.workunit.client.1.vm10.stdout:3/880: dwrite dc/d14/d20/d21/daf/d113/f120 [0,4194304] 0
2026-03-09T20:48:16.012 INFO:tasks.workunit.client.1.vm10.stdout:9/975: write d2/d3/de/f84 [4091658,96522] 0
2026-03-09T20:48:16.012 INFO:tasks.workunit.client.1.vm10.stdout:8/941: write d0/f94 [181909,120721] 0
2026-03-09T20:48:16.012 INFO:tasks.workunit.client.1.vm10.stdout:3/881: stat dc/d14/d26/d29/lda 0
2026-03-09T20:48:16.012 INFO:tasks.workunit.client.1.vm10.stdout:5/825: symlink d2/d39/d4b/d7a/dd9/d10c/l130 0
2026-03-09T20:48:16.015 INFO:tasks.workunit.client.0.vm07.stdout:8/891: dwrite d1/dc/d16/d31/f47 [0,4194304] 0
2026-03-09T20:48:16.022 INFO:tasks.workunit.client.0.vm07.stdout:1/992: getdents d3/d97/da1/ddd 0
2026-03-09T20:48:16.027 INFO:tasks.workunit.client.1.vm10.stdout:7/903: creat db/d28/d10e/f11f x:0 0 0
2026-03-09T20:48:16.031 INFO:tasks.workunit.client.0.vm07.stdout:3/929: fdatasync d1/d5/d9/d11/f84 0
2026-03-09T20:48:16.040 INFO:tasks.workunit.client.1.vm10.stdout:0/880: write d2/d4a/d58/d82/d71/dca/d110/d30/ff9 [466936,78352] 0
2026-03-09T20:48:16.041 INFO:tasks.workunit.client.1.vm10.stdout:0/881: chown d2/d4a/d58/d82/d71/dca/d110/dff/l112 148837857 1
2026-03-09T20:48:16.044 INFO:tasks.workunit.client.1.vm10.stdout:4/878: rmdir d1/d2/d5c/d64/d6b/d79 39
2026-03-09T20:48:16.045 INFO:tasks.workunit.client.1.vm10.stdout:4/879: chown d1/d2/f7 7575543 1
2026-03-09T20:48:16.047 INFO:tasks.workunit.client.0.vm07.stdout:9/929: dwrite d4/d16/d29/ff4 [0,4194304] 0
2026-03-09T20:48:16.062 INFO:tasks.workunit.client.0.vm07.stdout:6/951: mknod d8/d16/d22/d24/da0/dab/d40/d69/dfb/c13b 0
2026-03-09T20:48:16.063 INFO:tasks.workunit.client.0.vm07.stdout:6/952: write d8/d16/d22/d24/da0/dab/d40/d105/f118 [47630,129611] 0
2026-03-09T20:48:16.068 INFO:tasks.workunit.client.0.vm07.stdout:0/944: rename d1/l1c to d1/dc0/l128 0
2026-03-09T20:48:16.068 INFO:tasks.workunit.client.0.vm07.stdout:0/945: chown d1/dc0/c125 1173534 1
2026-03-09T20:48:16.073 INFO:tasks.workunit.client.0.vm07.stdout:0/946: dwrite d1/d2/d33/d35/f59 [0,4194304] 0
2026-03-09T20:48:16.084 INFO:tasks.workunit.client.0.vm07.stdout:8/892: read - d1/dc/d16/d31/db4/fbc zero size
2026-03-09T20:48:16.086 INFO:tasks.workunit.client.0.vm07.stdout:1/993: mknod d3/d97/da1/dc5/d90/dd3/d106/c13f 0
2026-03-09T20:48:16.090 INFO:tasks.workunit.client.1.vm10.stdout:3/882: fdatasync dc/d14/d26/d29/d40/da8/f114 0
2026-03-09T20:48:16.095 INFO:tasks.workunit.client.1.vm10.stdout:1/906: mkdir d2/da/d25/d46/d80/da0/d92/db5/d10f/d12a 0
2026-03-09T20:48:16.099 INFO:tasks.workunit.client.1.vm10.stdout:1/907: dwrite d2/da/d25/d3e/dca/da2/f104 [0,4194304] 0
2026-03-09T20:48:16.102 INFO:tasks.workunit.client.0.vm07.stdout:2/939: dwrite d2/db/d49/f81 [0,4194304] 0
2026-03-09T20:48:16.117 INFO:tasks.workunit.client.1.vm10.stdout:2/925: fsync d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/f117 0
2026-03-09T20:48:16.128 INFO:tasks.workunit.client.0.vm07.stdout:9/930: chown d4/d11/d23/f2f 1 1
2026-03-09T20:48:16.132 INFO:tasks.workunit.client.1.vm10.stdout:6/928: creat d3/d30/d7f/f115 x:0 0 0
2026-03-09T20:48:16.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:15 vm07.local systemd[1]: Stopping Ceph mon.vm07 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4...
2026-03-09T20:48:16.137 INFO:tasks.workunit.client.1.vm10.stdout:4/880: symlink d1/d47/db9/l121 0
2026-03-09T20:48:16.141 INFO:tasks.workunit.client.1.vm10.stdout:9/976: mknod d2/d3/d6d/d10c/c13e 0
2026-03-09T20:48:16.141 INFO:tasks.workunit.client.1.vm10.stdout:9/977: chown d2/d3/d6d/db7/fc9 643949 1
2026-03-09T20:48:16.151 INFO:tasks.workunit.client.0.vm07.stdout:8/893: dread d1/d5d/d6f/d2f/d4d/d63/fd5 [0,4194304] 0
2026-03-09T20:48:16.153 INFO:tasks.workunit.client.0.vm07.stdout:4/877: getdents d2/d1f 0
2026-03-09T20:48:16.177 INFO:tasks.workunit.client.1.vm10.stdout:3/883: rename dc/d14/d22/c7c to dc/d14/d26/d29/d40/da2/de0/c12e 0
2026-03-09T20:48:16.180 INFO:tasks.workunit.client.1.vm10.stdout:1/908: truncate d2/f4c 4999901 0
2026-03-09T20:48:16.181 INFO:tasks.workunit.client.0.vm07.stdout:3/930: write d1/d5/d9/d11/d6d/dd0/f30 [1002586,73242] 0
2026-03-09T20:48:16.186 INFO:tasks.workunit.client.0.vm07.stdout:9/931: creat d4/d16/d29/d24/d37/d44/d62/d8e/dd4/d11c/f148 x:0 0 0
2026-03-09T20:48:16.198 INFO:tasks.workunit.client.1.vm10.stdout:0/882: dwrite d2/d9/db8/d10f/d11/dd1/db7/dcd/d63/ff8 [0,4194304] 0
2026-03-09T20:48:16.200 INFO:tasks.workunit.client.1.vm10.stdout:2/926: mknod d5/d18/d27/d38/dcf/c12e 0
2026-03-09T20:48:16.209 INFO:tasks.workunit.client.0.vm07.stdout:1/994: creat d3/d97/da1/dc5/d90/de8/dc0/d11b/f140 x:0 0 0
2026-03-09T20:48:16.214 INFO:tasks.workunit.client.1.vm10.stdout:8/942: dwrite d0/d22/d25/f2d [0,4194304] 0
2026-03-09T20:48:16.239 INFO:tasks.workunit.client.1.vm10.stdout:3/884: truncate dc/d14/d26/d29/d2a/d55/fa3 3025125 0
2026-03-09T20:48:16.245 INFO:tasks.workunit.client.1.vm10.stdout:0/883: dread - d2/d9/db8/f100 zero size
2026-03-09T20:48:16.245 INFO:tasks.workunit.client.1.vm10.stdout:2/927: rmdir d5/d18/d27/d89/db6/d41 39
2026-03-09T20:48:16.246 INFO:tasks.workunit.client.1.vm10.stdout:0/884: chown d2/d9/d69/d80/de1 2420 1
2026-03-09T20:48:16.247 INFO:tasks.workunit.client.1.vm10.stdout:0/885: dread d2/d4a/f5a [0,4194304] 0
2026-03-09T20:48:16.247 INFO:tasks.workunit.client.1.vm10.stdout:8/943: creat d0/d22/d25/d89/f130 x:0 0 0
2026-03-09T20:48:16.248 INFO:tasks.workunit.client.1.vm10.stdout:5/826: getdents d2/d80 0
2026-03-09T20:48:16.248 INFO:tasks.workunit.client.1.vm10.stdout:0/886: dread - d2/d9/db8/db4/f129 zero size
2026-03-09T20:48:16.249 INFO:tasks.workunit.client.1.vm10.stdout:7/904: getdents db/d28/d2b/d36/d3b/d88 0
2026-03-09T20:48:16.250 INFO:tasks.workunit.client.1.vm10.stdout:2/928: symlink d5/d18/d27/d89/db6/dd3/l12f 0
2026-03-09T20:48:16.252 INFO:tasks.workunit.client.1.vm10.stdout:0/887: symlink d2/l133 0
2026-03-09T20:48:16.252 INFO:tasks.workunit.client.1.vm10.stdout:1/909: link d2/da/d25/d46/d51/d5d/d6e/d70/f83 d2/d89/f12b 0
2026-03-09T20:48:16.253 INFO:tasks.workunit.client.1.vm10.stdout:3/885: truncate dc/d14/d26/d29/d40/d8c/d9c/fb6 5307710 0
2026-03-09T20:48:16.255 INFO:tasks.workunit.client.0.vm07.stdout:3/931: fdatasync d1/d5/d9/d11/d6d/dd0/d43/fbf 0
2026-03-09T20:48:16.256 INFO:tasks.workunit.client.0.vm07.stdout:1/995: sync
2026-03-09T20:48:16.256 INFO:tasks.workunit.client.0.vm07.stdout:3/932: dread - d1/d5/d9/d11/f9b zero size
2026-03-09T20:48:16.259 INFO:tasks.workunit.client.1.vm10.stdout:4/881: getdents d1/d2/d3/d70/d99/dc9/dff/d69/dbd 0
2026-03-09T20:48:16.259 INFO:tasks.workunit.client.0.vm07.stdout:9/932: truncate d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/fb3 499617 0
2026-03-09T20:48:16.265 INFO:tasks.workunit.client.1.vm10.stdout:4/882: dread d1/d2/d3/d70/d99/dc9/dff/d69/f6a [0,4194304] 0
2026-03-09T20:48:16.272 INFO:tasks.workunit.client.0.vm07.stdout:4/878: dread d2/df/f23 [8388608,4194304] 0
2026-03-09T20:48:16.276 INFO:tasks.workunit.client.0.vm07.stdout:2/940: symlink d2/d11/ddb/d6e/dda/d129/d96/dcc/l134 0
2026-03-09T20:48:16.281 INFO:tasks.workunit.client.1.vm10.stdout:7/905: mknod db/d28/d2b/d36/c120 0
2026-03-09T20:48:16.285 INFO:tasks.workunit.client.1.vm10.stdout:6/929: write d3/da/f42 [227046,17846] 0
2026-03-09T20:48:16.286 INFO:tasks.workunit.client.1.vm10.stdout:9/978: write d2/d3/de/d35/f9c [157611,3688] 0
2026-03-09T20:48:16.286 INFO:tasks.workunit.client.1.vm10.stdout:6/930: fdatasync d3/da/f15 0
2026-03-09T20:48:16.287 INFO:tasks.workunit.client.1.vm10.stdout:9/979: fdatasync d2/d3/d6d/db7/f116 0
2026-03-09T20:48:16.301 INFO:tasks.workunit.client.1.vm10.stdout:2/929: rmdir d5/d18/d27/d89/db6/d41/d77/db3 39
2026-03-09T20:48:16.306 INFO:tasks.workunit.client.1.vm10.stdout:2/930: dread d5/d18/f107 [0,4194304] 0
2026-03-09T20:48:16.306 INFO:tasks.workunit.client.0.vm07.stdout:6/953: getdents d8/d16/d22/d24/da0/dab/dc1/d124 0
2026-03-09T20:48:16.306 INFO:tasks.workunit.client.1.vm10.stdout:8/944: write d0/d92/fb3 [828485,72722] 0
2026-03-09T20:48:16.307 INFO:tasks.workunit.client.1.vm10.stdout:2/931: chown d5/d18/d27/d89/db6/dd3/l12f 851 1
2026-03-09T20:48:16.311 INFO:tasks.workunit.client.0.vm07.stdout:9/933: fdatasync d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/dbc/f100 0
2026-03-09T20:48:16.312 INFO:tasks.workunit.client.1.vm10.stdout:1/910: write d2/da/d25/d3e/fba [457352,57583] 0
2026-03-09T20:48:16.318 INFO:tasks.workunit.client.0.vm07.stdout:0/947: link d1/dc0/l128 d1/d1f/d9f/l129 0
2026-03-09T20:48:16.319 INFO:tasks.workunit.client.1.vm10.stdout:3/886: write dc/d14/d20/d21/f36 [2582188,38751] 0
2026-03-09T20:48:16.320 INFO:tasks.workunit.client.1.vm10.stdout:3/887: write dc/d14/d26/d29/f70 [3001308,55076] 0
2026-03-09T20:48:16.323 INFO:tasks.workunit.client.1.vm10.stdout:1/911: sync
2026-03-09T20:48:16.324 INFO:tasks.workunit.client.1.vm10.stdout:1/912: dread - d2/d89/f123 zero size
2026-03-09T20:48:16.324 INFO:tasks.workunit.client.1.vm10.stdout:1/913: chown d2/da/d25/d3e 89725649 1
2026-03-09T20:48:16.325 INFO:tasks.workunit.client.1.vm10.stdout:0/888: symlink d2/d4a/d79/l134 0
2026-03-09T20:48:16.326 INFO:tasks.workunit.client.1.vm10.stdout:0/889: read d2/d4a/f5a [18375,66077] 0
2026-03-09T20:48:16.327 INFO:tasks.workunit.client.1.vm10.stdout:1/914: truncate d2/da/d25/d3e/d55/f11b 358743 0
2026-03-09T20:48:16.329 INFO:tasks.workunit.client.0.vm07.stdout:8/894: creat d1/dc/d16/d26/de2/f11f x:0 0 0
2026-03-09T20:48:16.331 INFO:tasks.workunit.client.0.vm07.stdout:4/879: unlink d2/df/lc1 0
2026-03-09T20:48:16.335 INFO:tasks.workunit.client.0.vm07.stdout:1/996: dwrite d3/d66/f8c [0,4194304] 0
2026-03-09T20:48:16.342 INFO:tasks.workunit.client.1.vm10.stdout:1/915: dread d2/da/d25/d3e/dca/da2/dd5/ff6 [0,4194304] 0
2026-03-09T20:48:16.348 INFO:tasks.workunit.client.0.vm07.stdout:3/933: dwrite d1/d5/d9/d11/f4d [0,4194304] 0
2026-03-09T20:48:16.359 INFO:tasks.workunit.client.0.vm07.stdout:2/941: dwrite d2/d11/ddb/d72/feb [0,4194304] 0
2026-03-09T20:48:16.364 INFO:tasks.workunit.client.1.vm10.stdout:7/906: chown db/d21/d23/l20 207664768 1
2026-03-09T20:48:16.364 INFO:tasks.workunit.client.1.vm10.stdout:7/907: fsync db/d46/dab/ff1 0
2026-03-09T20:48:16.364 INFO:tasks.workunit.client.0.vm07.stdout:2/942: readlink d2/d11/ddb/d72/led 0
2026-03-09T20:48:16.364 INFO:tasks.workunit.client.0.vm07.stdout:2/943: chown d2/db/d28/d57/df8 322525 1
2026-03-09T20:48:16.364 INFO:tasks.workunit.client.0.vm07.stdout:2/944: stat d2/db/d28/d90/fa3 0
2026-03-09T20:48:16.364 INFO:tasks.workunit.client.0.vm07.stdout:2/945: readlink d2/db/d1c/l126 0
2026-03-09T20:48:16.364 INFO:tasks.workunit.client.1.vm10.stdout:7/908: chown db/d28/d2b/d36/d63/d6d/f80 682686904 1
2026-03-09T20:48:16.367 INFO:tasks.workunit.client.0.vm07.stdout:9/934: mkdir d4/d11/d23/d32/d149 0
2026-03-09T20:48:16.368 INFO:tasks.workunit.client.0.vm07.stdout:9/935: dread - d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/f7e zero size
2026-03-09T20:48:16.383 INFO:tasks.workunit.client.1.vm10.stdout:5/827: getdents d2/d39/dbf/dcc 0
2026-03-09T20:48:16.386 INFO:tasks.workunit.client.0.vm07.stdout:1/997: dread d3/d97/da1/dc5/d90/f96 [0,4194304] 0
2026-03-09T20:48:16.390 INFO:tasks.workunit.client.0.vm07.stdout:3/934: fdatasync d1/d5/d9/d2f/d34/d46/d5d/fd2 0
2026-03-09T20:48:16.391 INFO:tasks.workunit.client.0.vm07.stdout:3/935: chown d1/d5/d9/d11/d6d/dd0/d95 283677025 1
2026-03-09T20:48:16.403 INFO:tasks.workunit.client.1.vm10.stdout:0/890: dread d2/d4a/d58/d82/d71/dca/d110/d30/f51 [0,4194304] 0
2026-03-09T20:48:16.405 INFO:tasks.workunit.client.1.vm10.stdout:4/883: symlink d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/d109/l122 0
2026-03-09T20:48:16.405 INFO:tasks.workunit.client.1.vm10.stdout:1/916: read - d2/da/d25/d46/d51/f10a zero size
2026-03-09T20:48:16.405 INFO:tasks.workunit.client.0.vm07.stdout:8/895: truncate d1/d5d/d6f/d80/faa 2638022 0
2026-03-09T20:48:16.406 INFO:tasks.workunit.client.1.vm10.stdout:1/917: stat d2/da/d25/d46/d80/da0/d92/db5/d10f/d12a 0
2026-03-09T20:48:16.411 INFO:tasks.workunit.client.1.vm10.stdout:7/909: symlink db/d28/d2b/d36/d3b/d88/dbd/l121 0
2026-03-09T20:48:16.418 INFO:tasks.workunit.client.1.vm10.stdout:9/980: dwrite d2/db8/f4d [0,4194304] 0
2026-03-09T20:48:16.419 INFO:tasks.workunit.client.1.vm10.stdout:9/981: stat d2/d3/d6d/db7/f116 0
2026-03-09T20:48:16.420 INFO:tasks.workunit.client.1.vm10.stdout:9/982: chown d2/d3/de/f42 8 1
2026-03-09T20:48:16.420 INFO:tasks.workunit.client.1.vm10.stdout:9/983: chown d2/d28/d47/fe7 54144562 1
2026-03-09T20:48:16.430 INFO:tasks.workunit.client.1.vm10.stdout:8/945: truncate d0/f19 631798 0
2026-03-09T20:48:16.431 INFO:tasks.workunit.client.1.vm10.stdout:8/946: stat d0/d22/d25/d2e/d41/c8d 0
2026-03-09T20:48:16.433 INFO:tasks.workunit.client.0.vm07.stdout:6/954: write d8/d16/d22/d9b/de4/f91 [3850839,82360] 0
2026-03-09T20:48:16.433 INFO:tasks.workunit.client.1.vm10.stdout:2/932: mkdir d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/d130 0
2026-03-09T20:48:16.434 INFO:tasks.workunit.client.1.vm10.stdout:3/888: dwrite dc/d14/d20/d21/f41 [4194304,4194304] 0
2026-03-09T20:48:16.434 INFO:tasks.workunit.client.0.vm07.stdout:6/955: write d8/d16/d22/d9b/de4/d85/f5a [5198618,24173] 0
2026-03-09T20:48:16.443 INFO:tasks.workunit.client.0.vm07.stdout:0/948: link d1/d2/dc/d17/ff2
d1/df6/f12a 0 2026-03-09T20:48:16.443 INFO:tasks.workunit.client.0.vm07.stdout:0/949: stat d1/d1f/d53/fb8 0 2026-03-09T20:48:16.450 INFO:tasks.workunit.client.0.vm07.stdout:1/998: dwrite d3/d23/fa8 [0,4194304] 0 2026-03-09T20:48:16.451 INFO:tasks.workunit.client.0.vm07.stdout:3/936: write d1/d5/d9/d11/f26 [240047,34290] 0 2026-03-09T20:48:16.458 INFO:tasks.workunit.client.0.vm07.stdout:9/936: truncate d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/fd9 599420 0 2026-03-09T20:48:16.458 INFO:tasks.workunit.client.1.vm10.stdout:0/891: creat d2/d9/d69/de2/f135 x:0 0 0 2026-03-09T20:48:16.458 INFO:tasks.workunit.client.0.vm07.stdout:4/880: creat d2/ff5 x:0 0 0 2026-03-09T20:48:16.458 INFO:tasks.workunit.client.1.vm10.stdout:4/884: chown d1/d2/d5c/d64/d6b/d81/dac/c11d 5 1 2026-03-09T20:48:16.459 INFO:tasks.workunit.client.1.vm10.stdout:0/892: chown d2/d4a/d79/c8f 8135 1 2026-03-09T20:48:16.459 INFO:tasks.workunit.client.1.vm10.stdout:4/885: fdatasync d1/d2/d5c/d64/d6b/d81/dac/d39/f6e 0 2026-03-09T20:48:16.460 INFO:tasks.workunit.client.1.vm10.stdout:4/886: stat d1/d2/d5c/d64/d6b/d81/dac/d1b/l5b 0 2026-03-09T20:48:16.461 INFO:tasks.workunit.client.0.vm07.stdout:8/896: dread d1/dc/d16/fbe [0,4194304] 0 2026-03-09T20:48:16.463 INFO:tasks.workunit.client.1.vm10.stdout:1/918: truncate d2/da/d25/f78 2568374 0 2026-03-09T20:48:16.465 INFO:tasks.workunit.client.1.vm10.stdout:7/910: creat db/d46/dab/f122 x:0 0 0 2026-03-09T20:48:16.471 INFO:tasks.workunit.client.0.vm07.stdout:2/946: rmdir d2/db/d108 0 2026-03-09T20:48:16.471 INFO:tasks.workunit.client.1.vm10.stdout:6/931: link d3/da/d11/c54 d3/d30/d7f/d4a/c116 0 2026-03-09T20:48:16.472 INFO:tasks.workunit.client.1.vm10.stdout:6/932: dread d3/f2f [0,4194304] 0 2026-03-09T20:48:16.476 INFO:tasks.workunit.client.0.vm07.stdout:0/950: dwrite d1/d2/d98/de8/f126 [0,4194304] 0 2026-03-09T20:48:16.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:16 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07[49116]: 
2026-03-09T20:48:16.156+0000 7faca3f26640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm07 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:48:16.480 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:16 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07[49116]: 2026-03-09T20:48:16.156+0000 7faca3f26640 -1 mon.vm07@0(leader) e2 *** Got Signal Terminated *** 2026-03-09T20:48:16.480 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:16 vm07.local podman[111970]: 2026-03-09 20:48:16.214588323 +0000 UTC m=+0.118002121 container died f3e88bdaa0dd6d867afcbeb0f1ad2c0f94d78f7a28f3c148e3b4c7d4cffd0613 (image=quay.ceph.io/ceph-ci/ceph:reef, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20260223, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:48:16.480 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:16 vm07.local podman[111970]: 2026-03-09 20:48:16.284701658 +0000 UTC m=+0.188115456 container remove f3e88bdaa0dd6d867afcbeb0f1ad2c0f94d78f7a28f3c148e3b4c7d4cffd0613 (image=quay.ceph.io/ceph-ci/ceph:reef, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07, 
CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default) 2026-03-09T20:48:16.480 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:16 vm07.local bash[111970]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07 2026-03-09T20:48:16.688 INFO:tasks.workunit.client.1.vm10.stdout:9/984: write d2/d12/d5a/da7/faf [984387,116549] 0 2026-03-09T20:48:16.690 INFO:tasks.workunit.client.1.vm10.stdout:2/933: write d5/d18/d27/d89/db6/d41/d77/db3/db5/f3f [3938222,14942] 0 2026-03-09T20:48:16.690 INFO:tasks.workunit.client.1.vm10.stdout:8/947: dwrite d0/d22/d25/d2e/d41/de9/dfc/f113 [0,4194304] 0 2026-03-09T20:48:16.692 INFO:tasks.workunit.client.1.vm10.stdout:2/934: write d5/d18/d27/d38/f100 [82545,106687] 0 2026-03-09T20:48:16.692 INFO:tasks.workunit.client.1.vm10.stdout:2/935: fsync d5/f7 0 2026-03-09T20:48:16.693 INFO:tasks.workunit.client.1.vm10.stdout:2/936: read - d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/deb/f116 zero size 2026-03-09T20:48:16.702 INFO:tasks.workunit.client.1.vm10.stdout:3/889: fdatasync dc/d14/d22/d4a/f84 0 2026-03-09T20:48:16.705 INFO:tasks.workunit.client.1.vm10.stdout:3/890: dwrite dc/d14/d27/f116 [0,4194304] 0 2026-03-09T20:48:16.732 INFO:tasks.workunit.client.1.vm10.stdout:5/828: mknod d2/d27/d37/dc8/d12b/c131 0 2026-03-09T20:48:16.738 INFO:tasks.workunit.client.1.vm10.stdout:0/893: truncate d2/d9/db8/d10f/d48/dac/fbd 2689266 0 2026-03-09T20:48:16.739 
INFO:tasks.workunit.client.1.vm10.stdout:0/894: fsync d2/d4a/d58/d82/d71/fb2 0 2026-03-09T20:48:16.744 INFO:tasks.workunit.client.1.vm10.stdout:4/887: chown d1/d2/d5c/d64/d6b/d79 957859943 1 2026-03-09T20:48:16.751 INFO:tasks.workunit.client.1.vm10.stdout:1/919: fdatasync d2/da/d25/d3e/d42/f62 0 2026-03-09T20:48:16.755 INFO:tasks.workunit.client.1.vm10.stdout:4/888: dread d1/d2/f7 [4194304,4194304] 0 2026-03-09T20:48:16.755 INFO:tasks.workunit.client.1.vm10.stdout:7/911: stat db/d28/d2b/d36/d3f/fcb 0 2026-03-09T20:48:16.756 INFO:tasks.workunit.client.1.vm10.stdout:4/889: write d1/d2/d5c/d64/d6b/d81/fc8 [3125921,1967] 0 2026-03-09T20:48:16.766 INFO:tasks.workunit.client.1.vm10.stdout:4/890: dread d1/d2/d5c/d64/d6b/d81/dac/d1b/f24 [0,4194304] 0 2026-03-09T20:48:16.772 INFO:tasks.workunit.client.0.vm07.stdout:0/951: dread - d1/dc0/f102 zero size 2026-03-09T20:48:16.779 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:16 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm07.service: Deactivated successfully. 2026-03-09T20:48:16.779 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:16 vm07.local systemd[1]: Stopped Ceph mon.vm07 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:48:16.779 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:16 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm07.service: Consumed 5.546s CPU time. 
2026-03-09T20:48:16.807 INFO:tasks.workunit.client.0.vm07.stdout:4/881: mknod d2/df/d59/d8a/dec/cf6 0 2026-03-09T20:48:16.815 INFO:tasks.workunit.client.0.vm07.stdout:8/897: mkdir d1/d8f/d9d/d11d/d120 0 2026-03-09T20:48:16.820 INFO:tasks.workunit.client.1.vm10.stdout:9/985: mknod d2/d3/d6d/de8/d10a/c13f 0 2026-03-09T20:48:16.830 INFO:tasks.workunit.client.0.vm07.stdout:0/952: chown d1/d2/d4b/f70 16443706 1 2026-03-09T20:48:16.845 INFO:tasks.workunit.client.0.vm07.stdout:2/947: link d2/d11/ddb/d6e/lc5 d2/db/d28/d120/l135 0 2026-03-09T20:48:16.846 INFO:tasks.workunit.client.1.vm10.stdout:9/986: sync 2026-03-09T20:48:16.849 INFO:tasks.workunit.client.0.vm07.stdout:1/999: getdents d3/d23/d109 0 2026-03-09T20:48:16.854 INFO:tasks.workunit.client.0.vm07.stdout:4/882: truncate d2/d55/d5d/d3f/d4a/f84 705271 0 2026-03-09T20:48:16.865 INFO:tasks.workunit.client.0.vm07.stdout:4/883: mknod d2/df/d17/d83/cf7 0 2026-03-09T20:48:16.875 INFO:tasks.workunit.client.0.vm07.stdout:0/953: link d1/f9c d1/d2/d4b/d106/f12b 0 2026-03-09T20:48:16.879 INFO:tasks.workunit.client.1.vm10.stdout:8/948: symlink d0/d92/de8/l131 0 2026-03-09T20:48:16.883 INFO:tasks.workunit.client.0.vm07.stdout:0/954: mkdir d1/d2/dc/d108/d12c 0 2026-03-09T20:48:16.894 INFO:tasks.workunit.client.0.vm07.stdout:0/955: getdents d1/dc0/dcc/dd9 0 2026-03-09T20:48:16.900 INFO:tasks.workunit.client.0.vm07.stdout:0/956: creat d1/d2/dc/d108/f12d x:0 0 0 2026-03-09T20:48:16.910 INFO:tasks.workunit.client.0.vm07.stdout:0/957: link d1/d1f/d20/fbd d1/d1f/d53/d72/d11a/db9/f12e 0 2026-03-09T20:48:16.917 INFO:tasks.workunit.client.0.vm07.stdout:0/958: rename d1/d2/d33/l88 to d1/d2/dc/db1/l12f 0 2026-03-09T20:48:16.918 INFO:tasks.workunit.client.0.vm07.stdout:0/959: truncate d1/d2/d98/de8/f113 5141578 0 2026-03-09T20:48:16.919 INFO:tasks.workunit.client.1.vm10.stdout:1/920: creat d2/da/d25/d46/dbe/dfc/f12c x:0 0 0 2026-03-09T20:48:16.919 INFO:tasks.workunit.client.0.vm07.stdout:0/960: symlink d1/dc0/l130 0 2026-03-09T20:48:16.924 
INFO:tasks.workunit.client.0.vm07.stdout:0/961: chown d1/d2/c51 2 1 2026-03-09T20:48:16.925 INFO:tasks.workunit.client.1.vm10.stdout:7/912: symlink db/d28/d2b/d36/d40/d8a/dd4/l123 0 2026-03-09T20:48:16.933 INFO:tasks.workunit.client.0.vm07.stdout:0/962: read d1/d2/dc/d17/ff5 [2388329,93022] 0 2026-03-09T20:48:16.938 INFO:tasks.workunit.client.0.vm07.stdout:0/963: mknod d1/df6/c131 0 2026-03-09T20:48:16.939 INFO:tasks.workunit.client.1.vm10.stdout:0/895: creat d2/d9/d12d/f136 x:0 0 0 2026-03-09T20:48:16.940 INFO:tasks.workunit.client.0.vm07.stdout:0/964: unlink d1/d2/dc/fd6 0 2026-03-09T20:48:16.950 INFO:tasks.workunit.client.1.vm10.stdout:1/921: dread d2/f1c [0,4194304] 0 2026-03-09T20:48:16.951 INFO:tasks.workunit.client.0.vm07.stdout:0/965: creat d1/d1f/d30/f132 x:0 0 0 2026-03-09T20:48:16.952 INFO:tasks.workunit.client.1.vm10.stdout:5/829: rename d2/d27/d37 to d2/d39/d4b/d7a/dd9/d10c/d132 0 2026-03-09T20:48:16.953 INFO:tasks.workunit.client.1.vm10.stdout:2/937: getdents d5/d18/d27 0 2026-03-09T20:48:16.954 INFO:tasks.workunit.client.1.vm10.stdout:2/938: chown d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/c12a 1072739114 1 2026-03-09T20:48:16.955 INFO:tasks.workunit.client.0.vm07.stdout:0/966: stat d1/d1f/c92 0 2026-03-09T20:48:16.958 INFO:tasks.workunit.client.0.vm07.stdout:9/937: write d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/f25 [217448,129730] 0 2026-03-09T20:48:16.967 INFO:tasks.workunit.client.0.vm07.stdout:9/938: mkdir d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/d54/d11e/d14a 0 2026-03-09T20:48:16.970 INFO:tasks.workunit.client.1.vm10.stdout:4/891: rename d1/d67/fa5 to d1/d2/d3/d70/d99/f123 0 2026-03-09T20:48:16.986 INFO:tasks.workunit.client.1.vm10.stdout:9/987: symlink d2/d3/d6d/l140 0 2026-03-09T20:48:16.987 INFO:tasks.workunit.client.1.vm10.stdout:8/949: rmdir d0/d54/d114/d12c 0 2026-03-09T20:48:16.992 INFO:tasks.workunit.client.1.vm10.stdout:9/988: dread d2/d3/d6d/db7/fc7 [0,4194304] 0 2026-03-09T20:48:16.995 
INFO:tasks.workunit.client.1.vm10.stdout:0/896: link d2/d9/db8/d10f/d48/l4d d2/d4a/d58/d82/d60/d98/l137 0 2026-03-09T20:48:16.998 INFO:tasks.workunit.client.1.vm10.stdout:7/913: rename db/d46/dab/db5 to db/d21/d26/d72/d124 0 2026-03-09T20:48:16.998 INFO:tasks.workunit.client.1.vm10.stdout:4/892: mknod d1/d47/c124 0 2026-03-09T20:48:16.999 INFO:tasks.workunit.client.1.vm10.stdout:5/830: symlink d2/d39/dbf/l133 0 2026-03-09T20:48:17.000 INFO:tasks.workunit.client.1.vm10.stdout:5/831: fsync d2/d39/d4b/d7a/dd9/d10c/d132/dc8/da1/faa 0 2026-03-09T20:48:17.001 INFO:tasks.workunit.client.1.vm10.stdout:5/832: read d2/fd6 [1309100,2084] 0 2026-03-09T20:48:17.002 INFO:tasks.workunit.client.1.vm10.stdout:5/833: chown d2/d58/d6c/l7e 9712 1 2026-03-09T20:48:17.003 INFO:tasks.workunit.client.1.vm10.stdout:9/989: truncate d2/d28/d47/f110 533492 0 2026-03-09T20:48:17.004 INFO:tasks.workunit.client.1.vm10.stdout:5/834: dread - d2/d39/d4b/d7a/dd9/d10c/d132/d46/fba zero size 2026-03-09T20:48:17.005 INFO:tasks.workunit.client.1.vm10.stdout:5/835: stat d2/l26 0 2026-03-09T20:48:17.009 INFO:tasks.workunit.client.1.vm10.stdout:7/914: mkdir db/d28/d10e/d125 0 2026-03-09T20:48:17.011 INFO:tasks.workunit.client.1.vm10.stdout:8/950: dread d0/f21 [0,4194304] 0 2026-03-09T20:48:17.012 INFO:tasks.workunit.client.1.vm10.stdout:8/951: write d0/d22/d2c/fbc [1200238,86186] 0 2026-03-09T20:48:17.015 INFO:tasks.workunit.client.1.vm10.stdout:9/990: dwrite d2/d28/d47/d67/f72 [0,4194304] 0 2026-03-09T20:48:17.015 INFO:tasks.workunit.client.1.vm10.stdout:5/836: creat d2/d39/d4b/de0/f134 x:0 0 0 2026-03-09T20:48:17.019 INFO:tasks.workunit.client.1.vm10.stdout:5/837: fdatasync d2/d39/dbf/d84/f100 0 2026-03-09T20:48:17.020 INFO:tasks.workunit.client.1.vm10.stdout:5/838: readlink d2/d58/d6c/l7e 0 2026-03-09T20:48:17.020 INFO:tasks.workunit.client.1.vm10.stdout:5/839: dread - d2/d39/dbf/f6a zero size 2026-03-09T20:48:17.021 INFO:tasks.workunit.client.0.vm07.stdout:3/937: mkdir d1/d12c 0 2026-03-09T20:48:17.023 
INFO:tasks.workunit.client.1.vm10.stdout:5/840: unlink d2/d27/d75/d81/c86 0 2026-03-09T20:48:17.023 INFO:tasks.workunit.client.1.vm10.stdout:5/841: chown d2/d39/d89/lca 58367 1 2026-03-09T20:48:17.024 INFO:tasks.workunit.client.1.vm10.stdout:9/991: dwrite d2/d3/d6d/db7/f116 [0,4194304] 0 2026-03-09T20:48:17.031 INFO:tasks.workunit.client.1.vm10.stdout:9/992: mknod d2/d3/de/d8f/dbc/c141 0 2026-03-09T20:48:17.036 INFO:tasks.workunit.client.1.vm10.stdout:9/993: creat d2/d28/da2/ded/d133/f142 x:0 0 0 2026-03-09T20:48:17.040 INFO:tasks.workunit.client.0.vm07.stdout:8/898: dwrite d1/dc/dba/fce [0,4194304] 0 2026-03-09T20:48:17.053 INFO:tasks.workunit.client.1.vm10.stdout:3/891: write dc/d14/d26/d29/d2a/d76/f97 [3342898,6259] 0 2026-03-09T20:48:17.056 INFO:tasks.workunit.client.0.vm07.stdout:8/899: dread d1/d5d/d6f/f61 [0,4194304] 0 2026-03-09T20:48:17.062 INFO:tasks.workunit.client.0.vm07.stdout:8/900: dread - d1/d3b/fe3 zero size 2026-03-09T20:48:17.062 INFO:tasks.workunit.client.0.vm07.stdout:8/901: mknod d1/d5d/d6f/d2f/d4d/dd4/c121 0 2026-03-09T20:48:17.066 INFO:tasks.workunit.client.0.vm07.stdout:2/948: truncate d2/db/d1c/d8d/ffa 699207 0 2026-03-09T20:48:17.068 INFO:tasks.workunit.client.0.vm07.stdout:4/884: write d2/d55/d5d/d3f/fa7 [347229,118439] 0 2026-03-09T20:48:17.070 INFO:tasks.workunit.client.0.vm07.stdout:4/885: truncate d2/d1f/fa5 1451401 0 2026-03-09T20:48:17.074 INFO:tasks.workunit.client.0.vm07.stdout:4/886: fdatasync d2/f4c 0 2026-03-09T20:48:17.074 INFO:tasks.workunit.client.0.vm07.stdout:4/887: chown d2/df/d17/d83 827106138 1 2026-03-09T20:48:17.094 INFO:tasks.workunit.client.1.vm10.stdout:2/939: rmdir d5/d18/d9f 39 2026-03-09T20:48:17.097 INFO:tasks.workunit.client.1.vm10.stdout:2/940: creat d5/d18/d9f/f131 x:0 0 0 2026-03-09T20:48:17.105 INFO:tasks.workunit.client.1.vm10.stdout:2/941: dread d5/d18/d27/d38/d61/fa0 [0,4194304] 0 2026-03-09T20:48:17.112 INFO:tasks.workunit.client.0.vm07.stdout:6/956: creat d8/d16/d22/d24/da0/dab/f13c x:0 0 0 
2026-03-09T20:48:17.114 INFO:tasks.workunit.client.0.vm07.stdout:6/957: fdatasync d8/d16/d22/d24/da0/dab/dc1/fcb 0 2026-03-09T20:48:17.118 INFO:tasks.workunit.client.1.vm10.stdout:6/933: write d3/da/d11/d31/fd5 [3121075,25423] 0 2026-03-09T20:48:17.124 INFO:tasks.workunit.client.0.vm07.stdout:9/939: write d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/f11d [7767524,83203] 0 2026-03-09T20:48:17.127 INFO:tasks.workunit.client.0.vm07.stdout:9/940: stat d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/fd9 0 2026-03-09T20:48:17.128 INFO:tasks.workunit.client.0.vm07.stdout:0/967: truncate d1/d2/dc/f10 1062923 0 2026-03-09T20:48:17.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:16 vm07.local systemd[1]: Starting Ceph mon.vm07 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T20:48:17.134 INFO:tasks.workunit.client.1.vm10.stdout:1/922: rename d2/da/d25/l37 to d2/da/d25/d46/d80/da0/d92/db5/l12d 0 2026-03-09T20:48:17.137 INFO:tasks.workunit.client.1.vm10.stdout:1/923: mkdir d2/da/d25/d46/d51/d5d/d6e/d70/d12e 0 2026-03-09T20:48:17.137 INFO:tasks.workunit.client.1.vm10.stdout:8/952: rename d0/c16 to d0/df8/c132 0 2026-03-09T20:48:17.137 INFO:tasks.workunit.client.1.vm10.stdout:4/893: write d1/d47/fb4 [166566,16143] 0 2026-03-09T20:48:17.139 INFO:tasks.workunit.client.1.vm10.stdout:8/953: chown d0/d22/d2f/f31 3755 1 2026-03-09T20:48:17.139 INFO:tasks.workunit.client.1.vm10.stdout:0/897: dwrite d2/d9/db8/d10f/d48/dac/fc4 [0,4194304] 0 2026-03-09T20:48:17.157 INFO:tasks.workunit.client.1.vm10.stdout:6/934: rename d3/da/d11/d89/db9/dd1/dd2/l71 to d3/d30/d7f/l117 0 2026-03-09T20:48:17.158 INFO:tasks.workunit.client.1.vm10.stdout:0/898: mkdir d2/d9/db8/d10f/d11/dd1/d138 0 2026-03-09T20:48:17.158 INFO:tasks.workunit.client.1.vm10.stdout:6/935: write d3/da/d11/fc6 [2809702,104998] 0 2026-03-09T20:48:17.159 INFO:tasks.workunit.client.0.vm07.stdout:2/949: rename d2/c8 to d2/db/d28/d90/c136 0 2026-03-09T20:48:17.160 INFO:tasks.workunit.client.1.vm10.stdout:7/915: dwrite 
db/d21/d23/f34 [4194304,4194304] 0 2026-03-09T20:48:17.161 INFO:tasks.workunit.client.1.vm10.stdout:4/894: rename d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/f117 to d1/d2/d5c/d64/d6b/d79/f125 0 2026-03-09T20:48:17.174 INFO:tasks.workunit.client.0.vm07.stdout:3/938: dwrite d1/d5/d9/d2f/d3d/d71/fb0 [0,4194304] 0 2026-03-09T20:48:17.176 INFO:tasks.workunit.client.0.vm07.stdout:3/939: chown d1/d5/d9/d11/d1f/f4a 489 1 2026-03-09T20:48:17.176 INFO:tasks.workunit.client.1.vm10.stdout:1/924: sync 2026-03-09T20:48:17.186 INFO:tasks.workunit.client.0.vm07.stdout:2/950: creat d2/d11/d56/f137 x:0 0 0 2026-03-09T20:48:17.187 INFO:tasks.workunit.client.1.vm10.stdout:4/895: dread - d1/d2/d5c/d64/d6b/d79/fa6 zero size 2026-03-09T20:48:17.189 INFO:tasks.workunit.client.1.vm10.stdout:5/842: dwrite d2/d27/d75/d81/fd0 [0,4194304] 0 2026-03-09T20:48:17.190 INFO:tasks.workunit.client.1.vm10.stdout:4/896: truncate d1/d2/d5c/d64/d6b/d81/f119 629216 0 2026-03-09T20:48:17.195 INFO:tasks.workunit.client.1.vm10.stdout:6/936: sync 2026-03-09T20:48:17.195 INFO:tasks.workunit.client.0.vm07.stdout:6/958: sync 2026-03-09T20:48:17.195 INFO:tasks.workunit.client.0.vm07.stdout:4/888: sync 2026-03-09T20:48:17.195 INFO:tasks.workunit.client.0.vm07.stdout:9/941: sync 2026-03-09T20:48:17.200 INFO:tasks.workunit.client.0.vm07.stdout:4/889: creat d2/df/d59/d8a/ff8 x:0 0 0 2026-03-09T20:48:17.200 INFO:tasks.workunit.client.1.vm10.stdout:9/994: dwrite d2/d12/d5a/fba [4194304,4194304] 0 2026-03-09T20:48:17.202 INFO:tasks.workunit.client.0.vm07.stdout:6/959: mkdir d8/d16/d22/d24/da0/dab/dc1/dcc/d13d 0 2026-03-09T20:48:17.207 INFO:tasks.workunit.client.1.vm10.stdout:6/937: dread d3/f96 [0,4194304] 0 2026-03-09T20:48:17.207 INFO:tasks.workunit.client.1.vm10.stdout:9/995: chown d2/d3/d6d/d88/ddd 0 1 2026-03-09T20:48:17.212 INFO:tasks.workunit.client.0.vm07.stdout:9/942: truncate d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d15/f30 1812357 0 2026-03-09T20:48:17.216 INFO:tasks.workunit.client.1.vm10.stdout:4/897: truncate 
d1/d2/d3/d70/d99/dc9/dff/d2b/f5a 1751717 0 2026-03-09T20:48:17.218 INFO:tasks.workunit.client.0.vm07.stdout:4/890: unlink d2/d1f/fc3 0 2026-03-09T20:48:17.221 INFO:tasks.workunit.client.0.vm07.stdout:4/891: dread d2/d55/fe5 [0,4194304] 0 2026-03-09T20:48:17.227 INFO:tasks.workunit.client.1.vm10.stdout:3/892: dwrite dc/d14/d20/d2e/d56/f15 [0,4194304] 0 2026-03-09T20:48:17.230 INFO:tasks.workunit.client.0.vm07.stdout:8/902: dwrite d1/dc/f29 [0,4194304] 0 2026-03-09T20:48:17.231 INFO:tasks.workunit.client.1.vm10.stdout:4/898: dread d1/d2/d5c/f48 [0,4194304] 0 2026-03-09T20:48:17.241 INFO:tasks.workunit.client.0.vm07.stdout:6/960: creat d8/d16/d22/d9b/de4/d135/f13e x:0 0 0 2026-03-09T20:48:17.242 INFO:tasks.workunit.client.1.vm10.stdout:2/942: dwrite d5/d18/d27/d89/db6/dd3/fd9 [0,4194304] 0 2026-03-09T20:48:17.244 INFO:tasks.workunit.client.1.vm10.stdout:3/893: truncate dc/d14/d26/d29/d40/d8c/fbc 1969865 0 2026-03-09T20:48:17.244 INFO:tasks.workunit.client.1.vm10.stdout:2/943: readlink d5/d18/d27/d89/db6/d41/d77/db3/db5/l11f 0 2026-03-09T20:48:17.244 INFO:tasks.workunit.client.1.vm10.stdout:3/894: readlink dc/d14/d26/d29/d93/l123 0 2026-03-09T20:48:17.247 INFO:tasks.workunit.client.1.vm10.stdout:4/899: creat d1/d47/f126 x:0 0 0 2026-03-09T20:48:17.257 INFO:tasks.workunit.client.0.vm07.stdout:0/968: dwrite d1/d1f/d53/d72/f94 [0,4194304] 0 2026-03-09T20:48:17.257 INFO:tasks.workunit.client.1.vm10.stdout:4/900: truncate d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/fc3 847899 0 2026-03-09T20:48:17.262 INFO:tasks.workunit.client.0.vm07.stdout:9/943: sync 2026-03-09T20:48:17.262 INFO:tasks.workunit.client.0.vm07.stdout:4/892: getdents d2/df/d17 0 2026-03-09T20:48:17.269 INFO:tasks.workunit.client.1.vm10.stdout:4/901: creat d1/d2/d5c/d64/d6b/d79/f127 x:0 0 0 2026-03-09T20:48:17.270 INFO:tasks.workunit.client.1.vm10.stdout:4/902: chown d1/d2/d5c/d64/d6b/d81/dac/d1b/ff7 973250 1 2026-03-09T20:48:17.270 INFO:tasks.workunit.client.0.vm07.stdout:6/961: rename d8/d16/f90 to d8/d16/d22/f13f 
0 2026-03-09T20:48:17.270 INFO:tasks.workunit.client.1.vm10.stdout:2/944: link d5/d18/d27/d89/db6/d41/l9c d5/d18/d27/d89/db6/dd3/l132 0 2026-03-09T20:48:17.270 INFO:tasks.workunit.client.0.vm07.stdout:9/944: chown d4/d16/d78/dc4/c10f 1071326 1 2026-03-09T20:48:17.271 INFO:tasks.workunit.client.0.vm07.stdout:9/945: chown d4/d16/d29/d24/d37 234261 1 2026-03-09T20:48:17.272 INFO:tasks.workunit.client.1.vm10.stdout:2/945: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/d93/da5/dda/f10d [514110,111167] 0 2026-03-09T20:48:17.272 INFO:tasks.workunit.client.1.vm10.stdout:2/946: readlink d5/d18/d27/db4/lf4 0 2026-03-09T20:48:17.273 INFO:tasks.workunit.client.0.vm07.stdout:4/893: rename d2/f4c to d2/d55/d5d/d86/ff9 0 2026-03-09T20:48:17.275 INFO:tasks.workunit.client.1.vm10.stdout:4/903: rename d1/d2/c93 to d1/d2/d3/d54/c128 0 2026-03-09T20:48:17.282 INFO:tasks.workunit.client.0.vm07.stdout:6/962: symlink d8/d16/d22/l140 0 2026-03-09T20:48:17.283 INFO:tasks.workunit.client.0.vm07.stdout:6/963: stat d8/d16/d22/d24/da0/dab/c127 0 2026-03-09T20:48:17.284 INFO:tasks.workunit.client.1.vm10.stdout:4/904: symlink d1/d47/db9/l129 0 2026-03-09T20:48:17.287 INFO:tasks.workunit.client.0.vm07.stdout:4/894: rename d2/df/d59/f7c to d2/df/ffa 0 2026-03-09T20:48:17.288 INFO:tasks.workunit.client.0.vm07.stdout:4/895: write d2/df/d59/d8a/ff8 [542312,10752] 0 2026-03-09T20:48:17.293 INFO:tasks.workunit.client.0.vm07.stdout:6/964: truncate d8/d16/da3/f93 3039869 0 2026-03-09T20:48:17.294 INFO:tasks.workunit.client.0.vm07.stdout:6/965: fsync d8/d16/d22/d24/da0/dab/f6e 0 2026-03-09T20:48:17.296 INFO:tasks.workunit.client.0.vm07.stdout:6/966: mkdir d8/d16/d22/d24/da0/dab/dc1/d141 0 2026-03-09T20:48:17.297 INFO:tasks.workunit.client.0.vm07.stdout:6/967: dread - d8/d5d/d97/dc4/fbe zero size 2026-03-09T20:48:17.299 INFO:tasks.workunit.client.0.vm07.stdout:6/968: unlink d8/d16/d22/d24/da0/dab/d40/cf3 0 2026-03-09T20:48:17.316 INFO:tasks.workunit.client.0.vm07.stdout:6/969: getdents 
d8/d16/d22/d9b/de4/d85 0 2026-03-09T20:48:17.320 INFO:tasks.workunit.client.0.vm07.stdout:6/970: symlink d8/d16/d22/d9b/de4/l142 0 2026-03-09T20:48:17.355 INFO:tasks.workunit.client.0.vm07.stdout:0/969: dread d1/d2/dc/d80/d118/f86 [0,4194304] 0 2026-03-09T20:48:17.360 INFO:tasks.workunit.client.1.vm10.stdout:8/954: dwrite d0/d22/d25/d40/d86/f10c [0,4194304] 0 2026-03-09T20:48:17.361 INFO:tasks.workunit.client.0.vm07.stdout:3/940: write d1/d5/d9/d2f/d3d/d71/dcc/fe2 [3596168,32395] 0 2026-03-09T20:48:17.362 INFO:tasks.workunit.client.0.vm07.stdout:3/941: chown d1/d5/d9/d11/d6d/dd0/d43/c7a 6885839 1 2026-03-09T20:48:17.365 INFO:tasks.workunit.client.1.vm10.stdout:7/916: dwrite db/d28/d2b/d36/d3f/fae [0,4194304] 0 2026-03-09T20:48:17.366 INFO:tasks.workunit.client.1.vm10.stdout:0/899: dwrite d2/d9/db8/d10f/f2f [4194304,4194304] 0 2026-03-09T20:48:17.381 INFO:tasks.workunit.client.1.vm10.stdout:5/843: write d2/d39/dbf/d69/f76 [4930581,91577] 0 2026-03-09T20:48:17.382 INFO:tasks.workunit.client.0.vm07.stdout:0/970: unlink d1/d2/d33/f7e 0 2026-03-09T20:48:17.385 INFO:tasks.workunit.client.0.vm07.stdout:2/951: dwrite d2/d11/ddb/db0/db3/f107 [0,4194304] 0 2026-03-09T20:48:17.385 INFO:tasks.workunit.client.0.vm07.stdout:2/952: readlink d2/db/d1c/l70 0 2026-03-09T20:48:17.386 INFO:tasks.workunit.client.0.vm07.stdout:2/953: chown d2/d11/d56/f137 689667 1 2026-03-09T20:48:17.402 INFO:tasks.workunit.client.1.vm10.stdout:8/955: creat d0/d22/d25/d2e/d41/de9/f133 x:0 0 0 2026-03-09T20:48:17.404 INFO:tasks.workunit.client.1.vm10.stdout:1/925: dwrite d2/da/d25/d46/d51/d5d/d6e/d70/db3/fff [0,4194304] 0 2026-03-09T20:48:17.413 INFO:tasks.workunit.client.1.vm10.stdout:0/900: unlink d2/d9/d2a/ca6 0 2026-03-09T20:48:17.422 INFO:tasks.workunit.client.0.vm07.stdout:3/942: truncate d1/d5/d9/d11/f58 6312411 0 2026-03-09T20:48:17.422 INFO:tasks.workunit.client.0.vm07.stdout:3/943: stat d1/d5/d9/daf/d9f/fcb 0 2026-03-09T20:48:17.428 INFO:tasks.workunit.client.1.vm10.stdout:8/956: dread 
d0/d22/d25/d8f/fa2 [0,4194304] 0 2026-03-09T20:48:17.431 INFO:tasks.workunit.client.1.vm10.stdout:7/917: rename db/d28/d4c/ff7 to db/d28/d2b/f126 0 2026-03-09T20:48:17.439 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local podman[112090]: 2026-03-09 20:48:17.052970517 +0000 UTC m=+0.044324235 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:48:17.439 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local podman[112090]: 2026-03-09 20:48:17.202961343 +0000 UTC m=+0.194315041 container create bce9d510f94fb7fb99a2c93a231ec05378fe90445d4a8661873ccbd4f26ebbdb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid) 2026-03-09T20:48:17.439 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local podman[112090]: 2026-03-09 20:48:17.329068536 +0000 UTC m=+0.320422243 container init bce9d510f94fb7fb99a2c93a231ec05378fe90445d4a8661873ccbd4f26ebbdb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07, org.label-schema.build-date=20260223, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid) 2026-03-09T20:48:17.439 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local podman[112090]: 2026-03-09 20:48:17.332876684 +0000 UTC m=+0.324230391 container start bce9d510f94fb7fb99a2c93a231ec05378fe90445d4a8661873ccbd4f26ebbdb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:48:17.439 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local bash[112090]: bce9d510f94fb7fb99a2c93a231ec05378fe90445d4a8661873ccbd4f26ebbdb 2026-03-09T20:48:17.439 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local systemd[1]: Started Ceph mon.vm07 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 
2026-03-09T20:48:17.440 INFO:tasks.workunit.client.0.vm07.stdout:3/944: symlink d1/d5/d9/daf/de3/l12d 0 2026-03-09T20:48:17.441 INFO:tasks.workunit.client.1.vm10.stdout:6/938: dwrite d3/f2f [0,4194304] 0 2026-03-09T20:48:17.445 INFO:tasks.workunit.client.1.vm10.stdout:9/996: dwrite d2/d3/d6d/ff3 [0,4194304] 0 2026-03-09T20:48:17.455 INFO:tasks.workunit.client.1.vm10.stdout:1/926: dread d2/f14 [0,4194304] 0 2026-03-09T20:48:17.456 INFO:tasks.workunit.client.1.vm10.stdout:1/927: stat d2/da/d25/d46/d80/f10b 0 2026-03-09T20:48:17.456 INFO:tasks.workunit.client.0.vm07.stdout:8/903: dwrite d1/d5d/d6f/d2f/d4d/d63/f84 [0,4194304] 0 2026-03-09T20:48:17.457 INFO:tasks.workunit.client.1.vm10.stdout:1/928: write d2/da/d25/d46/dbe/dfc/f12c [392368,86036] 0 2026-03-09T20:48:17.458 INFO:tasks.workunit.client.1.vm10.stdout:5/844: rename d2/d39/dbf/d63/fa2 to d2/d1b/d54/d78/de6/f135 0 2026-03-09T20:48:17.471 INFO:tasks.workunit.client.1.vm10.stdout:7/918: mkdir db/d46/d89/dbf/d78/d127 0 2026-03-09T20:48:17.481 INFO:tasks.workunit.client.0.vm07.stdout:3/945: dread d1/d5/dcd/ffc [0,4194304] 0 2026-03-09T20:48:17.484 INFO:tasks.workunit.client.1.vm10.stdout:3/895: write dc/f88 [1543816,85665] 0 2026-03-09T20:48:17.490 INFO:tasks.workunit.client.0.vm07.stdout:0/971: rename d1/dc0/dcc/ldc to d1/d1f/d53/d72/l133 0 2026-03-09T20:48:17.492 INFO:tasks.workunit.client.1.vm10.stdout:6/939: mknod d3/d30/d6a/df5/c118 0 2026-03-09T20:48:17.493 INFO:tasks.workunit.client.1.vm10.stdout:6/940: chown d3/d30/d7f/d36/d5c/dad 13 1 2026-03-09T20:48:17.500 INFO:tasks.workunit.client.0.vm07.stdout:9/946: dwrite d4/d16/d29/f6e [0,4194304] 0 2026-03-09T20:48:17.505 INFO:tasks.workunit.client.0.vm07.stdout:2/954: rename d2/db/d49/d7d/d85/dde/cf1 to d2/da7/c138 0 2026-03-09T20:48:17.506 INFO:tasks.workunit.client.0.vm07.stdout:2/955: chown d2/d11/ddb/d72/l12a 38979817 1 2026-03-09T20:48:17.507 INFO:tasks.workunit.client.1.vm10.stdout:2/947: write d5/d18/f90 [2209618,10260] 0 2026-03-09T20:48:17.509 
INFO:tasks.workunit.client.1.vm10.stdout:1/929: read - d2/da/fb6 zero size 2026-03-09T20:48:17.512 INFO:tasks.workunit.client.0.vm07.stdout:0/972: mkdir d1/d2/dc/d80/d118/d134 0 2026-03-09T20:48:17.514 INFO:tasks.workunit.client.1.vm10.stdout:5/845: unlink d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d77/f112 0 2026-03-09T20:48:17.519 INFO:tasks.workunit.client.1.vm10.stdout:7/919: symlink db/d46/d89/l128 0 2026-03-09T20:48:17.520 INFO:tasks.workunit.client.1.vm10.stdout:7/920: readlink db/d28/d2b/d36/d40/d8a/dd4/l111 0 2026-03-09T20:48:17.521 INFO:tasks.workunit.client.0.vm07.stdout:3/946: dread d1/d5/d9/d2f/d34/f68 [4194304,4194304] 0 2026-03-09T20:48:17.524 INFO:tasks.workunit.client.0.vm07.stdout:0/973: unlink d1/d2/dc/d80/d118/f86 0 2026-03-09T20:48:17.527 INFO:tasks.workunit.client.0.vm07.stdout:0/974: dread d1/fa1 [0,4194304] 0 2026-03-09T20:48:17.529 INFO:tasks.workunit.client.0.vm07.stdout:9/947: mkdir d4/d11/d141/d14b 0 2026-03-09T20:48:17.532 INFO:tasks.workunit.client.1.vm10.stdout:6/941: mknod d3/da/d11/d26/dcf/c119 0 2026-03-09T20:48:17.536 INFO:tasks.workunit.client.0.vm07.stdout:6/971: dwrite d8/d16/d22/d9b/de4/d85/f2f [0,4194304] 0 2026-03-09T20:48:17.539 INFO:tasks.workunit.client.0.vm07.stdout:3/947: dread d1/d5/d9/d11/d1f/f4a [0,4194304] 0 2026-03-09T20:48:17.552 INFO:tasks.workunit.client.0.vm07.stdout:0/975: readlink d1/d2/d98/le1 0 2026-03-09T20:48:17.556 INFO:tasks.workunit.client.0.vm07.stdout:9/948: mknod d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d89/da7/c14c 0 2026-03-09T20:48:17.557 INFO:tasks.workunit.client.0.vm07.stdout:4/896: write d2/df/f2e [1009297,73456] 0 2026-03-09T20:48:17.558 INFO:tasks.workunit.client.1.vm10.stdout:2/948: creat d5/d18/d27/d89/db6/dd3/f133 x:0 0 0 2026-03-09T20:48:17.559 INFO:tasks.workunit.client.1.vm10.stdout:2/949: fdatasync d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f84 0 2026-03-09T20:48:17.564 INFO:tasks.workunit.client.1.vm10.stdout:1/930: unlink d2/da/lc8 0 2026-03-09T20:48:17.573 
INFO:tasks.workunit.client.0.vm07.stdout:4/897: dread d2/df/d17/f2a [0,4194304] 0 2026-03-09T20:48:17.575 INFO:tasks.workunit.client.1.vm10.stdout:5/846: creat d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d6d/f136 x:0 0 0 2026-03-09T20:48:17.580 INFO:tasks.workunit.client.0.vm07.stdout:9/949: rmdir d4/d16/d29 39 2026-03-09T20:48:17.580 INFO:tasks.workunit.client.1.vm10.stdout:0/901: dwrite d2/d4a/d58/d82/d71/f38 [0,4194304] 0 2026-03-09T20:48:17.581 INFO:tasks.workunit.client.1.vm10.stdout:0/902: fsync d2/d4a/d58/d82/d71/dca/d110/d30/f9a 0 2026-03-09T20:48:17.583 INFO:tasks.workunit.client.1.vm10.stdout:6/942: truncate d3/da/d11/d89/db9/dd1/dd2/d60/f63 4444814 0 2026-03-09T20:48:17.583 INFO:tasks.workunit.client.1.vm10.stdout:4/905: read - d1/d2/d3/d70/d99/dc9/dff/ff1 zero size 2026-03-09T20:48:17.593 INFO:tasks.workunit.client.1.vm10.stdout:7/921: dread db/d28/d2b/d36/d63/d84/fc1 [0,4194304] 0 2026-03-09T20:48:17.594 INFO:tasks.workunit.client.0.vm07.stdout:8/904: dwrite d1/d5d/d6f/f61 [0,4194304] 0 2026-03-09T20:48:17.596 INFO:tasks.workunit.client.1.vm10.stdout:8/957: getdents d0/d22/d25/d2e/d41/de9/dfc/d63 0 2026-03-09T20:48:17.597 INFO:tasks.workunit.client.0.vm07.stdout:2/956: getdents d2/d11/d56 0 2026-03-09T20:48:17.607 INFO:tasks.workunit.client.0.vm07.stdout:4/898: chown d2/d55/d5d/d3f/d4a/d85/cb1 657990 1 2026-03-09T20:48:17.609 INFO:tasks.workunit.client.1.vm10.stdout:1/931: creat d2/da/d25/d46/d51/d5d/d6e/f12f x:0 0 0 2026-03-09T20:48:17.609 INFO:tasks.workunit.client.0.vm07.stdout:3/948: mkdir d1/d5/d12e 0 2026-03-09T20:48:17.613 INFO:tasks.workunit.client.1.vm10.stdout:5/847: sync 2026-03-09T20:48:17.614 INFO:tasks.workunit.client.1.vm10.stdout:3/896: write dc/d14/d26/d29/fe7 [686711,48991] 0 2026-03-09T20:48:17.615 INFO:tasks.workunit.client.0.vm07.stdout:8/905: readlink d1/d3b/l83 0 2026-03-09T20:48:17.620 INFO:tasks.workunit.client.1.vm10.stdout:8/958: sync 2026-03-09T20:48:17.621 INFO:tasks.workunit.client.1.vm10.stdout:8/959: chown d0/d92/de8/c72 950 1 
2026-03-09T20:48:17.622 INFO:tasks.workunit.client.0.vm07.stdout:4/899: mknod d2/d55/d5d/d3f/d4a/d85/cfb 0 2026-03-09T20:48:17.622 INFO:tasks.workunit.client.1.vm10.stdout:9/997: dwrite d2/d3/de/d35/f107 [0,4194304] 0 2026-03-09T20:48:17.632 INFO:tasks.workunit.client.0.vm07.stdout:3/949: truncate d1/d5/d9/d11/d6d/dd0/d43/fbf 673992 0 2026-03-09T20:48:17.634 INFO:tasks.workunit.client.1.vm10.stdout:5/848: truncate d2/d39/d4b/f4e 3129972 0 2026-03-09T20:48:17.635 INFO:tasks.workunit.client.1.vm10.stdout:3/897: mknod dc/d14/d26/d29/d40/c12f 0 2026-03-09T20:48:17.635 INFO:tasks.workunit.client.0.vm07.stdout:8/906: rmdir d1/dc/d16/d26/d94 39 2026-03-09T20:48:17.638 INFO:tasks.workunit.client.1.vm10.stdout:6/943: fsync d3/d30/d7f/d36/fd7 0 2026-03-09T20:48:17.638 INFO:tasks.workunit.client.0.vm07.stdout:4/900: symlink d2/df/d59/d8a/lfc 0 2026-03-09T20:48:17.643 INFO:tasks.workunit.client.0.vm07.stdout:3/950: unlink d1/d5/d9/d2f/d34/d46/ffd 0 2026-03-09T20:48:17.648 INFO:tasks.workunit.client.0.vm07.stdout:9/950: mkdir d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/d54/d11e/d14a/d14d 0 2026-03-09T20:48:17.655 INFO:tasks.workunit.client.0.vm07.stdout:6/972: rename d8/d16/d22/d24/da0/dab/d40/d69/f39 to d8/d16/d22/d9b/f143 0 2026-03-09T20:48:17.656 INFO:tasks.workunit.client.1.vm10.stdout:3/898: chown dc/db4/de3/cf4 8085563 1 2026-03-09T20:48:17.657 INFO:tasks.workunit.client.0.vm07.stdout:4/901: symlink d2/d1f/lfd 0 2026-03-09T20:48:17.658 INFO:tasks.workunit.client.0.vm07.stdout:4/902: chown d2/df/d17/fd0 232422803 1 2026-03-09T20:48:17.659 INFO:tasks.workunit.client.1.vm10.stdout:5/849: symlink d2/d39/d4b/d7a/dd9/d10c/d132/d46/d99/l137 0 2026-03-09T20:48:17.664 INFO:tasks.workunit.client.1.vm10.stdout:2/950: write d5/d18/d27/d38/d61/dc8/f108 [846932,49690] 0 2026-03-09T20:48:17.664 INFO:tasks.workunit.client.1.vm10.stdout:0/903: link d2/d4a/d58/d82/d71/d5d/f8c d2/d9/f139 0 2026-03-09T20:48:17.665 INFO:tasks.workunit.client.0.vm07.stdout:9/951: symlink 
d4/d16/d29/d24/d7c/l14e 0 2026-03-09T20:48:17.665 INFO:tasks.workunit.client.1.vm10.stdout:0/904: read d2/d4a/d58/d82/d71/dca/d110/d30/f51 [96226,13069] 0 2026-03-09T20:48:17.670 INFO:tasks.workunit.client.0.vm07.stdout:9/952: dwrite d4/d11/d2a/f39 [0,4194304] 0 2026-03-09T20:48:17.673 INFO:tasks.workunit.client.0.vm07.stdout:8/907: mknod d1/d5d/d6f/c122 0 2026-03-09T20:48:17.681 INFO:tasks.workunit.client.1.vm10.stdout:6/944: fsync d3/d30/d7f/d24/d39/d9e/fe4 0 2026-03-09T20:48:17.684 INFO:tasks.workunit.client.0.vm07.stdout:0/976: write d1/d2/d98/de8/f113 [710345,100162] 0 2026-03-09T20:48:17.685 INFO:tasks.workunit.client.1.vm10.stdout:6/945: dwrite d3/da/d11/dfc/fe8 [0,4194304] 0 2026-03-09T20:48:17.686 INFO:tasks.workunit.client.0.vm07.stdout:4/903: truncate d2/d55/d5d/d3f/d4a/f5e 3908880 0 2026-03-09T20:48:17.688 INFO:tasks.workunit.client.1.vm10.stdout:8/960: unlink d0/l1a 0 2026-03-09T20:48:17.692 INFO:tasks.workunit.client.0.vm07.stdout:2/957: dwrite d2/d11/ffe [0,4194304] 0 2026-03-09T20:48:17.695 INFO:tasks.workunit.client.1.vm10.stdout:6/946: dread d3/da/d11/dfc/fe8 [0,4194304] 0 2026-03-09T20:48:17.703 INFO:tasks.workunit.client.0.vm07.stdout:4/904: dread d2/df/f23 [0,4194304] 0 2026-03-09T20:48:17.705 INFO:tasks.workunit.client.1.vm10.stdout:8/961: sync 2026-03-09T20:48:17.706 INFO:tasks.workunit.client.1.vm10.stdout:8/962: dread - d0/d22/d25/d2e/d41/d85/db9/d10d/f11f zero size 2026-03-09T20:48:17.709 INFO:tasks.workunit.client.1.vm10.stdout:7/922: write db/d21/fbc [5200125,105014] 0 2026-03-09T20:48:17.715 INFO:tasks.workunit.client.0.vm07.stdout:8/908: dread d1/dc/d16/d26/f36 [4194304,4194304] 0 2026-03-09T20:48:17.719 INFO:tasks.workunit.client.1.vm10.stdout:3/899: mknod dc/d14/d26/d29/d40/c130 0 2026-03-09T20:48:17.723 INFO:tasks.workunit.client.1.vm10.stdout:5/850: mknod d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d77/c138 0 2026-03-09T20:48:17.723 INFO:tasks.workunit.client.1.vm10.stdout:5/851: fdatasync d2/f23 0 2026-03-09T20:48:17.723 
INFO:tasks.workunit.client.0.vm07.stdout:0/977: fsync d1/f1a 0 2026-03-09T20:48:17.730 INFO:tasks.workunit.client.0.vm07.stdout:2/958: symlink d2/db/d1c/d8d/l139 0 2026-03-09T20:48:17.730 INFO:tasks.workunit.client.1.vm10.stdout:2/951: symlink d5/d18/d27/d38/dcf/l134 0 2026-03-09T20:48:17.730 INFO:tasks.workunit.client.1.vm10.stdout:9/998: dwrite d2/d3/db4/ddb/fff [0,4194304] 0 2026-03-09T20:48:17.730 INFO:tasks.workunit.client.1.vm10.stdout:1/932: write d2/da/d25/d46/d80/da0/d92/feb [4614252,54714] 0 2026-03-09T20:48:17.730 INFO:tasks.workunit.client.1.vm10.stdout:4/906: dwrite d1/d2/d5c/d64/d6b/d81/dac/d39/fd1 [0,4194304] 0 2026-03-09T20:48:17.733 INFO:tasks.workunit.client.0.vm07.stdout:2/959: dwrite d2/d11/ddb/d72/d82/f123 [0,4194304] 0 2026-03-09T20:48:17.745 INFO:tasks.workunit.client.0.vm07.stdout:2/960: dwrite d2/db/d49/f81 [0,4194304] 0 2026-03-09T20:48:17.757 INFO:tasks.workunit.client.0.vm07.stdout:9/953: unlink d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/dbb/db6/l139 0 2026-03-09T20:48:17.764 INFO:tasks.workunit.client.1.vm10.stdout:6/947: rmdir d3/da 39 2026-03-09T20:48:17.769 INFO:tasks.workunit.client.0.vm07.stdout:3/951: dwrite d1/d5/d9/d2f/d99/dd8/de0/fe1 [0,4194304] 0 2026-03-09T20:48:17.778 INFO:tasks.workunit.client.1.vm10.stdout:6/948: dread d3/f52 [0,4194304] 0 2026-03-09T20:48:17.781 INFO:tasks.workunit.client.0.vm07.stdout:6/973: creat d8/d16/d22/f144 x:0 0 0 2026-03-09T20:48:17.789 INFO:tasks.workunit.client.0.vm07.stdout:0/978: symlink d1/d1f/d9f/df8/l135 0 2026-03-09T20:48:17.794 INFO:tasks.workunit.client.0.vm07.stdout:0/979: stat d1/d2/dc/d80/f87 0 2026-03-09T20:48:17.797 INFO:tasks.workunit.client.1.vm10.stdout:5/852: rename d2/d58/lb0 to d2/d39/dbf/d66/d10f/l139 0 2026-03-09T20:48:17.805 INFO:tasks.workunit.client.1.vm10.stdout:9/999: creat d2/d3/d6d/db7/f143 x:0 0 0 2026-03-09T20:48:17.805 INFO:tasks.workunit.client.0.vm07.stdout:9/954: mknod d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/d54/c14f 0 2026-03-09T20:48:17.806 
INFO:tasks.workunit.client.0.vm07.stdout:9/955: read d4/d16/d29/fab [181806,94270] 0 2026-03-09T20:48:17.809 INFO:tasks.workunit.client.0.vm07.stdout:3/952: fdatasync d1/d5/d9/d2f/d34/d46/f8a 0 2026-03-09T20:48:17.818 INFO:tasks.workunit.client.0.vm07.stdout:6/974: mknod d8/d16/d22/d9b/da6/c145 0 2026-03-09T20:48:17.819 INFO:tasks.workunit.client.0.vm07.stdout:0/980: chown d1/d2/d4b/d106/c115 43815245 1 2026-03-09T20:48:17.830 INFO:tasks.workunit.client.0.vm07.stdout:0/981: mknod d1/d2/d98/de8/c136 0 2026-03-09T20:48:17.832 INFO:tasks.workunit.client.0.vm07.stdout:6/975: truncate d8/d16/d4b/fbc 1630747 0 2026-03-09T20:48:17.842 INFO:tasks.workunit.client.0.vm07.stdout:6/976: dread d8/d16/d22/d24/f25 [0,4194304] 0 2026-03-09T20:48:17.845 INFO:tasks.workunit.client.0.vm07.stdout:3/953: link d1/d5/d9/daf/de3/f10e d1/d5/d9/d11/d6d/dd0/d95/f12f 0 2026-03-09T20:48:17.854 INFO:tasks.workunit.client.1.vm10.stdout:8/963: write d0/d22/d25/d2e/d41/d85/fa7 [2196585,46787] 0 2026-03-09T20:48:17.855 INFO:tasks.workunit.client.0.vm07.stdout:2/961: link d2/d11/ddb/l122 d2/db/d28/l13a 0 2026-03-09T20:48:17.856 INFO:tasks.workunit.client.0.vm07.stdout:2/962: readlink d2/db/d1c/d4a/d88/le9 0 2026-03-09T20:48:17.857 INFO:tasks.workunit.client.0.vm07.stdout:2/963: chown d2/db/d1c/d4a/d88 5338 1 2026-03-09T20:48:17.859 INFO:tasks.workunit.client.1.vm10.stdout:7/923: write db/d28/d4c/fdc [978757,82296] 0 2026-03-09T20:48:17.862 INFO:tasks.workunit.client.0.vm07.stdout:2/964: dwrite d2/d11/d56/f137 [0,4194304] 0 2026-03-09T20:48:17.874 INFO:tasks.workunit.client.0.vm07.stdout:6/977: mkdir d8/d16/d22/d24/da0/dab/d40/d105/d146 0 2026-03-09T20:48:17.875 INFO:tasks.workunit.client.1.vm10.stdout:3/900: dwrite dc/d14/d20/d21/d3b/fc0 [0,4194304] 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local 
ceph-mon[112105]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: pidfile_write: ignore empty --pid-file 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: load: jerasure load: lrc 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: RocksDB version: 7.9.2 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Git sha 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: DB SUMMARY 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: DB Session ID: FHQJBC2GH92QG1LBEH0O 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: CURRENT file: CURRENT 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: MANIFEST file: MANIFEST-000015 size: 768 Bytes 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm07/store.db dir, Total Num: 1, files: 000023.sst 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm07/store.db: 000021.log size: 
2106539 ; 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.error_if_exists: 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.create_if_missing: 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.paranoid_checks: 1 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.env: 0x558aabf2cdc0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.info_log: 0x558aada45900 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.statistics: (nil) 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.use_fsync: 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_log_file_size: 0 
2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.allow_fallocate: 1 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.use_direct_reads: 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T20:48:17.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.db_log_dir: 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.wal_dir: 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.table_cache_numshardbits: 6 
2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.write_buffer_manager: 0x558aada49900 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: 
Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.unordered_write: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.row_cache: None 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.wal_filter: None 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local 
ceph-mon[112105]: rocksdb: Options.two_write_queues: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.wal_compression: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.atomic_flush: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.log_readahead_size: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 
vm07.local ceph-mon[112105]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_background_jobs: 2 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_background_compactions: -1 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_subcompactions: 1 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T20:48:17.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.stats_history_buffer_size: 1048576 
2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_open_files: -1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_background_flushes: -1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Compression algorithms supported: 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: kZSTD supported: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: kXpressCompression supported: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: kBZip2Compression supported: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: kLZ4Compression supported: 1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: kZlibCompression supported: 1 2026-03-09T20:48:17.887 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: kSnappyCompression supported: 1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm07/store.db/MANIFEST-000015 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.merge_operator: 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_filter: None 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T20:48:17.887 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558aada44500) 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: cache_index_and_filter_blocks: 1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: pin_top_level_index_and_filter: 1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: index_type: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: data_block_index_type: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: index_shortening: 1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: checksum: 4 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: no_block_cache: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_cache: 0x558aada69350 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_cache_name: BinnedLRUCache 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_cache_options: 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: capacity : 536870912 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: num_shard_bits : 4 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: strict_capacity_limit : 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: high_pri_pool_ratio: 0.000 2026-03-09T20:48:17.887 
INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_cache_compressed: (nil) 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: persistent_cache: (nil) 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_size: 4096 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_size_deviation: 10 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_restart_interval: 16 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: index_block_restart_interval: 1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: metadata_block_size: 4096 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: partition_filters: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: use_delta_encoding: 1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: filter_policy: bloomfilter 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: whole_key_filtering: 1 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: verify_compression: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: read_amp_bytes_per_bit: 0 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: format_version: 5 2026-03-09T20:48:17.887 INFO:journalctl@ceph.mon.vm07.vm07.stdout: enable_index_compression: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout: block_align: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout: max_auto_readahead_size: 262144 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout: prepopulate_block_cache: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout: initial_auto_readahead_size: 8192 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.write_buffer_size: 33554432 
2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compression: NoCompression 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.num_levels: 7 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T20:48:17.888 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: 
Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local 
ceph-mon[112105]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T20:48:17.888 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.inplace_update_support: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.bloom_locality: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.max_successive_merges: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: 
Options.report_bg_io_stats: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.ttl: 2592000 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.enable_blob_files: false 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.min_blob_size: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T20:48:17.889 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm07/store.db/MANIFEST-000015 succeeded,manifest_file_number is 15, next_file_number is 25, last_sequence is 7457, log_number is 21,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 21 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 21 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4e1294e8-4400-4e9a-9a02-67a268a55194 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773089297484574, "job": 1, "event": "recovery_started", "wal_files": [21]} 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #21 mode 2 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773089297537926, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 26, 
"file_size": 1888292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7458, "largest_seqno": 8122, "table_properties": {"data_size": 1883856, "index_size": 2563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 7814, "raw_average_key_size": 23, "raw_value_size": 1876232, "raw_average_value_size": 5702, "num_data_blocks": 122, "num_entries": 329, "num_filter_entries": 329, "num_deletions": 2, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773089297, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4e1294e8-4400-4e9a-9a02-67a268a55194", "db_session_id": "FHQJBC2GH92QG1LBEH0O", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773089297538716, "job": 1, "event": "recovery_finished"} 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/version_set.cc:5047] Creating manifest 28 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm07/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558aada6ae00 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: DB pointer 0x558aadb76000 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: ** DB Stats ** 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Uptime(secs): 0.1 total, 0.1 interval 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Interval stall: 
00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: ** Compaction Stats [default] ** 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: L0 1/0 1.80 MB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 40.8 0.04 0.00 1 0.044 0 0 0.0 0.0 2026-03-09T20:48:17.889 INFO:journalctl@ceph.mon.vm07.vm07.stdout: L6 1/0 6.41 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Sum 2/0 8.21 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 40.8 0.04 0.00 1 0.044 0 0 0.0 0.0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 40.8 0.04 0.00 1 0.044 0 0 0.0 0.0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: ** Compaction Stats [default] ** 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 40.8 0.04 0.00 1 0.044 0 0 0.0 0.0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Uptime(secs): 0.1 total, 0.1 interval 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Flush(GB): cumulative 0.002, interval 0.002 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Cumulative compaction: 0.00 GB write, 27.69 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Interval compaction: 0.00 GB write, 27.69 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T20:48:17.890 
INFO:journalctl@ceph.mon.vm07.vm07.stdout: Block cache BinnedLRUCache@0x558aada69350#2 capacity: 512.00 MB usage: 3.73 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Block cache entry stats(count,size,portion): FilterBlock(1,0.91 KB,0.000172853%) IndexBlock(1,2.83 KB,0.000539422%) Misc(1,0.00 KB,0%) 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: starting mon.vm07 rank 0 at public addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] at bind addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon_data /var/lib/ceph/mon/ceph-vm07 fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: mon.vm07@-1(???) 
e2 preinit fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: mon.vm07@-1(???).mds e11 new map 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: mon.vm07@-1(???).mds e11 print_map 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: e11 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: btime 1970-01-01T00:00:00:000000+0000 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: legacy client fscid: 1 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: Filesystem 'cephfs' (1) 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: fs_name cephfs 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: epoch 11 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: tableserver 0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: root 0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: session_timeout 60 2026-03-09T20:48:17.890 
INFO:journalctl@ceph.mon.vm07.vm07.stdout: session_autoclose 300 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: max_file_size 1099511627776 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: max_xattr_size 65536 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: required_client_features {} 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: last_failure 0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: last_failure_osd_epoch 0 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: max_mds 2 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: in 0,1 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: up {0=14476,1=24291} 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: failed 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: damaged 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: stopped 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: data_pools [3] 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: metadata_pool 2 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: inline_data disabled 2026-03-09T20:48:17.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout: balancer 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout: bal_rank_mask -1 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout: standby_count_wanted 1 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout: qdb_cluster leader: 0 members: 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 
[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout: [mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout: [mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout: [mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout: 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: mon.vm07@-1(???).osd e46 crush map has features 3314933000852226048, adjusting msgr requires 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: mon.vm07@-1(???).osd e46 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: mon.vm07@-1(???).osd e46 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: mon.vm07@-1(???).osd e46 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T20:48:17.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:17 vm07.local ceph-mon[112105]: 
mon.vm07@-1(???).paxosservice(auth 1..21) refresh upgraded, format 0 -> 3 2026-03-09T20:48:17.891 INFO:tasks.workunit.client.0.vm07.stdout:8/909: write d1/dc/d16/d26/d94/fe4 [734719,93002] 0 2026-03-09T20:48:17.893 INFO:tasks.workunit.client.0.vm07.stdout:4/905: dwrite d2/d55/d5d/f6f [0,4194304] 0 2026-03-09T20:48:17.900 INFO:tasks.workunit.client.0.vm07.stdout:0/982: link d1/d2/d4b/d106/c115 d1/dc0/c137 0 2026-03-09T20:48:17.905 INFO:tasks.workunit.client.0.vm07.stdout:4/906: creat d2/d55/d5d/d3f/d4a/d4b/d52/d5c/ffe x:0 0 0 2026-03-09T20:48:17.907 INFO:tasks.workunit.client.0.vm07.stdout:9/956: sync 2026-03-09T20:48:17.911 INFO:tasks.workunit.client.0.vm07.stdout:2/965: symlink d2/db/d28/d120/l13b 0 2026-03-09T20:48:17.911 INFO:tasks.workunit.client.0.vm07.stdout:6/978: mkdir d8/d16/d22/d9b/de4/d147 0 2026-03-09T20:48:17.915 INFO:tasks.workunit.client.1.vm10.stdout:4/907: rename d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/f73 to d1/dd8/f12a 0 2026-03-09T20:48:17.925 INFO:tasks.workunit.client.1.vm10.stdout:0/905: rmdir d2/d4a/d58/d82/d93/db1/d121 0 2026-03-09T20:48:17.925 INFO:tasks.workunit.client.1.vm10.stdout:1/933: creat d2/da/d25/d46/d51/d5d/d128/f130 x:0 0 0 2026-03-09T20:48:17.925 INFO:tasks.workunit.client.0.vm07.stdout:9/957: creat d4/d16/d78/dc4/f150 x:0 0 0 2026-03-09T20:48:17.925 INFO:tasks.workunit.client.0.vm07.stdout:6/979: dread d8/d16/d22/d9b/de4/d85/f83 [0,4194304] 0 2026-03-09T20:48:17.925 INFO:tasks.workunit.client.0.vm07.stdout:2/966: dread d2/db/d28/d90/f99 [0,4194304] 0 2026-03-09T20:48:17.930 INFO:tasks.workunit.client.0.vm07.stdout:0/983: dread d1/d1f/f63 [0,4194304] 0 2026-03-09T20:48:17.936 INFO:tasks.workunit.client.1.vm10.stdout:2/952: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f71 [0,4194304] 0 2026-03-09T20:48:17.938 INFO:tasks.workunit.client.1.vm10.stdout:8/964: unlink d0/d22/d2f/f31 0 2026-03-09T20:48:17.938 INFO:tasks.workunit.client.0.vm07.stdout:2/967: creat d2/d11/ddb/d6e/f13c x:0 0 0 2026-03-09T20:48:17.938 
INFO:tasks.workunit.client.1.vm10.stdout:8/965: fsync d0/d22/d25/d2e/d41/de9/f133 0 2026-03-09T20:48:17.946 INFO:tasks.workunit.client.0.vm07.stdout:9/958: mknod d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/da5/db8/dc1/d146/c151 0 2026-03-09T20:48:17.954 INFO:tasks.workunit.client.1.vm10.stdout:3/901: truncate dc/d14/d26/d37/ffd 794181 0 2026-03-09T20:48:17.964 INFO:tasks.workunit.client.1.vm10.stdout:6/949: rename d3/da/d11/d89/fff to d3/da/d11/d89/db9/f11a 0 2026-03-09T20:48:17.964 INFO:tasks.workunit.client.0.vm07.stdout:0/984: mkdir d1/dc0/dcc/d138 0 2026-03-09T20:48:17.964 INFO:tasks.workunit.client.0.vm07.stdout:6/980: symlink d8/d5d/l148 0 2026-03-09T20:48:17.964 INFO:tasks.workunit.client.0.vm07.stdout:6/981: dread - d8/d16/d22/d9b/de4/d135/f13e zero size 2026-03-09T20:48:17.964 INFO:tasks.workunit.client.0.vm07.stdout:6/982: fsync d8/d16/d22/d9b/de4/d85/df8/f134 0 2026-03-09T20:48:17.964 INFO:tasks.workunit.client.0.vm07.stdout:3/954: write d1/d5/d9/d11/d6d/dd0/d43/f90 [3127498,6677] 0 2026-03-09T20:48:17.964 INFO:tasks.workunit.client.0.vm07.stdout:9/959: rename d4/d16/d29/d24/d37/d44/d62/d108/d121/db9/d123 to d4/d16/d78/dc4/d152 0 2026-03-09T20:48:17.967 INFO:tasks.workunit.client.0.vm07.stdout:0/985: creat d1/d2/d4b/d106/f139 x:0 0 0 2026-03-09T20:48:17.968 INFO:tasks.workunit.client.1.vm10.stdout:0/906: creat d2/d4a/d58/d82/d93/db1/f13a x:0 0 0 2026-03-09T20:48:17.970 INFO:tasks.workunit.client.1.vm10.stdout:0/907: fsync d2/d4a/d58/d82/d71/dca/d110/d30/ff9 0 2026-03-09T20:48:17.971 INFO:tasks.workunit.client.0.vm07.stdout:8/910: write d1/dc/d16/d26/f36 [4847617,13865] 0 2026-03-09T20:48:17.977 INFO:tasks.workunit.client.1.vm10.stdout:3/902: sync 2026-03-09T20:48:17.977 INFO:tasks.workunit.client.1.vm10.stdout:6/950: sync 2026-03-09T20:48:17.978 INFO:tasks.workunit.client.1.vm10.stdout:2/953: truncate d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f5c 1185926 0 2026-03-09T20:48:17.979 INFO:tasks.workunit.client.0.vm07.stdout:9/960: truncate 
d4/d16/d29/d24/d37/d44/d62/d8e/fe0 319482 0 2026-03-09T20:48:17.985 INFO:tasks.workunit.client.1.vm10.stdout:6/951: dwrite d3/da/d11/d89/db9/dd1/ff9 [0,4194304] 0 2026-03-09T20:48:17.987 INFO:tasks.workunit.client.1.vm10.stdout:5/853: dwrite d2/d39/d4b/d7a/ffc [4194304,4194304] 0 2026-03-09T20:48:17.989 INFO:tasks.workunit.client.0.vm07.stdout:4/907: dwrite d2/df/ff4 [0,4194304] 0 2026-03-09T20:48:17.991 INFO:tasks.workunit.client.0.vm07.stdout:4/908: write d2/f9 [654694,69238] 0 2026-03-09T20:48:17.991 INFO:tasks.workunit.client.0.vm07.stdout:4/909: chown d2/df/l4e 93541514 1 2026-03-09T20:48:17.993 INFO:tasks.workunit.client.1.vm10.stdout:5/854: dwrite d2/d39/dbf/fb6 [0,4194304] 0 2026-03-09T20:48:17.996 INFO:tasks.workunit.client.1.vm10.stdout:5/855: chown d2/f71 491 1 2026-03-09T20:48:18.000 INFO:tasks.workunit.client.1.vm10.stdout:5/856: sync 2026-03-09T20:48:18.016 INFO:tasks.workunit.client.1.vm10.stdout:4/908: link d1/d2/d5c/d64/d6b/d79/f125 d1/d2/d3/d70/d78/f12b 0 2026-03-09T20:48:18.023 INFO:tasks.workunit.client.1.vm10.stdout:1/934: write d2/ff9 [556921,105255] 0 2026-03-09T20:48:18.026 INFO:tasks.workunit.client.1.vm10.stdout:3/903: creat dc/d14/d26/d37/f131 x:0 0 0 2026-03-09T20:48:18.027 INFO:tasks.workunit.client.1.vm10.stdout:2/954: mkdir d5/d5b/d135 0 2026-03-09T20:48:18.028 INFO:tasks.workunit.client.1.vm10.stdout:7/924: creat db/d28/d2b/d36/f129 x:0 0 0 2026-03-09T20:48:18.029 INFO:tasks.workunit.client.1.vm10.stdout:6/952: symlink d3/d79/l11b 0 2026-03-09T20:48:18.032 INFO:tasks.workunit.client.1.vm10.stdout:0/908: mknod d2/d9/db8/d10f/c13b 0 2026-03-09T20:48:18.038 INFO:tasks.workunit.client.1.vm10.stdout:0/909: dread - d2/d9/db8/d10f/d11/dd1/db7/dcd/d63/fad zero size 2026-03-09T20:48:18.038 INFO:tasks.workunit.client.1.vm10.stdout:1/935: symlink d2/da/l131 0 2026-03-09T20:48:18.039 INFO:tasks.workunit.client.1.vm10.stdout:7/925: sync 2026-03-09T20:48:18.039 INFO:tasks.workunit.client.0.vm07.stdout:2/968: rmdir d2/db/d1c/d4a/d88/d11c 0 
2026-03-09T20:48:18.040 INFO:tasks.workunit.client.1.vm10.stdout:7/926: write db/d46/f85 [5312369,20009] 0 2026-03-09T20:48:18.040 INFO:tasks.workunit.client.0.vm07.stdout:8/911: chown d1/d5d/d6f/d80/faa 56 1 2026-03-09T20:48:18.040 INFO:tasks.workunit.client.0.vm07.stdout:8/912: stat d1/d3b/f3e 0 2026-03-09T20:48:18.042 INFO:tasks.workunit.client.0.vm07.stdout:6/983: dwrite d8/d50/f111 [0,4194304] 0 2026-03-09T20:48:18.044 INFO:tasks.workunit.client.0.vm07.stdout:0/986: symlink d1/l13a 0 2026-03-09T20:48:18.046 INFO:tasks.workunit.client.1.vm10.stdout:2/955: dread d5/d18/d27/d38/f45 [0,4194304] 0 2026-03-09T20:48:18.050 INFO:tasks.workunit.client.0.vm07.stdout:2/969: symlink d2/d11/ddb/d6e/l13d 0 2026-03-09T20:48:18.053 INFO:tasks.workunit.client.0.vm07.stdout:4/910: read d2/d55/d5d/d3f/d4a/d4b/d52/f82 [2503058,22596] 0 2026-03-09T20:48:18.057 INFO:tasks.workunit.client.1.vm10.stdout:6/953: dread d3/da/d11/f8b [0,4194304] 0 2026-03-09T20:48:18.060 INFO:tasks.workunit.client.1.vm10.stdout:0/910: creat d2/d9/db8/d10f/d11/dd1/d138/f13c x:0 0 0 2026-03-09T20:48:18.060 INFO:tasks.workunit.client.1.vm10.stdout:1/936: mknod d2/da/d25/d46/d80/da0/d92/db5/d10f/c132 0 2026-03-09T20:48:18.061 INFO:tasks.workunit.client.1.vm10.stdout:1/937: write d2/da/f11c [546885,104219] 0 2026-03-09T20:48:18.064 INFO:tasks.workunit.client.1.vm10.stdout:1/938: dread d2/f14 [0,4194304] 0 2026-03-09T20:48:18.064 INFO:tasks.workunit.client.0.vm07.stdout:8/913: symlink d1/d5d/d6f/d2f/d4d/dd4/l123 0 2026-03-09T20:48:18.069 INFO:tasks.workunit.client.0.vm07.stdout:3/955: rename d1/d5/d9/d2f/d34/fe9 to d1/f130 0 2026-03-09T20:48:18.072 INFO:tasks.workunit.client.1.vm10.stdout:0/911: dread d2/d4a/d58/d82/d71/d5d/fdd [0,4194304] 0 2026-03-09T20:48:18.080 INFO:tasks.workunit.client.0.vm07.stdout:6/984: mknod d8/d5d/d97/dc4/de3/c149 0 2026-03-09T20:48:18.085 INFO:tasks.workunit.client.1.vm10.stdout:3/904: getdents dc/d14/d20/d2e/d56 0 2026-03-09T20:48:18.086 
INFO:tasks.workunit.client.1.vm10.stdout:5/857: link d2/c19 d2/d1b/c13a 0 2026-03-09T20:48:18.086 INFO:tasks.workunit.client.0.vm07.stdout:3/956: truncate d1/d5/d9/d11/d1f/f27 287695 0 2026-03-09T20:48:18.086 INFO:tasks.workunit.client.1.vm10.stdout:1/939: fsync d2/da/d25/d46/d51/fd6 0 2026-03-09T20:48:18.087 INFO:tasks.workunit.client.1.vm10.stdout:0/912: mkdir d2/d9/db8/d10f/d11/d92/d13d 0 2026-03-09T20:48:18.089 INFO:tasks.workunit.client.1.vm10.stdout:2/956: symlink d5/d18/d1b/l136 0 2026-03-09T20:48:18.091 INFO:tasks.workunit.client.0.vm07.stdout:8/914: creat d1/dc/dba/d115/f124 x:0 0 0 2026-03-09T20:48:18.092 INFO:tasks.workunit.client.1.vm10.stdout:3/905: mknod dc/d14/d26/d8f/c132 0 2026-03-09T20:48:18.092 INFO:tasks.workunit.client.1.vm10.stdout:5/858: fdatasync d2/d39/d4b/d7a/dd9/d10c/d132/f38 0 2026-03-09T20:48:18.095 INFO:tasks.workunit.client.1.vm10.stdout:0/913: rename d2/d4a/d58/d82/d93/fbc to d2/d9/db8/d10f/d11/dd1/d34/f13e 0 2026-03-09T20:48:18.095 INFO:tasks.workunit.client.0.vm07.stdout:9/961: getdents d4/d16/d29/d24 0 2026-03-09T20:48:18.095 INFO:tasks.workunit.client.1.vm10.stdout:0/914: dread - d2/d4a/d58/d82/d93/db1/f13a zero size 2026-03-09T20:48:18.096 INFO:tasks.workunit.client.1.vm10.stdout:0/915: chown d2/d9/db8/d10f/d48/dac/c10d 531 1 2026-03-09T20:48:18.099 INFO:tasks.workunit.client.1.vm10.stdout:6/954: dread d3/d30/d7f/f25 [0,4194304] 0 2026-03-09T20:48:18.102 INFO:tasks.workunit.client.0.vm07.stdout:6/985: rename d8/d16/d22/d9b/de4/ff7 to d8/d26/d7d/dfd/f14a 0 2026-03-09T20:48:18.104 INFO:tasks.workunit.client.1.vm10.stdout:2/957: unlink d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f120 0 2026-03-09T20:48:18.115 INFO:tasks.workunit.client.1.vm10.stdout:8/966: dwrite d0/d22/d25/d2e/f33 [0,4194304] 0 2026-03-09T20:48:18.120 INFO:tasks.workunit.client.1.vm10.stdout:4/909: dwrite d1/d2/f2e [4194304,4194304] 0 2026-03-09T20:48:18.135 INFO:tasks.workunit.client.1.vm10.stdout:7/927: write db/d28/d2b/d36/d3b/dd5/fe0 [945105,17823] 0 
2026-03-09T20:48:18.135 INFO:tasks.workunit.client.0.vm07.stdout:0/987: write d1/f31 [5079023,113366] 0 2026-03-09T20:48:18.139 INFO:tasks.workunit.client.1.vm10.stdout:5/859: symlink d2/d39/dbf/d66/l13b 0 2026-03-09T20:48:18.140 INFO:tasks.workunit.client.1.vm10.stdout:1/940: dwrite d2/da/f10c [0,4194304] 0 2026-03-09T20:48:18.163 INFO:tasks.workunit.client.0.vm07.stdout:9/962: mknod d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d89/da7/ddd/c153 0 2026-03-09T20:48:18.171 INFO:tasks.workunit.client.1.vm10.stdout:6/955: creat d3/d30/d7f/d4a/f11c x:0 0 0 2026-03-09T20:48:18.172 INFO:tasks.workunit.client.1.vm10.stdout:6/956: write d3/f2f [2157000,109728] 0 2026-03-09T20:48:18.173 INFO:tasks.workunit.client.1.vm10.stdout:6/957: chown d3/da/d11/d31/d47/d87 109486 1 2026-03-09T20:48:18.173 INFO:tasks.workunit.client.1.vm10.stdout:6/958: write d3/d30/d7f/d4a/f11c [493999,40446] 0 2026-03-09T20:48:18.176 INFO:tasks.workunit.client.0.vm07.stdout:4/911: getdents d2/d55/d5d/d3f/d4a/d4b/d52/d5c/d90 0 2026-03-09T20:48:18.179 INFO:tasks.workunit.client.0.vm07.stdout:4/912: write d2/d1f/f2c [1002785,113404] 0 2026-03-09T20:48:18.180 INFO:tasks.workunit.client.0.vm07.stdout:0/988: truncate d1/d2/f1b 5099357 0 2026-03-09T20:48:18.186 INFO:tasks.workunit.client.1.vm10.stdout:7/928: mknod db/d46/d89/dbf/d78/c12a 0 2026-03-09T20:48:18.188 INFO:tasks.workunit.client.1.vm10.stdout:5/860: creat d2/d39/dbf/da9/f13c x:0 0 0 2026-03-09T20:48:18.189 INFO:tasks.workunit.client.1.vm10.stdout:1/941: creat d2/da/d25/d46/d80/da0/d92/f133 x:0 0 0 2026-03-09T20:48:18.190 INFO:tasks.workunit.client.1.vm10.stdout:1/942: chown d2/da/d25/d46/d80/da0/d92/db5/lde 3536 1 2026-03-09T20:48:18.191 INFO:tasks.workunit.client.1.vm10.stdout:0/916: mkdir d2/d4a/d58/d82/d13f 0 2026-03-09T20:48:18.193 INFO:tasks.workunit.client.1.vm10.stdout:6/959: mknod d3/d30/d33/c11d 0 2026-03-09T20:48:18.199 INFO:tasks.workunit.client.1.vm10.stdout:2/958: mknod d5/d18/d9f/dc2/c137 0 2026-03-09T20:48:18.200 
INFO:tasks.workunit.client.1.vm10.stdout:8/967: mkdir d0/d95/d134 0 2026-03-09T20:48:18.204 INFO:tasks.workunit.client.1.vm10.stdout:7/929: unlink db/d28/d2b/d36/l116 0 2026-03-09T20:48:18.204 INFO:tasks.workunit.client.1.vm10.stdout:7/930: chown db/d28/d2b/d36/c120 3594 1 2026-03-09T20:48:18.207 INFO:tasks.workunit.client.1.vm10.stdout:2/959: dread d5/d18/f63 [0,4194304] 0 2026-03-09T20:48:18.209 INFO:tasks.workunit.client.0.vm07.stdout:8/915: dwrite d1/d5d/d6f/d2f/f9f [0,4194304] 0 2026-03-09T20:48:18.219 INFO:tasks.workunit.client.1.vm10.stdout:4/910: write d1/d67/fda [310393,20970] 0 2026-03-09T20:48:18.231 INFO:tasks.workunit.client.1.vm10.stdout:3/906: truncate dc/d14/d90/f101 1414843 0 2026-03-09T20:48:18.233 INFO:tasks.workunit.client.1.vm10.stdout:1/943: creat d2/da/d25/d46/d80/da0/d92/db5/dc7/f134 x:0 0 0 2026-03-09T20:48:18.233 INFO:tasks.workunit.client.1.vm10.stdout:0/917: dread - d2/d4a/d58/d82/d93/f131 zero size 2026-03-09T20:48:18.233 INFO:tasks.workunit.client.1.vm10.stdout:6/960: truncate d3/da/f58 531093 0 2026-03-09T20:48:18.234 INFO:tasks.workunit.client.1.vm10.stdout:0/918: chown d2/d4a/d58/d82/d71/d8e/lc2 4100 1 2026-03-09T20:48:18.236 INFO:tasks.workunit.client.1.vm10.stdout:6/961: write d3/da/d11/d89/db9/dd1/dd2/d60/fbc [291209,86315] 0 2026-03-09T20:48:18.236 INFO:tasks.workunit.client.1.vm10.stdout:4/911: fsync d1/dd8/de8/f11f 0 2026-03-09T20:48:18.241 INFO:tasks.workunit.client.1.vm10.stdout:3/907: dread dc/d14/d22/fbf [0,4194304] 0 2026-03-09T20:48:18.254 INFO:tasks.workunit.client.1.vm10.stdout:7/931: write db/d28/d2b/d36/d3b/f42 [1581358,55995] 0 2026-03-09T20:48:18.254 INFO:tasks.workunit.client.1.vm10.stdout:1/944: write d2/da/d25/d46/fa7 [356116,125182] 0 2026-03-09T20:48:18.257 INFO:tasks.workunit.client.1.vm10.stdout:6/962: creat d3/d30/f11e x:0 0 0 2026-03-09T20:48:18.260 INFO:tasks.workunit.client.1.vm10.stdout:5/861: rename d2/d39/d4b/d7a/dd9/d10c/d132/c3b to d2/d27/c13d 0 2026-03-09T20:48:18.266 
INFO:tasks.workunit.client.1.vm10.stdout:4/912: mknod d1/d2/d5c/d64/d6b/d81/dac/d1b/c12c 0 2026-03-09T20:48:18.267 INFO:tasks.workunit.client.1.vm10.stdout:4/913: chown d1/dd8/f104 179310 1 2026-03-09T20:48:18.267 INFO:tasks.workunit.client.1.vm10.stdout:8/968: rename d0/d22/d25/d2e/d41/de9/dfc/l50 to d0/d22/d25/d2e/d41/d85/db9/l135 0 2026-03-09T20:48:18.269 INFO:tasks.workunit.client.1.vm10.stdout:5/862: creat d2/d1b/d54/f13e x:0 0 0 2026-03-09T20:48:18.270 INFO:tasks.workunit.client.1.vm10.stdout:2/960: getdents d5/d18/d27/d38/d61/dc8 0 2026-03-09T20:48:18.270 INFO:tasks.workunit.client.0.vm07.stdout:9/963: symlink d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/d54/l154 0 2026-03-09T20:48:18.273 INFO:tasks.workunit.client.1.vm10.stdout:2/961: read d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f5d [236006,14486] 0 2026-03-09T20:48:18.273 INFO:tasks.workunit.client.0.vm07.stdout:9/964: dwrite f2 [0,4194304] 0 2026-03-09T20:48:18.274 INFO:tasks.workunit.client.1.vm10.stdout:2/962: chown d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/l7a 1 1 2026-03-09T20:48:18.276 INFO:tasks.workunit.client.0.vm07.stdout:3/957: rename d1/d5/d9/d11/d6d/dd0/d43/fbf to d1/d5/d9/d2f/d3d/d71/db5/f131 0 2026-03-09T20:48:18.276 INFO:tasks.workunit.client.1.vm10.stdout:3/908: dread dc/d14/d26/d29/d2a/d55/fa3 [0,4194304] 0 2026-03-09T20:48:18.277 INFO:tasks.workunit.client.1.vm10.stdout:6/963: mknod d3/d30/d7f/d24/d39/d10e/c11f 0 2026-03-09T20:48:18.277 INFO:tasks.workunit.client.0.vm07.stdout:2/970: link d2/d11/ddb/d72/d82/cee d2/c13e 0 2026-03-09T20:48:18.277 INFO:tasks.workunit.client.0.vm07.stdout:2/971: read - d2/d11/ddb/d6e/f12b zero size 2026-03-09T20:48:18.278 INFO:tasks.workunit.client.1.vm10.stdout:6/964: chown d3/d30/d7f/d24/f99 51275634 1 2026-03-09T20:48:18.278 INFO:tasks.workunit.client.1.vm10.stdout:4/914: read d1/d2/d5c/d64/d6b/d81/dac/d39/f56 [197967,39149] 0 2026-03-09T20:48:18.279 INFO:tasks.workunit.client.1.vm10.stdout:1/945: rename d2/d89/l97 to d2/da/d25/d46/dbe/l135 0 
2026-03-09T20:48:18.279 INFO:tasks.workunit.client.0.vm07.stdout:0/989: fsync d1/d2/d33/fea 0 2026-03-09T20:48:18.280 INFO:tasks.workunit.client.1.vm10.stdout:5/863: rmdir d2/d39/dbf/d69/d96 39 2026-03-09T20:48:18.282 INFO:tasks.workunit.client.1.vm10.stdout:0/919: getdents d2/d4a/d58/d82/d71 0 2026-03-09T20:48:18.296 INFO:tasks.workunit.client.1.vm10.stdout:7/932: creat db/d28/d2b/d36/d63/f12b x:0 0 0 2026-03-09T20:48:18.296 INFO:tasks.workunit.client.1.vm10.stdout:2/963: truncate d5/d18/d9f/ffe 905741 0 2026-03-09T20:48:18.299 INFO:tasks.workunit.client.1.vm10.stdout:3/909: read - dc/d14/d26/dcb/ff9 zero size 2026-03-09T20:48:18.304 INFO:tasks.workunit.client.0.vm07.stdout:6/986: rename d8/d16/d22/ff1 to d8/d16/d22/d9b/de4/d147/f14b 0 2026-03-09T20:48:18.305 INFO:tasks.workunit.client.1.vm10.stdout:4/915: fsync d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f75 0 2026-03-09T20:48:18.307 INFO:tasks.workunit.client.1.vm10.stdout:1/946: creat d2/da/d25/d46/d80/da0/d92/db5/f136 x:0 0 0 2026-03-09T20:48:18.310 INFO:tasks.workunit.client.1.vm10.stdout:0/920: mkdir d2/d4a/d58/d82/d71/dca/d110/d30/d140 0 2026-03-09T20:48:18.314 INFO:tasks.workunit.client.0.vm07.stdout:3/958: dread d1/d5/d9/d2f/d34/f40 [4194304,4194304] 0 2026-03-09T20:48:18.316 INFO:tasks.workunit.client.1.vm10.stdout:3/910: truncate dc/d14/d26/f65 1637575 0 2026-03-09T20:48:18.316 INFO:tasks.workunit.client.0.vm07.stdout:6/987: creat d8/d50/f14c x:0 0 0 2026-03-09T20:48:18.316 INFO:tasks.workunit.client.0.vm07.stdout:6/988: readlink d8/d16/dcd/lce 0 2026-03-09T20:48:18.319 INFO:tasks.workunit.client.0.vm07.stdout:8/916: link d1/ld0 d1/d5d/d6f/d80/l125 0 2026-03-09T20:48:18.320 INFO:tasks.workunit.client.1.vm10.stdout:7/933: dread db/f70 [0,4194304] 0 2026-03-09T20:48:18.320 INFO:tasks.workunit.client.0.vm07.stdout:2/972: rename d2/d11/ddb/d6e/dda/d129/l10d to d2/db/d28/d120/l13f 0 2026-03-09T20:48:18.320 INFO:tasks.workunit.client.0.vm07.stdout:3/959: rename d1/d5/d9/d2f/d3d to d1/d5/d9/d2f/d3d/dd6/d11e/d132 22 
2026-03-09T20:48:18.321 INFO:tasks.workunit.client.0.vm07.stdout:6/989: mkdir d8/d50/d14d 0 2026-03-09T20:48:18.322 INFO:tasks.workunit.client.0.vm07.stdout:4/913: link d2/df/d17/f6d d2/fff 0 2026-03-09T20:48:18.324 INFO:tasks.workunit.client.0.vm07.stdout:2/973: unlink d2/d11/ddb/d72/l12a 0 2026-03-09T20:48:18.326 INFO:tasks.workunit.client.1.vm10.stdout:8/969: getdents d0/d22/d25/d2e/d41/d85/db9/dc6 0 2026-03-09T20:48:18.331 INFO:tasks.workunit.client.1.vm10.stdout:4/916: mknod d1/d2/d3/c12d 0 2026-03-09T20:48:18.350 INFO:tasks.workunit.client.0.vm07.stdout:6/990: rename d8/d16/d22/d9b/de4/fd6 to d8/d16/d22/d9b/de4/d85/df8/f14e 0 2026-03-09T20:48:18.351 INFO:tasks.workunit.client.0.vm07.stdout:6/991: dwrite d8/d16/d22/d24/da0/dab/d40/d69/f78 [0,4194304] 0 2026-03-09T20:48:18.351 INFO:tasks.workunit.client.0.vm07.stdout:6/992: rmdir d8/db3 39 2026-03-09T20:48:18.351 INFO:tasks.workunit.client.1.vm10.stdout:1/947: fdatasync d2/da/d25/d3e/d55/faf 0 2026-03-09T20:48:18.351 INFO:tasks.workunit.client.1.vm10.stdout:8/970: rename d0/d22/d25/d6c to d0/d22/d25/d2e/d41/d85/d8b/d136 0 2026-03-09T20:48:18.351 INFO:tasks.workunit.client.1.vm10.stdout:3/911: mknod dc/d14/d26/d29/d2a/ddc/c133 0 2026-03-09T20:48:18.351 INFO:tasks.workunit.client.1.vm10.stdout:1/948: rmdir d2/da/d25/d3e/dca/da2/dd5 39 2026-03-09T20:48:18.353 INFO:tasks.workunit.client.1.vm10.stdout:1/949: mkdir d2/da/d25/d46/d51/d5d/da6/d11e/d137 0 2026-03-09T20:48:18.354 INFO:tasks.workunit.client.1.vm10.stdout:1/950: chown d2/da/d25/d46/c7a 23337263 1 2026-03-09T20:48:18.354 INFO:tasks.workunit.client.1.vm10.stdout:4/917: rename d1/d2/d3/d70/d99/dc9/dff/d2b/c49 to d1/d2/d5c/d64/d6b/d81/dac/c12e 0 2026-03-09T20:48:18.356 INFO:tasks.workunit.client.1.vm10.stdout:8/971: rename d0/d22/d25/d2e/d41/de9/dfc/d63/c98 to d0/d22/d25/d2e/d41/d85/c137 0 2026-03-09T20:48:18.357 INFO:tasks.workunit.client.1.vm10.stdout:1/951: creat d2/da/d25/d46/d80/da0/d92/db5/dc7/d105/f138 x:0 0 0 2026-03-09T20:48:18.358 
INFO:tasks.workunit.client.1.vm10.stdout:4/918: unlink d1/d2/d5c/d64/d6b/d79/f125 0 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/2784840034' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/crt"}]: dispatch 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/2784840034' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/2784840034' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/key"}]: dispatch 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: from='mgr.? 
192.168.123.110:0/2784840034' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: pgmap v15: 65 pgs: 65 active+clean; 3.8 GiB data, 13 GiB used, 107 GiB / 120 GiB avail; 43 MiB/s rd, 88 MiB/s wr, 252 op/s 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: mon.vm07 calling monitor election 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: mon.vm07 is new leader, mons vm07,vm10 in quorum (ranks 0,1) 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: monmap epoch 2 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: last_changed 2026-03-09T20:43:30.011073+0000 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: created 2026-03-09T20:42:20.613735+0000 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: min_mon_release 18 (reef) 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: election_strategy: 1 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.vm07 2026-03-09T20:48:18.369 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.vm10 2026-03-09T20:48:18.370 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: fsmap 
cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:48:18.370 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: osdmap e46: 6 total, 6 up, 6 in 2026-03-09T20:48:18.370 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: mgrmap e30: vm07.xjrvch(active, since 24s), standbys: vm10.byqahe 2026-03-09T20:48:18.370 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:18 vm10.local ceph-mon[57011]: overall HEALTH_OK 2026-03-09T20:48:18.375 INFO:tasks.workunit.client.1.vm10.stdout:7/934: sync 2026-03-09T20:48:18.377 INFO:tasks.workunit.client.0.vm07.stdout:0/990: sync 2026-03-09T20:48:18.377 INFO:tasks.workunit.client.0.vm07.stdout:8/917: sync 2026-03-09T20:48:18.382 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: from='mgr.24495 192.168.123.107:0/808449053' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: from='mgr.? 192.168.123.110:0/2784840034' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/crt"}]: dispatch 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: from='mgr.? 192.168.123.110:0/2784840034' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: from='mgr.? 
192.168.123.110:0/2784840034' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/key"}]: dispatch 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: from='mgr.? 192.168.123.110:0/2784840034' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: pgmap v15: 65 pgs: 65 active+clean; 3.8 GiB data, 13 GiB used, 107 GiB / 120 GiB avail; 43 MiB/s rd, 88 MiB/s wr, 252 op/s 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: mon.vm07 calling monitor election 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: mon.vm07 is new leader, mons vm07,vm10 in quorum (ranks 0,1) 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: monmap epoch 2 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: last_changed 2026-03-09T20:43:30.011073+0000 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: created 2026-03-09T20:42:20.613735+0000 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: min_mon_release 18 (reef) 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: election_strategy: 1 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.vm07 2026-03-09T20:48:18.383 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.vm10 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: osdmap e46: 6 total, 6 up, 6 in 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: mgrmap e30: vm07.xjrvch(active, since 24s), standbys: vm10.byqahe 2026-03-09T20:48:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:18 vm07.local ceph-mon[112105]: overall HEALTH_OK 2026-03-09T20:48:18.384 INFO:tasks.workunit.client.0.vm07.stdout:9/965: write d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/dbb/f91 [912644,118225] 0 2026-03-09T20:48:18.386 INFO:tasks.workunit.client.1.vm10.stdout:6/965: dwrite d3/d30/d7f/d24/f27 [0,4194304] 0 2026-03-09T20:48:18.394 INFO:tasks.workunit.client.1.vm10.stdout:5/864: dwrite d2/d39/d4b/f85 [0,4194304] 0 2026-03-09T20:48:18.395 INFO:tasks.workunit.client.0.vm07.stdout:0/991: dread d1/d2/dc/fde [0,4194304] 0 2026-03-09T20:48:18.398 INFO:tasks.workunit.client.1.vm10.stdout:2/964: dwrite d5/d5b/fb7 [4194304,4194304] 0 2026-03-09T20:48:18.403 INFO:tasks.workunit.client.1.vm10.stdout:0/921: write d2/f9b [1563151,39409] 0 2026-03-09T20:48:18.422 INFO:tasks.workunit.client.0.vm07.stdout:3/960: dwrite d1/d5/d9/d2f/d34/d46/d5d/fd2 [0,4194304] 0 2026-03-09T20:48:18.427 INFO:tasks.workunit.client.0.vm07.stdout:4/914: dwrite d2/d55/d5d/d3f/f68 [0,4194304] 0 2026-03-09T20:48:18.435 INFO:tasks.workunit.client.0.vm07.stdout:2/974: write d2/db/f67 [2315497,76226] 0 2026-03-09T20:48:18.440 INFO:tasks.workunit.client.1.vm10.stdout:8/972: getdents d0/d22/d25/d2e/d41/d85/d8b 0 2026-03-09T20:48:18.441 
INFO:tasks.workunit.client.1.vm10.stdout:8/973: write d0/d92/fb3 [1306309,124475] 0 2026-03-09T20:48:18.445 INFO:tasks.workunit.client.1.vm10.stdout:7/935: link db/d46/d89/fb6 db/d46/dab/d10f/f12c 0 2026-03-09T20:48:18.445 INFO:tasks.workunit.client.1.vm10.stdout:3/912: write dc/d14/d20/d21/f96 [2147431,49099] 0 2026-03-09T20:48:18.446 INFO:tasks.workunit.client.1.vm10.stdout:5/865: mknod d2/d39/dbf/d66/d10f/c13f 0 2026-03-09T20:48:18.450 INFO:tasks.workunit.client.0.vm07.stdout:6/993: dwrite d8/f8d [0,4194304] 0 2026-03-09T20:48:18.452 INFO:tasks.workunit.client.1.vm10.stdout:2/965: mknod d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/d7b/c138 0 2026-03-09T20:48:18.455 INFO:tasks.workunit.client.0.vm07.stdout:0/992: dwrite d1/dc0/dcc/dd9/ff9 [4194304,4194304] 0 2026-03-09T20:48:18.457 INFO:tasks.workunit.client.0.vm07.stdout:4/915: dread d2/df/d17/fdf [0,4194304] 0 2026-03-09T20:48:18.464 INFO:tasks.workunit.client.1.vm10.stdout:7/936: dread db/f7c [0,4194304] 0 2026-03-09T20:48:18.470 INFO:tasks.workunit.client.0.vm07.stdout:3/961: rename d1/d5/d9/d2f/d34/d46/d5d to d1/d5/d12e/d133 0 2026-03-09T20:48:18.477 INFO:tasks.workunit.client.1.vm10.stdout:0/922: fdatasync d2/d4a/d58/d82/d93/f131 0 2026-03-09T20:48:18.477 INFO:tasks.workunit.client.1.vm10.stdout:0/923: dwrite d2/d9/f61 [0,4194304] 0 2026-03-09T20:48:18.477 INFO:tasks.workunit.client.1.vm10.stdout:1/952: truncate d2/da/d25/d3e/fba 990081 0 2026-03-09T20:48:18.480 INFO:tasks.workunit.client.1.vm10.stdout:4/919: truncate d1/d2/d5c/d64/d6b/d81/dac/fdd 2305582 0 2026-03-09T20:48:18.483 INFO:tasks.workunit.client.1.vm10.stdout:0/924: read d2/d4a/d58/d82/d71/dca/d110/d30/ff9 [521101,97151] 0 2026-03-09T20:48:18.483 INFO:tasks.workunit.client.1.vm10.stdout:0/925: dread - d2/d9/db8/d10f/d11/d92/dc1/f12c zero size 2026-03-09T20:48:18.484 INFO:tasks.workunit.client.0.vm07.stdout:6/994: creat d8/d16/da3/f14f x:0 0 0 2026-03-09T20:48:18.484 INFO:tasks.workunit.client.1.vm10.stdout:0/926: read - d2/d9/d2a/fdc zero size 
2026-03-09T20:48:18.487 INFO:tasks.workunit.client.1.vm10.stdout:8/974: rename d0/d92/de8/d64 to d0/d22/d25/d40/d86/d138 0 2026-03-09T20:48:18.505 INFO:tasks.workunit.client.0.vm07.stdout:4/916: mkdir d2/d55/d5d/d3f/d4a/d85/d100 0 2026-03-09T20:48:18.505 INFO:tasks.workunit.client.0.vm07.stdout:3/962: mknod d1/d5/d9/d2f/d34/d9e/c134 0 2026-03-09T20:48:18.505 INFO:tasks.workunit.client.0.vm07.stdout:0/993: link d1/d2/d98/de8/f126 d1/dc0/f13b 0 2026-03-09T20:48:18.505 INFO:tasks.workunit.client.0.vm07.stdout:3/963: rmdir d1/d5/d9/d2f/d34/da5 39 2026-03-09T20:48:18.505 INFO:tasks.workunit.client.0.vm07.stdout:3/964: unlink d1/d5/d9/d2f/d3d/d71/dcc/ff9 0 2026-03-09T20:48:18.505 INFO:tasks.workunit.client.1.vm10.stdout:5/866: creat d2/d58/f140 x:0 0 0 2026-03-09T20:48:18.505 INFO:tasks.workunit.client.1.vm10.stdout:4/920: truncate d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f4c 4087285 0 2026-03-09T20:48:18.505 INFO:tasks.workunit.client.1.vm10.stdout:0/927: truncate d2/d4a/d58/d82/d71/fe7 967504 0 2026-03-09T20:48:18.505 INFO:tasks.workunit.client.1.vm10.stdout:7/937: rename db/d28/d30/cb9 to db/d28/d10e/d125/c12d 0 2026-03-09T20:48:18.506 INFO:tasks.workunit.client.1.vm10.stdout:8/975: truncate d0/d22/d2f/fc5 785155 0 2026-03-09T20:48:18.508 INFO:tasks.workunit.client.0.vm07.stdout:3/965: getdents d1/d5/d9/d11/d60/da7 0 2026-03-09T20:48:18.514 INFO:tasks.workunit.client.1.vm10.stdout:4/921: rmdir d1/d2/d3/d70/d99/dc9/dff 39 2026-03-09T20:48:18.515 INFO:tasks.workunit.client.1.vm10.stdout:1/953: truncate d2/da/d25/f78 2940587 0 2026-03-09T20:48:18.516 INFO:tasks.workunit.client.0.vm07.stdout:3/966: rename d1/d5/d9/d11/d6d/d80/db3/d109/l6b to d1/d5/d9/d2f/d3d/d71/l135 0 2026-03-09T20:48:18.518 INFO:tasks.workunit.client.1.vm10.stdout:7/938: mkdir db/d21/d95/d12e 0 2026-03-09T20:48:18.519 INFO:tasks.workunit.client.1.vm10.stdout:7/939: chown db/d28/d2b/d36/d63/d8b/lf8 4079641 1 2026-03-09T20:48:18.520 INFO:tasks.workunit.client.1.vm10.stdout:3/913: creat dc/f134 x:0 0 0 
2026-03-09T20:48:18.522 INFO:tasks.workunit.client.0.vm07.stdout:3/967: dwrite d1/d5/d9/d2f/d66/fd3 [0,4194304] 0
2026-03-09T20:48:18.525 INFO:tasks.workunit.client.1.vm10.stdout:4/922: chown d1/d2/d3/d54/c128 34 1
2026-03-09T20:48:18.532 INFO:tasks.workunit.client.1.vm10.stdout:0/928: mknod d2/d4a/c141 0
2026-03-09T20:48:18.538 INFO:tasks.workunit.client.0.vm07.stdout:3/968: creat d1/d5/d9/d2f/d3d/d71/f136 x:0 0 0
2026-03-09T20:48:18.538 INFO:tasks.workunit.client.1.vm10.stdout:0/929: dwrite d2/d9/db8/d10f/f2f [0,4194304] 0
2026-03-09T20:48:18.541 INFO:tasks.workunit.client.1.vm10.stdout:1/954: dread d2/d89/f129 [0,4194304] 0
2026-03-09T20:48:18.545 INFO:tasks.workunit.client.0.vm07.stdout:3/969: mknod d1/d5/d9/d11/d6d/d80/c137 0
2026-03-09T20:48:18.546 INFO:tasks.workunit.client.0.vm07.stdout:3/970: chown d1/d5/d9/d11/d60/c67 47 1
2026-03-09T20:48:18.547 INFO:tasks.workunit.client.0.vm07.stdout:3/971: write d1/d5/d9/d11/f26 [2548589,38760] 0
2026-03-09T20:48:18.550 INFO:tasks.workunit.client.1.vm10.stdout:5/867: sync
2026-03-09T20:48:18.550 INFO:tasks.workunit.client.1.vm10.stdout:3/914: sync
2026-03-09T20:48:18.551 INFO:tasks.workunit.client.1.vm10.stdout:2/966: link d5/d18/d27/d89/db6/d41/l9c d5/d18/d27/db4/l139 0
2026-03-09T20:48:18.552 INFO:tasks.workunit.client.1.vm10.stdout:2/967: write d5/f7 [710058,108288] 0
2026-03-09T20:48:18.552 INFO:tasks.workunit.client.1.vm10.stdout:2/968: chown d5/c51 605 1
2026-03-09T20:48:18.554 INFO:tasks.workunit.client.0.vm07.stdout:9/966: write d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d4e/d54/fac [82162,97009] 0
2026-03-09T20:48:18.562 INFO:tasks.workunit.client.0.vm07.stdout:8/918: dwrite d1/d5d/d6f/d2f/d4d/d55/fac [4194304,4194304] 0
2026-03-09T20:48:18.562 INFO:tasks.workunit.client.1.vm10.stdout:5/868: sync
2026-03-09T20:48:18.566 INFO:tasks.workunit.client.1.vm10.stdout:0/930: symlink d2/d9/d69/de2/l142 0
2026-03-09T20:48:18.569 INFO:tasks.workunit.client.0.vm07.stdout:8/919: creat d1/d5d/d6f/d2f/d4d/dd4/f126 x:0 0 0
2026-03-09T20:48:18.569 INFO:tasks.workunit.client.0.vm07.stdout:8/920: write d1/d5d/d6f/d2f/f9f [4353748,80448] 0
2026-03-09T20:48:18.570 INFO:tasks.workunit.client.0.vm07.stdout:8/921: chown d1/d5d/d6f/c72 111 1
2026-03-09T20:48:18.577 INFO:tasks.workunit.client.0.vm07.stdout:3/972: unlink d1/d5/d9/d11/d6d/dd0/d95/f12f 0
2026-03-09T20:48:18.580 INFO:tasks.workunit.client.1.vm10.stdout:8/976: link d0/d22/d25/d2e/d41/de9/dfc/d63/f8c d0/d22/d2f/f139 0
2026-03-09T20:48:18.580 INFO:tasks.workunit.client.0.vm07.stdout:3/973: dwrite d1/d5/d9/f33 [0,4194304] 0
2026-03-09T20:48:18.582 INFO:tasks.workunit.client.0.vm07.stdout:3/974: stat d1/f130 0
2026-03-09T20:48:18.594 INFO:tasks.workunit.client.1.vm10.stdout:6/966: dwrite d3/f1f [0,4194304] 0
2026-03-09T20:48:18.596 INFO:tasks.workunit.client.1.vm10.stdout:6/967: stat d3/d30/d7f/d51/f7c 0
2026-03-09T20:48:18.610 INFO:tasks.workunit.client.1.vm10.stdout:3/915: readlink dc/d14/d20/d21/d3b/l7a 0
2026-03-09T20:48:18.624 INFO:tasks.workunit.client.1.vm10.stdout:3/916: stat dc/d14/d26/d29/d40/lcd 0
2026-03-09T20:48:18.624 INFO:tasks.workunit.client.0.vm07.stdout:9/967: link d4/d16/l43 d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/dcf/l155 0
2026-03-09T20:48:18.624 INFO:tasks.workunit.client.0.vm07.stdout:9/968: chown d4/d16/d29/d24/d37/d44/d62/d108/d121/db9/ld1 598078193 1
2026-03-09T20:48:18.628 INFO:tasks.workunit.client.1.vm10.stdout:2/969: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/f4c [0,4194304] 0
2026-03-09T20:48:18.629 INFO:tasks.workunit.client.1.vm10.stdout:2/970: stat d5/d18/d27/d89/db6/d41/d77/db3/db5/d32 0
2026-03-09T20:48:18.630 INFO:tasks.workunit.client.0.vm07.stdout:8/922: unlink d1/ca5 0
2026-03-09T20:48:18.631 INFO:tasks.workunit.client.1.vm10.stdout:2/971: chown d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d8d/d93 90896657 1
2026-03-09T20:48:18.631 INFO:tasks.workunit.client.0.vm07.stdout:8/923: chown d1/d5d/d6f/d2f/d4d/dd4/ldc 3134010 1
2026-03-09T20:48:18.631 INFO:tasks.workunit.client.0.vm07.stdout:8/924: dread - d1/dc/dba/d115/f124 zero size
2026-03-09T20:48:18.635 INFO:tasks.workunit.client.1.vm10.stdout:1/955: truncate d2/f4c 1581975 0
2026-03-09T20:48:18.636 INFO:tasks.workunit.client.1.vm10.stdout:2/972: dwrite d5/d18/d27/db4/f119 [0,4194304] 0
2026-03-09T20:48:18.637 INFO:tasks.workunit.client.0.vm07.stdout:3/975: sync
2026-03-09T20:48:18.640 INFO:tasks.workunit.client.1.vm10.stdout:8/977: mknod d0/d54/c13a 0
2026-03-09T20:48:18.641 INFO:tasks.workunit.client.1.vm10.stdout:8/978: chown d0/d22/c105 0 1
2026-03-09T20:48:18.642 INFO:tasks.workunit.client.0.vm07.stdout:9/969: mknod d4/d11/d23/d32/d149/c156 0
2026-03-09T20:48:18.648 INFO:tasks.workunit.client.1.vm10.stdout:6/968: mknod d3/d30/d7f/d24/d39/d9e/c120 0
2026-03-09T20:48:18.655 INFO:tasks.workunit.client.1.vm10.stdout:3/917: dread - dc/d14/d26/d29/d40/da8/f11a zero size
2026-03-09T20:48:18.655 INFO:tasks.workunit.client.1.vm10.stdout:3/918: chown dc/d14/d20/d2e/d56 373135 1
2026-03-09T20:48:18.655 INFO:tasks.workunit.client.0.vm07.stdout:9/970: symlink d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/dbb/db6/l157 0
2026-03-09T20:48:18.655 INFO:tasks.workunit.client.0.vm07.stdout:3/976: creat d1/d5/d9/d2f/d34/da5/dda/f138 x:0 0 0
2026-03-09T20:48:18.655 INFO:tasks.workunit.client.0.vm07.stdout:3/977: fsync d1/dcf/f12a 0
2026-03-09T20:48:18.656 INFO:tasks.workunit.client.0.vm07.stdout:9/971: unlink d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/da5/l120 0
2026-03-09T20:48:18.663 INFO:tasks.workunit.client.0.vm07.stdout:3/978: rmdir d1/d5/d9/d11/d6d/dd0/d95 39
2026-03-09T20:48:18.669 INFO:tasks.workunit.client.1.vm10.stdout:1/956: fsync d2/d89/de6/ff2 0
2026-03-09T20:48:18.673 INFO:tasks.workunit.client.0.vm07.stdout:3/979: fsync d1/d5/d9/d11/d6d/dd0/f1a 0
2026-03-09T20:48:18.674 INFO:tasks.workunit.client.1.vm10.stdout:2/973: fdatasync d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/fc5 0
2026-03-09T20:48:18.681 INFO:tasks.workunit.client.1.vm10.stdout:6/969: dread - d3/d30/d7f/d36/ff7 zero size
2026-03-09T20:48:18.683 INFO:tasks.workunit.client.1.vm10.stdout:3/919: stat dc/d14/d26/f45 0
2026-03-09T20:48:18.686 INFO:tasks.workunit.client.0.vm07.stdout:9/972: link d4/d16/d29/d24/d37/d8d/c4c d4/d11/d141/c158 0
2026-03-09T20:48:18.687 INFO:tasks.workunit.client.0.vm07.stdout:3/980: dwrite d1/d5/d9/d2f/d66/dc0/f106 [0,4194304] 0
2026-03-09T20:48:18.688 INFO:tasks.workunit.client.1.vm10.stdout:3/920: sync
2026-03-09T20:48:18.691 INFO:tasks.workunit.client.1.vm10.stdout:0/931: symlink d2/d4a/d58/dd5/dfd/l143 0
2026-03-09T20:48:18.704 INFO:tasks.workunit.client.1.vm10.stdout:1/957: creat d2/da/d25/d46/d51/d5d/d6e/d70/db3/f139 x:0 0 0
2026-03-09T20:48:18.708 INFO:tasks.workunit.client.0.vm07.stdout:2/975: dread d2/db/d1c/d4a/d88/f7f [0,4194304] 0
2026-03-09T20:48:18.722 INFO:tasks.workunit.client.0.vm07.stdout:9/973: mkdir d4/d16/d159 0
2026-03-09T20:48:18.725 INFO:tasks.workunit.client.1.vm10.stdout:2/974: rename d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/l7a to d5/d18/d27/d38/d61/dc8/ddb/dea/l13a 0
2026-03-09T20:48:18.725 INFO:tasks.workunit.client.0.vm07.stdout:3/981: mknod d1/d5/d9/d11/c139 0
2026-03-09T20:48:18.728 INFO:tasks.workunit.client.0.vm07.stdout:2/976: mkdir d2/d11/ddb/d72/d82/d140 0
2026-03-09T20:48:18.730 INFO:tasks.workunit.client.0.vm07.stdout:9/974: symlink d4/d16/d29/d24/d37/d44/l15a 0
2026-03-09T20:48:18.730 INFO:tasks.workunit.client.0.vm07.stdout:9/975: write d4/d16/d29/d24/f77 [466487,45914] 0
2026-03-09T20:48:18.733 INFO:tasks.workunit.client.1.vm10.stdout:3/921: mknod dc/d14/d26/dcb/d11b/d11f/c135 0
2026-03-09T20:48:18.733 INFO:tasks.workunit.client.1.vm10.stdout:3/922: readlink dc/d14/d26/d29/d2a/d76/lc2 0
2026-03-09T20:48:18.736 INFO:tasks.workunit.client.0.vm07.stdout:6/995: dwrite d8/d16/d22/d9b/de4/d85/f83 [0,4194304] 0
2026-03-09T20:48:18.739 INFO:tasks.workunit.client.0.vm07.stdout:3/982: truncate d1/d5/d9/d2f/d3d/d71/dcc/f104 584789 0
2026-03-09T20:48:18.750 INFO:tasks.workunit.client.1.vm10.stdout:0/932: creat d2/d4a/d58/d82/d71/f144 x:0 0 0
2026-03-09T20:48:18.755 INFO:tasks.workunit.client.0.vm07.stdout:4/917: dwrite d2/df/d59/f81 [8388608,4194304] 0
2026-03-09T20:48:18.759 INFO:tasks.workunit.client.0.vm07.stdout:2/977: symlink d2/db/d1c/d4a/l141 0
2026-03-09T20:48:18.772 INFO:tasks.workunit.client.0.vm07.stdout:0/994: dwrite d1/d2/d33/d35/f45 [0,4194304] 0
2026-03-09T20:48:18.788 INFO:tasks.workunit.client.0.vm07.stdout:6/996: write d8/db3/fd4 [2564915,49430] 0
2026-03-09T20:48:18.793 INFO:tasks.workunit.client.1.vm10.stdout:0/933: rmdir d2/d9/db8/d10f/d48 39
2026-03-09T20:48:18.795 INFO:tasks.workunit.client.0.vm07.stdout:2/978: creat d2/db/d28/d90/da4/f142 x:0 0 0
2026-03-09T20:48:18.797 INFO:tasks.workunit.client.0.vm07.stdout:9/976: dread d4/d16/d29/d24/d37/d44/d62/d8e/fe0 [0,4194304] 0
2026-03-09T20:48:18.805 INFO:tasks.workunit.client.0.vm07.stdout:2/979: dread d2/db/d28/f32 [0,4194304] 0
2026-03-09T20:48:18.815 INFO:tasks.workunit.client.1.vm10.stdout:7/940: truncate db/d1f/f2a 1308172 0
2026-03-09T20:48:18.816 INFO:tasks.workunit.client.0.vm07.stdout:0/995: creat d1/d1f/dc3/dca/f13c x:0 0 0
2026-03-09T20:48:18.820 INFO:tasks.workunit.client.1.vm10.stdout:5/869: write d2/d39/d4b/d7a/dd9/d10c/d132/d46/fb7 [548671,90616] 0
2026-03-09T20:48:18.829 INFO:tasks.workunit.client.0.vm07.stdout:8/925: dwrite d1/dc/d16/f6e [0,4194304] 0
2026-03-09T20:48:18.842 INFO:tasks.workunit.client.0.vm07.stdout:4/918: symlink d2/d55/d5d/d3f/d4a/d85/dda/l101 0
2026-03-09T20:48:18.842 INFO:tasks.workunit.client.0.vm07.stdout:4/919: stat d2/d55/d5d/d3f/fa7 0
2026-03-09T20:48:18.848 INFO:tasks.workunit.client.0.vm07.stdout:9/977: rename d4/d11/f9d to d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/dcf/f15b 0
2026-03-09T20:48:18.853 INFO:tasks.workunit.client.1.vm10.stdout:8/979: dwrite d0/d22/d25/d2e/d41/de9/dfc/d63/ff3 [0,4194304] 0
2026-03-09T20:48:18.862 INFO:tasks.workunit.client.1.vm10.stdout:6/970: write d3/d30/d7f/f18 [9362904,123031] 0
2026-03-09T20:48:18.873 INFO:tasks.workunit.client.0.vm07.stdout:3/983: link d1/d5/d9/d2f/d34/da5/fa9 d1/d5/d9/d2f/d3d/d71/dcc/f13a 0
2026-03-09T20:48:18.876 INFO:tasks.workunit.client.1.vm10.stdout:1/958: write d2/da/d25/d3e/dca/da2/fdc [420923,129952] 0
2026-03-09T20:48:18.881 INFO:tasks.workunit.client.1.vm10.stdout:2/975: dwrite d5/d18/d27/db8/fce [0,4194304] 0
2026-03-09T20:48:18.883 INFO:tasks.workunit.client.1.vm10.stdout:3/923: dwrite dc/d14/d26/d29/d40/da8/f11a [0,4194304] 0
2026-03-09T20:48:18.888 INFO:tasks.workunit.client.0.vm07.stdout:0/996: creat d1/dc0/dcc/d10e/d110/f13d x:0 0 0
2026-03-09T20:48:18.901 INFO:tasks.workunit.client.0.vm07.stdout:8/926: creat d1/d5d/d6f/d2f/d4d/dd4/dd9/dee/f127 x:0 0 0
2026-03-09T20:48:18.904 INFO:tasks.workunit.client.0.vm07.stdout:8/927: dread d1/d5d/d6f/f61 [0,4194304] 0
2026-03-09T20:48:18.918 INFO:tasks.workunit.client.0.vm07.stdout:9/978: dread - d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/dcf/f15b zero size
2026-03-09T20:48:18.923 INFO:tasks.workunit.client.0.vm07.stdout:2/980: dwrite d2/d11/ddb/f116 [0,4194304] 0
2026-03-09T20:48:18.935 INFO:tasks.workunit.client.0.vm07.stdout:3/984: truncate d1/d5/d9/d2f/d34/d46/f8a 284648 0
2026-03-09T20:48:18.935 INFO:tasks.workunit.client.0.vm07.stdout:3/985: chown d1/d5/d9/d2f/d3d/d71/db5 30934 1
2026-03-09T20:48:18.939 INFO:tasks.workunit.client.0.vm07.stdout:0/997: readlink d1/d2/dc/l18 0
2026-03-09T20:48:18.947 INFO:tasks.workunit.client.0.vm07.stdout:8/928: creat d1/dc/d16/d31/db4/de6/f128 x:0 0 0
2026-03-09T20:48:18.957 INFO:tasks.workunit.client.0.vm07.stdout:9/979: fdatasync d4/d11/d2a/f5d 0
2026-03-09T20:48:18.957 INFO:tasks.workunit.client.0.vm07.stdout:2/981: mkdir d2/d11/ddb/d143 0
2026-03-09T20:48:18.963 INFO:tasks.workunit.client.0.vm07.stdout:6/997: write d8/d16/d22/d24/da0/dab/d40/fe7 [680436,41941] 0
2026-03-09T20:48:18.967 INFO:tasks.workunit.client.0.vm07.stdout:4/920: dwrite d2/d1f/f25 [0,4194304] 0
2026-03-09T20:48:18.969 INFO:tasks.workunit.client.0.vm07.stdout:6/998: dwrite d8/d50/f111 [0,4194304] 0
2026-03-09T20:48:18.976 INFO:tasks.workunit.client.0.vm07.stdout:6/999: dread d8/d16/f23 [0,4194304] 0
2026-03-09T20:48:18.989 INFO:tasks.workunit.client.0.vm07.stdout:8/929: dread d1/dc/dba/fce [0,4194304] 0
2026-03-09T20:48:18.990 INFO:tasks.workunit.client.0.vm07.stdout:8/930: write d1/d5d/d6f/d2f/d4d/d55/fac [5157225,127439] 0
2026-03-09T20:48:18.996 INFO:tasks.workunit.client.0.vm07.stdout:2/982: mkdir d2/db/d49/d144 0
2026-03-09T20:48:19.000 INFO:tasks.workunit.client.0.vm07.stdout:3/986: creat d1/d5/d9/d2f/d99/dd8/de0/d113/f13b x:0 0 0
2026-03-09T20:48:19.016 INFO:tasks.workunit.client.0.vm07.stdout:0/998: mknod d1/dc0/dcc/dd9/d109/c13e 0
2026-03-09T20:48:19.032 INFO:tasks.workunit.client.0.vm07.stdout:9/980: dwrite d4/d16/d78/dc4/ff1 [0,4194304] 0
2026-03-09T20:48:19.034 INFO:tasks.workunit.client.0.vm07.stdout:9/981: chown d4/d16/d29/d24/d7c/l14e 3 1
2026-03-09T20:48:19.047 INFO:tasks.workunit.client.0.vm07.stdout:4/921: dwrite d2/df/d17/f63 [0,4194304] 0
2026-03-09T20:48:19.060 INFO:tasks.workunit.client.0.vm07.stdout:3/987: creat d1/d5/d9/d2f/d3d/d71/db5/f13c x:0 0 0
2026-03-09T20:48:19.064 INFO:tasks.workunit.client.1.vm10.stdout:4/923: dread d1/d2/d3/d70/d78/d86/fe0 [0,4194304] 0
2026-03-09T20:48:19.068 INFO:tasks.workunit.client.1.vm10.stdout:0/934: creat d2/d9/db8/d10f/d11/dd1/db7/dcd/f145 x:0 0 0
2026-03-09T20:48:19.075 INFO:tasks.workunit.client.1.vm10.stdout:7/941: creat db/d28/d30/dd8/f12f x:0 0 0
2026-03-09T20:48:19.078 INFO:tasks.workunit.client.1.vm10.stdout:5/870: creat d2/d39/d4b/de0/f141 x:0 0 0
2026-03-09T20:48:19.079 INFO:tasks.workunit.client.1.vm10.stdout:5/871: dread - d2/d1b/d54/f13e zero size
2026-03-09T20:48:19.086 INFO:tasks.workunit.client.0.vm07.stdout:4/922: creat d2/d55/dab/f102 x:0 0 0
2026-03-09T20:48:19.087 INFO:tasks.workunit.client.0.vm07.stdout:2/983: write d2/f3e [8129951,80652] 0
2026-03-09T20:48:19.093 INFO:tasks.workunit.client.0.vm07.stdout:3/988: unlink d1/d5/d9/d2f/d3d/d71/fb0 0
2026-03-09T20:48:19.097 INFO:tasks.workunit.client.1.vm10.stdout:6/971: unlink d3/da/d11/d89/db9/dd1/dd2/d60/fb1 0
2026-03-09T20:48:19.105 INFO:tasks.workunit.client.0.vm07.stdout:9/982: rename d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/lbd to d4/d16/d29/d24/d37/l15c 0
2026-03-09T20:48:19.109 INFO:tasks.workunit.client.0.vm07.stdout:4/923: readlink d2/lce 0
2026-03-09T20:48:19.110 INFO:tasks.workunit.client.1.vm10.stdout:2/976: creat d5/d18/d27/d89/db6/d41/de4/f13b x:0 0 0
2026-03-09T20:48:19.119 INFO:tasks.workunit.client.0.vm07.stdout:0/999: link d1/d1f/dc3/dca/c10b d1/d2/d98/daf/d116/c13f 0
2026-03-09T20:48:19.119 INFO:tasks.workunit.client.1.vm10.stdout:4/924: truncate d1/d2/d5c/f9c 461284 0
2026-03-09T20:48:19.120 INFO:tasks.workunit.client.1.vm10.stdout:4/925: chown d1/d2/d3 0 1
2026-03-09T20:48:19.121 INFO:tasks.workunit.client.0.vm07.stdout:8/931: getdents d1/dc/d16 0
2026-03-09T20:48:19.123 INFO:tasks.workunit.client.1.vm10.stdout:0/935: truncate d2/d4a/d58/d82/f5c 5376066 0
2026-03-09T20:48:19.125 INFO:tasks.workunit.client.0.vm07.stdout:4/924: rename d2/d55/d5d/d86/db9/fdc to d2/df/d59/d8a/d9d/f103 0
2026-03-09T20:48:19.129 INFO:tasks.workunit.client.1.vm10.stdout:7/942: creat db/d28/d2b/f130 x:0 0 0
2026-03-09T20:48:19.132 INFO:tasks.workunit.client.0.vm07.stdout:3/989: unlink d1/d5/d9/d11/d6d/dd0/d43/fe5 0
2026-03-09T20:48:19.135 INFO:tasks.workunit.client.1.vm10.stdout:8/980: symlink d0/dd1/l13b 0
2026-03-09T20:48:19.140 INFO:tasks.workunit.client.1.vm10.stdout:6/972: symlink d3/d30/d7f/d36/d5c/d8d/l121 0
2026-03-09T20:48:19.144 INFO:tasks.workunit.client.0.vm07.stdout:2/984: chown d2/db/d49/d7d/d85/dde/f100 461 1
2026-03-09T20:48:19.144 INFO:tasks.workunit.client.0.vm07.stdout:9/983: sync
2026-03-09T20:48:19.151 INFO:tasks.workunit.client.1.vm10.stdout:1/959: truncate d2/da/d25/d46/d51/d5d/d6e/d70/db3/fff 2411374 0
2026-03-09T20:48:19.153 INFO:tasks.workunit.client.0.vm07.stdout:8/932: dwrite d1/dc/d16/d31/db4/fbc [0,4194304] 0
2026-03-09T20:48:19.178 INFO:tasks.workunit.client.0.vm07.stdout:3/990: fdatasync d1/d5/dcd/ffc 0
2026-03-09T20:48:19.191 INFO:tasks.workunit.client.0.vm07.stdout:4/925: rename d2/fff to d2/d1f/de1/f104 0
2026-03-09T20:48:19.203 INFO:tasks.workunit.client.0.vm07.stdout:2/985: dread - d2/db/d49/f115 zero size
2026-03-09T20:48:19.207 INFO:tasks.workunit.client.0.vm07.stdout:9/984: chown d4/d16/d78/dc4/cfc 1049 1
2026-03-09T20:48:19.223 INFO:tasks.workunit.client.1.vm10.stdout:5/872: read d2/d39/dbf/d69/d96/fb1 [1262672,96430] 0
2026-03-09T20:48:19.224 INFO:tasks.workunit.client.1.vm10.stdout:7/943: dwrite db/d28/fac [0,4194304] 0
2026-03-09T20:48:19.230 INFO:tasks.workunit.client.1.vm10.stdout:7/944: dwrite db/d28/d2b/d36/d3b/dd5/fe0 [0,4194304] 0
2026-03-09T20:48:19.249 INFO:tasks.workunit.client.1.vm10.stdout:8/981: write d0/d92/de8/fa5 [2726532,96972] 0
2026-03-09T20:48:19.262 INFO:tasks.workunit.client.1.vm10.stdout:3/924: link dc/d14/d26/d29/d2a/d76/lc2 dc/d14/d26/d29/d2a/d55/l136 0
2026-03-09T20:48:19.262 INFO:tasks.workunit.client.1.vm10.stdout:3/925: chown dc/d14/d20/d21/daf/d113 7 1
2026-03-09T20:48:19.279 INFO:tasks.workunit.client.1.vm10.stdout:0/936: dwrite d2/d4a/f5a [0,4194304] 0
2026-03-09T20:48:19.288 INFO:tasks.workunit.client.1.vm10.stdout:7/945: mkdir db/d28/d4c/d131 0
2026-03-09T20:48:19.309 INFO:tasks.workunit.client.1.vm10.stdout:6/973: link d3/da/d11/c9f d3/da/d11/d89/db9/dd1/dd2/c122 0
2026-03-09T20:48:19.314 INFO:tasks.workunit.client.1.vm10.stdout:7/946: mknod db/d21/d26/d72/d124/c132 0
2026-03-09T20:48:19.319 INFO:tasks.workunit.client.1.vm10.stdout:2/977: link d5/d18/d1b/l136 d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/l13c 0
2026-03-09T20:48:19.320 INFO:tasks.workunit.client.1.vm10.stdout:3/926: rmdir dc/db4 39
2026-03-09T20:48:19.323 INFO:tasks.workunit.client.1.vm10.stdout:0/937: creat d2/d9/db8/d10f/d48/dac/f146 x:0 0 0
2026-03-09T20:48:19.323 INFO:tasks.workunit.client.1.vm10.stdout:0/938: chown d2/d9/c124 5 1
2026-03-09T20:48:19.326 INFO:tasks.workunit.client.1.vm10.stdout:6/974: truncate d3/f52 1378384 0
2026-03-09T20:48:19.327 INFO:tasks.workunit.client.1.vm10.stdout:5/873: write d2/d39/d4b/d7a/ff0 [3602526,56369] 0
2026-03-09T20:48:19.331 INFO:tasks.workunit.client.1.vm10.stdout:4/926: truncate d1/d2/f43 172093 0
2026-03-09T20:48:19.332 INFO:tasks.workunit.client.1.vm10.stdout:8/982: dwrite d0/d22/d25/d2e/d41/de9/dfc/f102 [0,4194304] 0
2026-03-09T20:48:19.334 INFO:tasks.workunit.client.1.vm10.stdout:7/947: symlink db/d28/d2b/d36/d40/d8a/dd4/l133 0
2026-03-09T20:48:19.335 INFO:tasks.workunit.client.1.vm10.stdout:7/948: chown db/c109 6638 1
2026-03-09T20:48:19.335 INFO:tasks.workunit.client.1.vm10.stdout:1/960: link d2/da/d25/d46/d8c/ca3 d2/da/c13a 0
2026-03-09T20:48:19.336 INFO:tasks.workunit.client.1.vm10.stdout:2/978: symlink d5/d5b/l13d 0
2026-03-09T20:48:19.355 INFO:tasks.workunit.client.1.vm10.stdout:7/949: unlink db/d28/d2b/d36/c120 0
2026-03-09T20:48:19.356 INFO:tasks.workunit.client.1.vm10.stdout:5/874: dread d2/d39/d4b/f51 [0,4194304] 0
2026-03-09T20:48:19.361 INFO:tasks.workunit.client.1.vm10.stdout:1/961: mkdir d2/da/d25/d46/d80/da0/d92/db5/dc7/d105/d13b 0
2026-03-09T20:48:19.363 INFO:tasks.workunit.client.1.vm10.stdout:2/979: mknod d5/c13e 0
2026-03-09T20:48:19.364 INFO:tasks.workunit.client.1.vm10.stdout:1/962: dwrite d2/da/d25/d3e/dca/da2/fdc [0,4194304] 0
2026-03-09T20:48:19.367 INFO:tasks.workunit.client.1.vm10.stdout:3/927: mkdir dc/d14/d20/d137 0
2026-03-09T20:48:19.368 INFO:tasks.workunit.client.0.vm07.stdout:2/986: creat d2/da7/f145 x:0 0 0
2026-03-09T20:48:19.376 INFO:tasks.workunit.client.0.vm07.stdout:9/985: unlink d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/dbc/f100 0
2026-03-09T20:48:19.378 INFO:tasks.workunit.client.0.vm07.stdout:8/933: symlink d1/d5d/d6f/l129 0
2026-03-09T20:48:19.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:19 vm07.local ceph-mon[112105]: from='mgr.24495 ' entity=''
2026-03-09T20:48:19.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:19 vm07.local ceph-mon[112105]: Standby manager daemon vm10.byqahe restarted
2026-03-09T20:48:19.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:19 vm07.local ceph-mon[112105]: Standby manager daemon vm10.byqahe started
2026-03-09T20:48:19.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:19 vm07.local ceph-mon[112105]: mgrmap e31: vm07.xjrvch(active, since 24s), standbys: vm10.byqahe
2026-03-09T20:48:19.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:19 vm07.local ceph-mon[112105]: mgrmap e32: vm07.xjrvch(active, since 25s), standbys: vm10.byqahe
2026-03-09T20:48:19.387 INFO:tasks.workunit.client.1.vm10.stdout:0/939: write d2/d4a/d58/d82/d93/ff3 [304422,105295] 0
2026-03-09T20:48:19.387 INFO:tasks.workunit.client.1.vm10.stdout:6/975: write d3/d30/d7f/d24/d39/fa3 [394003,101358] 0
2026-03-09T20:48:19.387 INFO:tasks.workunit.client.1.vm10.stdout:4/927: write d1/d2/d5c/d64/d6b/d81/dac/d1b/ff7 [967138,33869] 0
2026-03-09T20:48:19.388 INFO:tasks.workunit.client.1.vm10.stdout:0/940: dread - d2/d4a/d58/d82/d71/f144 zero size
2026-03-09T20:48:19.388 INFO:tasks.workunit.client.1.vm10.stdout:4/928: chown d1/d2/d5c/d64/d6b/lfc 15000129 1
2026-03-09T20:48:19.393 INFO:tasks.workunit.client.0.vm07.stdout:3/991: write d1/d5/d9/d2f/d3d/d71/ff0 [791796,110273] 0
2026-03-09T20:48:19.401 INFO:tasks.workunit.client.1.vm10.stdout:7/950: rmdir db/d28/d2b/d36/d63/d6d/dc4 39
2026-03-09T20:48:19.402 INFO:tasks.workunit.client.1.vm10.stdout:5/875: unlink d2/d39/dbf/fb6 0
2026-03-09T20:48:19.402 INFO:tasks.workunit.client.1.vm10.stdout:5/876: fdatasync d2/f2c 0
2026-03-09T20:48:19.403 INFO:tasks.workunit.client.1.vm10.stdout:5/877: chown d2/f35 4 1
2026-03-09T20:48:19.406 INFO:tasks.workunit.client.1.vm10.stdout:2/980: read d5/d18/d27/d38/d61/f81 [197775,106585] 0
2026-03-09T20:48:19.407 INFO:tasks.workunit.client.0.vm07.stdout:8/934: creat d1/d5d/d6f/d2f/f12a x:0 0 0
2026-03-09T20:48:19.407 INFO:tasks.workunit.client.0.vm07.stdout:8/935: read d1/d5d/d6f/f61 [443972,78369] 0
2026-03-09T20:48:19.410 INFO:tasks.workunit.client.1.vm10.stdout:0/941: sync
2026-03-09T20:48:19.413 INFO:tasks.workunit.client.0.vm07.stdout:8/936: dwrite d1/d5d/d6f/d2f/f12a [0,4194304] 0
2026-03-09T20:48:19.414 INFO:tasks.workunit.client.0.vm07.stdout:8/937: dread - d1/d5d/d6f/d2f/d4d/dd4/dd9/dee/f127 zero size
2026-03-09T20:48:19.415 INFO:tasks.workunit.client.1.vm10.stdout:1/963: rename d2/da/d25/d46/d51/d5d/d6e/d70/db3/f139 to d2/da/d25/d46/d80/da0/d92/db5/dc7/d105/f13c 0
2026-03-09T20:48:19.418 INFO:tasks.workunit.client.0.vm07.stdout:9/986: dread d4/d11/f105 [0,4194304] 0
2026-03-09T20:48:19.420 INFO:tasks.workunit.client.0.vm07.stdout:2/987: write d2/d11/ddb/d6e/dda/d129/d96/fc3 [1223108,56865] 0
2026-03-09T20:48:19.425 INFO:tasks.workunit.client.0.vm07.stdout:3/992: creat d1/d5/d12e/f13d x:0 0 0
2026-03-09T20:48:19.426 INFO:tasks.workunit.client.1.vm10.stdout:3/928: chown dc/d9e 100167691 1
2026-03-09T20:48:19.430 INFO:tasks.workunit.client.1.vm10.stdout:6/976: truncate d3/d79/feb 336715 0
2026-03-09T20:48:19.435 INFO:tasks.workunit.client.0.vm07.stdout:2/988: creat d2/db/d1c/d4a/f146 x:0 0 0
2026-03-09T20:48:19.452 INFO:tasks.workunit.client.1.vm10.stdout:8/983: link d0/d92/de8/f6d d0/d22/d25/d2e/d41/d85/d8b/f13c 0
2026-03-09T20:48:19.452 INFO:tasks.workunit.client.0.vm07.stdout:2/989: readlink d2/db/l43 0
2026-03-09T20:48:19.452 INFO:tasks.workunit.client.0.vm07.stdout:2/990: readlink d2/d11/l125 0
2026-03-09T20:48:19.452 INFO:tasks.workunit.client.0.vm07.stdout:4/926: getdents d2/d55/d5d/d3f 0
2026-03-09T20:48:19.452 INFO:tasks.workunit.client.0.vm07.stdout:4/927: readlink d2/l22 0
2026-03-09T20:48:19.453 INFO:tasks.workunit.client.0.vm07.stdout:9/987: rename d4/d16/c58 to d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/da5/db8/dc1/c15d 0
2026-03-09T20:48:19.453 INFO:tasks.workunit.client.0.vm07.stdout:8/938: sync
2026-03-09T20:48:19.454 INFO:tasks.workunit.client.0.vm07.stdout:8/939: readlink d1/d5d/d6f/d2f/d4d/dd4/l123 0
2026-03-09T20:48:19.455 INFO:tasks.workunit.client.0.vm07.stdout:8/940: chown d1/dc/d16/dad/d87/d10c 331 1
2026-03-09T20:48:19.459 INFO:tasks.workunit.client.0.vm07.stdout:8/941: dwrite d1/d5d/d6f/d2f/d4d/dd4/dd9/dee/f127 [0,4194304] 0
2026-03-09T20:48:19.462 INFO:tasks.workunit.client.1.vm10.stdout:5/878: creat d2/d58/d6c/f142 x:0 0 0
2026-03-09T20:48:19.464 INFO:tasks.workunit.client.1.vm10.stdout:2/981: fsync d5/d18/d27/db8/fbc 0
2026-03-09T20:48:19.465 INFO:tasks.workunit.client.0.vm07.stdout:8/942: sync
2026-03-09T20:48:19.467 INFO:tasks.workunit.client.0.vm07.stdout:4/928: dread d2/fb7 [0,4194304] 0
2026-03-09T20:48:19.469 INFO:tasks.workunit.client.1.vm10.stdout:0/942: write d2/d4a/fcf [3070175,84237] 0
2026-03-09T20:48:19.470 INFO:tasks.workunit.client.1.vm10.stdout:0/943: fdatasync d2/d9/d12d/f136 0
2026-03-09T20:48:19.477 INFO:tasks.workunit.client.1.vm10.stdout:1/964: mkdir d2/da/d25/d46/d51/d5d/d6e/d70/db3/dd4/d13d 0
2026-03-09T20:48:19.480 INFO:tasks.workunit.client.0.vm07.stdout:3/993: write d1/d5/d9/d11/d6d/d80/db3/d109/f61 [2724878,66016] 0
2026-03-09T20:48:19.480 INFO:tasks.workunit.client.1.vm10.stdout:1/965: chown d2/da/d25/d46/cbb 0 1
2026-03-09T20:48:19.481 INFO:tasks.workunit.client.1.vm10.stdout:1/966: dwrite d2/da/d25/d46/d51/d5d/d6e/f12f [0,4194304] 0
2026-03-09T20:48:19.483 INFO:tasks.workunit.client.1.vm10.stdout:3/929: symlink dc/d9e/l138 0
2026-03-09T20:48:19.484 INFO:tasks.workunit.client.1.vm10.stdout:1/967: read - d2/da/d25/d46/d51/d5d/d6e/f124 zero size
2026-03-09T20:48:19.486 INFO:tasks.workunit.client.0.vm07.stdout:4/929: rename d2/d55/d5d to d2/d55/d5d/d93/d105 22
2026-03-09T20:48:19.487 INFO:tasks.workunit.client.1.vm10.stdout:4/929: symlink d1/d2/d5c/d64/d6b/d81/dac/d1b/l12f 0
2026-03-09T20:48:19.489 INFO:tasks.workunit.client.1.vm10.stdout:6/977: creat d3/da/d11/d89/db9/dd1/dd2/da9/f123 x:0 0 0
2026-03-09T20:48:19.492 INFO:tasks.workunit.client.0.vm07.stdout:4/930: sync
2026-03-09T20:48:19.492 INFO:tasks.workunit.client.0.vm07.stdout:3/994: fsync d1/d5/d9/d11/d6d/dd0/d95/f10a 0
2026-03-09T20:48:19.495 INFO:tasks.workunit.client.0.vm07.stdout:3/995: read d1/d5/d9/daf/d9f/fcb [16121,111479] 0
2026-03-09T20:48:19.500 INFO:tasks.workunit.client.1.vm10.stdout:3/930: dread dc/d14/d20/d21/f96 [0,4194304] 0
2026-03-09T20:48:19.500 INFO:tasks.workunit.client.1.vm10.stdout:3/931: readlink dc/d14/d27/lad 0
2026-03-09T20:48:19.500 INFO:tasks.workunit.client.1.vm10.stdout:3/932: readlink lb 0
2026-03-09T20:48:19.502 INFO:tasks.workunit.client.1.vm10.stdout:5/879: mkdir d2/d27/d75/d81/d143 0
2026-03-09T20:48:19.502 INFO:tasks.workunit.client.1.vm10.stdout:5/880: dread - d2/d39/d4b/de0/f134 zero size
2026-03-09T20:48:19.509 INFO:tasks.workunit.client.0.vm07.stdout:3/996: fdatasync d1/d5/d9/d11/d6d/d80/db3/d109/f61 0
2026-03-09T20:48:19.510 INFO:tasks.workunit.client.0.vm07.stdout:8/943: read d1/dc/d16/d26/f4f [787108,72529] 0
2026-03-09T20:48:19.517 INFO:tasks.workunit.client.1.vm10.stdout:7/951: dread db/d21/d23/ff [0,4194304] 0
2026-03-09T20:48:19.519 INFO:tasks.workunit.client.0.vm07.stdout:9/988: write d4/d16/d29/d24/d37/d44/d62/d108/d121/f1c [4370256,106695] 0
2026-03-09T20:48:19.521 INFO:tasks.workunit.client.0.vm07.stdout:2/991: dwrite d2/db/d1c/f9d [4194304,4194304] 0
2026-03-09T20:48:19.529 INFO:tasks.workunit.client.1.vm10.stdout:0/944: truncate d2/d4a/d58/df6/f11e 3923475 0
2026-03-09T20:48:19.529 INFO:tasks.workunit.client.1.vm10.stdout:0/945: stat d2/d4a/d58/d82/d71/dca/c12f 0
2026-03-09T20:48:19.532 INFO:tasks.workunit.client.0.vm07.stdout:3/997: unlink d1/d5/d9/d2f/cff 0
2026-03-09T20:48:19.533 INFO:tasks.workunit.client.0.vm07.stdout:3/998: write d1/d5/d9/d2f/d66/dc0/fde [2203787,31521] 0
2026-03-09T20:48:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:19 vm10.local ceph-mon[57011]: from='mgr.24495 ' entity=''
2026-03-09T20:48:19.540 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:19 vm10.local ceph-mon[57011]: Standby manager daemon vm10.byqahe restarted
2026-03-09T20:48:19.540 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:19 vm10.local ceph-mon[57011]: Standby manager daemon vm10.byqahe started
2026-03-09T20:48:19.540 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:19 vm10.local ceph-mon[57011]: mgrmap e31: vm07.xjrvch(active, since 24s), standbys: vm10.byqahe
2026-03-09T20:48:19.540 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:19 vm10.local ceph-mon[57011]: mgrmap e32: vm07.xjrvch(active, since 25s), standbys: vm10.byqahe
2026-03-09T20:48:19.540 INFO:tasks.workunit.client.0.vm07.stdout:4/931: dwrite d2/d1f/f9c [0,4194304] 0
2026-03-09T20:48:19.543 INFO:tasks.workunit.client.0.vm07.stdout:8/944: mknod d1/dc/d16/c12b 0
2026-03-09T20:48:19.553 INFO:tasks.workunit.client.1.vm10.stdout:2/982: write d5/d18/f63 [505350,100998] 0
2026-03-09T20:48:19.560 INFO:tasks.workunit.client.1.vm10.stdout:1/968: truncate d2/d89/f129 1789979 0
2026-03-09T20:48:19.566 INFO:tasks.workunit.client.0.vm07.stdout:9/989: rename d4/d16/d29/d24/d37/d44/d62/d108/d121/d59/cd6 to d4/d16/d29/d24/d37/d44/d62/d8e/c15e 0
2026-03-09T20:48:19.570 INFO:tasks.workunit.client.0.vm07.stdout:3/999: readlink d1/l8 0
2026-03-09T20:48:19.574 INFO:tasks.workunit.client.1.vm10.stdout:8/984: rename d0/d22/d25/f74 to d0/d22/d25/d2e/d41/d85/d8b/d136/d122/f13d 0
2026-03-09T20:48:19.575 INFO:tasks.workunit.client.1.vm10.stdout:4/930: dread d1/d2/f7 [0,4194304] 0
2026-03-09T20:48:19.580 INFO:tasks.workunit.client.1.vm10.stdout:3/933: fdatasync dc/d14/d26/d37/feb 0
2026-03-09T20:48:19.581 INFO:tasks.workunit.client.1.vm10.stdout:8/985: sync
2026-03-09T20:48:19.582 INFO:tasks.workunit.client.1.vm10.stdout:3/934: dread dc/d14/d22/ff0 [0,4194304] 0
2026-03-09T20:48:19.583 INFO:tasks.workunit.client.0.vm07.stdout:4/932: unlink d2/d1f/f53 0
2026-03-09T20:48:19.586 INFO:tasks.workunit.client.1.vm10.stdout:5/881: mkdir d2/d39/d4b/de0/d105/d144 0
2026-03-09T20:48:19.588 INFO:tasks.workunit.client.1.vm10.stdout:6/978: dread d3/d30/d7f/d36/d5c/d8d/fac [0,4194304] 0
2026-03-09T20:48:19.599 INFO:tasks.workunit.client.0.vm07.stdout:2/992: dwrite d2/db/d28/d5c/f89 [0,4194304] 0
2026-03-09T20:48:19.603 INFO:tasks.workunit.client.0.vm07.stdout:8/945: dwrite d1/dc/d16/dad/d87/f97 [0,4194304] 0
2026-03-09T20:48:19.613 INFO:tasks.workunit.client.0.vm07.stdout:9/990: rename d4/d11/d23/d32/d149/c156 to d4/d11/d23/d32/d149/c15f 0
2026-03-09T20:48:19.616 INFO:tasks.workunit.client.1.vm10.stdout:0/946: creat d2/d9/db8/f147 x:0 0 0
2026-03-09T20:48:19.619 INFO:tasks.workunit.client.0.vm07.stdout:9/991: dwrite d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d5f/d73/fb7 [0,4194304] 0
2026-03-09T20:48:19.622 INFO:tasks.workunit.client.0.vm07.stdout:4/933: symlink d2/d55/d5d/d93/dbe/l106 0
2026-03-09T20:48:19.623 INFO:tasks.workunit.client.1.vm10.stdout:2/983: mkdir d5/d5b/d13f 0
2026-03-09T20:48:19.623 INFO:tasks.workunit.client.1.vm10.stdout:2/984: read - d5/d18/d27/db8/f125 zero size
2026-03-09T20:48:19.627 INFO:tasks.workunit.client.1.vm10.stdout:1/969: truncate d2/da/f11 2185987 0
2026-03-09T20:48:19.630 INFO:tasks.workunit.client.0.vm07.stdout:8/946: truncate d1/d5d/d6f/d2f/d4d/d63/fd5 4011272 0
2026-03-09T20:48:19.631 INFO:tasks.workunit.client.1.vm10.stdout:1/970: write d2/da/d25/d46/d80/da0/d92/f133 [167377,44145] 0
2026-03-09T20:48:19.633 INFO:tasks.workunit.client.1.vm10.stdout:4/931: dread - d1/d2/d5c/d64/d6b/d81/dac/d1b/dbe/fcd zero size
2026-03-09T20:48:19.635 INFO:tasks.workunit.client.0.vm07.stdout:9/992: creat d4/d16/d29/d24/d7c/f160 x:0 0 0
2026-03-09T20:48:19.641 INFO:tasks.workunit.client.1.vm10.stdout:3/935: dread - dc/d14/d26/d8f/ddd/d10d/f111 zero size
2026-03-09T20:48:19.644 INFO:tasks.workunit.client.0.vm07.stdout:2/993: truncate d2/db/f7c 355350 0
2026-03-09T20:48:19.647 INFO:tasks.workunit.client.0.vm07.stdout:2/994: dread d2/d11/ddb/d72/d82/f123 [0,4194304] 0
2026-03-09T20:48:19.649 INFO:tasks.workunit.client.1.vm10.stdout:5/882: read - d2/d1b/d54/d78/de6/f135 zero size
2026-03-09T20:48:19.658 INFO:tasks.workunit.client.1.vm10.stdout:0/947: creat d2/d9/d12d/f148 x:0 0 0
2026-03-09T20:48:19.666 INFO:tasks.workunit.client.0.vm07.stdout:9/993: mknod d4/d11/d23/c161 0
2026-03-09T20:48:19.666 INFO:tasks.workunit.client.1.vm10.stdout:7/952: dwrite db/d28/d2b/d36/d40/f8d [0,4194304] 0
2026-03-09T20:48:19.667 INFO:tasks.workunit.client.1.vm10.stdout:1/971: readlink d2/da/d25/d3e/l6d 0
2026-03-09T20:48:19.667 INFO:tasks.workunit.client.1.vm10.stdout:7/953: readlink db/d28/d2b/d36/d3b/d88/lb3 0
2026-03-09T20:48:19.669 INFO:tasks.workunit.client.0.vm07.stdout:4/934: rmdir d2/d55/d5d/d86/db9 0
2026-03-09T20:48:19.683 INFO:tasks.workunit.client.0.vm07.stdout:2/995: stat d2/d11/ddb/d6e/dda/d129/d96/dcc/lce 0
2026-03-09T20:48:19.686 INFO:tasks.workunit.client.0.vm07.stdout:8/947: creat d1/dc/d16/d26/f12c x:0 0 0
2026-03-09T20:48:19.692 INFO:tasks.workunit.client.1.vm10.stdout:5/883: fsync d2/d39/d4b/d7a/fed 0
2026-03-09T20:48:19.696 INFO:tasks.workunit.client.1.vm10.stdout:3/936: dread dc/d14/fd3 [0,4194304] 0
2026-03-09T20:48:19.702 INFO:tasks.workunit.client.0.vm07.stdout:9/994: sync
2026-03-09T20:48:19.704 INFO:tasks.workunit.client.0.vm07.stdout:8/948: dread d1/dc/d16/dad/fc7 [0,4194304] 0
2026-03-09T20:48:19.710 INFO:tasks.workunit.client.1.vm10.stdout:0/948: dread d2/d4a/d58/d82/d71/dca/d110/d30/f56 [0,4194304] 0
2026-03-09T20:48:19.711 INFO:tasks.workunit.client.0.vm07.stdout:2/996: unlink d2/db/d1c/d8d/ffa 0
2026-03-09T20:48:19.719 INFO:tasks.workunit.client.0.vm07.stdout:4/935: write d2/df/d17/f46 [1223870,118184] 0
2026-03-09T20:48:19.719 INFO:tasks.workunit.client.1.vm10.stdout:8/986: write d0/d22/d25/d40/fd3 [547716,76677] 0
2026-03-09T20:48:19.720 INFO:tasks.workunit.client.1.vm10.stdout:2/985: dwrite d5/d18/d27/da6/fac [0,4194304] 0
2026-03-09T20:48:19.733 INFO:tasks.workunit.client.1.vm10.stdout:2/986: dread d5/d18/d27/d5f/fad [0,4194304] 0
2026-03-09T20:48:19.735 INFO:tasks.workunit.client.1.vm10.stdout:1/972: dread d2/da/d25/d46/d51/d5d/d6e/f76 [0,4194304] 0
2026-03-09T20:48:19.745 INFO:tasks.workunit.client.1.vm10.stdout:7/954: write db/d28/d2b/d36/d3b/dd5/f100 [1931399,75909] 0
2026-03-09T20:48:19.745 INFO:tasks.workunit.client.0.vm07.stdout:8/949: write d1/d5d/d6f/d80/ffb [748483,36784] 0
2026-03-09T20:48:19.748 INFO:tasks.workunit.client.0.vm07.stdout:2/997: dwrite d2/db/d28/d57/f68 [0,4194304] 0
2026-03-09T20:48:19.768 INFO:tasks.workunit.client.0.vm07.stdout:4/936: symlink d2/d55/d5d/d3f/d4a/d4b/d52/l107 0
2026-03-09T20:48:19.768 INFO:tasks.workunit.client.0.vm07.stdout:4/937: dread - d2/df/f75 zero size
2026-03-09T20:48:19.772 INFO:tasks.workunit.client.1.vm10.stdout:5/884: truncate d2/d27/f2a 1081075 0
2026-03-09T20:48:19.777 INFO:tasks.workunit.client.1.vm10.stdout:6/979: creat d3/d30/d7f/f124 x:0 0 0
2026-03-09T20:48:19.789 INFO:tasks.workunit.client.1.vm10.stdout:3/937: dread dc/d14/d26/d29/d40/d8c/fbc [0,4194304] 0
2026-03-09T20:48:19.794 INFO:tasks.workunit.client.0.vm07.stdout:9/995: write d4/d16/d29/d24/d37/d44/d62/d108/d121/dc/d15/f30 [161595,90314] 0
2026-03-09T20:48:19.796 INFO:tasks.workunit.client.1.vm10.stdout:0/949: write d2/d9/db8/d10f/d11/f15 [3620592,25420] 0
2026-03-09T20:48:19.806 INFO:tasks.workunit.client.0.vm07.stdout:4/938: mkdir d2/d55/d5d/d3f/d4a/dbc/d108 0
2026-03-09T20:48:19.817 INFO:tasks.workunit.client.0.vm07.stdout:8/950: symlink d1/d5d/d6f/l12d 0
2026-03-09T20:48:19.820 INFO:tasks.workunit.client.1.vm10.stdout:1/973: dread d2/da/fa1 [0,4194304] 0
2026-03-09T20:48:19.820 INFO:tasks.workunit.client.0.vm07.stdout:9/996: creat d4/d11/d23/d32/f162 x:0 0 0
2026-03-09T20:48:19.824 INFO:tasks.workunit.client.1.vm10.stdout:8/987: dwrite d0/d22/f66 [0,4194304] 0
2026-03-09T20:48:19.825 INFO:tasks.workunit.client.0.vm07.stdout:2/998: fdatasync d2/db/d49/fb2 0
2026-03-09T20:48:19.829 INFO:tasks.workunit.client.0.vm07.stdout:2/999: dread d2/d11/ddb/f116 [0,4194304] 0
2026-03-09T20:48:19.833 INFO:tasks.workunit.client.1.vm10.stdout:4/932: creat d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f130 x:0 0 0
2026-03-09T20:48:19.834 INFO:tasks.workunit.client.0.vm07.stdout:4/939: truncate d2/d55/dab/fe3 536717 0
2026-03-09T20:48:19.835 INFO:tasks.workunit.client.0.vm07.stdout:4/940: dread d2/fb7 [0,4194304] 0
2026-03-09T20:48:19.839 INFO:tasks.workunit.client.1.vm10.stdout:4/933: dread d1/d47/fb4 [0,4194304] 0
2026-03-09T20:48:19.841 INFO:tasks.workunit.client.0.vm07.stdout:9/997: rmdir d4/d16/d29/d24/d37/d44/d62/d108/d121/d19/d89/da7/ddd 39
2026-03-09T20:48:19.848 INFO:tasks.workunit.client.0.vm07.stdout:4/941: write d2/d1f/f26 [328374,49413] 0
2026-03-09T20:48:19.856 INFO:tasks.workunit.client.0.vm07.stdout:8/951: creat d1/d5d/d112/f12e x:0 0 0
2026-03-09T20:48:19.857 INFO:tasks.workunit.client.1.vm10.stdout:7/955: dwrite db/d46/d89/dbf/d87/f98 [0,4194304] 0
2026-03-09T20:48:19.858 INFO:tasks.workunit.client.1.vm10.stdout:7/956: chown db/d28/d4c/fcd 3 1
2026-03-09T20:48:19.859 INFO:tasks.workunit.client.1.vm10.stdout:7/957: chown db/d28/d4c/d6e 7 1
2026-03-09T20:48:19.878 INFO:tasks.workunit.client.0.vm07.stdout:4/942: write d2/d55/f62 [268840,80175] 0
2026-03-09T20:48:19.879 INFO:tasks.workunit.client.0.vm07.stdout:4/943: chown d2/fa 62 1
2026-03-09T20:48:19.884 INFO:tasks.workunit.client.0.vm07.stdout:9/998: write d4/d11/f1a [4132760,10548] 0
2026-03-09T20:48:19.885 INFO:tasks.workunit.client.0.vm07.stdout:8/952: write d1/dc/d16/dad/d87/dd3/fda [67557,88901] 0
2026-03-09T20:48:19.890 INFO:tasks.workunit.client.0.vm07.stdout:4/944: mkdir d2/d55/d5d/d3f/d4a/dbc/d109 0
2026-03-09T20:48:19.892 INFO:tasks.workunit.client.0.vm07.stdout:4/945: dread d2/df/d59/d8a/fc0 [0,4194304] 0
2026-03-09T20:48:19.899 INFO:tasks.workunit.client.0.vm07.stdout:9/999: sync
2026-03-09T20:48:19.905 INFO:tasks.workunit.client.0.vm07.stdout:8/953: dwrite d1/d5d/d6f/d2f/d53/f5f [0,4194304] 0
2026-03-09T20:48:19.909
INFO:tasks.workunit.client.0.vm07.stdout:8/954: dwrite d1/d5d/d6f/d2f/d4d/dd4/dd9/dee/f127 [0,4194304] 0 2026-03-09T20:48:19.917 INFO:tasks.workunit.client.0.vm07.stdout:4/946: chown d2/d55/d5d/d3f/f51 2382 1 2026-03-09T20:48:19.917 INFO:tasks.workunit.client.0.vm07.stdout:4/947: stat d2/d55/d5d/d93/ca1 0 2026-03-09T20:48:19.932 INFO:tasks.workunit.client.0.vm07.stdout:8/955: creat d1/dc/d16/d26/f12f x:0 0 0 2026-03-09T20:48:19.940 INFO:tasks.workunit.client.0.vm07.stdout:4/948: creat d2/f10a x:0 0 0 2026-03-09T20:48:19.940 INFO:tasks.workunit.client.0.vm07.stdout:8/956: truncate d1/f85 2531382 0 2026-03-09T20:48:19.942 INFO:tasks.workunit.client.0.vm07.stdout:4/949: truncate d2/df/f97 834336 0 2026-03-09T20:48:19.945 INFO:tasks.workunit.client.0.vm07.stdout:8/957: mkdir d1/dc/d16/dad/d87/d130 0 2026-03-09T20:48:19.949 INFO:tasks.workunit.client.0.vm07.stdout:8/958: dwrite d1/d3b/f9a [0,4194304] 0 2026-03-09T20:48:19.958 INFO:tasks.workunit.client.0.vm07.stdout:4/950: rmdir d2/d55/d5d/d3f/d4a/dbc 39 2026-03-09T20:48:19.968 INFO:tasks.workunit.client.0.vm07.stdout:8/959: unlink d1/d8f/f10e 0 2026-03-09T20:48:19.974 INFO:tasks.workunit.client.0.vm07.stdout:4/951: dread d2/f7 [0,4194304] 0 2026-03-09T20:48:19.979 INFO:tasks.workunit.client.0.vm07.stdout:8/960: creat d1/d5d/d6f/d2f/d4d/d63/d91/f131 x:0 0 0 2026-03-09T20:48:19.985 INFO:tasks.workunit.client.0.vm07.stdout:4/952: sync 2026-03-09T20:48:19.985 INFO:tasks.workunit.client.1.vm10.stdout:5/885: rename d2/d27/l2e to d2/d39/d4b/d7a/dd9/d10c/d132/l145 0 2026-03-09T20:48:19.992 INFO:tasks.workunit.client.0.vm07.stdout:4/953: mkdir d2/d55/d5d/d3f/db6/dd2/d10b 0 2026-03-09T20:48:19.995 INFO:tasks.workunit.client.1.vm10.stdout:6/980: write d3/d30/d7f/fcd [447957,124829] 0 2026-03-09T20:48:19.999 INFO:tasks.workunit.client.0.vm07.stdout:8/961: getdents d1/d8f/d9d/d11d 0 2026-03-09T20:48:20.000 INFO:tasks.workunit.client.0.vm07.stdout:4/954: mkdir d2/d10c 0 2026-03-09T20:48:20.003 
INFO:tasks.workunit.client.1.vm10.stdout:2/987: symlink d5/d18/d27/d38/l140 0 2026-03-09T20:48:20.008 INFO:tasks.workunit.client.0.vm07.stdout:8/962: write d1/dc/d16/f70 [935853,89968] 0 2026-03-09T20:48:20.009 INFO:tasks.workunit.client.0.vm07.stdout:8/963: read d1/dc/d16/d31/db4/fbc [1568601,96823] 0 2026-03-09T20:48:20.018 INFO:tasks.workunit.client.1.vm10.stdout:1/974: dwrite d2/da/d25/d46/d51/d5d/d6e/dd0/f118 [0,4194304] 0 2026-03-09T20:48:20.027 INFO:tasks.workunit.client.0.vm07.stdout:8/964: creat d1/d5d/d6f/d80/d10f/f132 x:0 0 0 2026-03-09T20:48:20.032 INFO:tasks.workunit.client.1.vm10.stdout:4/934: creat d1/d2/d5c/d64/d112/f131 x:0 0 0 2026-03-09T20:48:20.044 INFO:tasks.workunit.client.1.vm10.stdout:0/950: rename d2/d4a/d58/d82/d60/cc6 to d2/d9/db8/db4/d101/c149 0 2026-03-09T20:48:20.045 INFO:tasks.workunit.client.1.vm10.stdout:7/958: write db/d28/d4c/d6e/f112 [881317,38738] 0 2026-03-09T20:48:20.051 INFO:tasks.workunit.client.0.vm07.stdout:4/955: rmdir d2/d55/d5d/d3f/d4a/dbc/d109 0 2026-03-09T20:48:20.054 INFO:tasks.workunit.client.0.vm07.stdout:8/965: dwrite d1/dc/fe [0,4194304] 0 2026-03-09T20:48:20.063 INFO:tasks.workunit.client.1.vm10.stdout:6/981: unlink d3/d30/d7f/d36/fc2 0 2026-03-09T20:48:20.070 INFO:tasks.workunit.client.1.vm10.stdout:3/938: dread dc/db4/fca [0,4194304] 0 2026-03-09T20:48:20.074 INFO:tasks.workunit.client.0.vm07.stdout:8/966: symlink d1/d5d/d6f/d2f/d4d/d63/deb/l133 0 2026-03-09T20:48:20.080 INFO:tasks.workunit.client.1.vm10.stdout:1/975: rmdir d2/da/d25/d46/d51/d5d/d6e 39 2026-03-09T20:48:20.084 INFO:tasks.workunit.client.0.vm07.stdout:4/956: dwrite d2/df/d59/f60 [0,4194304] 0 2026-03-09T20:48:20.084 INFO:tasks.workunit.client.1.vm10.stdout:2/988: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/ddc/ff6 [0,4194304] 0 2026-03-09T20:48:20.086 INFO:tasks.workunit.client.1.vm10.stdout:8/988: unlink d0/d22/d25/d2e/c8e 0 2026-03-09T20:48:20.102 INFO:tasks.workunit.client.1.vm10.stdout:4/935: creat d1/d2/d5c/d64/d6b/d81/da9/f132 
x:0 0 0 2026-03-09T20:48:20.113 INFO:tasks.workunit.client.0.vm07.stdout:4/957: mknod d2/d55/c10d 0 2026-03-09T20:48:20.113 INFO:tasks.workunit.client.1.vm10.stdout:7/959: mkdir db/d28/d2b/dd0/d134 0 2026-03-09T20:48:20.113 INFO:tasks.workunit.client.1.vm10.stdout:5/886: creat d2/d39/dbf/da9/d12d/f146 x:0 0 0 2026-03-09T20:48:20.114 INFO:tasks.workunit.client.0.vm07.stdout:4/958: creat d2/d55/d5d/f10e x:0 0 0 2026-03-09T20:48:20.118 INFO:tasks.workunit.client.1.vm10.stdout:6/982: unlink d3/d30/d7f/d24/l2d 0 2026-03-09T20:48:20.133 INFO:tasks.workunit.client.0.vm07.stdout:8/967: truncate d1/dc/d16/dad/d87/f97 6455998 0 2026-03-09T20:48:20.134 INFO:tasks.workunit.client.0.vm07.stdout:8/968: read - d1/dc/d16/dad/fa1 zero size 2026-03-09T20:48:20.136 INFO:tasks.workunit.client.1.vm10.stdout:3/939: dwrite dc/d14/fd3 [0,4194304] 0 2026-03-09T20:48:20.137 INFO:tasks.workunit.client.1.vm10.stdout:1/976: creat d2/da/d25/d46/d51/d7e/f13e x:0 0 0 2026-03-09T20:48:20.137 INFO:tasks.workunit.client.0.vm07.stdout:8/969: rmdir d1/dc/d16 39 2026-03-09T20:48:20.140 INFO:tasks.workunit.client.0.vm07.stdout:4/959: link d2/cc7 d2/d55/d5d/d3f/d4a/d4b/d52/d5c/c10f 0 2026-03-09T20:48:20.140 INFO:tasks.workunit.client.0.vm07.stdout:4/960: chown d2/l30 4154534 1 2026-03-09T20:48:20.151 INFO:tasks.workunit.client.0.vm07.stdout:8/970: mknod d1/d8f/c134 0 2026-03-09T20:48:20.153 INFO:tasks.workunit.client.1.vm10.stdout:8/989: symlink d0/dd1/df1/l13e 0 2026-03-09T20:48:20.155 INFO:tasks.workunit.client.0.vm07.stdout:4/961: fsync d2/df/d17/fdf 0 2026-03-09T20:48:20.164 INFO:tasks.workunit.client.1.vm10.stdout:0/951: mknod d2/d4a/c14a 0 2026-03-09T20:48:20.164 INFO:tasks.workunit.client.1.vm10.stdout:4/936: dread d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f8e [0,4194304] 0 2026-03-09T20:48:20.170 INFO:tasks.workunit.client.1.vm10.stdout:2/989: write d5/d18/d27/d38/d61/fa0 [4479486,30531] 0 2026-03-09T20:48:20.181 INFO:tasks.workunit.client.1.vm10.stdout:0/952: sync 2026-03-09T20:48:20.182 
INFO:tasks.workunit.client.1.vm10.stdout:7/960: rename db/d46/dab/d10f/f12c to db/d28/d2b/d36/d40/f135 0 2026-03-09T20:48:20.187 INFO:tasks.workunit.client.1.vm10.stdout:1/977: write d2/da/fb1 [934011,126695] 0 2026-03-09T20:48:20.189 INFO:tasks.workunit.client.1.vm10.stdout:4/937: chown d1/d2/d3/d70/d99/dc9/dff/d69/dbd/f10f 114 1 2026-03-09T20:48:20.190 INFO:tasks.workunit.client.1.vm10.stdout:2/990: mknod d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/c141 0 2026-03-09T20:48:20.190 INFO:tasks.workunit.client.1.vm10.stdout:8/990: dwrite d0/d22/d25/d40/d86/d91/fa8 [8388608,4194304] 0 2026-03-09T20:48:20.204 INFO:tasks.workunit.client.1.vm10.stdout:0/953: truncate d2/d4a/d58/d82/d71/d5d/ff2 808471 0 2026-03-09T20:48:20.207 INFO:tasks.workunit.client.1.vm10.stdout:7/961: creat db/d28/d30/f136 x:0 0 0 2026-03-09T20:48:20.223 INFO:tasks.workunit.client.1.vm10.stdout:1/978: write d2/fd2 [1253349,116181] 0 2026-03-09T20:48:20.228 INFO:tasks.workunit.client.1.vm10.stdout:4/938: dwrite d1/d2/d3/d54/fe6 [0,4194304] 0 2026-03-09T20:48:20.228 INFO:tasks.workunit.client.1.vm10.stdout:2/991: creat d5/d18/d27/d89/db6/dd3/f142 x:0 0 0 2026-03-09T20:48:20.230 INFO:tasks.workunit.client.1.vm10.stdout:4/939: write d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f130 [701964,107369] 0 2026-03-09T20:48:20.230 INFO:tasks.workunit.client.1.vm10.stdout:8/991: creat d0/d22/d25/d2e/d41/d85/d8b/d136/d122/f13f x:0 0 0 2026-03-09T20:48:20.230 INFO:tasks.workunit.client.1.vm10.stdout:6/983: link d3/da/c23 d3/da/d11/d89/c125 0 2026-03-09T20:48:20.231 INFO:tasks.workunit.client.1.vm10.stdout:3/940: rename dc/d9e/ce8 to dc/d14/d26/d29/d40/da2/dd7/c139 0 2026-03-09T20:48:20.232 INFO:tasks.workunit.client.1.vm10.stdout:8/992: readlink d0/d22/d25/d2e/d41/d85/d8b/d136/l9e 0 2026-03-09T20:48:20.249 INFO:tasks.workunit.client.1.vm10.stdout:5/887: getdents d2/d39/d4b/d7a/dd9/d10c/d132/d46 0 2026-03-09T20:48:20.257 INFO:tasks.workunit.client.1.vm10.stdout:2/992: chown d5/d18/d27/d89/db6/d41/d77/db3/db5/db0/deb/l11e 
115747465 1 2026-03-09T20:48:20.262 INFO:tasks.workunit.client.1.vm10.stdout:6/984: mknod d3/d30/d33/c126 0 2026-03-09T20:48:20.264 INFO:tasks.workunit.client.1.vm10.stdout:5/888: dread d2/d1b/d54/d78/f47 [0,4194304] 0 2026-03-09T20:48:20.269 INFO:tasks.workunit.client.1.vm10.stdout:0/954: rename d2/d4a/d58/d82/d93/fe3 to d2/d9/db8/d10f/d11/dd1/db7/dcd/f14b 0 2026-03-09T20:48:20.273 INFO:tasks.workunit.client.1.vm10.stdout:4/940: fdatasync d1/d2/d3/d70/d99/dc9/dff/d69/dbd/f10f 0 2026-03-09T20:48:20.279 INFO:tasks.workunit.client.1.vm10.stdout:7/962: write db/d28/d2b/d36/d40/f44 [41237,70936] 0 2026-03-09T20:48:20.280 INFO:tasks.workunit.client.1.vm10.stdout:1/979: write d2/d89/f101 [180304,49676] 0 2026-03-09T20:48:20.281 INFO:tasks.workunit.client.1.vm10.stdout:1/980: write d2/da/d25/d46/d51/d7e/f13e [863398,35777] 0 2026-03-09T20:48:20.289 INFO:tasks.workunit.client.1.vm10.stdout:2/993: mkdir d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/d143 0 2026-03-09T20:48:20.295 INFO:tasks.workunit.client.1.vm10.stdout:5/889: mkdir d2/d1b/d54/d147 0 2026-03-09T20:48:20.295 INFO:tasks.workunit.client.1.vm10.stdout:8/993: read d0/d22/d25/d40/d86/d138/db5/fdb [649919,69556] 0 2026-03-09T20:48:20.300 INFO:tasks.workunit.client.0.vm07.stdout:8/971: dwrite d1/dc/d16/f4a [0,4194304] 0 2026-03-09T20:48:20.308 INFO:tasks.workunit.client.1.vm10.stdout:3/941: rename dc/d14/d26/d8f to dc/d14/d26/d29/d40/da8/d69/d13a 0 2026-03-09T20:48:20.314 INFO:tasks.workunit.client.0.vm07.stdout:8/972: fdatasync d1/dc/f75 0 2026-03-09T20:48:20.316 INFO:tasks.workunit.client.0.vm07.stdout:4/962: getdents d2/d55/d5d/d3f/d4a/d85/dda 0 2026-03-09T20:48:20.319 INFO:tasks.workunit.client.1.vm10.stdout:4/941: mknod d1/d2/d3/d54/daa/dfa/c133 0 2026-03-09T20:48:20.323 INFO:tasks.workunit.client.0.vm07.stdout:8/973: mknod d1/dc/d16/d26/d94/c135 0 2026-03-09T20:48:20.331 INFO:tasks.workunit.client.1.vm10.stdout:3/942: creat dc/d14/d20/d2e/d56/f13b x:0 0 0 2026-03-09T20:48:20.333 
INFO:tasks.workunit.client.1.vm10.stdout:7/963: write db/d28/f4f [2158737,46468] 0 2026-03-09T20:48:20.333 INFO:tasks.workunit.client.0.vm07.stdout:4/963: write d2/df/faf [41297,87662] 0 2026-03-09T20:48:20.334 INFO:tasks.workunit.client.1.vm10.stdout:7/964: truncate db/d28/d30/f136 95236 0 2026-03-09T20:48:20.340 INFO:tasks.workunit.client.1.vm10.stdout:1/981: mkdir d2/da/d25/d46/d8c/d106/d13f 0 2026-03-09T20:48:20.349 INFO:tasks.workunit.client.1.vm10.stdout:1/982: dread - d2/da/fb6 zero size 2026-03-09T20:48:20.349 INFO:tasks.workunit.client.1.vm10.stdout:1/983: write d2/da/d25/d3e/dca/da2/f104 [3191929,105510] 0 2026-03-09T20:48:20.349 INFO:tasks.workunit.client.1.vm10.stdout:3/943: readlink dc/d14/d20/d2e/d56/lae 0 2026-03-09T20:48:20.349 INFO:tasks.workunit.client.1.vm10.stdout:5/890: dread d2/fd6 [0,4194304] 0 2026-03-09T20:48:20.350 INFO:tasks.workunit.client.1.vm10.stdout:4/942: fsync d1/d2/d3/d70/d99/dc9/dff/f3f 0 2026-03-09T20:48:20.351 INFO:tasks.workunit.client.1.vm10.stdout:6/985: getdents d3/d30/d7f/d24/d39/d10e 0 2026-03-09T20:48:20.355 INFO:tasks.workunit.client.1.vm10.stdout:1/984: fdatasync d2/f14 0 2026-03-09T20:48:20.358 INFO:tasks.workunit.client.1.vm10.stdout:2/994: dread d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/fba [0,4194304] 0 2026-03-09T20:48:20.358 INFO:tasks.workunit.client.1.vm10.stdout:0/955: link d2/d4a/d58/d82/f5c d2/d4a/d58/f14c 0 2026-03-09T20:48:20.359 INFO:tasks.workunit.client.1.vm10.stdout:7/965: sync 2026-03-09T20:48:20.360 INFO:tasks.workunit.client.0.vm07.stdout:8/974: sync 2026-03-09T20:48:20.360 INFO:tasks.workunit.client.1.vm10.stdout:5/891: truncate d2/d1b/d54/d78/de6/f135 154356 0 2026-03-09T20:48:20.363 INFO:tasks.workunit.client.1.vm10.stdout:0/956: dwrite d2/d4a/d58/d82/d71/f38 [0,4194304] 0 2026-03-09T20:48:20.365 INFO:tasks.workunit.client.1.vm10.stdout:2/995: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f84 [0,4194304] 0 2026-03-09T20:48:20.365 INFO:tasks.workunit.client.0.vm07.stdout:8/975: sync 
2026-03-09T20:48:20.377 INFO:tasks.workunit.client.1.vm10.stdout:8/994: getdents d0 0 2026-03-09T20:48:20.379 INFO:tasks.workunit.client.0.vm07.stdout:4/964: truncate d2/df/d17/f46 568662 0 2026-03-09T20:48:20.381 INFO:tasks.workunit.client.1.vm10.stdout:3/944: write dc/d14/d27/fa7 [2202015,36462] 0 2026-03-09T20:48:20.385 INFO:tasks.workunit.client.1.vm10.stdout:1/985: fsync d2/da/d25/d46/d80/da0/d92/db5/dc7/d105/f13c 0 2026-03-09T20:48:20.392 INFO:tasks.workunit.client.1.vm10.stdout:5/892: chown d2/d1b/d54/d78/de6/ff7 4355037 1 2026-03-09T20:48:20.394 INFO:tasks.workunit.client.1.vm10.stdout:7/966: truncate db/d1f/f62 3853915 0 2026-03-09T20:48:20.398 INFO:tasks.workunit.client.1.vm10.stdout:0/957: rename d2/d4a/d58/d82/d93 to d2/d9/d12d/d14d 0 2026-03-09T20:48:20.407 INFO:tasks.workunit.client.1.vm10.stdout:1/986: chown d2/da/d25/d46/d51/d5d/d6e/d70/db3/dd4 558 1 2026-03-09T20:48:20.408 INFO:tasks.workunit.client.1.vm10.stdout:5/893: truncate d2/d39/dbf/d69/de9/dfa/d115/f12c 612206 0 2026-03-09T20:48:20.410 INFO:tasks.workunit.client.1.vm10.stdout:8/995: rename d0/d22/d25/d2e/d41/l121 to d0/d22/d25/d2e/d41/d85/d8b/l140 0 2026-03-09T20:48:20.412 INFO:tasks.workunit.client.1.vm10.stdout:0/958: mknod d2/d9/db8/db4/d101/c14e 0 2026-03-09T20:48:20.415 INFO:tasks.workunit.client.1.vm10.stdout:1/987: mkdir d2/da/d25/d46/d8c/d140 0 2026-03-09T20:48:20.415 INFO:tasks.workunit.client.1.vm10.stdout:1/988: chown d2/da/d25/d3e/d42/f62 1 1 2026-03-09T20:48:20.417 INFO:tasks.workunit.client.1.vm10.stdout:4/943: getdents d1/d2/d3/d70/d99 0 2026-03-09T20:48:20.422 INFO:tasks.workunit.client.1.vm10.stdout:8/996: unlink d0/d22/d25/d2e/d41/d85/db9/dc6/f101 0 2026-03-09T20:48:20.424 INFO:tasks.workunit.client.0.vm07.stdout:4/965: write d2/df/d17/f73 [316981,29252] 0 2026-03-09T20:48:20.427 INFO:tasks.workunit.client.0.vm07.stdout:8/976: dwrite d1/d5d/d6f/d2f/d4d/d63/d91/f96 [0,4194304] 0 2026-03-09T20:48:20.438 INFO:tasks.workunit.client.1.vm10.stdout:3/945: write 
dc/d14/d26/d29/d40/da8/f114 [714477,124229] 0 2026-03-09T20:48:20.438 INFO:tasks.workunit.client.1.vm10.stdout:7/967: write db/f70 [2091149,77861] 0 2026-03-09T20:48:20.441 INFO:tasks.workunit.client.1.vm10.stdout:3/946: chown dc/d14/d26/d29/d40/da8/dc3/d128 14972 1 2026-03-09T20:48:20.441 INFO:tasks.workunit.client.1.vm10.stdout:0/959: rename d2/d9/db8/d10f/d11/dd1/d138 to d2/d9/db8/d14f 0 2026-03-09T20:48:20.445 INFO:tasks.workunit.client.1.vm10.stdout:6/986: dwrite d3/d79/fb7 [0,4194304] 0 2026-03-09T20:48:20.447 INFO:tasks.workunit.client.1.vm10.stdout:2/996: dwrite d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/f106 [0,4194304] 0 2026-03-09T20:48:20.448 INFO:tasks.workunit.client.0.vm07.stdout:8/977: write d1/d5d/d6f/d2f/f34 [3243982,67601] 0 2026-03-09T20:48:20.452 INFO:tasks.workunit.client.1.vm10.stdout:7/968: sync 2026-03-09T20:48:20.452 INFO:tasks.workunit.client.1.vm10.stdout:2/997: sync 2026-03-09T20:48:20.454 INFO:tasks.workunit.client.0.vm07.stdout:8/978: creat d1/d5d/d6f/d2f/d4d/dd4/dd9/f136 x:0 0 0 2026-03-09T20:48:20.465 INFO:tasks.workunit.client.0.vm07.stdout:8/979: creat d1/dc/d16/dad/d87/dd3/f137 x:0 0 0 2026-03-09T20:48:20.469 INFO:tasks.workunit.client.0.vm07.stdout:8/980: dwrite d1/dc/d16/d26/f12f [0,4194304] 0 2026-03-09T20:48:20.475 INFO:tasks.workunit.client.1.vm10.stdout:3/947: dread dc/d14/d26/d29/ff7 [0,4194304] 0 2026-03-09T20:48:20.480 INFO:tasks.workunit.client.1.vm10.stdout:1/989: mknod d2/da/d25/d46/d51/d5d/da6/d126/c141 0 2026-03-09T20:48:20.483 INFO:tasks.workunit.client.1.vm10.stdout:0/960: dread d2/d9/db8/d10f/d48/fb9 [0,4194304] 0 2026-03-09T20:48:20.487 INFO:tasks.workunit.client.0.vm07.stdout:8/981: rename d1/dc/d16/dad/d87/c119 to d1/d5d/d6f/d2f/c138 0 2026-03-09T20:48:20.491 INFO:tasks.workunit.client.0.vm07.stdout:8/982: dread d1/dc/d16/f6e [0,4194304] 0 2026-03-09T20:48:20.492 INFO:tasks.workunit.client.1.vm10.stdout:6/987: creat d3/da/d11/d31/d47/f127 x:0 0 0 2026-03-09T20:48:20.497 
INFO:tasks.workunit.client.0.vm07.stdout:8/983: fsync d1/dc/d16/d26/de2/dc1/f114 0 2026-03-09T20:48:20.498 INFO:tasks.workunit.client.0.vm07.stdout:8/984: truncate d1/dc/d16/d26/de2/f11f 881487 0 2026-03-09T20:48:20.501 INFO:tasks.workunit.client.0.vm07.stdout:4/966: write d2/d55/d5d/d3f/f51 [1146910,100301] 0 2026-03-09T20:48:20.502 INFO:tasks.workunit.client.0.vm07.stdout:4/967: readlink d2/d55/d5d/d93/dbe/l106 0 2026-03-09T20:48:20.504 INFO:tasks.workunit.client.1.vm10.stdout:8/997: dwrite d0/d22/f111 [0,4194304] 0 2026-03-09T20:48:20.505 INFO:tasks.workunit.client.0.vm07.stdout:4/968: dwrite d2/d1f/f25 [0,4194304] 0 2026-03-09T20:48:20.506 INFO:tasks.workunit.client.0.vm07.stdout:4/969: dread d2/fb7 [0,4194304] 0 2026-03-09T20:48:20.506 INFO:tasks.workunit.client.1.vm10.stdout:8/998: chown d0/d22/d25/d2e/d41/c7d 313 1 2026-03-09T20:48:20.521 INFO:tasks.workunit.client.1.vm10.stdout:4/944: rename d1/d2/f2d to d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/de2/f134 0 2026-03-09T20:48:20.527 INFO:tasks.workunit.client.1.vm10.stdout:1/990: fsync d2/da/d25/d3e/dca/da2/fe1 0 2026-03-09T20:48:20.530 INFO:tasks.workunit.client.0.vm07.stdout:4/970: creat d2/d55/d5d/d3f/d4a/d4b/d52/dba/df2/f110 x:0 0 0 2026-03-09T20:48:20.533 INFO:tasks.workunit.client.0.vm07.stdout:4/971: dwrite d2/d1f/f26 [0,4194304] 0 2026-03-09T20:48:20.535 INFO:tasks.workunit.client.0.vm07.stdout:4/972: write d2/d55/d5d/f6f [2986019,27781] 0 2026-03-09T20:48:20.546 INFO:tasks.workunit.client.1.vm10.stdout:0/961: dwrite d2/d4a/d58/d82/d71/d5d/fdd [0,4194304] 0 2026-03-09T20:48:20.552 INFO:tasks.workunit.client.0.vm07.stdout:8/985: rename d1/d5d/d6f/d80/l103 to d1/d8f/l139 0 2026-03-09T20:48:20.560 INFO:tasks.workunit.client.1.vm10.stdout:7/969: unlink db/d21/d23/f1e 0 2026-03-09T20:48:20.570 INFO:tasks.workunit.client.0.vm07.stdout:4/973: mkdir d2/df/d59/d8a/dec/d111 0 2026-03-09T20:48:20.574 INFO:tasks.workunit.client.0.vm07.stdout:8/986: rename d1/d8f/d9d/fa7 to d1/dc/dba/f13a 0 2026-03-09T20:48:20.576 
INFO:tasks.workunit.client.0.vm07.stdout:4/974: creat d2/d55/d5d/d93/f112 x:0 0 0 2026-03-09T20:48:20.576 INFO:tasks.workunit.client.0.vm07.stdout:4/975: chown d2/l50 3079664 1 2026-03-09T20:48:20.576 INFO:tasks.workunit.client.0.vm07.stdout:4/976: stat d2/df/f23 0 2026-03-09T20:48:20.577 INFO:tasks.workunit.client.0.vm07.stdout:8/987: mkdir d1/dc/d16/d26/de2/df5/d13b 0 2026-03-09T20:48:20.579 INFO:tasks.workunit.client.1.vm10.stdout:5/894: rename d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d77/f93 to d2/d1b/d54/d78/f148 0 2026-03-09T20:48:20.583 INFO:tasks.workunit.client.0.vm07.stdout:4/977: mknod d2/df/d59/d8a/d9d/c113 0 2026-03-09T20:48:20.588 INFO:tasks.workunit.client.1.vm10.stdout:2/998: write d5/d18/d27/d89/db6/d41/d77/db3/db5/d32/d80/d47/dcc/fdd [285368,95840] 0 2026-03-09T20:48:20.592 INFO:tasks.workunit.client.1.vm10.stdout:6/988: dwrite d3/d30/d7f/d36/d5c/fa5 [0,4194304] 0 2026-03-09T20:48:20.593 INFO:tasks.workunit.client.1.vm10.stdout:1/991: rmdir d2/da/d25/d3e/d55 39 2026-03-09T20:48:20.595 INFO:tasks.workunit.client.1.vm10.stdout:0/962: truncate d2/d9/db8/d10f/d11/f86 509800 0 2026-03-09T20:48:20.606 INFO:tasks.workunit.client.0.vm07.stdout:4/978: write d2/f28 [4284173,113942] 0 2026-03-09T20:48:20.606 INFO:tasks.workunit.client.0.vm07.stdout:8/988: write d1/d5d/fb7 [762345,59024] 0 2026-03-09T20:48:20.607 INFO:tasks.workunit.client.0.vm07.stdout:8/989: dread - d1/d5d/d6f/d2f/d4d/d63/d91/f131 zero size 2026-03-09T20:48:20.610 INFO:tasks.workunit.client.1.vm10.stdout:7/970: creat db/d28/d2b/dd0/f137 x:0 0 0 2026-03-09T20:48:20.616 INFO:tasks.workunit.client.1.vm10.stdout:8/999: mknod d0/d22/d2f/d118/c141 0 2026-03-09T20:48:20.617 INFO:tasks.workunit.client.1.vm10.stdout:7/971: write db/d28/d2b/dd0/f137 [537400,99241] 0 2026-03-09T20:48:20.617 INFO:tasks.workunit.client.1.vm10.stdout:5/895: mkdir d2/d39/d4b/d7a/dd9/d10c/d132/dc8/da1/d149 0 2026-03-09T20:48:20.617 INFO:tasks.workunit.client.1.vm10.stdout:7/972: dwrite db/f70 [0,4194304] 0 
2026-03-09T20:48:20.619 INFO:tasks.workunit.client.1.vm10.stdout:3/948: creat dc/d14/d20/f13c x:0 0 0 2026-03-09T20:48:20.622 INFO:tasks.workunit.client.0.vm07.stdout:4/979: fsync d2/df/d59/d8a/d9d/f103 0 2026-03-09T20:48:20.623 INFO:tasks.workunit.client.1.vm10.stdout:7/973: dwrite db/d28/d30/dd8/f12f [0,4194304] 0 2026-03-09T20:48:20.625 INFO:tasks.workunit.client.1.vm10.stdout:3/949: chown dc/d14/d26/d29/d40/da8/d69/d13a/ddd/d10d 20 1 2026-03-09T20:48:20.638 INFO:tasks.workunit.client.1.vm10.stdout:3/950: dread dc/d14/d27/f3f [0,4194304] 0 2026-03-09T20:48:20.639 INFO:tasks.workunit.client.1.vm10.stdout:6/989: unlink d3/da/d11/d26/dcf/c119 0 2026-03-09T20:48:20.640 INFO:tasks.workunit.client.0.vm07.stdout:4/980: rename d2/df/d17/l96 to d2/d55/d5d/d93/dbe/l114 0 2026-03-09T20:48:20.641 INFO:tasks.workunit.client.0.vm07.stdout:4/981: chown d2/ff5 2 1 2026-03-09T20:48:20.643 INFO:tasks.workunit.client.1.vm10.stdout:1/992: symlink d2/da/d25/d46/dbe/dfc/l142 0 2026-03-09T20:48:20.652 INFO:tasks.workunit.client.0.vm07.stdout:4/982: rename d2/d55/d5d/d3f/d4a/dbc to d2/df/d59/d8a/dec/d115 0 2026-03-09T20:48:20.652 INFO:tasks.workunit.client.1.vm10.stdout:5/896: fdatasync d2/f40 0 2026-03-09T20:48:20.655 INFO:tasks.workunit.client.1.vm10.stdout:5/897: dwrite d2/f2c [4194304,4194304] 0 2026-03-09T20:48:20.656 INFO:tasks.workunit.client.0.vm07.stdout:4/983: rename d2/f9 to d2/d55/d5d/d3f/d4a/f116 0 2026-03-09T20:48:20.664 INFO:tasks.workunit.client.1.vm10.stdout:2/999: link d5/d18/d27/d89/db6/d41/d77/db3/db5/f3f d5/d18/d27/d38/d61/f144 0 2026-03-09T20:48:20.665 INFO:tasks.workunit.client.0.vm07.stdout:4/984: creat d2/d55/d5d/dcb/f117 x:0 0 0 2026-03-09T20:48:20.668 INFO:tasks.workunit.client.1.vm10.stdout:7/974: creat db/d28/d2b/d36/d3f/f138 x:0 0 0 2026-03-09T20:48:20.669 INFO:tasks.workunit.client.0.vm07.stdout:8/990: write d1/d5d/d6f/d2f/d4d/d55/fcf [2720299,16635] 0 2026-03-09T20:48:20.670 INFO:tasks.workunit.client.1.vm10.stdout:7/975: read db/d28/d2b/d36/f55 
[4158855,98341] 0 2026-03-09T20:48:20.672 INFO:tasks.workunit.client.0.vm07.stdout:4/985: rename d2/d55/lc6 to d2/d55/d5d/d93/dbe/l118 0 2026-03-09T20:48:20.673 INFO:tasks.workunit.client.0.vm07.stdout:4/986: truncate d2/d55/d5d/d3f/f51 1715821 0 2026-03-09T20:48:20.676 INFO:tasks.workunit.client.1.vm10.stdout:6/990: unlink d3/d30/f75 0 2026-03-09T20:48:20.680 INFO:tasks.workunit.client.0.vm07.stdout:8/991: mknod d1/dc/d16/dad/d87/d93/c13c 0 2026-03-09T20:48:20.680 INFO:tasks.workunit.client.1.vm10.stdout:6/991: write d3/d30/d33/f35 [569229,49754] 0 2026-03-09T20:48:20.680 INFO:tasks.workunit.client.1.vm10.stdout:1/993: symlink d2/da/d25/d46/d51/d5d/d6e/d70/db3/dd4/df1/l143 0 2026-03-09T20:48:20.684 INFO:tasks.workunit.client.1.vm10.stdout:0/963: truncate d2/d9/db8/d10f/d48/dac/fbd 1386979 0 2026-03-09T20:48:20.688 INFO:tasks.workunit.client.0.vm07.stdout:8/992: dwrite d1/dc/d16/f4b [4194304,4194304] 0 2026-03-09T20:48:20.696 INFO:tasks.workunit.client.0.vm07.stdout:4/987: mknod d2/d55/d5d/d3f/d4a/c119 0 2026-03-09T20:48:20.701 INFO:tasks.workunit.client.1.vm10.stdout:4/945: mkdir d1/d2/d3/d70/d99/dc9/dff/d69/d10e/d135 0 2026-03-09T20:48:20.705 INFO:tasks.workunit.client.0.vm07.stdout:8/993: symlink d1/d8f/d9d/l13d 0 2026-03-09T20:48:20.705 INFO:tasks.workunit.client.0.vm07.stdout:4/988: fdatasync d2/d55/d5d/d3f/d4a/d4b/d52/f9e 0 2026-03-09T20:48:20.710 INFO:tasks.workunit.client.1.vm10.stdout:7/976: rename db/d46/d89/dbf/d78 to db/d28/d2b/d36/d63/d84/d139 0 2026-03-09T20:48:20.721 INFO:tasks.workunit.client.1.vm10.stdout:5/898: dwrite d2/d39/d4b/d7a/dd9/d10c/d132/f124 [0,4194304] 0 2026-03-09T20:48:20.722 INFO:tasks.workunit.client.1.vm10.stdout:3/951: truncate dc/ff 4218041 0 2026-03-09T20:48:20.725 INFO:tasks.workunit.client.1.vm10.stdout:3/952: chown dc/d14/d26/d29/d40/c12f 167 1 2026-03-09T20:48:20.725 INFO:tasks.workunit.client.1.vm10.stdout:6/992: truncate d3/d30/d7f/d36/fd7 1767634 0 2026-03-09T20:48:20.728 INFO:tasks.workunit.client.1.vm10.stdout:4/946: 
fsync d1/d2/d5c/d64/d6b/d81/f11c 0 2026-03-09T20:48:20.729 INFO:tasks.workunit.client.1.vm10.stdout:4/947: truncate d1/d67/fda 1178285 0 2026-03-09T20:48:20.730 INFO:tasks.workunit.client.1.vm10.stdout:4/948: chown d1/d2/d3/d70/d99/dc9/dff/d69/d10e/d135 877526 1 2026-03-09T20:48:20.737 INFO:tasks.workunit.client.1.vm10.stdout:5/899: creat d2/d39/d4b/d7a/dd9/d10c/d132/dc8/d12b/f14a x:0 0 0 2026-03-09T20:48:20.738 INFO:tasks.workunit.client.1.vm10.stdout:5/900: write d2/d39/d4b/f85 [4955322,86647] 0 2026-03-09T20:48:20.743 INFO:tasks.workunit.client.1.vm10.stdout:4/949: dread d1/d2/d3/d70/d99/f123 [0,4194304] 0 2026-03-09T20:48:20.745 INFO:tasks.workunit.client.1.vm10.stdout:1/994: fsync d2/da/d25/d46/d51/d5d/d6e/d70/f83 0 2026-03-09T20:48:20.746 INFO:tasks.workunit.client.1.vm10.stdout:6/993: dread d3/d30/d33/f35 [0,4194304] 0 2026-03-09T20:48:20.747 INFO:tasks.workunit.client.1.vm10.stdout:0/964: mknod d2/d4a/c150 0 2026-03-09T20:48:20.755 INFO:tasks.workunit.client.1.vm10.stdout:0/965: fdatasync d2/d4a/d58/d82/d71/d5d/fc9 0 2026-03-09T20:48:20.758 INFO:tasks.workunit.client.1.vm10.stdout:6/994: mknod d3/da/d11/d89/db9/dd1/dd2/dc3/c128 0 2026-03-09T20:48:20.761 INFO:tasks.workunit.client.1.vm10.stdout:5/901: getdents d2/d39/dbf/d69/de9/dfa/d122 0 2026-03-09T20:48:20.767 INFO:tasks.workunit.client.1.vm10.stdout:5/902: dread d2/f2c [4194304,4194304] 0 2026-03-09T20:48:20.771 INFO:tasks.workunit.client.0.vm07.stdout:4/989: dwrite d2/d55/d5d/d3f/d4a/d4b/d52/f82 [0,4194304] 0 2026-03-09T20:48:20.774 INFO:tasks.workunit.client.1.vm10.stdout:4/950: dread d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/de2/dfb/f62 [0,4194304] 0 2026-03-09T20:48:20.784 INFO:tasks.workunit.client.1.vm10.stdout:5/903: sync 2026-03-09T20:48:20.792 INFO:tasks.workunit.client.0.vm07.stdout:8/994: dwrite d1/d5d/d6f/d80/fd8 [0,4194304] 0 2026-03-09T20:48:20.794 INFO:tasks.workunit.client.0.vm07.stdout:8/995: write d1/d5d/d6f/d2f/d53/f5f [2606005,22183] 0 2026-03-09T20:48:20.795 
INFO:tasks.workunit.client.0.vm07.stdout:8/996: write d1/d5d/d6f/d2f/d4d/dd4/dd9/dee/f127 [2072406,9466] 0 2026-03-09T20:48:20.807 INFO:tasks.workunit.client.1.vm10.stdout:3/953: dwrite dc/d14/d27/f3c [0,4194304] 0 2026-03-09T20:48:20.820 INFO:tasks.workunit.client.0.vm07.stdout:8/997: rename d1/d5d/d6f/d2f/d4d/dfe/d11a to d1/d3b/d13e 0 2026-03-09T20:48:20.825 INFO:tasks.workunit.client.0.vm07.stdout:8/998: rename d1/d5d/d6f/d80/fff to d1/dc/d16/dad/d87/f13f 0 2026-03-09T20:48:20.826 INFO:tasks.workunit.client.0.vm07.stdout:8/999: write d1/d5d/d6f/d2f/d4d/dd4/dd9/f136 [83622,53950] 0 2026-03-09T20:48:20.842 INFO:tasks.workunit.client.1.vm10.stdout:7/977: link db/d28/cc0 db/d28/d2b/d36/c13a 0 2026-03-09T20:48:20.848 INFO:tasks.workunit.client.0.vm07.stdout:4/990: dwrite d2/d55/d5d/d3f/d4a/f5e [0,4194304] 0 2026-03-09T20:48:20.849 INFO:tasks.workunit.client.0.vm07.stdout:4/991: chown d2/d55/d5d/d3f/d4a/d4b/d52/d5c/l6c 213943856 1 2026-03-09T20:48:20.857 INFO:tasks.workunit.client.1.vm10.stdout:6/995: dread d3/d30/f91 [0,4194304] 0 2026-03-09T20:48:20.858 INFO:tasks.workunit.client.1.vm10.stdout:6/996: chown d3/d30/d7f/d36 3265 1 2026-03-09T20:48:20.859 INFO:tasks.workunit.client.1.vm10.stdout:6/997: chown d3/da/d11/d31/d47/d87/fd9 504804 1 2026-03-09T20:48:20.859 INFO:tasks.workunit.client.0.vm07.stdout:4/992: mkdir d2/d55/d5d/d3f/d4a/d85/dda/d11a 0 2026-03-09T20:48:20.859 INFO:tasks.workunit.client.1.vm10.stdout:6/998: chown d3/d79/f90 28 1 2026-03-09T20:48:20.859 INFO:tasks.workunit.client.0.vm07.stdout:4/993: readlink d2/df/d17/le7 0 2026-03-09T20:48:20.864 INFO:tasks.workunit.client.0.vm07.stdout:4/994: dwrite d2/d55/d5d/d3f/fa7 [0,4194304] 0 2026-03-09T20:48:20.868 INFO:tasks.workunit.client.1.vm10.stdout:4/951: symlink d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/de2/dfb/l136 0 2026-03-09T20:48:20.869 INFO:tasks.workunit.client.1.vm10.stdout:5/904: rmdir d2/d58/df5 39 2026-03-09T20:48:20.870 INFO:tasks.workunit.client.1.vm10.stdout:0/966: mknod d2/d9/db8/d10f/d11/c151 
0 2026-03-09T20:48:20.873 INFO:tasks.workunit.client.1.vm10.stdout:3/954: dread dc/d14/d26/d29/d2a/fa4 [0,4194304] 0 2026-03-09T20:48:20.877 INFO:tasks.workunit.client.1.vm10.stdout:1/995: link d2/da/d25/d3e/d55/f9a d2/da/d25/d3e/d55/dc9/f144 0 2026-03-09T20:48:20.889 INFO:tasks.workunit.client.1.vm10.stdout:5/905: mkdir d2/d39/dbf/d66/d14b 0 2026-03-09T20:48:20.891 INFO:tasks.workunit.client.1.vm10.stdout:7/978: mkdir db/d28/d4c/d131/d13b 0 2026-03-09T20:48:20.893 INFO:tasks.workunit.client.1.vm10.stdout:0/967: fdatasync d2/d4a/d58/d82/d71/dca/d110/f68 0 2026-03-09T20:48:20.896 INFO:tasks.workunit.client.1.vm10.stdout:3/955: creat dc/d14/d26/d29/d40/da8/d69/d13a/ddd/f13d x:0 0 0 2026-03-09T20:48:20.897 INFO:tasks.workunit.client.1.vm10.stdout:1/996: truncate d2/f14 967537 0 2026-03-09T20:48:20.899 INFO:tasks.workunit.client.1.vm10.stdout:6/999: mkdir d3/d30/d7f/d36/d6d/d129 0 2026-03-09T20:48:20.904 INFO:tasks.workunit.client.1.vm10.stdout:5/906: sync 2026-03-09T20:48:20.906 INFO:tasks.workunit.client.1.vm10.stdout:7/979: read db/d21/d26/f106 [142147,22630] 0 2026-03-09T20:48:20.909 INFO:tasks.workunit.client.1.vm10.stdout:5/907: mknod d2/d39/d4b/de0/d105/c14c 0 2026-03-09T20:48:20.911 INFO:tasks.workunit.client.1.vm10.stdout:1/997: dread d2/da/d25/d3e/f58 [0,4194304] 0 2026-03-09T20:48:20.912 INFO:tasks.workunit.client.1.vm10.stdout:1/998: chown d2/da/d25/d46/ddb 55 1 2026-03-09T20:48:20.913 INFO:tasks.workunit.client.1.vm10.stdout:7/980: rmdir db/d28/d2b/d36/d63/d6d 39 2026-03-09T20:48:20.914 INFO:tasks.workunit.client.1.vm10.stdout:7/981: read db/d28/d2b/d36/d40/f44 [991607,32756] 0 2026-03-09T20:48:20.915 INFO:tasks.workunit.client.1.vm10.stdout:7/982: read db/d28/d2b/d36/d3b/dd5/fe0 [3129757,99522] 0 2026-03-09T20:48:20.916 INFO:tasks.workunit.client.1.vm10.stdout:7/983: fsync db/d28/f4f 0 2026-03-09T20:48:20.920 INFO:tasks.workunit.client.1.vm10.stdout:1/999: symlink d2/da/d25/d46/d51/d7e/l145 0 2026-03-09T20:48:20.928 
INFO:tasks.workunit.client.0.vm07.stdout:4/995: dwrite d2/df/d17/f6d [0,4194304] 0 2026-03-09T20:48:20.929 INFO:tasks.workunit.client.0.vm07.stdout:4/996: chown d2/d1f/lfd 45458 1 2026-03-09T20:48:20.933 INFO:tasks.workunit.client.1.vm10.stdout:4/952: dwrite d1/d2/d5c/d64/d6b/d81/dac/d1b/f24 [0,4194304] 0 2026-03-09T20:48:20.933 INFO:tasks.workunit.client.0.vm07.stdout:4/997: dwrite d2/d55/d5d/d3f/d4a/f5e [0,4194304] 0 2026-03-09T20:48:20.939 INFO:tasks.workunit.client.1.vm10.stdout:0/968: dwrite d2/d9/d69/d80/f11d [0,4194304] 0 2026-03-09T20:48:20.945 INFO:tasks.workunit.client.1.vm10.stdout:3/956: dwrite dc/d14/d22/f107 [0,4194304] 0 2026-03-09T20:48:20.954 INFO:tasks.workunit.client.0.vm07.stdout:4/998: rename d2/d55/d5d/d3f/d4a/d4b/cdb to d2/d55/dab/c11b 0 2026-03-09T20:48:20.956 INFO:tasks.workunit.client.0.vm07.stdout:4/999: unlink d2/d55/d5d/d3f/d4a/fad 0 2026-03-09T20:48:20.961 INFO:tasks.workunit.client.0.vm07.stderr:+ rm -rf -- ./tmp.plnCPV9zAr 2026-03-09T20:48:20.977 INFO:tasks.workunit.client.1.vm10.stdout:3/957: fdatasync dc/db4/fe5 0 2026-03-09T20:48:20.977 INFO:tasks.workunit.client.1.vm10.stdout:3/958: write dc/d14/d27/f116 [2278992,2687] 0 2026-03-09T20:48:20.993 INFO:tasks.workunit.client.1.vm10.stdout:0/969: dread d2/d9/db8/db4/fce [0,4194304] 0 2026-03-09T20:48:20.994 INFO:tasks.workunit.client.1.vm10.stdout:0/970: fdatasync d2/d9/db8/d14f/f13c 0 2026-03-09T20:48:20.999 INFO:tasks.workunit.client.1.vm10.stdout:0/971: creat d2/d4a/d58/dd5/dfd/f152 x:0 0 0 2026-03-09T20:48:21.000 INFO:tasks.workunit.client.1.vm10.stdout:7/984: link db/d28/d2b/d36/d63/l10a db/d28/d2b/d36/l13c 0 2026-03-09T20:48:21.003 INFO:tasks.workunit.client.1.vm10.stdout:0/972: readlink d2/d4a/d58/d82/d71/d8e/l10b 0 2026-03-09T20:48:21.004 INFO:tasks.workunit.client.1.vm10.stdout:0/973: truncate d2/d9/d12d/d14d/db1/f13a 576930 0 2026-03-09T20:48:21.008 INFO:tasks.workunit.client.1.vm10.stdout:7/985: rename db/d46/d89 to db/d46/dab/d13d 0 2026-03-09T20:48:21.048 
INFO:tasks.workunit.client.1.vm10.stdout:5/908: dwrite d2/d58/fb9 [0,4194304] 0 2026-03-09T20:48:21.061 INFO:tasks.workunit.client.1.vm10.stdout:5/909: creat d2/d39/dbf/da9/ddd/f14d x:0 0 0 2026-03-09T20:48:21.072 INFO:tasks.workunit.client.1.vm10.stdout:4/953: dwrite d1/d2/f43 [0,4194304] 0 2026-03-09T20:48:21.073 INFO:tasks.workunit.client.1.vm10.stdout:3/959: dwrite dc/d14/d27/f3f [0,4194304] 0 2026-03-09T20:48:21.082 INFO:tasks.workunit.client.1.vm10.stdout:3/960: mkdir dc/d14/d26/d29/d40/da8/d69/d13e 0 2026-03-09T20:48:21.083 INFO:tasks.workunit.client.1.vm10.stdout:3/961: chown dc/d14/d26/d29/d40/da8/dc3/d128 750 1 2026-03-09T20:48:21.090 INFO:tasks.workunit.client.1.vm10.stdout:4/954: unlink d1/d67/f8f 0 2026-03-09T20:48:21.091 INFO:tasks.workunit.client.1.vm10.stdout:7/986: write db/d28/d2b/d36/d63/ff3 [995139,81887] 0 2026-03-09T20:48:21.091 INFO:tasks.workunit.client.1.vm10.stdout:0/974: write d2/d4a/d58/d82/d71/fe7 [461575,89540] 0 2026-03-09T20:48:21.097 INFO:tasks.workunit.client.1.vm10.stdout:5/910: dwrite d2/d39/dbf/d84/fe7 [0,4194304] 0 2026-03-09T20:48:21.100 INFO:tasks.workunit.client.1.vm10.stdout:4/955: dread - d1/d2/d3/d70/d99/dc9/dff/d69/dbd/ff9 zero size 2026-03-09T20:48:21.105 INFO:tasks.workunit.client.1.vm10.stdout:0/975: chown d2/d4a/d79/cbf 301 1 2026-03-09T20:48:21.106 INFO:tasks.workunit.client.1.vm10.stdout:0/976: chown d2/d9/db8/d10f/d11/f1f 44 1 2026-03-09T20:48:21.112 INFO:tasks.workunit.client.1.vm10.stdout:5/911: dread d2/d39/d4b/d7a/dd9/d10c/d132/d46/f94 [0,4194304] 0 2026-03-09T20:48:21.133 INFO:tasks.workunit.client.1.vm10.stdout:5/912: symlink d2/d1b/d54/d7b/l14e 0 2026-03-09T20:48:21.140 INFO:tasks.workunit.client.1.vm10.stdout:4/956: dread d1/d2/f2e [4194304,4194304] 0 2026-03-09T20:48:21.151 INFO:tasks.workunit.client.1.vm10.stdout:5/913: dread d2/f2c [0,4194304] 0 2026-03-09T20:48:21.152 INFO:tasks.workunit.client.1.vm10.stdout:3/962: getdents dc/d14/d26/d29/d40/da8/d69/d13a/ddd/d10d 0 2026-03-09T20:48:21.161 
INFO:tasks.workunit.client.1.vm10.stdout:0/977: dwrite d2/d4a/d58/d82/d71/dca/d110/f68 [4194304,4194304] 0 2026-03-09T20:48:21.162 INFO:tasks.workunit.client.1.vm10.stdout:7/987: dwrite db/d28/d2b/d36/d3b/f42 [0,4194304] 0 2026-03-09T20:48:21.166 INFO:tasks.workunit.client.1.vm10.stdout:4/957: mknod d1/d2/d5c/d64/d6b/d81/da9/c137 0 2026-03-09T20:48:21.171 INFO:tasks.workunit.client.1.vm10.stdout:3/963: creat dc/db4/de3/f13f x:0 0 0 2026-03-09T20:48:21.186 INFO:tasks.workunit.client.1.vm10.stdout:0/978: rmdir d2/d9/d69/de2 39 2026-03-09T20:48:21.201 INFO:tasks.workunit.client.1.vm10.stdout:5/914: rename d2/d1b/c13a to d2/d39/d4b/d7a/dd9/d10c/c14f 0 2026-03-09T20:48:21.201 INFO:tasks.workunit.client.1.vm10.stdout:5/915: dread d2/d39/d4b/f51 [0,4194304] 0 2026-03-09T20:48:21.201 INFO:tasks.workunit.client.1.vm10.stdout:4/958: chown d1/d2/d3/d70/d78/la3 1 1 2026-03-09T20:48:21.201 INFO:tasks.workunit.client.1.vm10.stdout:4/959: chown d1/d2/d5c/d64/d6b/d81/dac/c11d 6 1 2026-03-09T20:48:21.202 INFO:tasks.workunit.client.1.vm10.stdout:3/964: creat dc/d14/d26/d29/d2a/d55/f140 x:0 0 0 2026-03-09T20:48:21.203 INFO:tasks.workunit.client.1.vm10.stdout:3/965: chown dc/d14/d27 28752052 1 2026-03-09T20:48:21.203 INFO:tasks.workunit.client.1.vm10.stdout:3/966: fsync dc/d14/d20/d2e/d56/f15 0 2026-03-09T20:48:21.204 INFO:tasks.workunit.client.1.vm10.stdout:4/960: read d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f108 [421488,18878] 0 2026-03-09T20:48:21.205 INFO:tasks.workunit.client.1.vm10.stdout:0/979: symlink d2/d9/d69/d80/de1/l153 0 2026-03-09T20:48:21.217 INFO:tasks.workunit.client.1.vm10.stdout:0/980: mknod d2/d9/d12d/d14d/db1/c154 0 2026-03-09T20:48:21.222 INFO:tasks.workunit.client.1.vm10.stdout:7/988: write db/d28/d2b/d36/d63/f11a [197759,112955] 0 2026-03-09T20:48:21.229 INFO:tasks.workunit.client.1.vm10.stdout:3/967: unlink dc/d14/d90/c8b 0 2026-03-09T20:48:21.249 INFO:tasks.workunit.client.1.vm10.stdout:5/916: getdents d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d77 0 
2026-03-09T20:48:21.261 INFO:tasks.workunit.client.1.vm10.stdout:7/989: write db/d28/d2b/d36/d3b/fd7 [661663,75556] 0 2026-03-09T20:48:21.264 INFO:tasks.workunit.client.1.vm10.stdout:4/961: dwrite d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f8e [0,4194304] 0 2026-03-09T20:48:21.270 INFO:tasks.workunit.client.1.vm10.stdout:7/990: dread db/d1f/f5e [0,4194304] 0 2026-03-09T20:48:21.272 INFO:tasks.workunit.client.1.vm10.stdout:4/962: creat d1/d2/d3/d70/d99/dc9/dff/d69/d10e/f138 x:0 0 0 2026-03-09T20:48:21.272 INFO:tasks.workunit.client.1.vm10.stdout:5/917: link d2/l6 d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d106/l150 0 2026-03-09T20:48:21.273 INFO:tasks.workunit.client.1.vm10.stdout:4/963: readlink d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/l120 0 2026-03-09T20:48:21.273 INFO:tasks.workunit.client.1.vm10.stdout:7/991: mknod db/d28/d2b/d36/d63/d8b/c13e 0 2026-03-09T20:48:21.278 INFO:tasks.workunit.client.1.vm10.stdout:7/992: creat db/d28/d2b/d36/d3f/f13f x:0 0 0 2026-03-09T20:48:21.279 INFO:tasks.workunit.client.1.vm10.stdout:5/918: mknod d2/d39/dbf/c151 0 2026-03-09T20:48:21.286 INFO:tasks.workunit.client.1.vm10.stdout:4/964: rename d1/d2/d3/d54/fe6 to d1/d2/d5c/d64/d6b/f139 0 2026-03-09T20:48:21.293 INFO:tasks.workunit.client.1.vm10.stdout:4/965: rmdir d1/d2/d5c/d64/d6b/d79/d92 39 2026-03-09T20:48:21.295 INFO:tasks.workunit.client.1.vm10.stdout:4/966: chown d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f44 10 1 2026-03-09T20:48:21.295 INFO:tasks.workunit.client.1.vm10.stdout:4/967: stat d1/d2/d3/d70/d99/dc9/dff/d69/d10e/d135 0 2026-03-09T20:48:21.301 INFO:tasks.workunit.client.1.vm10.stdout:4/968: link d1/d47/cd3 d1/dd8/c13a 0 2026-03-09T20:48:21.310 INFO:tasks.workunit.client.1.vm10.stdout:3/968: dwrite dc/d9e/f108 [0,4194304] 0 2026-03-09T20:48:21.313 INFO:tasks.workunit.client.1.vm10.stdout:3/969: creat dc/d14/d26/d29/d40/da8/d69/d75/f141 x:0 0 0 2026-03-09T20:48:21.322 INFO:tasks.workunit.client.1.vm10.stdout:0/981: write d2/d9/d69/de2/ff4 [186323,45995] 0 2026-03-09T20:48:21.339 
INFO:tasks.workunit.client.1.vm10.stdout:7/993: write db/d28/d2b/d36/f55 [3844917,9867] 0 2026-03-09T20:48:21.339 INFO:tasks.workunit.client.1.vm10.stdout:5/919: write d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d106/f111 [308505,91383] 0 2026-03-09T20:48:21.340 INFO:tasks.workunit.client.1.vm10.stdout:5/920: read d2/d1b/d54/d78/f47 [7287700,13992] 0 2026-03-09T20:48:21.341 INFO:tasks.workunit.client.1.vm10.stdout:5/921: chown d2/d27/d75 39 1 2026-03-09T20:48:21.342 INFO:tasks.workunit.client.1.vm10.stdout:5/922: read d2/d39/dbf/d84/f8b [306811,56093] 0 2026-03-09T20:48:21.360 INFO:tasks.workunit.client.1.vm10.stdout:5/923: creat d2/f152 x:0 0 0 2026-03-09T20:48:21.364 INFO:tasks.workunit.client.1.vm10.stdout:5/924: rename d2/d39/dbf/d66/l125 to d2/d39/d4b/de0/l153 0 2026-03-09T20:48:21.368 INFO:tasks.workunit.client.1.vm10.stdout:5/925: creat d2/d39/d4b/d7a/dd9/d10c/d132/dc8/da1/f154 x:0 0 0 2026-03-09T20:48:21.373 INFO:tasks.workunit.client.1.vm10.stdout:5/926: mknod d2/d39/dbf/d69/c155 0 2026-03-09T20:48:21.377 INFO:tasks.workunit.client.1.vm10.stdout:3/970: dread dc/d14/d26/d29/d2a/f57 [0,4194304] 0 2026-03-09T20:48:21.378 INFO:tasks.workunit.client.1.vm10.stdout:5/927: fdatasync d2/f2c 0 2026-03-09T20:48:21.378 INFO:tasks.workunit.client.1.vm10.stdout:3/971: chown dc/d14/d26/c9f 1424 1 2026-03-09T20:48:21.389 INFO:tasks.workunit.client.1.vm10.stdout:0/982: dwrite d2/d4a/d58/f62 [0,4194304] 0 2026-03-09T20:48:21.389 INFO:tasks.workunit.client.1.vm10.stdout:4/969: dwrite d1/d2/d3/d70/d99/dc9/dff/f52 [0,4194304] 0 2026-03-09T20:48:21.394 INFO:tasks.workunit.client.1.vm10.stdout:0/983: chown d2/d9/db8/d10f/d11/c151 15913101 1 2026-03-09T20:48:21.394 INFO:tasks.workunit.client.1.vm10.stdout:4/970: read d1/d2/d5c/d64/d6b/d81/dac/d1b/d57/f8e [3245714,31491] 0 2026-03-09T20:48:21.409 INFO:tasks.workunit.client.1.vm10.stdout:7/994: dwrite db/d28/d2b/d36/d63/d6d/fa8 [0,4194304] 0 2026-03-09T20:48:21.411 INFO:tasks.workunit.client.1.vm10.stdout:0/984: truncate 
d2/d9/db8/d10f/d48/fd4 531329 0 2026-03-09T20:48:21.412 INFO:tasks.workunit.client.1.vm10.stdout:3/972: getdents dc/d14/d26/d29/d40/da8/d69/d13a/ddd/d10e 0 2026-03-09T20:48:21.415 INFO:tasks.workunit.client.1.vm10.stdout:4/971: mknod d1/d2/d5c/d64/d6b/c13b 0 2026-03-09T20:48:21.415 INFO:tasks.workunit.client.1.vm10.stdout:7/995: dread db/d28/d2b/d36/d40/f44 [0,4194304] 0 2026-03-09T20:48:21.416 INFO:tasks.workunit.client.1.vm10.stdout:4/972: dread - d1/d2/d5c/d64/d112/f131 zero size 2026-03-09T20:48:21.417 INFO:tasks.workunit.client.1.vm10.stdout:7/996: truncate db/d28/d2b/f8f 4719049 0 2026-03-09T20:48:21.417 INFO:tasks.workunit.client.1.vm10.stdout:4/973: dread - d1/d2/d3/d70/d99/dc9/dff/d2b/fcc zero size 2026-03-09T20:48:21.426 INFO:tasks.workunit.client.1.vm10.stdout:7/997: stat db/d28/d2b/d36/l18 0 2026-03-09T20:48:21.436 INFO:tasks.workunit.client.1.vm10.stdout:0/985: mkdir d2/d4a/d58/d82/d71/dca/d110/d155 0 2026-03-09T20:48:21.438 INFO:tasks.workunit.client.1.vm10.stdout:0/986: dwrite d2/d9/db8/d14f/f13c [0,4194304] 0 2026-03-09T20:48:21.441 INFO:tasks.workunit.client.1.vm10.stdout:0/987: dread - d2/d4a/d58/d82/d71/f144 zero size 2026-03-09T20:48:21.443 INFO:tasks.workunit.client.1.vm10.stdout:0/988: readlink d2/d4a/d58/d82/d71/d8e/l10a 0 2026-03-09T20:48:21.443 INFO:tasks.workunit.client.1.vm10.stdout:7/998: getdents db/d21/d95/d12e 0 2026-03-09T20:48:21.454 INFO:tasks.workunit.client.1.vm10.stdout:5/928: write d2/d27/f2a [2116211,88591] 0 2026-03-09T20:48:21.464 INFO:tasks.workunit.client.1.vm10.stdout:5/929: rmdir d2/d39/dbf/da9/ddd 39 2026-03-09T20:48:21.478 INFO:tasks.workunit.client.1.vm10.stdout:3/973: dwrite dc/d14/d26/d29/d2a/f57 [0,4194304] 0 2026-03-09T20:48:21.488 INFO:tasks.workunit.client.1.vm10.stdout:3/974: creat dc/d14/d26/d29/d40/da8/d69/f142 x:0 0 0 2026-03-09T20:48:21.489 INFO:tasks.workunit.client.1.vm10.stdout:3/975: dread dc/d14/d26/d29/d40/d8c/fbc [0,4194304] 0 2026-03-09T20:48:21.491 INFO:tasks.workunit.client.1.vm10.stdout:3/976: 
mknod dc/d14/d26/d29/d40/da8/d69/d75/d91/c143 0 2026-03-09T20:48:21.512 INFO:tasks.workunit.client.1.vm10.stdout:5/930: dread d2/d39/d4b/f60 [0,4194304] 0 2026-03-09T20:48:21.582 INFO:tasks.workunit.client.1.vm10.stdout:4/974: write d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/f118 [3552282,13441] 0 2026-03-09T20:48:21.590 INFO:tasks.workunit.client.1.vm10.stdout:7/999: write db/d21/d26/f106 [1489379,63712] 0 2026-03-09T20:48:21.591 INFO:tasks.workunit.client.1.vm10.stdout:4/975: mkdir d1/d2/d3/d70/d78/d86/d13c 0 2026-03-09T20:48:21.591 INFO:tasks.workunit.client.1.vm10.stdout:4/976: chown d1/d2/d3/d70/d99/dc9/dff/d2b 0 1 2026-03-09T20:48:21.592 INFO:tasks.workunit.client.1.vm10.stdout:4/977: readlink d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d9b/d109/l122 0 2026-03-09T20:48:21.594 INFO:tasks.workunit.client.1.vm10.stdout:0/989: write d2/d9/db8/d10f/d48/dac/fbd [615725,33916] 0 2026-03-09T20:48:21.595 INFO:tasks.workunit.client.1.vm10.stdout:0/990: write d2/d4a/fcf [4056451,9178] 0 2026-03-09T20:48:21.604 INFO:tasks.workunit.client.1.vm10.stdout:4/978: unlink d1/d2/d5c/d64/d6b/d81/da9/l103 0 2026-03-09T20:48:21.604 INFO:tasks.workunit.client.1.vm10.stdout:4/979: chown d1/d47/fb4 49319195 1 2026-03-09T20:48:21.612 INFO:tasks.workunit.client.1.vm10.stdout:4/980: mkdir d1/d2/d3/d70/d99/dc9/dff/d69/d10e/d13d 0 2026-03-09T20:48:21.621 INFO:tasks.workunit.client.1.vm10.stdout:4/981: creat d1/d2/d3/f13e x:0 0 0 2026-03-09T20:48:21.624 INFO:tasks.workunit.client.1.vm10.stdout:0/991: sync 2026-03-09T20:48:21.632 INFO:tasks.workunit.client.1.vm10.stdout:0/992: rmdir d2/d9/d69/d80/de1 39 2026-03-09T20:48:21.633 INFO:tasks.workunit.client.1.vm10.stdout:4/982: getdents d1/d2/d3/d70/d99/dc9/dff/d69/d10e/d13d 0 2026-03-09T20:48:21.635 INFO:tasks.workunit.client.1.vm10.stdout:0/993: sync 2026-03-09T20:48:21.643 INFO:tasks.workunit.client.1.vm10.stdout:0/994: creat d2/d9/d69/d80/f156 x:0 0 0 2026-03-09T20:48:21.643 INFO:tasks.workunit.client.1.vm10.stdout:0/995: dread - d2/d9/d12d/f136 zero size 
2026-03-09T20:48:21.645 INFO:tasks.workunit.client.1.vm10.stdout:0/996: sync 2026-03-09T20:48:21.646 INFO:tasks.workunit.client.1.vm10.stdout:0/997: write d2/d9/d69/d80/f156 [37517,84] 0 2026-03-09T20:48:21.688 INFO:tasks.workunit.client.1.vm10.stdout:0/998: dread d2/d9/db8/d10f/d48/dac/de8/f102 [0,4194304] 0 2026-03-09T20:48:21.690 INFO:tasks.workunit.client.1.vm10.stdout:4/983: rmdir d1/dd8/de8 39 2026-03-09T20:48:21.694 INFO:tasks.workunit.client.1.vm10.stdout:4/984: symlink d1/d2/d5c/d64/d6b/d79/l13f 0 2026-03-09T20:48:21.705 INFO:tasks.workunit.client.1.vm10.stdout:3/977: creat dc/d14/d20/f144 x:0 0 0 2026-03-09T20:48:21.710 INFO:tasks.workunit.client.1.vm10.stdout:3/978: dwrite dc/f88 [0,4194304] 0 2026-03-09T20:48:21.713 INFO:tasks.workunit.client.1.vm10.stdout:5/931: write d2/d39/dbf/dcc/f123 [193447,54861] 0 2026-03-09T20:48:21.713 INFO:tasks.workunit.client.1.vm10.stdout:3/979: chown dc/d14/d26/d29/d40/c130 14984247 1 2026-03-09T20:48:21.715 INFO:tasks.workunit.client.1.vm10.stdout:3/980: creat dc/d14/d26/d29/d40/da8/d69/d13a/f145 x:0 0 0 2026-03-09T20:48:21.719 INFO:tasks.workunit.client.1.vm10.stdout:5/932: fsync d2/d58/df5/f116 0 2026-03-09T20:48:21.720 INFO:tasks.workunit.client.1.vm10.stdout:3/981: symlink dc/d14/d26/d10a/l146 0 2026-03-09T20:48:21.720 INFO:tasks.workunit.client.1.vm10.stdout:3/982: chown dc/d14/d26 131137 1 2026-03-09T20:48:21.725 INFO:tasks.workunit.client.1.vm10.stdout:0/999: dread d2/d9/db8/d10f/fd6 [0,4194304] 0 2026-03-09T20:48:21.751 INFO:tasks.workunit.client.1.vm10.stdout:3/983: rename dc/d14/d26/d29/d93 to dc/d14/d20/d2e/d56/d147 0 2026-03-09T20:48:21.790 INFO:tasks.workunit.client.1.vm10.stdout:4/985: dwrite d1/d2/d3/d70/d99/dc9/dff/d2b/f5a [0,4194304] 0 2026-03-09T20:48:21.800 INFO:tasks.workunit.client.1.vm10.stdout:5/933: write d2/f64 [236035,55513] 0 2026-03-09T20:48:21.803 INFO:tasks.workunit.client.1.vm10.stdout:4/986: mkdir d1/d2/d3/d70/d99/dc9/dff/d2b/d4a/d140 0 2026-03-09T20:48:21.807 
INFO:tasks.workunit.client.1.vm10.stdout:4/987: unlink d1/d2/d3/d70/d99/dc9/dff/c45 0 2026-03-09T20:48:21.808 INFO:tasks.workunit.client.1.vm10.stdout:5/934: rename d2/f23 to d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d6d/f156 0 2026-03-09T20:48:21.809 INFO:tasks.workunit.client.1.vm10.stdout:3/984: write dc/db4/fca [1861294,29253] 0 2026-03-09T20:48:21.817 INFO:tasks.workunit.client.1.vm10.stdout:4/988: truncate d1/d2/d3/d70/d99/dc9/dff/d69/fcb 1325000 0 2026-03-09T20:48:21.818 INFO:tasks.workunit.client.1.vm10.stdout:4/989: chown d1/d2/d5c/d64/d6b/d81 244059672 1 2026-03-09T20:48:21.820 INFO:tasks.workunit.client.1.vm10.stdout:5/935: dread d2/d39/d4b/d7a/ffc [0,4194304] 0 2026-03-09T20:48:21.825 INFO:tasks.workunit.client.1.vm10.stdout:5/936: mkdir d2/d39/dbf/d69/de9/dfa/d157 0 2026-03-09T20:48:21.826 INFO:tasks.workunit.client.1.vm10.stdout:5/937: stat d2/d27/d75/l9f 0 2026-03-09T20:48:21.839 INFO:tasks.workunit.client.1.vm10.stdout:4/990: dread d1/dd8/de8/f11f [4194304,4194304] 0 2026-03-09T20:48:21.841 INFO:tasks.workunit.client.1.vm10.stdout:5/938: mknod d2/d39/dbf/d69/d96/c158 0 2026-03-09T20:48:21.850 INFO:tasks.workunit.client.1.vm10.stdout:4/991: mknod d1/d2/d5c/d64/d6b/d81/dac/d1b/c141 0 2026-03-09T20:48:21.852 INFO:tasks.workunit.client.1.vm10.stdout:3/985: dwrite dc/d14/d26/d29/d40/da2/f110 [0,4194304] 0 2026-03-09T20:48:21.855 INFO:tasks.workunit.client.1.vm10.stdout:3/986: chown dc/d14/d26/d29/d40/d8c/d9c/f105 5 1 2026-03-09T20:48:21.865 INFO:tasks.workunit.client.1.vm10.stdout:3/987: creat dc/d14/d26/d29/d40/da8/d69/d13a/ddd/d10e/f148 x:0 0 0 2026-03-09T20:48:21.865 INFO:tasks.workunit.client.1.vm10.stdout:5/939: write d2/d39/dbf/d69/d96/fc6 [2097500,105720] 0 2026-03-09T20:48:21.872 INFO:tasks.workunit.client.1.vm10.stdout:3/988: rename dc/d14/cb3 to dc/d14/d22/d4a/c149 0 2026-03-09T20:48:21.874 INFO:tasks.workunit.client.1.vm10.stdout:4/992: creat d1/d2/d5c/d64/d6b/d79/f142 x:0 0 0 2026-03-09T20:48:21.876 INFO:tasks.workunit.client.1.vm10.stdout:5/940: 
creat d2/d39/d4b/d7a/dd9/d128/f159 x:0 0 0 2026-03-09T20:48:21.878 INFO:tasks.workunit.client.1.vm10.stdout:3/989: mkdir dc/d14/d26/d29/d40/da2/dd7/d14a 0 2026-03-09T20:48:21.880 INFO:tasks.workunit.client.1.vm10.stdout:5/941: creat d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d6d/f15a x:0 0 0 2026-03-09T20:48:21.890 INFO:tasks.workunit.client.1.vm10.stdout:5/942: getdents d2/d27/d75 0 2026-03-09T20:48:21.891 INFO:tasks.workunit.client.1.vm10.stdout:5/943: write d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d6d/f15a [92510,98321] 0 2026-03-09T20:48:21.895 INFO:tasks.workunit.client.1.vm10.stdout:5/944: symlink d2/d39/d4b/d7a/dd9/l15b 0 2026-03-09T20:48:21.901 INFO:tasks.workunit.client.1.vm10.stdout:5/945: mknod d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/c15c 0 2026-03-09T20:48:21.904 INFO:tasks.workunit.client.1.vm10.stdout:4/993: dwrite d1/d2/f2e [0,4194304] 0 2026-03-09T20:48:21.905 INFO:tasks.workunit.client.1.vm10.stdout:3/990: dwrite dc/d14/d26/d29/d40/da8/dc3/d128/f117 [0,4194304] 0 2026-03-09T20:48:21.914 INFO:tasks.workunit.client.1.vm10.stdout:3/991: dwrite dc/d14/d26/d29/d2a/d76/f97 [4194304,4194304] 0 2026-03-09T20:48:21.932 INFO:tasks.workunit.client.1.vm10.stdout:3/992: dread dc/d14/d26/d29/d40/da8/f11a [0,4194304] 0 2026-03-09T20:48:21.939 INFO:tasks.workunit.client.1.vm10.stdout:5/946: dread d2/d1b/d54/d78/fdb [4194304,4194304] 0 2026-03-09T20:48:21.966 INFO:tasks.workunit.client.1.vm10.stdout:3/993: sync 2026-03-09T20:48:21.969 INFO:tasks.workunit.client.1.vm10.stdout:3/994: creat dc/d14/d20/d21/daf/d113/f14b x:0 0 0 2026-03-09T20:48:21.971 INFO:tasks.workunit.client.1.vm10.stdout:5/947: link d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d77/l45 d2/d39/dbf/d69/d96/l15d 0 2026-03-09T20:48:21.971 INFO:tasks.workunit.client.1.vm10.stdout:5/948: stat d2/d39/dbf/d69/d96/cef 0 2026-03-09T20:48:21.975 INFO:tasks.workunit.client.1.vm10.stdout:5/949: dwrite d2/d39/dbf/da9/f13c [0,4194304] 0 2026-03-09T20:48:21.976 INFO:tasks.workunit.client.1.vm10.stdout:3/995: rmdir dc/d14 39 
2026-03-09T20:48:21.989 INFO:tasks.workunit.client.1.vm10.stdout:4/994: write d1/dd8/f12a [833928,33261] 0 2026-03-09T20:48:21.994 INFO:tasks.workunit.client.1.vm10.stdout:5/950: mkdir d2/d1b/d54/d78/d15e 0 2026-03-09T20:48:22.001 INFO:tasks.workunit.client.1.vm10.stdout:3/996: mknod dc/d14/d26/d29/d2a/d55/c14c 0 2026-03-09T20:48:22.006 INFO:tasks.workunit.client.1.vm10.stdout:4/995: fsync d1/d2/f2a 0 2026-03-09T20:48:22.017 INFO:tasks.workunit.client.1.vm10.stdout:3/997: creat dc/d14/d26/d29/d40/da2/dd7/d14a/f14d x:0 0 0 2026-03-09T20:48:22.019 INFO:tasks.workunit.client.1.vm10.stdout:5/951: rmdir d2/d58/d6c/d126 0 2026-03-09T20:48:22.022 INFO:tasks.workunit.client.1.vm10.stdout:5/952: dwrite d2/d1b/d54/fb8 [0,4194304] 0 2026-03-09T20:48:22.027 INFO:tasks.workunit.client.1.vm10.stdout:4/996: truncate d1/d2/d3/d70/d99/dc9/dff/f91 1836524 0 2026-03-09T20:48:22.033 INFO:tasks.workunit.client.1.vm10.stdout:3/998: rename dc/d14/d26/d10a to dc/d14/d90/dfa/d14e 0 2026-03-09T20:48:22.043 INFO:tasks.workunit.client.1.vm10.stdout:4/997: unlink d1/d2/d5c/d64/d6b/d81/da9/lef 0 2026-03-09T20:48:22.044 INFO:tasks.workunit.client.1.vm10.stdout:4/998: write d1/d2/d5c/d64/d6b/d81/f11c [127001,52545] 0 2026-03-09T20:48:22.045 INFO:tasks.workunit.client.1.vm10.stdout:4/999: stat d1/d2/d3/d70/d99/dc9/dff/fdb 0 2026-03-09T20:48:22.050 INFO:tasks.workunit.client.1.vm10.stdout:3/999: mkdir dc/db4/de3/d14f 0 2026-03-09T20:48:22.057 INFO:tasks.workunit.client.1.vm10.stdout:5/953: link d2/c12 d2/d27/d75/d81/d143/c15f 0 2026-03-09T20:48:22.072 INFO:tasks.workunit.client.1.vm10.stdout:5/954: dwrite d2/d39/dbf/d69/d109/f113 [0,4194304] 0 2026-03-09T20:48:22.076 INFO:tasks.workunit.client.1.vm10.stdout:5/955: rename d2/d39/dbf/d69/de9/l10e to d2/d39/d4b/de0/d105/l160 0 2026-03-09T20:48:22.076 INFO:tasks.workunit.client.1.vm10.stdout:5/956: fdatasync d2/f152 0 2026-03-09T20:48:22.103 INFO:tasks.workunit.client.1.vm10.stdout:5/957: dwrite d2/d39/d4b/d7a/fc0 [0,4194304] 0 2026-03-09T20:48:22.108 
INFO:tasks.workunit.client.1.vm10.stdout:5/958: sync 2026-03-09T20:48:22.131 INFO:tasks.workunit.client.1.vm10.stdout:5/959: write d2/d39/dbf/d66/ff3 [606327,23046] 0 2026-03-09T20:48:22.135 INFO:tasks.workunit.client.1.vm10.stdout:5/960: dwrite d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d6d/f136 [0,4194304] 0 2026-03-09T20:48:22.137 INFO:tasks.workunit.client.1.vm10.stdout:5/961: read d2/d39/f103 [1481207,3044] 0 2026-03-09T20:48:22.149 INFO:tasks.workunit.client.1.vm10.stdout:5/962: creat d2/d39/d89/f161 x:0 0 0 2026-03-09T20:48:22.165 INFO:tasks.workunit.client.1.vm10.stdout:5/963: write d2/d39/d4b/f97 [615571,46023] 0 2026-03-09T20:48:22.168 INFO:tasks.workunit.client.1.vm10.stdout:5/964: symlink d2/d80/l162 0 2026-03-09T20:48:22.172 INFO:tasks.workunit.client.1.vm10.stdout:5/965: mknod d2/d1b/d54/c163 0 2026-03-09T20:48:22.175 INFO:tasks.workunit.client.1.vm10.stdout:5/966: symlink d2/d39/dbf/d69/de9/dfa/l164 0 2026-03-09T20:48:22.175 INFO:tasks.workunit.client.1.vm10.stdout:5/967: chown d2/d39/d4b/c82 24 1 2026-03-09T20:48:22.183 INFO:tasks.workunit.client.1.vm10.stdout:5/968: mkdir d2/d39/dbf/da9/ddd/d165 0 2026-03-09T20:48:22.187 INFO:tasks.workunit.client.1.vm10.stdout:5/969: creat d2/d39/f166 x:0 0 0 2026-03-09T20:48:22.191 INFO:tasks.workunit.client.1.vm10.stdout:5/970: fdatasync f1 0 2026-03-09T20:48:22.193 INFO:tasks.workunit.client.1.vm10.stdout:5/971: dread d2/d39/dbf/d66/f114 [0,4194304] 0 2026-03-09T20:48:22.199 INFO:tasks.workunit.client.1.vm10.stdout:5/972: truncate d2/d39/dbf/d66/fc7 558854 0 2026-03-09T20:48:22.202 INFO:tasks.workunit.client.1.vm10.stdout:5/973: fsync d2/d58/f65 0 2026-03-09T20:48:22.202 INFO:tasks.workunit.client.1.vm10.stdout:5/974: chown d2/d1b/d54/d78 3327 1 2026-03-09T20:48:22.203 INFO:tasks.workunit.client.1.vm10.stdout:5/975: chown d2/d39/dbf/d63/fcd 18374891 1 2026-03-09T20:48:22.211 INFO:tasks.workunit.client.1.vm10.stdout:5/976: dread d2/d1b/d54/d78/de6/f135 [0,4194304] 0 2026-03-09T20:48:22.226 
INFO:tasks.workunit.client.1.vm10.stdout:5/977: dwrite d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/ff2 [0,4194304] 0 2026-03-09T20:48:22.238 INFO:tasks.workunit.client.1.vm10.stdout:5/978: truncate d2/d39/d4b/d7a/dd9/f11f 73788 0 2026-03-09T20:48:22.242 INFO:tasks.workunit.client.1.vm10.stdout:5/979: mknod d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d77/c167 0 2026-03-09T20:48:22.245 INFO:tasks.workunit.client.1.vm10.stdout:5/980: dread d2/f3e [4194304,4194304] 0 2026-03-09T20:48:22.253 INFO:tasks.workunit.client.1.vm10.stdout:5/981: write d2/d80/ffd [252401,19178] 0 2026-03-09T20:48:22.254 INFO:tasks.workunit.client.1.vm10.stdout:5/982: write d2/d39/dbf/da9/f13c [3844639,53102] 0 2026-03-09T20:48:22.264 INFO:tasks.workunit.client.1.vm10.stdout:5/983: mknod d2/d39/d4b/de0/d105/d144/c168 0 2026-03-09T20:48:22.264 INFO:tasks.workunit.client.1.vm10.stdout:5/984: chown d2/d1b/d54/d7b 3696394 1 2026-03-09T20:48:22.274 INFO:tasks.workunit.client.1.vm10.stdout:5/985: dwrite d2/d39/dbf/d84/f8b [0,4194304] 0 2026-03-09T20:48:22.276 INFO:tasks.workunit.client.1.vm10.stdout:5/986: chown d2/d39/d4b/d7a/dd9/d10c/d132/l4a 0 1 2026-03-09T20:48:22.303 INFO:tasks.workunit.client.1.vm10.stdout:5/987: write d2/d39/d4b/d7a/dd9/d10c/d132/d46/f94 [5226964,28076] 0 2026-03-09T20:48:22.312 INFO:tasks.workunit.client.1.vm10.stdout:5/988: dwrite d2/d39/d4b/d7a/dd9/fea [0,4194304] 0 2026-03-09T20:48:22.314 INFO:tasks.workunit.client.1.vm10.stdout:5/989: write d2/d39/d4b/d7a/dd9/fea [2465430,54115] 0 2026-03-09T20:48:22.315 INFO:tasks.workunit.client.1.vm10.stdout:5/990: chown d2/d39/d4b/d7a/dd9/d10c/d132/dc8/d12b 27 1 2026-03-09T20:48:22.341 INFO:tasks.workunit.client.1.vm10.stdout:5/991: dwrite d2/d39/dbf/f6a [0,4194304] 0 2026-03-09T20:48:22.345 INFO:tasks.workunit.client.1.vm10.stdout:5/992: dwrite d2/f152 [0,4194304] 0 2026-03-09T20:48:22.362 INFO:tasks.workunit.client.1.vm10.stdout:5/993: unlink d2/d80/l162 0 2026-03-09T20:48:22.363 INFO:tasks.workunit.client.1.vm10.stdout:5/994: chown 
d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d6d/f136 6810 1 2026-03-09T20:48:22.364 INFO:tasks.workunit.client.1.vm10.stdout:5/995: chown d2/d39/d4b/fda 153 1 2026-03-09T20:48:22.365 INFO:tasks.workunit.client.1.vm10.stdout:5/996: dread - d2/d39/d4b/d7a/dd9/d10c/d132/d46/d5d/d77/f10b zero size 2026-03-09T20:48:22.368 INFO:tasks.workunit.client.1.vm10.stdout:5/997: creat d2/d39/d4b/de0/d105/f169 x:0 0 0 2026-03-09T20:48:22.372 INFO:tasks.workunit.client.1.vm10.stdout:5/998: mkdir d2/d39/d4b/de0/d105/d144/d16a 0 2026-03-09T20:48:22.382 INFO:tasks.workunit.client.1.vm10.stdout:5/999: creat d2/d39/dbf/da9/d12d/f16b x:0 0 0 2026-03-09T20:48:22.382 INFO:tasks.workunit.client.1.vm10.stderr:+ rm -rf -- ./tmp.ts7NzC5V0t 2026-03-09T20:48:24.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:23 vm10.local ceph-mon[57011]: Standby manager daemon vm10.byqahe restarted 2026-03-09T20:48:24.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:23 vm10.local ceph-mon[57011]: Standby manager daemon vm10.byqahe started 2026-03-09T20:48:24.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:23 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/2936645606' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/crt"}]: dispatch 2026-03-09T20:48:24.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:23 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/2936645606' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:48:24.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:23 vm10.local ceph-mon[57011]: from='mgr.? 192.168.123.110:0/2936645606' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/key"}]: dispatch 2026-03-09T20:48:24.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:23 vm10.local ceph-mon[57011]: from='mgr.? 
192.168.123.110:0/2936645606' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:48:24.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:23 vm07.local ceph-mon[112105]: Standby manager daemon vm10.byqahe restarted 2026-03-09T20:48:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:23 vm07.local ceph-mon[112105]: Standby manager daemon vm10.byqahe started 2026-03-09T20:48:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:23 vm07.local ceph-mon[112105]: from='mgr.? 192.168.123.110:0/2936645606' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/crt"}]: dispatch 2026-03-09T20:48:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:23 vm07.local ceph-mon[112105]: from='mgr.? 192.168.123.110:0/2936645606' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:48:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:23 vm07.local ceph-mon[112105]: from='mgr.? 192.168.123.110:0/2936645606' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm10.byqahe/key"}]: dispatch 2026-03-09T20:48:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:23 vm07.local ceph-mon[112105]: from='mgr.? 
192.168.123.110:0/2936645606' entity='mgr.vm10.byqahe' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:48:24.926 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: mgrmap e33: vm07.xjrvch(active, since 30s), standbys: vm10.byqahe 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: Active manager daemon vm07.xjrvch restarted 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: Activating manager daemon vm07.xjrvch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: osdmap e47: 6 total, 6 up, 6 in 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: mgrmap e34: vm07.xjrvch(active, starting, since 0.00877423s), standbys: vm10.byqahe 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 
20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm07.xjrvch", "id": "vm07.xjrvch"}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm10.byqahe", "id": "vm10.byqahe"}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local 
ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:48:24.927 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: mgrmap e33: vm07.xjrvch(active, since 30s), standbys: vm10.byqahe 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: Active manager daemon vm07.xjrvch restarted 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: Activating manager daemon vm07.xjrvch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: osdmap e47: 6 total, 6 up, 6 in 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: mgrmap e34: vm07.xjrvch(active, starting, since 0.00877423s), standbys: vm10.byqahe 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm07.xjrvch", "id": "vm07.xjrvch"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr metadata", "who": "vm10.byqahe", "id": "vm10.byqahe"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:48:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: Manager daemon vm07.xjrvch is now available 2026-03-09T20:48:25.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:25.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:25.135 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:48:25.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/trash_purge_schedule"}]: dispatch 2026-03-09T20:48:25.289 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:48:25.299 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:48:25.299 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:48:25.300 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: Manager daemon vm07.xjrvch is now available 2026-03-09T20:48:25.300 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:25.300 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:25.300 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:48:25.300 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:24 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm07.xjrvch/trash_purge_schedule"}]: dispatch 2026-03-09T20:48:26.831 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:26 vm07.local ceph-mon[112105]: mgrmap e35: vm07.xjrvch(active, since 1.19499s), standbys: vm10.byqahe 2026-03-09T20:48:26.831 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:26 vm07.local ceph-mon[112105]: pgmap v3: 65 pgs: 65 active+clean; 4.0 GiB data, 14 GiB used, 106 GiB / 120 GiB avail 2026-03-09T20:48:27.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:26 vm10.local ceph-mon[57011]: mgrmap e35: vm07.xjrvch(active, since 1.19499s), standbys: vm10.byqahe 2026-03-09T20:48:27.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:26 vm10.local ceph-mon[57011]: pgmap v3: 65 pgs: 65 active+clean; 4.0 GiB data, 14 GiB used, 106 GiB / 120 GiB avail 2026-03-09T20:48:27.715 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:27 vm07.local ceph-mon[112105]: mgrmap e36: vm07.xjrvch(active, since 2s), standbys: vm10.byqahe 2026-03-09T20:48:27.715 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:27 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:27.715 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:27 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:27.715 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:27 vm07.local ceph-mon[112105]: [09/Mar/2026:20:48:26] ENGINE Bus STARTING 2026-03-09T20:48:27.715 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:27 vm07.local 
ceph-mon[112105]: [09/Mar/2026:20:48:27] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T20:48:27.715 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:27 vm07.local ceph-mon[112105]: [09/Mar/2026:20:48:27] ENGINE Client ('192.168.123.107', 49004) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:48:27.715 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:27 vm07.local ceph-mon[112105]: [09/Mar/2026:20:48:27] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T20:48:27.716 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:27 vm07.local ceph-mon[112105]: [09/Mar/2026:20:48:27] ENGINE Bus STARTED 2026-03-09T20:48:27.716 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:27 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:27.716 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:27 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:27.887 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:27 vm10.local ceph-mon[57011]: mgrmap e36: vm07.xjrvch(active, since 2s), standbys: vm10.byqahe 2026-03-09T20:48:27.887 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:27 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:27.887 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:27 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:27.887 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:27 vm10.local ceph-mon[57011]: [09/Mar/2026:20:48:26] ENGINE Bus STARTING 2026-03-09T20:48:27.887 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:27 vm10.local ceph-mon[57011]: [09/Mar/2026:20:48:27] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T20:48:27.887 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:27 vm10.local ceph-mon[57011]: [09/Mar/2026:20:48:27] ENGINE Client ('192.168.123.107', 49004) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:48:27.887 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:27 vm10.local ceph-mon[57011]: [09/Mar/2026:20:48:27] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T20:48:27.887 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:27 vm10.local ceph-mon[57011]: [09/Mar/2026:20:48:27] ENGINE Bus STARTED 2026-03-09T20:48:27.887 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:27 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:27.887 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:27 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.076 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.076 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.076 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.076 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.076 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:48:29.076 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.076 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.076 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:48:29.076 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:29.076 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:29.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:29 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:29 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:29 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:29 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:29 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 
cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:48:29.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:29 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:29 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:29.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:29 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:48:29.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:29 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:29.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:29 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:30.784 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: pgmap v5: 65 pgs: 65 active+clean; 4.0 GiB data, 14 GiB used, 106 GiB / 120 GiB avail 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: Updating vm07:/etc/ceph/ceph.conf 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: Updating vm10:/etc/ceph/ceph.conf 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: 
Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: mgrmap e37: vm07.xjrvch(active, since 4s), standbys: vm10.byqahe 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:30.785 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:30 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: pgmap v5: 65 pgs: 65 active+clean; 4.0 GiB data, 
14 GiB used, 106 GiB / 120 GiB avail 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: Updating vm07:/etc/ceph/ceph.conf 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: Updating vm10:/etc/ceph/ceph.conf 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.conf 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: mgrmap e37: vm07.xjrvch(active, since 4s), standbys: vm10.byqahe 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config 
dump", "format": "json"}]: dispatch 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:30.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:30 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T20:48:31.763 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:48:31.763 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:48:31.763 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:48:31.763 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:48:31.763 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: pgmap v6: 65 pgs: 65 active+clean; 4.0 GiB data, 14 GiB used, 106 GiB / 120 GiB avail 2026-03-09T20:48:31.763 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: Upgrade: Updating mon.vm10 2026-03-09T20:48:31.764 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:31.764 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:48:31.764 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:48:31.764 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:31.764 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-mon[57011]: Deploying daemon mon.vm10 on vm10 2026-03-09T20:48:31.764 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local systemd[1]: Stopping Ceph mon.vm10 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T20:48:31.764 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10[57007]: 2026-03-09T20:48:31.671+0000 7fed3cb94640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm10 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:48:31.764 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10[57007]: 2026-03-09T20:48:31.671+0000 7fed3cb94640 -1 mon.vm10@1(peon) e2 *** Got Signal Terminated *** 2026-03-09T20:48:31.764 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local podman[103397]: 2026-03-09 20:48:31.743397147 +0000 UTC m=+0.101974804 container died 4e5d7d18c6604ffec700a0410da3f974af0ffab70832ba33703b07ecc2bd3f3d (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3) 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: Updating vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: Updating vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/config/ceph.client.admin.keyring 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: pgmap v6: 65 pgs: 65 active+clean; 4.0 GiB data, 14 GiB used, 106 GiB / 120 GiB avail 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: Upgrade: Updating mon.vm10 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:31.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:31 vm07.local ceph-mon[112105]: Deploying daemon mon.vm10 on vm10 2026-03-09T20:48:32.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local podman[103397]: 2026-03-09 20:48:31.777374164 +0000 UTC m=+0.135951821 container remove 4e5d7d18c6604ffec700a0410da3f974af0ffab70832ba33703b07ecc2bd3f3d (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_REF=reef, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0) 2026-03-09T20:48:32.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local bash[103397]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10 2026-03-09T20:48:32.038 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm10.service: Deactivated successfully. 2026-03-09T20:48:32.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local systemd[1]: Stopped Ceph mon.vm10 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:48:32.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:31 vm10.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm10.service: Consumed 3.403s CPU time. 2026-03-09T20:48:32.415 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local systemd[1]: Starting Ceph mon.vm10 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T20:48:32.415 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local podman[103511]: 2026-03-09 20:48:32.388305641 +0000 UTC m=+0.089784748 container create 4428cf7f0607bdeb22f587a4124e71f15446c241c91010a8a32efe73a21b0707 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default) 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local podman[103511]: 2026-03-09 20:48:32.316272532 +0000 UTC m=+0.017751648 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local podman[103511]: 2026-03-09 20:48:32.436381187 +0000 UTC m=+0.137860304 container init 4428cf7f0607bdeb22f587a4124e71f15446c241c91010a8a32efe73a21b0707 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local podman[103511]: 2026-03-09 20:48:32.441113114 +0000 UTC m=+0.142592221 container start 4428cf7f0607bdeb22f587a4124e71f15446c241c91010a8a32efe73a21b0707 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, io.buildah.version=1.41.3) 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local bash[103511]: 4428cf7f0607bdeb22f587a4124e71f15446c241c91010a8a32efe73a21b0707 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local systemd[1]: Started Ceph mon.vm10 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: pidfile_write: ignore empty --pid-file 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: load: jerasure load: lrc 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: RocksDB version: 7.9.2 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Git sha 0 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: DB SUMMARY 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: DB Session ID: 0Q91D84O4IUURBFAQHAO 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 
20:48:32 vm10.local ceph-mon[103526]: rocksdb: CURRENT file: CURRENT 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: MANIFEST file: MANIFEST-000010 size: 661 Bytes 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm10/store.db dir, Total Num: 1, files: 000018.sst 2026-03-09T20:48:32.706 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm10/store.db: 000016.log size: 6615857 ; 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.error_if_exists: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.create_if_missing: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.paranoid_checks: 1 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.env: 0x559cb714bdc0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: 
Options.fs: PosixFileSystem 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.info_log: 0x559cb931f900 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.statistics: (nil) 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.use_fsync: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_log_file_size: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.allow_fallocate: 1 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.use_direct_reads: 0 
2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.db_log_dir: 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.wal_dir: 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: 
Options.write_buffer_manager: 0x559cb9323900 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.unordered_write: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T20:48:32.707 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.row_cache: None 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.wal_filter: None 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T20:48:32.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.two_write_queues: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.wal_compression: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.atomic_flush: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.log_readahead_size: 0 2026-03-09T20:48:32.708 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_background_jobs: 2 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_background_compactions: -1 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_subcompactions: 1 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: 
Options.delayed_write_rate : 16777216 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_open_files: -1 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_background_flushes: -1 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Compression algorithms supported: 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local 
ceph-mon[103526]: rocksdb: kZSTD supported: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: kXpressCompression supported: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: kBZip2Compression supported: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: kLZ4Compression supported: 1 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: kZlibCompression supported: 1 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: kSnappyCompression supported: 1 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm10/store.db/MANIFEST-000010 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.comparator: 
leveldb.BytewiseComparator 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.merge_operator: 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_filter: None 2026-03-09T20:48:32.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559cb931e500) 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: cache_index_and_filter_blocks: 1 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: pin_top_level_index_and_filter: 1 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: index_type: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: data_block_index_type: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: index_shortening: 1 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: data_block_hash_table_util_ratio: 0.750000 
2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: checksum: 4 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: no_block_cache: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: block_cache: 0x559cb9343350 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: block_cache_name: BinnedLRUCache 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: block_cache_options: 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: capacity : 536870912 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: num_shard_bits : 4 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: strict_capacity_limit : 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: high_pri_pool_ratio: 0.000 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: block_cache_compressed: (nil) 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: persistent_cache: (nil) 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: block_size: 4096 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: block_size_deviation: 10 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: block_restart_interval: 16 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: index_block_restart_interval: 1 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: metadata_block_size: 4096 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: partition_filters: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: use_delta_encoding: 1 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: filter_policy: bloomfilter 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: whole_key_filtering: 1 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: verify_compression: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: read_amp_bytes_per_bit: 
0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: format_version: 5 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: enable_index_compression: 1 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: block_align: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: max_auto_readahead_size: 262144 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: prepopulate_block_cache: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: initial_auto_readahead_size: 8192 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compression: NoCompression 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.num_levels: 7 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 
09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T20:48:32.709 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compression_opts.window_bits: -14 
2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T20:48:32.710 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_sequential_skip_in_iterations: 8 
2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: 
Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T20:48:32.710 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.inplace_update_support: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: 
Options.bloom_locality: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.max_successive_merges: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.ttl: 2592000 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.enable_blob_files: false 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.min_blob_size: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: 
Options.blob_compression_type: NoCompression 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm10/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 20, last_sequence is 7365, log_number is 16,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 16 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 16 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d3783c27-1c9a-41b9-b036-2b292d08f0ed 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773089312491210, "job": 1, "event": "recovery_started", "wal_files": [16]} 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #16 mode 2 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773089312513366, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 21, "file_size": 3953192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7370, "largest_seqno": 8524, "table_properties": {"data_size": 3946776, "index_size": 4026, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 12362, "raw_average_key_size": 23, "raw_value_size": 3934942, "raw_average_value_size": 7538, "num_data_blocks": 191, "num_entries": 522, "num_filter_entries": 522, "num_deletions": 2, "num_merge_operands": 0, 
"num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773089312, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3783c27-1c9a-41b9-b036-2b292d08f0ed", "db_session_id": "0Q91D84O4IUURBFAQHAO", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773089312513530, "job": 1, "event": "recovery_finished"} 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/version_set.cc:5047] Creating manifest 23 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm10/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559cb9344e00 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: DB pointer 0x559cb9450000 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: ** DB Stats ** 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Uptime(secs): 0.2 total, 0.2 interval 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Interval stall: 
00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: ** Compaction Stats [default] ** 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: L0 1/0 3.77 MB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 291.3 0.01 0.00 1 0.013 0 0 0.0 0.0 2026-03-09T20:48:32.711 INFO:journalctl@ceph.mon.vm10.vm10.stdout: L6 1/0 6.41 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Sum 2/0 10.18 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 291.3 0.01 0.00 1 0.013 0 0 0.0 0.0 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 291.3 0.01 0.00 1 0.013 0 0 0.0 0.0 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: ** Compaction Stats [default] ** 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 291.3 0.01 0.00 1 0.013 0 0 0.0 0.0 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Uptime(secs): 0.2 total, 0.2 interval 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Flush(GB): cumulative 0.004, interval 0.004 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Cumulative compaction: 0.00 GB write, 17.23 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Interval compaction: 0.00 GB write, 17.23 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T20:48:32.712 
INFO:journalctl@ceph.mon.vm10.vm10.stdout: Block cache BinnedLRUCache@0x559cb9343350#2 capacity: 512.00 MB usage: 42.83 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.2e-05 secs_since: 0 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Block cache entry stats(count,size,portion): DataBlock(3,13.23 KB,0.00252426%) FilterBlock(2,9.44 KB,0.00180006%) IndexBlock(2,20.16 KB,0.0038445%) Misc(1,0.00 KB,0%) 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T20:48:32.712 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: starting mon.vm10 rank 1 at public addrs [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] at bind addrs [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon_data /var/lib/ceph/mon/ceph-vm10 fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:48:33.024 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: mon.vm10@-1(???) 
e2 preinit fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: mon.vm10@-1(???).mds e11 new map 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: mon.vm10@-1(???).mds e11 print_map 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: e11 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: btime 1970-01-01T00:00:00:000000+0000 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: legacy client fscid: 1 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: Filesystem 'cephfs' (1) 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: fs_name cephfs 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: epoch 11 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: tableserver 0 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: root 0 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: session_timeout 60 2026-03-09T20:48:33.025 
INFO:journalctl@ceph.mon.vm10.vm10.stdout: session_autoclose 300 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: max_file_size 1099511627776 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: max_xattr_size 65536 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: required_client_features {} 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: last_failure 0 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: last_failure_osd_epoch 0 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: max_mds 2 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: in 0,1 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: up {0=14476,1=24291} 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: failed 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: damaged 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: stopped 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: data_pools [3] 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: metadata_pool 2 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: inline_data disabled 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: balancer 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: bal_rank_mask -1 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: standby_count_wanted 1 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: qdb_cluster leader: 0 members: 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: 
[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: [mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:33.025 INFO:journalctl@ceph.mon.vm10.vm10.stdout: [mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:33.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout: [mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:33.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout: 2026-03-09T20:48:33.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout: 2026-03-09T20:48:33.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: mon.vm10@-1(???).osd e47 crush map has features 3314933000852226048, adjusting msgr requires 2026-03-09T20:48:33.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: mon.vm10@-1(???).osd e47 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T20:48:33.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: mon.vm10@-1(???).osd e47 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T20:48:33.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: mon.vm10@-1(???).osd e47 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T20:48:33.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:32 vm10.local ceph-mon[103526]: 
mon.vm10@-1(???).paxosservice(auth 1..22) refresh upgraded, format 0 -> 3 2026-03-09T20:48:33.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.790+0000 7fed7af2d640 1 -- 192.168.123.107:0/82137385 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed74071d40 msgr2=0x7fed74072140 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:33.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.790+0000 7fed7af2d640 1 --2- 192.168.123.107:0/82137385 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed74071d40 0x7fed74072140 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fed68009bf0 tx=0x7fed6802f640 comp rx=0 tx=0).stop 2026-03-09T20:48:33.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.791+0000 7fed7af2d640 1 -- 192.168.123.107:0/82137385 shutdown_connections 2026-03-09T20:48:33.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.791+0000 7fed7af2d640 1 --2- 192.168.123.107:0/82137385 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed74072710 0x7fed7410c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:33.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.791+0000 7fed7af2d640 1 --2- 192.168.123.107:0/82137385 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed74071d40 0x7fed74072140 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:33.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.791+0000 7fed7af2d640 1 -- 192.168.123.107:0/82137385 >> 192.168.123.107:0/82137385 conn(0x7fed7406d660 msgr2=0x7fed7406faa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:33.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.792+0000 7fed7af2d640 1 -- 192.168.123.107:0/82137385 shutdown_connections 2026-03-09T20:48:33.793 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.792+0000 7fed7af2d640 1 -- 192.168.123.107:0/82137385 wait complete. 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.792+0000 7fed7af2d640 1 Processor -- start 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.793+0000 7fed7af2d640 1 -- start start 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.793+0000 7fed7af2d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed74071d40 0x7fed741a72a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.794+0000 7fed7af2d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed74072710 0x7fed741a77e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.794+0000 7fed7af2d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fed741a7db0 con 0x7fed74071d40 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.794+0000 7fed7af2d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fed741a7f20 con 0x7fed74072710 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.794+0000 7fed78ca2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed74071d40 0x7fed741a72a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.794+0000 7fed78ca2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed74071d40 0x7fed741a72a0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:59832/0 (socket says 192.168.123.107:59832) 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.794+0000 7fed78ca2640 1 -- 192.168.123.107:0/1601962339 learned_addr learned my addr 192.168.123.107:0/1601962339 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.794+0000 7fed78ca2640 1 -- 192.168.123.107:0/1601962339 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed74072710 msgr2=0x7fed741a77e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.794+0000 7fed78ca2640 1 --2- 192.168.123.107:0/1601962339 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed74072710 0x7fed741a77e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:33.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.794+0000 7fed78ca2640 1 -- 192.168.123.107:0/1601962339 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fed680098d0 con 0x7fed74071d40 2026-03-09T20:48:33.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.795+0000 7fed78ca2640 1 --2- 192.168.123.107:0/1601962339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed74071d40 0x7fed741a72a0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fed68009bc0 tx=0x7fed68009510 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:33.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.795+0000 7fed71ffb640 1 -- 192.168.123.107:0/1601962339 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fed68007980 con 
0x7fed74071d40 2026-03-09T20:48:33.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.795+0000 7fed71ffb640 1 -- 192.168.123.107:0/1601962339 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fed68041440 con 0x7fed74071d40 2026-03-09T20:48:33.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.795+0000 7fed71ffb640 1 -- 192.168.123.107:0/1601962339 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fed6803f5a0 con 0x7fed74071d40 2026-03-09T20:48:33.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.795+0000 7fed7af2d640 1 -- 192.168.123.107:0/1601962339 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fed741ac960 con 0x7fed74071d40 2026-03-09T20:48:33.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.796+0000 7fed7af2d640 1 -- 192.168.123.107:0/1601962339 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fed741aceb0 con 0x7fed74071d40 2026-03-09T20:48:33.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.796+0000 7fed7af2d640 1 -- 192.168.123.107:0/1601962339 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fed74072140 con 0x7fed74071d40 2026-03-09T20:48:33.802 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.800+0000 7fed71ffb640 1 -- 192.168.123.107:0/1601962339 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fed680415b0 con 0x7fed74071d40 2026-03-09T20:48:33.802 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.800+0000 7fed71ffb640 1 --2- 192.168.123.107:0/1601962339 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed400779b0 0x7fed40079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 
2026-03-09T20:48:33.802 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.801+0000 7fed71ffb640 1 -- 192.168.123.107:0/1601962339 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(47..47 src has 1..47) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fed680bfbf0 con 0x7fed74071d40 2026-03-09T20:48:33.802 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.801+0000 7fed73fff640 1 --2- 192.168.123.107:0/1601962339 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed400779b0 0x7fed40079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:33.803 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.802+0000 7fed73fff640 1 --2- 192.168.123.107:0/1601962339 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed400779b0 0x7fed40079e70 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fed600059c0 tx=0x7fed60005950 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:33.806 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.805+0000 7fed71ffb640 1 -- 192.168.123.107:0/1601962339 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fed68088100 con 0x7fed74071d40 2026-03-09T20:48:33.962 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.960+0000 7fed7af2d640 1 -- 192.168.123.107:0/1601962339 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fed7410bf20 con 0x7fed400779b0 2026-03-09T20:48:33.965 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.963+0000 7fed71ffb640 1 -- 192.168.123.107:0/1601962339 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 
8+0+368 (secure 0 0 0) 0x7fed7410bf20 con 0x7fed400779b0 2026-03-09T20:48:33.972 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 -- 192.168.123.107:0/1601962339 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed400779b0 msgr2=0x7fed40079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:33.972 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 --2- 192.168.123.107:0/1601962339 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed400779b0 0x7fed40079e70 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fed600059c0 tx=0x7fed60005950 comp rx=0 tx=0).stop 2026-03-09T20:48:33.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 -- 192.168.123.107:0/1601962339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed74071d40 msgr2=0x7fed741a72a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:33.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 --2- 192.168.123.107:0/1601962339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed74071d40 0x7fed741a72a0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fed68009bc0 tx=0x7fed68009510 comp rx=0 tx=0).stop 2026-03-09T20:48:33.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 -- 192.168.123.107:0/1601962339 shutdown_connections 2026-03-09T20:48:33.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 --2- 192.168.123.107:0/1601962339 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed400779b0 0x7fed40079e70 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:33.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 --2- 
192.168.123.107:0/1601962339 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed74072710 0x7fed741a77e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:33.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 --2- 192.168.123.107:0/1601962339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed74071d40 0x7fed741a72a0 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:33.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 -- 192.168.123.107:0/1601962339 >> 192.168.123.107:0/1601962339 conn(0x7fed7406d660 msgr2=0x7fed7410a800 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:33.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 -- 192.168.123.107:0/1601962339 shutdown_connections 2026-03-09T20:48:33.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:33.971+0000 7fed477fe640 1 -- 192.168.123.107:0/1601962339 wait complete. 
2026-03-09T20:48:33.983 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:48:34.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.061+0000 7f4fdb855640 1 -- 192.168.123.107:0/4141478288 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fd40719c0 msgr2=0x7f4fd4071dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:34.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.061+0000 7f4fdb855640 1 --2- 192.168.123.107:0/4141478288 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fd40719c0 0x7f4fd4071dc0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f4fc40099b0 tx=0x7f4fc402f240 comp rx=0 tx=0).stop 2026-03-09T20:48:34.065 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.064+0000 7f4fdb855640 1 -- 192.168.123.107:0/4141478288 shutdown_connections 2026-03-09T20:48:34.065 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.064+0000 7f4fdb855640 1 --2- 192.168.123.107:0/4141478288 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fd4072390 0x7f4fd410c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.065 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.064+0000 7f4fdb855640 1 --2- 192.168.123.107:0/4141478288 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fd40719c0 0x7f4fd4071dc0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.065 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.064+0000 7f4fdb855640 1 -- 192.168.123.107:0/4141478288 >> 192.168.123.107:0/4141478288 conn(0x7f4fd406d4f0 msgr2=0x7f4fd406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:34.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.065+0000 7f4fdb855640 1 -- 192.168.123.107:0/4141478288 shutdown_connections 2026-03-09T20:48:34.066 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.065+0000 7f4fdb855640 1 -- 192.168.123.107:0/4141478288 wait complete. 2026-03-09T20:48:34.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.066+0000 7f4fdb855640 1 Processor -- start 2026-03-09T20:48:34.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.066+0000 7f4fdb855640 1 -- start start 2026-03-09T20:48:34.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.066+0000 7f4fdb855640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fd40719c0 0x7f4fd41a7120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.066+0000 7f4fdb855640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fd4072390 0x7f4fd41a7660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.067+0000 7f4fd95ca640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fd40719c0 0x7f4fd41a7120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.067+0000 7f4fd95ca640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fd40719c0 0x7f4fd41a7120 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:59852/0 (socket says 192.168.123.107:59852) 2026-03-09T20:48:34.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.067+0000 7f4fd95ca640 1 -- 192.168.123.107:0/216274685 learned_addr learned my addr 192.168.123.107:0/216274685 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:34.069 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.068+0000 7f4fdb855640 1 -- 192.168.123.107:0/216274685 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4fd41a7c30 con 0x7f4fd40719c0 2026-03-09T20:48:34.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.068+0000 7f4fdb855640 1 -- 192.168.123.107:0/216274685 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4fd41a7da0 con 0x7f4fd4072390 2026-03-09T20:48:34.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.068+0000 7f4fd8dc9640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fd4072390 0x7f4fd41a7660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.068+0000 7f4fd8dc9640 1 -- 192.168.123.107:0/216274685 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fd4072390 msgr2=0x7f4fd41a7660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 13 2026-03-09T20:48:34.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.068+0000 7f4fd8dc9640 1 -- 192.168.123.107:0/216274685 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fd4072390 msgr2=0x7f4fd41a7660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-09T20:48:34.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.068+0000 7f4fd8dc9640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fd4072390 0x7f4fd41a7660 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T20:48:34.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.068+0000 
7f4fd8dc9640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fd4072390 0x7f4fd41a7660 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:48:34.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.068+0000 7f4fd95ca640 1 -- 192.168.123.107:0/216274685 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fd4072390 msgr2=0x7f4fd41a7660 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:48:34.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.068+0000 7f4fd95ca640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fd4072390 0x7f4fd41a7660 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.068+0000 7f4fd95ca640 1 -- 192.168.123.107:0/216274685 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4fc4009660 con 0x7f4fd40719c0 2026-03-09T20:48:34.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.069+0000 7f4fd95ca640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fd40719c0 0x7f4fd41a7120 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f4fc4031cf0 tx=0x7f4fc4031d20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:34.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.069+0000 7f4fc27fc640 1 -- 192.168.123.107:0/216274685 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4fc403d070 con 0x7f4fd40719c0 2026-03-09T20:48:34.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.069+0000 7f4fdb855640 1 -- 192.168.123.107:0/216274685 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4fd41ac7e0 con 0x7f4fd40719c0 2026-03-09T20:48:34.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.070+0000 7f4fdb855640 1 -- 192.168.123.107:0/216274685 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4fd41accd0 con 0x7f4fd40719c0 2026-03-09T20:48:34.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.070+0000 7f4fdb855640 1 -- 192.168.123.107:0/216274685 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4fd4071dc0 con 0x7f4fd40719c0 2026-03-09T20:48:34.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.071+0000 7f4fc27fc640 1 -- 192.168.123.107:0/216274685 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f4fc40043d0 con 0x7f4fd40719c0 2026-03-09T20:48:34.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.072+0000 7f4fc27fc640 1 -- 192.168.123.107:0/216274685 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4fc4031950 con 0x7f4fd40719c0 2026-03-09T20:48:34.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.073+0000 7f4fc27fc640 1 -- 192.168.123.107:0/216274685 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f4fc4049050 con 0x7f4fd40719c0 2026-03-09T20:48:34.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.073+0000 7f4fc27fc640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fb00779b0 0x7f4fb0079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.073+0000 7f4fc27fc640 1 -- 192.168.123.107:0/216274685 <== mon.0 v2:192.168.123.107:3300/0 
5 ==== osd_map(47..47 src has 1..47) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f4fc40bea70 con 0x7f4fd40719c0 2026-03-09T20:48:34.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.074+0000 7f4fd8dc9640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fb00779b0 0x7f4fb0079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.075+0000 7f4fd8dc9640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fb00779b0 0x7f4fb0079e70 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f4fc8005e90 tx=0x7f4fc8005e20 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:34.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.077+0000 7f4fc27fc640 1 -- 192.168.123.107:0/216274685 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f4fc4086950 con 0x7f4fd40719c0 2026-03-09T20:48:34.241 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.239+0000 7f4fdb855640 1 -- 192.168.123.107:0/216274685 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f4fd410bed0 con 0x7f4fb00779b0 2026-03-09T20:48:34.244 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.241+0000 7f4fc27fc640 1 -- 192.168.123.107:0/216274685 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+368 (secure 0 0 0) 0x7f4fd410bed0 con 0x7f4fb00779b0 2026-03-09T20:48:34.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 -- 
192.168.123.107:0/216274685 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fb00779b0 msgr2=0x7f4fb0079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:34.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fb00779b0 0x7f4fb0079e70 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f4fc8005e90 tx=0x7f4fc8005e20 comp rx=0 tx=0).stop 2026-03-09T20:48:34.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 -- 192.168.123.107:0/216274685 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fd40719c0 msgr2=0x7f4fd41a7120 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:34.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fd40719c0 0x7f4fd41a7120 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f4fc4031cf0 tx=0x7f4fc4031d20 comp rx=0 tx=0).stop 2026-03-09T20:48:34.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 -- 192.168.123.107:0/216274685 shutdown_connections 2026-03-09T20:48:34.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fb00779b0 0x7f4fb0079e70 secure :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f4fc8005e90 tx=0x7f4fc8005e20 comp rx=0 tx=0).stop 2026-03-09T20:48:34.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fd4072390 0x7f4fd41a7660 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.246 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 --2- 192.168.123.107:0/216274685 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fd40719c0 0x7f4fd41a7120 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 -- 192.168.123.107:0/216274685 >> 192.168.123.107:0/216274685 conn(0x7f4fd406d4f0 msgr2=0x7f4fd410a7b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:34.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 -- 192.168.123.107:0/216274685 shutdown_connections 2026-03-09T20:48:34.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.245+0000 7f4fa3fff640 1 -- 192.168.123.107:0/216274685 wait complete. 2026-03-09T20:48:34.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.319+0000 7f86ab272640 1 -- 192.168.123.107:0/16983076 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86a4072440 msgr2=0x7f86a40771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:34.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.319+0000 7f86ab272640 1 --2- 192.168.123.107:0/16983076 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86a4072440 0x7f86a40771b0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f869c008030 tx=0x7f869c030dc0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.319+0000 7f86ab272640 1 -- 192.168.123.107:0/16983076 shutdown_connections 2026-03-09T20:48:34.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.319+0000 7f86ab272640 1 --2- 192.168.123.107:0/16983076 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86a4072440 0x7f86a40771b0 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.319+0000 7f86ab272640 1 --2- 192.168.123.107:0/16983076 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 0x7f86a4071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.319+0000 7f86ab272640 1 -- 192.168.123.107:0/16983076 >> 192.168.123.107:0/16983076 conn(0x7f86a406d4f0 msgr2=0x7f86a406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:34.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.320+0000 7f86ab272640 1 -- 192.168.123.107:0/16983076 shutdown_connections 2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.320+0000 7f86ab272640 1 -- 192.168.123.107:0/16983076 wait complete. 2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.320+0000 7f86ab272640 1 Processor -- start 2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.320+0000 7f86ab272640 1 -- start start 2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.320+0000 7f86ab272640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 0x7f86a4131a80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.320+0000 7f86ab272640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86a4131fc0 0x7f86a4132440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.320+0000 7f86ab272640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f86a4133430 con 0x7f86a4131fc0 
2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.320+0000 7f86ab272640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f86a41335a0 con 0x7f86a4071a70 2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.321+0000 7f86a8fe7640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 0x7f86a4131a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.321+0000 7f86a8fe7640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 0x7f86a4131a80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:48908/0 (socket says 192.168.123.107:48908) 2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.321+0000 7f86a8fe7640 1 -- 192.168.123.107:0/3892088288 learned_addr learned my addr 192.168.123.107:0/3892088288 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:34.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.321+0000 7f86a8fe7640 1 -- 192.168.123.107:0/3892088288 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 msgr2=0x7f86a4131a80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 12 2026-03-09T20:48:34.323 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.321+0000 7f86a8fe7640 1 -- 192.168.123.107:0/3892088288 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 msgr2=0x7f86a4131a80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-09T20:48:34.323 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.321+0000 7f86a8fe7640 1 --2- 
192.168.123.107:0/3892088288 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 0x7f86a4131a80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T20:48:34.323 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.321+0000 7f86a8fe7640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 0x7f86a4131a80 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:48:34.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.324+0000 7f86a3fff640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86a4131fc0 0x7f86a4132440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.325+0000 7f86a3fff640 1 -- 192.168.123.107:0/3892088288 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 msgr2=0x7f86a4131a80 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:48:34.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.325+0000 7f86a3fff640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 0x7f86a4131a80 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.325+0000 7f86a3fff640 1 -- 192.168.123.107:0/3892088288 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f869c007ce0 con 0x7f86a4131fc0 2026-03-09T20:48:34.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.325+0000 7f86a3fff640 1 --2- 
192.168.123.107:0/3892088288 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86a4131fc0 0x7f86a4132440 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f86a40730e0 tx=0x7f869c002ea0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:34.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.325+0000 7f86a1ffb640 1 -- 192.168.123.107:0/3892088288 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f869c0317f0 con 0x7f86a4131fc0 2026-03-09T20:48:34.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.326+0000 7f86ab272640 1 -- 192.168.123.107:0/3892088288 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f86a407fad0 con 0x7f86a4131fc0 2026-03-09T20:48:34.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.326+0000 7f86ab272640 1 -- 192.168.123.107:0/3892088288 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f86a407ff40 con 0x7f86a4131fc0 2026-03-09T20:48:34.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.327+0000 7f86a1ffb640 1 -- 192.168.123.107:0/3892088288 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f869c031e10 con 0x7f86a4131fc0 2026-03-09T20:48:34.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.327+0000 7f86a1ffb640 1 -- 192.168.123.107:0/3892088288 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f869c03acc0 con 0x7f86a4131fc0 2026-03-09T20:48:34.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.327+0000 7f86ab272640 1 -- 192.168.123.107:0/3892088288 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f86a407a3d0 con 0x7f86a4131fc0 2026-03-09T20:48:34.330 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.329+0000 7f86a1ffb640 1 -- 192.168.123.107:0/3892088288 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f869c03a470 con 0x7f86a4131fc0 2026-03-09T20:48:34.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.329+0000 7f86a1ffb640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8684077a00 0x7f8684079ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.330+0000 7f86a1ffb640 1 -- 192.168.123.107:0/3892088288 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(47..47 src has 1..47) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f869c0bfbd0 con 0x7f86a4131fc0 2026-03-09T20:48:34.331 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.330+0000 7f86a8fe7640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8684077a00 0x7f8684079ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.331 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.331+0000 7f86a8fe7640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8684077a00 0x7f8684079ec0 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f8694004750 tx=0x7f8694004090 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:34.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.332+0000 7f86a1ffb640 1 -- 192.168.123.107:0/3892088288 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f869c0880d0 con 0x7f86a4131fc0 
2026-03-09T20:48:34.474 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.473+0000 7f86ab272640 1 -- 192.168.123.107:0/3892088288 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f86a4076950 con 0x7f8684077a00 2026-03-09T20:48:34.485 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (4m) 6s ago 5m 42.9M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (5m) 6s ago 5m 9063k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (5m) 7s ago 5m 9869k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (5m) 6s ago 5m 7620k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 8dda9981b08b 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (5m) 7s ago 5m 7616k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 eba80e79586f 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (4m) 6s ago 5m 158M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (3m) 6s ago 3m 29.2M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (3m) 6s ago 3m 234M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (3m) 7s ago 3m 143M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:48:34.486 
INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (3m) 7s ago 3m 25.7M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (52s) 6s ago 6m 580M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (26s) 7s ago 5m 488M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (17s) 6s ago 6m 43.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 starting - - - 2048M 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (5m) 6s ago 5m 16.2M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (5m) 7s ago 5m 15.3M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (4m) 6s ago 4m 383M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 482878bd7721 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (4m) 6s ago 4m 397M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 15564e5032c9 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (4m) 6s ago 4m 323M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (4m) 7s ago 4m 461M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (4m) 7s ago 4m 400M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (3m) 7s ago 3m 369M 
4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:48:34.486 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (29s) 6s ago 5m 46.1M - 2.43.0 a07b618ecd1d 3f9c07cd3fe3 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.481+0000 7f86a1ffb640 1 -- 192.168.123.107:0/3892088288 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f86a4076950 con 0x7f8684077a00 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 -- 192.168.123.107:0/3892088288 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8684077a00 msgr2=0x7f8684079ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8684077a00 0x7f8684079ec0 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f8694004750 tx=0x7f8694004090 comp rx=0 tx=0).stop 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 -- 192.168.123.107:0/3892088288 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86a4131fc0 msgr2=0x7f86a4132440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86a4131fc0 0x7f86a4132440 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f86a40730e0 tx=0x7f869c002ea0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 -- 192.168.123.107:0/3892088288 shutdown_connections 
2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8684077a00 0x7f8684079ec0 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f86a4131fc0 0x7f86a4132440 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 --2- 192.168.123.107:0/3892088288 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f86a4071a70 0x7f86a4131a80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 -- 192.168.123.107:0/3892088288 >> 192.168.123.107:0/3892088288 conn(0x7f86a406d4f0 msgr2=0x7f86a4075230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 -- 192.168.123.107:0/3892088288 shutdown_connections 2026-03-09T20:48:34.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.485+0000 7f86737fe640 1 -- 192.168.123.107:0/3892088288 wait complete. 
2026-03-09T20:48:34.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.570+0000 7fc3d22bf640 1 -- 192.168.123.107:0/2788030921 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3cc072390 msgr2=0x7fc3cc10c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:34.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.570+0000 7fc3d22bf640 1 --2- 192.168.123.107:0/2788030921 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3cc072390 0x7fc3cc10c590 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fc3c4009f90 tx=0x7fc3c402f440 comp rx=0 tx=0).stop 2026-03-09T20:48:34.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.570+0000 7fc3d22bf640 1 -- 192.168.123.107:0/2788030921 shutdown_connections 2026-03-09T20:48:34.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.570+0000 7fc3d22bf640 1 --2- 192.168.123.107:0/2788030921 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3cc072390 0x7fc3cc10c590 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.570+0000 7fc3d22bf640 1 --2- 192.168.123.107:0/2788030921 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc0719c0 0x7fc3cc071dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.570+0000 7fc3d22bf640 1 -- 192.168.123.107:0/2788030921 >> 192.168.123.107:0/2788030921 conn(0x7fc3cc06d4f0 msgr2=0x7fc3cc06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:34.572 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.570+0000 7fc3d22bf640 1 -- 192.168.123.107:0/2788030921 shutdown_connections 2026-03-09T20:48:34.572 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.570+0000 7fc3d22bf640 1 -- 192.168.123.107:0/2788030921 
wait complete. 2026-03-09T20:48:34.572 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.571+0000 7fc3d22bf640 1 Processor -- start 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.571+0000 7fc3d22bf640 1 -- start start 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.571+0000 7fc3d22bf640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3cc0719c0 0x7fc3cc1a73e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.571+0000 7fc3d22bf640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc072390 0x7fc3cc1a7920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.571+0000 7fc3d22bf640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc3cc1a7ef0 con 0x7fc3cc0719c0 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.571+0000 7fc3d22bf640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc3cc1a8060 con 0x7fc3cc072390 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.571+0000 7fc3cb7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc072390 0x7fc3cc1a7920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.571+0000 7fc3cb7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc072390 0x7fc3cc1a7920 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 
says I am v2:192.168.123.107:48920/0 (socket says 192.168.123.107:48920) 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.571+0000 7fc3cb7fe640 1 -- 192.168.123.107:0/38311963 learned_addr learned my addr 192.168.123.107:0/38311963 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.572+0000 7fc3cb7fe640 1 -- 192.168.123.107:0/38311963 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc072390 msgr2=0x7fc3cc1a7920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 13 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.572+0000 7fc3cb7fe640 1 -- 192.168.123.107:0/38311963 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc072390 msgr2=0x7fc3cc1a7920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.572+0000 7fc3cb7fe640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc072390 0x7fc3cc1a7920 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T20:48:34.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.572+0000 7fc3cb7fe640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc072390 0x7fc3cc1a7920 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:48:34.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.576+0000 7fc3cbfff640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3cc0719c0 0x7fc3cc1a73e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.578 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.577+0000 7fc3cbfff640 1 -- 192.168.123.107:0/38311963 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc072390 msgr2=0x7fc3cc1a7920 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:48:34.578 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.577+0000 7fc3cbfff640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc072390 0x7fc3cc1a7920 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.578 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.577+0000 7fc3cbfff640 1 -- 192.168.123.107:0/38311963 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc3c4009c40 con 0x7fc3cc0719c0 2026-03-09T20:48:34.578 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.577+0000 7fc3cbfff640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3cc0719c0 0x7fc3cc1a73e0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fc3bc00d560 tx=0x7fc3bc00da30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:34.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.579+0000 7fc3c97fa640 1 -- 192.168.123.107:0/38311963 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc3bc00f840 con 0x7fc3cc0719c0 2026-03-09T20:48:34.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.579+0000 7fc3d22bf640 1 -- 192.168.123.107:0/38311963 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc3cc10eeb0 con 0x7fc3cc0719c0 2026-03-09T20:48:34.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.579+0000 7fc3d22bf640 1 -- 
192.168.123.107:0/38311963 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc3cc10f400 con 0x7fc3cc0719c0 2026-03-09T20:48:34.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.580+0000 7fc3d22bf640 1 -- 192.168.123.107:0/38311963 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc390005350 con 0x7fc3cc0719c0 2026-03-09T20:48:34.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.581+0000 7fc3c97fa640 1 -- 192.168.123.107:0/38311963 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc3bc026080 con 0x7fc3cc0719c0 2026-03-09T20:48:34.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.581+0000 7fc3c97fa640 1 -- 192.168.123.107:0/38311963 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc3bc022430 con 0x7fc3cc0719c0 2026-03-09T20:48:34.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.583+0000 7fc3c97fa640 1 -- 192.168.123.107:0/38311963 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fc3bc016020 con 0x7fc3cc0719c0 2026-03-09T20:48:34.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.584+0000 7fc3c97fa640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc3a80779b0 0x7fc3a8079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.584+0000 7fc3cb7fe640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc3a80779b0 0x7fc3a8079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.586 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.585+0000 7fc3cb7fe640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc3a80779b0 0x7fc3a8079e70 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fc3c400c090 tx=0x7fc3c403a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:34.596 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.595+0000 7fc3c97fa640 1 -- 192.168.123.107:0/38311963 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(47..47 src has 1..47) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fc3bc066c60 con 0x7fc3cc0719c0 2026-03-09T20:48:34.604 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.600+0000 7fc3c97fa640 1 -- 192.168.123.107:0/38311963 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc3bc061fb0 con 0x7fc3cc0719c0 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: mon.vm07 calling monitor election 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: mon.vm10 calling monitor election 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: mon.vm07 is new leader, mons vm07,vm10 in quorum (ranks 0,1) 2026-03-09T20:48:34.737 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: monmap epoch 3 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: last_changed 2026-03-09T20:48:33.572220+0000 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: created 2026-03-09T20:42:20.613735+0000 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: min_mon_release 19 (squid) 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: election_strategy: 1 2026-03-09T20:48:34.737 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.vm07 2026-03-09T20:48:34.738 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.vm10 2026-03-09T20:48:34.738 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:48:34.738 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: osdmap e47: 6 total, 6 up, 6 in 2026-03-09T20:48:34.738 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: mgrmap e37: vm07.xjrvch(active, since 9s), standbys: vm10.byqahe 2026-03-09T20:48:34.738 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: overall HEALTH_OK 2026-03-09T20:48:34.738 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' 2026-03-09T20:48:34.738 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:34.738 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:34.738 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:34 vm10.local ceph-mon[103526]: from='client.34126 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:34.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.790+0000 7fc3d22bf640 1 -- 192.168.123.107:0/38311963 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fc3900058d0 con 0x7fc3cc0719c0 2026-03-09T20:48:34.792 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.791+0000 7fc3c97fa640 1 -- 192.168.123.107:0/38311963 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+739 (secure 0 0 0) 0x7fc3bc061dd0 con 0x7fc3cc0719c0 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:48:34.793 
INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 6 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 10, 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:48:34.793 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:48:34.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.795+0000 7fc3a2ffd640 1 -- 192.168.123.107:0/38311963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc3a80779b0 msgr2=0x7fc3a8079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:34.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.795+0000 7fc3a2ffd640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc3a80779b0 0x7fc3a8079e70 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fc3c400c090 tx=0x7fc3c403a040 comp rx=0 tx=0).stop 2026-03-09T20:48:34.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.795+0000 7fc3a2ffd640 1 -- 192.168.123.107:0/38311963 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3cc0719c0 msgr2=0x7fc3cc1a73e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:34.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.795+0000 7fc3a2ffd640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3cc0719c0 0x7fc3cc1a73e0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fc3bc00d560 tx=0x7fc3bc00da30 comp rx=0 tx=0).stop 2026-03-09T20:48:34.798 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.797+0000 7fc3a2ffd640 1 -- 192.168.123.107:0/38311963 shutdown_connections 2026-03-09T20:48:34.798 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.797+0000 7fc3a2ffd640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc3a80779b0 0x7fc3a8079e70 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.798 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.797+0000 7fc3a2ffd640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3cc072390 0x7fc3cc1a7920 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.798 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.797+0000 7fc3a2ffd640 1 --2- 192.168.123.107:0/38311963 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3cc0719c0 0x7fc3cc1a73e0 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.798 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.797+0000 7fc3a2ffd640 1 -- 192.168.123.107:0/38311963 >> 192.168.123.107:0/38311963 conn(0x7fc3cc06d4f0 msgr2=0x7fc3cc0707f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:34.799 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.798+0000 7fc3a2ffd640 1 -- 
192.168.123.107:0/38311963 shutdown_connections 2026-03-09T20:48:34.799 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.798+0000 7fc3a2ffd640 1 -- 192.168.123.107:0/38311963 wait complete. 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mon metadata", "id": "vm10"}]: dispatch 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: mon.vm07 calling monitor election 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: mon.vm10 calling monitor election 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: mon.vm07 is new leader, mons vm07,vm10 in quorum (ranks 0,1) 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: monmap epoch 3 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: last_changed 2026-03-09T20:48:33.572220+0000 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: created 2026-03-09T20:42:20.613735+0000 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: min_mon_release 19 (squid) 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: election_strategy: 1 
2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.vm07 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.vm10 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby-replay 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: osdmap e47: 6 total, 6 up, 6 in 2026-03-09T20:48:34.863 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: mgrmap e37: vm07.xjrvch(active, since 9s), standbys: vm10.byqahe 2026-03-09T20:48:34.864 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: overall HEALTH_OK 2026-03-09T20:48:34.864 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:34.864 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:34.864 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:34.864 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:34 vm07.local ceph-mon[112105]: from='client.34126 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:34.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.861+0000 7f2838b2d640 1 -- 192.168.123.107:0/2569174632 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2834072440 msgr2=0x7f28340771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:34.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.861+0000 7f2838b2d640 1 --2- 192.168.123.107:0/2569174632 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2834072440 0x7f28340771b0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f282c00a090 tx=0x7f282c02f440 comp rx=0 tx=0).stop 2026-03-09T20:48:34.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.861+0000 7f2838b2d640 1 -- 192.168.123.107:0/2569174632 shutdown_connections 2026-03-09T20:48:34.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.861+0000 7f2838b2d640 1 --2- 192.168.123.107:0/2569174632 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2834072440 0x7f28340771b0 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.861+0000 7f2838b2d640 1 --2- 192.168.123.107:0/2569174632 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f2834071a70 0x7f2834071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.861+0000 7f2838b2d640 1 -- 192.168.123.107:0/2569174632 >> 192.168.123.107:0/2569174632 conn(0x7f283406d4f0 msgr2=0x7f283406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.865+0000 7f2838b2d640 1 -- 192.168.123.107:0/2569174632 shutdown_connections 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.865+0000 7f2838b2d640 1 -- 192.168.123.107:0/2569174632 wait complete. 
2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2838b2d640 1 Processor -- start 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2838b2d640 1 -- start start 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2838b2d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2834071a70 0x7f2834084080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2838b2d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28340826d0 0x7f2834082b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2838b2d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f28340845c0 con 0x7f2834071a70 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2838b2d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2834083090 con 0x7f28340826d0 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2831d74640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28340826d0 0x7f2834082b50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2831d74640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28340826d0 0x7f2834082b50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:48930/0 (socket says 192.168.123.107:48930) 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2831d74640 1 -- 192.168.123.107:0/1120481342 learned_addr learned my addr 192.168.123.107:0/1120481342 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2831d74640 1 -- 192.168.123.107:0/1120481342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28340826d0 msgr2=0x7f2834082b50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 13 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2831d74640 1 -- 192.168.123.107:0/1120481342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28340826d0 msgr2=0x7f2834082b50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2831d74640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28340826d0 0x7f2834082b50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T20:48:34.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.866+0000 7f2831d74640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28340826d0 0x7f2834082b50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:48:34.868 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.867+0000 7f2832575640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2834071a70 0x7f2834084080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:34.868 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.867+0000 7f2832575640 1 -- 192.168.123.107:0/1120481342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28340826d0 msgr2=0x7f2834082b50 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:48:34.868 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.867+0000 7f2832575640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28340826d0 0x7f2834082b50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:34.868 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.867+0000 7f2832575640 1 -- 192.168.123.107:0/1120481342 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f282c009d00 con 0x7f2834071a70 2026-03-09T20:48:34.868 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.868+0000 7f2832575640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2834071a70 0x7f2834084080 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f282400fa60 tx=0x7f282400ff30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:34.869 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.868+0000 7f28237fe640 1 -- 192.168.123.107:0/1120481342 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2824004490 con 0x7f2834071a70 2026-03-09T20:48:34.870 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.868+0000 7f2838b2d640 1 -- 192.168.123.107:0/1120481342 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2834083310 con 0x7f2834071a70 2026-03-09T20:48:34.870 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.868+0000 7f2838b2d640 
1 -- 192.168.123.107:0/1120481342 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f28341b5bc0 con 0x7f2834071a70 2026-03-09T20:48:34.870 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.869+0000 7f28237fe640 1 -- 192.168.123.107:0/1120481342 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f2824011040 con 0x7f2834071a70 2026-03-09T20:48:34.870 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.869+0000 7f28237fe640 1 -- 192.168.123.107:0/1120481342 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f28240026e0 con 0x7f2834071a70 2026-03-09T20:48:34.870 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.869+0000 7f2838b2d640 1 -- 192.168.123.107:0/1120481342 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2800005350 con 0x7f2834071a70 2026-03-09T20:48:34.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.871+0000 7f28237fe640 1 -- 192.168.123.107:0/1120481342 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f28240047c0 con 0x7f2834071a70 2026-03-09T20:48:34.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.871+0000 7f28237fe640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f2818077a00 0x7f2818079ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:34.873 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.872+0000 7f2831d74640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f2818077a00 0x7f2818079ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T20:48:34.873 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.872+0000 7f2831d74640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f2818077a00 0x7f2818079ec0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f282c02f950 tx=0x7f282c03a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:34.873 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.872+0000 7f28237fe640 1 -- 192.168.123.107:0/1120481342 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(47..47 src has 1..47) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f282401c310 con 0x7f2834071a70 2026-03-09T20:48:34.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:34.876+0000 7f28237fe640 1 -- 192.168.123.107:0/1120481342 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f2824064430 con 0x7f2834071a70 2026-03-09T20:48:35.012 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.010+0000 7f2838b2d640 1 -- 192.168.123.107:0/1120481342 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f28000058d0 con 0x7f2834071a70 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.011+0000 7f28237fe640 1 -- 192.168.123.107:0/1120481342 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1937 (secure 0 0 0) 0x7f2824063b80 con 0x7f2834071a70 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:default compat: 
compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned 
encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:48:35.013 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:48:35.014 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:48:35.014 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 0 members: 2026-03-09T20:48:35.014 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:35.014 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:35.014 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:35.014 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state 
up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:48:35.014 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:48:35.014 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:48:35.014 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:48:35.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.015+0000 7f28217fa640 1 -- 192.168.123.107:0/1120481342 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f2818077a00 msgr2=0x7f2818079ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:35.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.015+0000 7f28217fa640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f2818077a00 0x7f2818079ec0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f282c02f950 tx=0x7f282c03a040 comp rx=0 tx=0).stop 2026-03-09T20:48:35.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.016+0000 7f28217fa640 1 -- 192.168.123.107:0/1120481342 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2834071a70 msgr2=0x7f2834084080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:35.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.016+0000 7f28217fa640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2834071a70 0x7f2834084080 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f282400fa60 tx=0x7f282400ff30 comp rx=0 tx=0).stop 2026-03-09T20:48:35.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.016+0000 7f28217fa640 1 -- 192.168.123.107:0/1120481342 shutdown_connections 2026-03-09T20:48:35.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.016+0000 7f28217fa640 1 --2- 192.168.123.107:0/1120481342 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f2818077a00 0x7f2818079ec0 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.016+0000 7f28217fa640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f28340826d0 0x7f2834082b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.016+0000 7f28217fa640 1 --2- 192.168.123.107:0/1120481342 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2834071a70 0x7f2834084080 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.016+0000 7f28217fa640 1 -- 192.168.123.107:0/1120481342 >> 192.168.123.107:0/1120481342 conn(0x7f283406d4f0 msgr2=0x7f2834070470 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:35.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.017+0000 7f28217fa640 1 -- 192.168.123.107:0/1120481342 shutdown_connections 2026-03-09T20:48:35.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.017+0000 7f28217fa640 1 -- 192.168.123.107:0/1120481342 wait complete. 
2026-03-09T20:48:35.097 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.096+0000 7f7062809640 1 -- 192.168.123.107:0/3504518258 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f705c071a70 msgr2=0x7f705c071e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:35.097 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.096+0000 7f7062809640 1 --2- 192.168.123.107:0/3504518258 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f705c071a70 0x7f705c071e70 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f704c00bb70 tx=0x7f704c030ff0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.097 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.096+0000 7f7062809640 1 -- 192.168.123.107:0/3504518258 shutdown_connections 2026-03-09T20:48:35.097 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.096+0000 7f7062809640 1 --2- 192.168.123.107:0/3504518258 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f705c072440 0x7f705c0771b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.097 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.096+0000 7f7062809640 1 --2- 192.168.123.107:0/3504518258 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f705c071a70 0x7f705c071e70 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.097 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.096+0000 7f7062809640 1 -- 192.168.123.107:0/3504518258 >> 192.168.123.107:0/3504518258 conn(0x7f705c06d4f0 msgr2=0x7f705c06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:35.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.096+0000 7f7062809640 1 -- 192.168.123.107:0/3504518258 shutdown_connections 2026-03-09T20:48:35.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.096+0000 7f7062809640 1 -- 192.168.123.107:0/3504518258 
wait complete. 2026-03-09T20:48:35.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.097+0000 7f7062809640 1 Processor -- start 2026-03-09T20:48:35.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.097+0000 7f7062809640 1 -- start start 2026-03-09T20:48:35.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.097+0000 7f7062809640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f705c072440 0x7f705c0840c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:35.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.097+0000 7f7062809640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f705c082710 0x7f705c082b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:35.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.097+0000 7f7062809640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f705c084600 con 0x7f705c082710 2026-03-09T20:48:35.098 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.097+0000 7f7062809640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f705c0830d0 con 0x7f705c072440 2026-03-09T20:48:35.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.098+0000 7f705b7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f705c082710 0x7f705c082b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:35.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.098+0000 7f705b7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f705c082710 0x7f705c082b90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:59932/0 (socket says 192.168.123.107:59932) 2026-03-09T20:48:35.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.098+0000 7f705b7fe640 1 -- 192.168.123.107:0/2973795483 learned_addr learned my addr 192.168.123.107:0/2973795483 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:35.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.098+0000 7f705bfff640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f705c072440 0x7f705c0840c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:35.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.098+0000 7f705bfff640 1 -- 192.168.123.107:0/2973795483 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f705c072440 msgr2=0x7f705c0840c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 13 2026-03-09T20:48:35.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.098+0000 7f705bfff640 1 -- 192.168.123.107:0/2973795483 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f705c072440 msgr2=0x7f705c0840c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-09T20:48:35.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.098+0000 7f705bfff640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f705c072440 0x7f705c0840c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T20:48:35.099 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.098+0000 7f705bfff640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f705c072440 0x7f705c0840c0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:48:35.100 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.099+0000 7f705b7fe640 1 -- 192.168.123.107:0/2973795483 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f705c072440 msgr2=0x7f705c0840c0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:48:35.100 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.099+0000 7f705b7fe640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f705c072440 0x7f705c0840c0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.100 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.099+0000 7f705b7fe640 1 -- 192.168.123.107:0/2973795483 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f704c00b820 con 0x7f705c082710 2026-03-09T20:48:35.100 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.099+0000 7f705b7fe640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f705c082710 0x7f705c082b90 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f705400b330 tx=0x7f705400b800 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:35.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.100+0000 7f70597fa640 1 -- 192.168.123.107:0/2973795483 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7054002c70 con 0x7f705c082710 2026-03-09T20:48:35.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.100+0000 7f7062809640 1 -- 192.168.123.107:0/2973795483 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f705c0833b0 con 0x7f705c082710 2026-03-09T20:48:35.101 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.100+0000 7f7062809640 
1 -- 192.168.123.107:0/2973795483 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f705c12efc0 con 0x7f705c082710 2026-03-09T20:48:35.102 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.101+0000 7f7062809640 1 -- 192.168.123.107:0/2973795483 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f705c071a70 con 0x7f705c082710 2026-03-09T20:48:35.102 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.101+0000 7f70597fa640 1 -- 192.168.123.107:0/2973795483 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7054002dd0 con 0x7f705c082710 2026-03-09T20:48:35.102 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.101+0000 7f70597fa640 1 -- 192.168.123.107:0/2973795483 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7054021980 con 0x7f705c082710 2026-03-09T20:48:35.106 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.104+0000 7f70597fa640 1 -- 192.168.123.107:0/2973795483 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f70540047b0 con 0x7f705c082710 2026-03-09T20:48:35.106 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.105+0000 7f70597fa640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f703c0779b0 0x7f703c079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:35.106 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.105+0000 7f705bfff640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f703c0779b0 0x7f703c079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T20:48:35.106 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.105+0000 7f705bfff640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f703c0779b0 0x7f703c079e70 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f704c00b7f0 tx=0x7f704c00b6a0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:35.111 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.110+0000 7f70597fa640 1 -- 192.168.123.107:0/2973795483 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(47..47 src has 1..47) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f7054066a50 con 0x7f705c082710 2026-03-09T20:48:35.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.114+0000 7f70597fa640 1 -- 192.168.123.107:0/2973795483 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f7054062470 con 0x7f705c082710 2026-03-09T20:48:35.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.266+0000 7f7062809640 1 -- 192.168.123.107:0/2973795483 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f705c075c00 con 0x7f703c0779b0 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout: "mon", 
2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout: "mgr" 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "4/23 daemons upgraded", 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading mon daemons", 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:48:35.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.267+0000 7f70597fa640 1 -- 192.168.123.107:0/2973795483 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+383 (secure 0 0 0) 0x7f705c075c00 con 0x7f703c0779b0 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.270+0000 7f703affd640 1 -- 192.168.123.107:0/2973795483 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f703c0779b0 msgr2=0x7f703c079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.270+0000 7f703affd640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f703c0779b0 0x7f703c079e70 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f704c00b7f0 tx=0x7f704c00b6a0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.270+0000 7f703affd640 1 -- 192.168.123.107:0/2973795483 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f705c082710 msgr2=0x7f705c082b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.270+0000 7f703affd640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f705c082710 
0x7f705c082b90 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f705400b330 tx=0x7f705400b800 comp rx=0 tx=0).stop 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.270+0000 7f703affd640 1 -- 192.168.123.107:0/2973795483 shutdown_connections 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.270+0000 7f703affd640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f703c0779b0 0x7f703c079e70 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.270+0000 7f703affd640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f705c082710 0x7f705c082b90 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.270+0000 7f703affd640 1 --2- 192.168.123.107:0/2973795483 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f705c072440 0x7f705c0840c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.270+0000 7f703affd640 1 -- 192.168.123.107:0/2973795483 >> 192.168.123.107:0/2973795483 conn(0x7f705c06d4f0 msgr2=0x7f705c073150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.271+0000 7f703affd640 1 -- 192.168.123.107:0/2973795483 shutdown_connections 2026-03-09T20:48:35.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.271+0000 7f703affd640 1 -- 192.168.123.107:0/2973795483 wait complete. 
2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.355+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2102286380 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5b400a5d10 msgr2=0x7f5b400a6110 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.355+0000 7f5b4f14d640 1 --2- 192.168.123.107:0/2102286380 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5b400a5d10 0x7f5b400a6110 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f5b44007920 tx=0x7f5b4402ffe0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.355+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2102286380 shutdown_connections 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.355+0000 7f5b4f14d640 1 --2- 192.168.123.107:0/2102286380 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5b400a4350 0x7f5b400a47d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.355+0000 7f5b4f14d640 1 --2- 192.168.123.107:0/2102286380 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5b400a5d10 0x7f5b400a6110 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.355+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2102286380 >> 192.168.123.107:0/2102286380 conn(0x7f5b4009fea0 msgr2=0x7f5b400a2300 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.355+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2102286380 shutdown_connections 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.356+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2102286380 
wait complete. 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.356+0000 7f5b4f14d640 1 Processor -- start 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.356+0000 7f5b4f14d640 1 -- start start 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.357+0000 7f5b4f14d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5b400a4350 0x7f5b40013e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.357+0000 7f5b4f14d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5b40014380 0x7f5b400153d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.357+0000 7f5b4f14d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5b40014800 con 0x7f5b400a4350 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.357+0000 7f5b4f14d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5b40014970 con 0x7f5b40014380 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.357+0000 7f5b4e14b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5b400a4350 0x7f5b40013e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.357+0000 7f5b4e14b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5b400a4350 0x7f5b40013e40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:59950/0 (socket says 192.168.123.107:59950) 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.357+0000 7f5b4e14b640 1 -- 192.168.123.107:0/2516958693 learned_addr learned my addr 192.168.123.107:0/2516958693 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.358+0000 7f5b4d94a640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5b40014380 0x7f5b400153d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.358+0000 7f5b4d94a640 1 -- 192.168.123.107:0/2516958693 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5b40014380 msgr2=0x7f5b400153d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 12 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.358+0000 7f5b4d94a640 1 -- 192.168.123.107:0/2516958693 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5b40014380 msgr2=0x7f5b400153d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.358+0000 7f5b4d94a640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5b40014380 0x7f5b400153d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.358+0000 7f5b4d94a640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5b40014380 0x7f5b400153d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.358+0000 7f5b4e14b640 1 -- 192.168.123.107:0/2516958693 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5b40014380 msgr2=0x7f5b400153d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.358+0000 7f5b4e14b640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5b40014380 0x7f5b400153d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.359 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.358+0000 7f5b4e14b640 1 -- 192.168.123.107:0/2516958693 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5b440075d0 con 0x7f5b400a4350 2026-03-09T20:48:35.360 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.359+0000 7f5b4e14b640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5b400a4350 0x7f5b40013e40 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f5b440304f0 tx=0x7f5b44030a70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:35.360 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.359+0000 7f5b3f7fe640 1 -- 192.168.123.107:0/2516958693 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5b44030d70 con 0x7f5b400a4350 2026-03-09T20:48:35.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.359+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2516958693 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5b40015910 con 0x7f5b400a4350 2026-03-09T20:48:35.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.359+0000 7f5b4f14d640 
1 -- 192.168.123.107:0/2516958693 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5b40015d80 con 0x7f5b400a4350 2026-03-09T20:48:35.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.360+0000 7f5b3f7fe640 1 -- 192.168.123.107:0/2516958693 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5b44002d30 con 0x7f5b400a4350 2026-03-09T20:48:35.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.361+0000 7f5b3f7fe640 1 -- 192.168.123.107:0/2516958693 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5b440417b0 con 0x7f5b400a4350 2026-03-09T20:48:35.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.361+0000 7f5b3d7fa640 1 -- 192.168.123.107:0/2516958693 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5b400a5d10 con 0x7f5b400a4350 2026-03-09T20:48:35.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.369+0000 7f5b3f7fe640 1 -- 192.168.123.107:0/2516958693 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f5b44049050 con 0x7f5b400a4350 2026-03-09T20:48:35.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.369+0000 7f5b3f7fe640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b1c0777a0 0x7f5b1c079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:48:35.371 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.370+0000 7f5b3f7fe640 1 -- 192.168.123.107:0/2516958693 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(47..47 src has 1..47) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f5b440bf590 con 0x7f5b400a4350 2026-03-09T20:48:35.375 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.374+0000 7f5b4d94a640 1 --2- 
192.168.123.107:0/2516958693 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b1c0777a0 0x7f5b1c079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:48:35.376 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.375+0000 7f5b4d94a640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b1c0777a0 0x7f5b1c079c60 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f5b48064030 tx=0x7f5b4806c040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:48:35.382 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.379+0000 7f5b3f7fe640 1 -- 192.168.123.107:0/2516958693 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f5b44087ab0 con 0x7f5b400a4350 2026-03-09T20:48:35.579 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.577+0000 7f5b3d7fa640 1 -- 192.168.123.107:0/2516958693 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f5b4000bd10 con 0x7f5b400a4350 2026-03-09T20:48:35.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.579+0000 7f5b3f7fe640 1 -- 192.168.123.107:0/2516958693 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f5b44087200 con 0x7f5b400a4350 2026-03-09T20:48:35.580 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:48:35.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.583+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2516958693 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b1c0777a0 msgr2=0x7f5b1c079c60 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:35.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.583+0000 7f5b4f14d640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b1c0777a0 0x7f5b1c079c60 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f5b48064030 tx=0x7f5b4806c040 comp rx=0 tx=0).stop 2026-03-09T20:48:35.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.583+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2516958693 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5b400a4350 msgr2=0x7f5b40013e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:48:35.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.583+0000 7f5b4f14d640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5b400a4350 0x7f5b40013e40 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f5b440304f0 tx=0x7f5b44030a70 comp rx=0 tx=0).stop 2026-03-09T20:48:35.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.584+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2516958693 shutdown_connections 2026-03-09T20:48:35.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.584+0000 7f5b4f14d640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b1c0777a0 0x7f5b1c079c60 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.584+0000 7f5b4f14d640 1 --2- 192.168.123.107:0/2516958693 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5b40014380 0x7f5b400153d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.584+0000 7f5b4f14d640 1 --2- 
192.168.123.107:0/2516958693 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5b400a4350 0x7f5b40013e40 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:48:35.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.584+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2516958693 >> 192.168.123.107:0/2516958693 conn(0x7f5b4009fea0 msgr2=0x7f5b40006850 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:48:35.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.585+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2516958693 shutdown_connections 2026-03-09T20:48:35.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:48:35.585+0000 7f5b4f14d640 1 -- 192.168.123.107:0/2516958693 wait complete. 2026-03-09T20:48:36.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:35 vm10.local ceph-mon[103526]: from='client.34130 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:36.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:35 vm10.local ceph-mon[103526]: pgmap v8: 65 pgs: 65 active+clean; 3.5 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 416 KiB/s rd, 462 KiB/s wr, 82 op/s 2026-03-09T20:48:36.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:35 vm10.local ceph-mon[103526]: from='client.34134 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:36.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:35 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/38311963' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:36.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:35 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/1120481342' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:48:36.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:35 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:36.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:35 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:36.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:35 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/2516958693' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:48:36.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:35 vm07.local ceph-mon[112105]: from='client.34130 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:36.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:35 vm07.local ceph-mon[112105]: pgmap v8: 65 pgs: 65 active+clean; 3.5 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 416 KiB/s rd, 462 KiB/s wr, 82 op/s 2026-03-09T20:48:36.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:35 vm07.local ceph-mon[112105]: from='client.34134 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:36.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:35 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/38311963' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:36.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:35 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/1120481342' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:48:36.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:35 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:36.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:35 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:36.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:35 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/2516958693' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:48:37.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:36 vm07.local ceph-mon[112105]: from='client.34146 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:37.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:36 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:37.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:36 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:36 vm10.local ceph-mon[103526]: from='client.34146 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:48:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:36 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:36 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:37 vm10.local ceph-mon[103526]: pgmap v9: 65 
pgs: 65 active+clean; 2.8 GiB data, 10 GiB used, 110 GiB / 120 GiB avail; 756 KiB/s rd, 790 KiB/s wr, 130 op/s 2026-03-09T20:48:38.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:37 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:38.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:37 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:38.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:37 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:38.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:37 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:38.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:37 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:38.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:37 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:48:38.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:37 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:48:38.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:37 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:38.086 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:37 vm07.local ceph-mon[112105]: pgmap v9: 
65 pgs: 65 active+clean; 2.8 GiB data, 10 GiB used, 110 GiB / 120 GiB avail; 756 KiB/s rd, 790 KiB/s wr, 130 op/s 2026-03-09T20:48:38.086 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:37 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:38.086 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:37 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:38.086 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:37 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:38.086 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:37 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:38.087 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:37 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:38.087 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:37 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:48:38.087 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:37 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:48:38.087 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:37 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:38.871 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: 
Reconfiguring mon.vm07 (monmap changed)... 2026-03-09T20:48:39.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: Reconfiguring daemon mon.vm07 on vm07 2026-03-09T20:48:39.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:39.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:39.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.xjrvch", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:48:39.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:48:39.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:39.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:39.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:39.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile 
ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:48:39.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:48:39.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-09T20:48:39.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm07"}]: dispatch 2026-03-09T20:48:39.136 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: Reconfiguring mon.vm07 (monmap changed)... 
2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: Reconfiguring daemon mon.vm07 on vm07 2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.xjrvch", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", 
"osd", "allow r"]}]: dispatch 2026-03-09T20:48:39.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:48:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-09T20:48:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm07"}]: dispatch 2026-03-09T20:48:39.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: Reconfiguring mgr.vm07.xjrvch (monmap changed)... 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: Reconfiguring daemon mgr.vm07.xjrvch on vm07 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: pgmap v10: 65 pgs: 65 active+clean; 2.8 GiB data, 10 GiB used, 110 GiB / 120 GiB avail; 680 KiB/s rd, 711 KiB/s wr, 117 op/s 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: Reconfiguring ceph-exporter.vm07 (monmap changed)... 
2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: Unable to update caps for client.ceph-exporter.vm07 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: Reconfiguring daemon ceph-exporter.vm07 on vm07 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:48:40.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: Reconfiguring mgr.vm07.xjrvch (monmap changed)... 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: Reconfiguring daemon mgr.vm07.xjrvch on vm07 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: pgmap v10: 65 pgs: 65 active+clean; 2.8 GiB data, 10 GiB used, 110 GiB / 120 GiB avail; 680 KiB/s rd, 711 KiB/s wr, 117 op/s 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: Reconfiguring ceph-exporter.vm07 (monmap changed)... 
2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: Unable to update caps for client.ceph-exporter.vm07 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: Reconfiguring daemon ceph-exporter.vm07 on vm07 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:40.189 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:40.190 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:48:40.190 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: Reconfiguring crash.vm07 (monmap changed)... 2026-03-09T20:48:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: Reconfiguring daemon crash.vm07 on vm07 2026-03-09T20:48:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: Reconfiguring osd.0 (monmap changed)... 2026-03-09T20:48:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: Reconfiguring daemon osd.0 on vm07 2026-03-09T20:48:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: Reconfiguring osd.1 (monmap changed)... 
2026-03-09T20:48:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:48:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:41.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:41.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:41.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:48:41.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:41.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:41.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:41.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.rovdbp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs 
*=*", "mds", "allow"]}]: dispatch 2026-03-09T20:48:41.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: Reconfiguring crash.vm07 (monmap changed)... 2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: Reconfiguring daemon crash.vm07 on vm07 2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: Reconfiguring osd.0 (monmap changed)... 2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: Reconfiguring daemon osd.0 on vm07 2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: Reconfiguring osd.1 (monmap changed)... 
2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:41.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:41.539 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-09T20:48:41.539 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:41.539 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:41.539 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:41.539 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.rovdbp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T20:48:41.539 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: Reconfiguring daemon osd.1 on vm07
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: pgmap v11: 65 pgs: 65 active+clean; 2.8 GiB data, 10 GiB used, 110 GiB / 120 GiB avail; 680 KiB/s rd, 711 KiB/s wr, 117 op/s
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: Reconfiguring osd.2 (monmap changed)...
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: Reconfiguring daemon osd.2 on vm07
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: Reconfiguring mds.cephfs.vm07.rovdbp (monmap changed)...
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: Reconfiguring daemon mds.cephfs.vm07.rovdbp on vm07
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.potfau", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:42.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-09T20:48:42.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-09T20:48:42.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-09T20:48:42.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm10"}]: dispatch
2026-03-09T20:48:42.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: Reconfiguring daemon osd.1 on vm07
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: pgmap v11: 65 pgs: 65 active+clean; 2.8 GiB data, 10 GiB used, 110 GiB / 120 GiB avail; 680 KiB/s rd, 711 KiB/s wr, 117 op/s
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: Reconfiguring osd.2 (monmap changed)...
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: Reconfiguring daemon osd.2 on vm07
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: Reconfiguring mds.cephfs.vm07.rovdbp (monmap changed)...
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: Reconfiguring daemon mds.cephfs.vm07.rovdbp on vm07
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.potfau", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm10"}]: dispatch
2026-03-09T20:48:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:43.481 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: Reconfiguring mds.cephfs.vm07.potfau (monmap changed)...
2026-03-09T20:48:43.481 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: Reconfiguring daemon mds.cephfs.vm07.potfau on vm07
2026-03-09T20:48:43.481 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: Reconfiguring ceph-exporter.vm10 (monmap changed)...
2026-03-09T20:48:43.481 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: Unable to update caps for client.ceph-exporter.vm10
2026-03-09T20:48:43.481 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: Reconfiguring daemon ceph-exporter.vm10 on vm10
2026-03-09T20:48:43.481 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:43.481 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:43.482 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm10", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T20:48:43.482 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:43.482 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:43.482 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:43.482 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm10.byqahe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T20:48:43.482 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T20:48:43.482 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: Reconfiguring mds.cephfs.vm07.potfau (monmap changed)...
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: Reconfiguring daemon mds.cephfs.vm07.potfau on vm07
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: Reconfiguring ceph-exporter.vm10 (monmap changed)...
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: Unable to update caps for client.ceph-exporter.vm10
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: Reconfiguring daemon ceph-exporter.vm10 on vm10
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm10", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm10.byqahe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T20:48:43.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:44.722 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: pgmap v12: 65 pgs: 65 active+clean; 2.1 GiB data, 8.4 GiB used, 112 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 165 op/s
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: Reconfiguring crash.vm10 (monmap changed)...
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: Reconfiguring daemon crash.vm10 on vm10
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: Reconfiguring mgr.vm10.byqahe (monmap changed)...
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: Reconfiguring daemon mgr.vm10.byqahe on vm10
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: Reconfiguring mon.vm10 (monmap changed)...
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:44.723 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:44 vm10.local ceph-mon[103526]: Reconfiguring daemon mon.vm10 on vm10
2026-03-09T20:48:44.792 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: pgmap v12: 65 pgs: 65 active+clean; 2.1 GiB data, 8.4 GiB used, 112 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 165 op/s
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: Reconfiguring crash.vm10 (monmap changed)...
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: Reconfiguring daemon crash.vm10 on vm10
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: Reconfiguring mgr.vm10.byqahe (monmap changed)...
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: Reconfiguring daemon mgr.vm10.byqahe on vm10
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: Reconfiguring mon.vm10 (monmap changed)...
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:44.793 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:44 vm07.local ceph-mon[112105]: Reconfiguring daemon mon.vm10 on vm10
2026-03-09T20:48:45.473 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: pgmap v13: 65 pgs: 65 active+clean; 2.1 GiB data, 8.4 GiB used, 112 GiB / 120 GiB avail; 736 KiB/s rd, 731 KiB/s wr, 105 op/s
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: Reconfiguring osd.3 (monmap changed)...
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: Reconfiguring daemon osd.3 on vm10
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: Reconfiguring osd.4 (monmap changed)...
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: Reconfiguring daemon osd.4 on vm10
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T20:48:45.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: pgmap v13: 65 pgs: 65 active+clean; 2.1 GiB data, 8.4 GiB used, 112 GiB / 120 GiB avail; 736 KiB/s rd, 731 KiB/s wr, 105 op/s
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: Reconfiguring osd.3 (monmap changed)...
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: Reconfiguring daemon osd.3 on vm10
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: Reconfiguring osd.4 (monmap changed)...
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: Reconfiguring daemon osd.4 on vm10
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T20:48:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: Reconfiguring osd.5 (monmap changed)...
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: Reconfiguring daemon osd.5 on vm10
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: Reconfiguring mds.cephfs.vm10.qpltwp (monmap changed)...
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.qpltwp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: Reconfiguring daemon mds.cephfs.vm10.qpltwp on vm10
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: Reconfiguring mds.cephfs.vm10.hzyuyq (monmap changed)...
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.hzyuyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: Reconfiguring daemon mds.cephfs.vm10.hzyuyq on vm10
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:46.957 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:48:46.958 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:48:46.958 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:48:46.958 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:48:46.958 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm07"}]: dispatch
2026-03-09T20:48:46.958 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm07"}]': finished
2026-03-09T20:48:46.958 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm10"}]: dispatch
2026-03-09T20:48:46.958 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm10"}]': finished
2026-03-09T20:48:47.021 INFO:teuthology.orchestra.run:Running command with timeout 3600
2026-03-09T20:48:47.021 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: Reconfiguring osd.5 (monmap changed)...
2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: Reconfiguring daemon osd.5 on vm10 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: Reconfiguring mds.cephfs.vm10.qpltwp (monmap changed)... 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.qpltwp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: Reconfiguring daemon mds.cephfs.vm10.qpltwp on vm10 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: Reconfiguring mds.cephfs.vm10.hzyuyq (monmap changed)... 
2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.hzyuyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: Reconfiguring daemon mds.cephfs.vm10.hzyuyq on vm10 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:47.037 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm07"}]: dispatch 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm07"}]': finished 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm10"}]: dispatch 2026-03-09T20:48:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm10"}]': finished 2026-03-09T20:48:48.379 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:48 vm07.local ceph-mon[112105]: pgmap v14: 65 pgs: 65 active+clean; 1.4 GiB data, 6.8 GiB used, 113 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 156 op/s 2026-03-09T20:48:48.380 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:48 vm07.local ceph-mon[112105]: Upgrade: Setting container_image for all mon 2026-03-09T20:48:48.380 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:48 vm07.local ceph-mon[112105]: Upgrade: Updating crash.vm07 (1/2) 2026-03-09T20:48:48.380 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:48.380 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": 
"client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T20:48:48.380 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:48.380 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:48 vm07.local ceph-mon[112105]: Deploying daemon crash.vm07 on vm07 2026-03-09T20:48:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:48 vm10.local ceph-mon[103526]: pgmap v14: 65 pgs: 65 active+clean; 1.4 GiB data, 6.8 GiB used, 113 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 156 op/s 2026-03-09T20:48:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:48 vm10.local ceph-mon[103526]: Upgrade: Setting container_image for all mon 2026-03-09T20:48:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:48 vm10.local ceph-mon[103526]: Upgrade: Updating crash.vm07 (1/2) 2026-03-09T20:48:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T20:48:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:48 vm10.local ceph-mon[103526]: Deploying daemon crash.vm07 on vm07 2026-03-09T20:48:50.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:49 vm10.local 
ceph-mon[103526]: pgmap v15: 65 pgs: 65 active+clean; 1.4 GiB data, 6.8 GiB used, 113 GiB / 120 GiB avail; 692 KiB/s rd, 780 KiB/s wr, 98 op/s 2026-03-09T20:48:50.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:50.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:50.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:50.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm10", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T20:48:50.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:50.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:49 vm07.local ceph-mon[112105]: pgmap v15: 65 pgs: 65 active+clean; 1.4 GiB data, 6.8 GiB used, 113 GiB / 120 GiB avail; 692 KiB/s rd, 780 KiB/s wr, 98 op/s 2026-03-09T20:48:50.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:50.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:50.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' 2026-03-09T20:48:50.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm10", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T20:48:50.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:51.133 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:50 vm10.local ceph-mon[103526]: Upgrade: Updating crash.vm10 (2/2) 2026-03-09T20:48:51.133 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:50 vm10.local ceph-mon[103526]: Deploying daemon crash.vm10 on vm10 2026-03-09T20:48:51.208 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:50 vm07.local ceph-mon[112105]: Upgrade: Updating crash.vm10 (2/2) 2026-03-09T20:48:51.208 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:50 vm07.local ceph-mon[112105]: Deploying daemon crash.vm10 on vm10 2026-03-09T20:48:52.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:52 vm07.local ceph-mon[112105]: pgmap v16: 65 pgs: 65 active+clean; 1.4 GiB data, 6.8 GiB used, 113 GiB / 120 GiB avail; 692 KiB/s rd, 780 KiB/s wr, 98 op/s 2026-03-09T20:48:52.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:52.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:52.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T20:48:52.434 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:52 vm10.local ceph-mon[103526]: pgmap v16: 65 pgs: 65 active+clean; 1.4 GiB data, 6.8 GiB used, 113 GiB / 120 GiB avail; 692 KiB/s rd, 780 KiB/s wr, 98 op/s 2026-03-09T20:48:52.434 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:52.434 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:52.434 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:52.975 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-09T20:48:52.975 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.1/client.1/tmp 2026-03-09T20:48:54.409 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:54 vm10.local ceph-mon[103526]: pgmap v17: 65 pgs: 65 active+clean; 613 MiB data, 4.3 GiB used, 116 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 157 op/s 2026-03-09T20:48:54.409 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:54.409 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:54.409 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:54.409 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' 2026-03-09T20:48:54.636 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:54 vm07.local ceph-mon[112105]: pgmap v17: 65 pgs: 65 active+clean; 613 MiB data, 4.3 GiB used, 116 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 157 op/s 2026-03-09T20:48:54.636 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:54.636 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:54.636 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:54.636 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:55.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:55.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:55 vm10.local ceph-mon[103526]: pgmap v18: 65 pgs: 65 active+clean; 613 MiB data, 4.3 GiB used, 116 GiB / 120 GiB avail; 782 KiB/s rd, 849 KiB/s wr, 109 op/s 2026-03-09T20:48:55.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:55.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' 2026-03-09T20:48:55.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:55.881 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:55.881 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:55.881 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:55 vm07.local ceph-mon[112105]: pgmap v18: 65 pgs: 65 active+clean; 613 MiB data, 4.3 GiB used, 116 GiB / 120 GiB avail; 782 KiB/s rd, 849 KiB/s wr, 109 op/s 2026-03-09T20:48:55.881 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:55.881 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:55.881 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:57.163 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:57.163 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:57.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:57.287 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: pgmap v19: 65 pgs: 65 active+clean; 289 MiB data, 3.4 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 189 op/s 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm07"}]: dispatch 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm07"}]': finished 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm10"}]: dispatch 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm10"}]': finished 2026-03-09T20:48:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: pgmap v19: 65 pgs: 
65 active+clean; 289 MiB data, 3.4 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 189 op/s 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm07"}]: dispatch 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm07"}]': finished 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm10"}]: dispatch 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm10"}]': finished 2026-03-09T20:48:58.308 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T20:48:59.313 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-mon[112105]: Upgrade: Setting container_image for all crash 2026-03-09T20:48:59.313 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T20:48:59.313 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-mon[112105]: Upgrade: osd.0 is safe to restart 2026-03-09T20:48:59.313 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-mon[112105]: Upgrade: Updating osd.0 2026-03-09T20:48:59.313 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:59.313 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:48:59.313 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:59.313 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-mon[112105]: Deploying daemon osd.0 on vm07 2026-03-09T20:48:59.313 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:48:59 vm07.local systemd[1]: Stopping Ceph osd.0 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T20:48:59.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:59 vm10.local ceph-mon[103526]: Upgrade: Setting container_image for all crash 2026-03-09T20:48:59.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:59 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T20:48:59.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:59 vm10.local ceph-mon[103526]: Upgrade: osd.0 is safe to restart 2026-03-09T20:48:59.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:59 vm10.local ceph-mon[103526]: Upgrade: Updating osd.0 2026-03-09T20:48:59.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:48:59.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:48:59.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:59.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:48:59 vm10.local ceph-mon[103526]: Deploying daemon osd.0 on vm07 2026-03-09T20:48:59.634 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[68598]: 2026-03-09T20:48:59.311+0000 7f9f315bc640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:48:59.634 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[68598]: 2026-03-09T20:48:59.311+0000 7f9f315bc640 -1 osd.0 47 *** Got signal Terminated *** 2026-03-09T20:48:59.634 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:48:59 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[68598]: 2026-03-09T20:48:59.311+0000 7f9f315bc640 -1 osd.0 47 *** 
Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:49:00.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:00 vm07.local ceph-mon[112105]: pgmap v20: 65 pgs: 65 active+clean; 289 MiB data, 3.4 GiB used, 117 GiB / 120 GiB avail; 737 KiB/s rd, 703 KiB/s wr, 139 op/s 2026-03-09T20:49:00.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:00 vm07.local ceph-mon[112105]: osd.0 marked itself down and dead 2026-03-09T20:49:00.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local podman[119609]: 2026-03-09 20:49:00.16558512 +0000 UTC m=+0.896106627 container died 482878bd7721e2d02c4195cec0169b36379dee50c5e42b4e4e37d4f41996e743 (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0) 2026-03-09T20:49:00.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local podman[119609]: 2026-03-09 20:49:00.232228954 +0000 UTC m=+0.962750461 container remove 482878bd7721e2d02c4195cec0169b36379dee50c5e42b4e4e37d4f41996e743 (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, 
ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, io.buildah.version=1.41.3) 2026-03-09T20:49:00.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local bash[119609]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0 2026-03-09T20:49:00.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:00 vm10.local ceph-mon[103526]: pgmap v20: 65 pgs: 65 active+clean; 289 MiB data, 3.4 GiB used, 117 GiB / 120 GiB avail; 737 KiB/s rd, 703 KiB/s wr, 139 op/s 2026-03-09T20:49:00.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:00 vm10.local ceph-mon[103526]: osd.0 marked itself down and dead 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local podman[119673]: 2026-03-09 20:49:00.41146635 +0000 UTC m=+0.021171721 container create 410b14d36a7506d6ecfb522c676d3c90542019a60468feb0b50d4a85a3bc7d0b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-deactivate, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team 
, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid) 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local podman[119673]: 2026-03-09 20:49:00.454539644 +0000 UTC m=+0.064245026 container init 410b14d36a7506d6ecfb522c676d3c90542019a60468feb0b50d4a85a3bc7d0b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local podman[119673]: 2026-03-09 20:49:00.460566384 +0000 UTC m=+0.070271755 container start 410b14d36a7506d6ecfb522c676d3c90542019a60468feb0b50d4a85a3bc7d0b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , 
org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local podman[119673]: 2026-03-09 20:49:00.466657276 +0000 UTC m=+0.076362647 container attach 410b14d36a7506d6ecfb522c676d3c90542019a60468feb0b50d4a85a3bc7d0b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local podman[119673]: 2026-03-09 20:49:00.403199706 +0000 UTC m=+0.012905087 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local podman[119673]: 2026-03-09 20:49:00.608111329 +0000 UTC m=+0.217816700 container died 410b14d36a7506d6ecfb522c676d3c90542019a60468feb0b50d4a85a3bc7d0b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-deactivate, 
org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local podman[119673]: 2026-03-09 20:49:00.626390974 +0000 UTC m=+0.236096345 container remove 410b14d36a7506d6ecfb522c676d3c90542019a60468feb0b50d4a85a3bc7d0b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.0.service: Deactivated successfully. 
2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.0.service: Unit process 119684 (conmon) remains running after unit stopped. 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.0.service: Unit process 119693 (podman) remains running after unit stopped. 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local systemd[1]: Stopped Ceph osd.0 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:49:00.663 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.0.service: Consumed 28.798s CPU time, 519.7M memory peak. 2026-03-09T20:49:00.995 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local systemd[1]: Starting Ceph osd.0 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T20:49:01.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:01 vm07.local ceph-mon[112105]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T20:49:01.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:01 vm07.local ceph-mon[112105]: osdmap e48: 6 total, 5 up, 6 in 2026-03-09T20:49:01.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:01 vm07.local ceph-mon[112105]: osdmap e49: 6 total, 5 up, 6 in 2026-03-09T20:49:01.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:00 vm07.local podman[119776]: 2026-03-09 20:49:00.994920145 +0000 UTC m=+0.024432583 container create 1a0cfdbfb73d2657c186ea795ca6847c69d8e2e126abc58850ae3e65fabbc166 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:49:01.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:01 vm07.local podman[119776]: 2026-03-09 20:49:01.04609821 +0000 UTC m=+0.075610648 container init 1a0cfdbfb73d2657c186ea795ca6847c69d8e2e126abc58850ae3e65fabbc166 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:49:01.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:01 vm07.local podman[119776]: 2026-03-09 20:49:01.049906899 +0000 UTC m=+0.079419327 container start 1a0cfdbfb73d2657c186ea795ca6847c69d8e2e126abc58850ae3e65fabbc166 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS) 2026-03-09T20:49:01.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:01 vm07.local podman[119776]: 2026-03-09 20:49:01.050882555 +0000 UTC m=+0.080394983 container attach 1a0cfdbfb73d2657c186ea795ca6847c69d8e2e126abc58850ae3e65fabbc166 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) 2026-03-09T20:49:01.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:01 vm07.local podman[119776]: 2026-03-09 20:49:00.98665246 +0000 UTC m=+0.016164908 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:49:01.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:01 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:01.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:01 vm07.local bash[119776]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:01.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:01 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:01.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:01 vm07.local bash[119776]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:01.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:01 vm10.local ceph-mon[103526]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T20:49:01.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:01 vm10.local ceph-mon[103526]: osdmap e48: 6 total, 5 up, 6 in 2026-03-09T20:49:01.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:01 vm10.local ceph-mon[103526]: osdmap e49: 6 total, 5 up, 6 in 2026-03-09T20:49:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-mon[112105]: pgmap v22: 65 pgs: 9 stale+active+clean, 56 active+clean; 289 MiB data, 3.4 GiB used, 117 GiB / 120 GiB avail; 884 KiB/s rd, 844 KiB/s wr, 166 op/s 2026-03-09T20:49:02.385 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T20:49:02.385 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: Running command: /usr/bin/ceph-authtool --gen-print-key 
2026-03-09T20:49:02.385 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[119776]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T20:49:02.385 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[119776]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:02.385 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:02.385 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[119776]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:02.385 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T20:49:02.385 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[119776]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T20:49:02.385 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-4a82f94d-c3b3-4330-bab2-8a74a05b545a/osd-block-4ceba074-cc1e-460f-b8f1-b7d80b498d37 --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-09T20:49:02.385 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[119776]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-4a82f94d-c3b3-4330-bab2-8a74a05b545a/osd-block-4ceba074-cc1e-460f-b8f1-b7d80b498d37 --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-09T20:49:02.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:02 vm10.local ceph-mon[103526]: pgmap v22: 65 pgs: 9 stale+active+clean, 56 active+clean; 289 MiB data, 
3.4 GiB used, 117 GiB / 120 GiB avail; 884 KiB/s rd, 844 KiB/s wr, 166 op/s 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: Running command: /usr/bin/ln -snf /dev/ceph-4a82f94d-c3b3-4330-bab2-8a74a05b545a/osd-block-4ceba074-cc1e-460f-b8f1-b7d80b498d37 /var/lib/ceph/osd/ceph-0/block 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[119776]: Running command: /usr/bin/ln -snf /dev/ceph-4a82f94d-c3b3-4330-bab2-8a74a05b545a/osd-block-4ceba074-cc1e-460f-b8f1-b7d80b498d37 /var/lib/ceph/osd/ceph-0/block 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[119776]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[119776]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[119776]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate[119787]: --> 
ceph-volume lvm activate successful for osd ID: 0 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[119776]: --> ceph-volume lvm activate successful for osd ID: 0 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local conmon[119787]: conmon 1a0cfdbfb73d2657c186 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a0cfdbfb73d2657c186ea795ca6847c69d8e2e126abc58850ae3e65fabbc166.scope/container/memory.events 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local podman[119776]: 2026-03-09 20:49:02.502753078 +0000 UTC m=+1.532265516 container died 1a0cfdbfb73d2657c186ea795ca6847c69d8e2e126abc58850ae3e65fabbc166 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0) 2026-03-09T20:49:02.772 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local podman[119776]: 2026-03-09 20:49:02.541143767 +0000 UTC m=+1.570656205 container remove 1a0cfdbfb73d2657c186ea795ca6847c69d8e2e126abc58850ae3e65fabbc166 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-activate, ceph=True, 
org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:49:03.077 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local podman[120050]: 2026-03-09 20:49:02.771662972 +0000 UTC m=+0.039544620 container create 1da9d2cdbdc33dbc96a5b0f9c60e8be480f5e4f62b5eb0aeb537c3483e1d2366 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid) 2026-03-09T20:49:03.077 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local podman[120050]: 2026-03-09 20:49:02.834029258 +0000 UTC m=+0.101910906 container init 1da9d2cdbdc33dbc96a5b0f9c60e8be480f5e4f62b5eb0aeb537c3483e1d2366 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T20:49:03.077 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local podman[120050]: 2026-03-09 20:49:02.838494757 +0000 UTC m=+0.106376405 container start 1da9d2cdbdc33dbc96a5b0f9c60e8be480f5e4f62b5eb0aeb537c3483e1d2366 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS) 2026-03-09T20:49:03.077 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local bash[120050]: 1da9d2cdbdc33dbc96a5b0f9c60e8be480f5e4f62b5eb0aeb537c3483e1d2366 
2026-03-09T20:49:03.077 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local podman[120050]: 2026-03-09 20:49:02.757356353 +0000 UTC m=+0.025238001 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:49:03.077 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:02 vm07.local systemd[1]: Started Ceph osd.0 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:49:04.017 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:03 vm07.local ceph-mon[112105]: pgmap v24: 65 pgs: 34 peering, 31 active+clean; 290 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 228 KiB/s rd, 998 KiB/s wr, 238 op/s 2026-03-09T20:49:04.017 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:03 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:04.017 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:03 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:04.017 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:03 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:04.017 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:03 vm07.local ceph-mon[112105]: Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T20:49:04.017 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:03 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[120060]: 2026-03-09T20:49:03.739+0000 7ffbd0828740 -1 Falling back to public interface 2026-03-09T20:49:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:03 vm10.local ceph-mon[103526]: pgmap v24: 65 pgs: 34 peering, 31 active+clean; 290 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 228 KiB/s rd, 998 
KiB/s wr, 238 op/s 2026-03-09T20:49:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:03 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:03 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:03 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:03 vm10.local ceph-mon[103526]: Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T20:49:05.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.724+0000 7f3238327640 1 -- 192.168.123.107:0/2343324699 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3230072710 msgr2=0x7f323010c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:05.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.724+0000 7f3238327640 1 --2- 192.168.123.107:0/2343324699 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3230072710 0x7f323010c590 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f32240099b0 tx=0x7f322402f240 comp rx=0 tx=0).stop 2026-03-09T20:49:05.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.725+0000 7f3238327640 1 -- 192.168.123.107:0/2343324699 shutdown_connections 2026-03-09T20:49:05.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.725+0000 7f3238327640 1 --2- 192.168.123.107:0/2343324699 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3230072710 0x7f323010c590 secure :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f32240099b0 tx=0x7f322402f240 comp rx=0 tx=0).stop 2026-03-09T20:49:05.727 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.725+0000 7f3238327640 1 --2- 192.168.123.107:0/2343324699 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3230071d40 0x7f3230072140 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:05.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.725+0000 7f3238327640 1 -- 192.168.123.107:0/2343324699 >> 192.168.123.107:0/2343324699 conn(0x7f323006d660 msgr2=0x7f323006faa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:05.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.726+0000 7f3238327640 1 -- 192.168.123.107:0/2343324699 shutdown_connections 2026-03-09T20:49:05.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.726+0000 7f3238327640 1 -- 192.168.123.107:0/2343324699 wait complete. 2026-03-09T20:49:05.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.727+0000 7f3238327640 1 Processor -- start 2026-03-09T20:49:05.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.727+0000 7f3238327640 1 -- start start 2026-03-09T20:49:05.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.727+0000 7f3238327640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3230071d40 0x7f32301a77b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:05.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.727+0000 7f3238327640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32301a7cf0 0x7f32301acd60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:05.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.727+0000 7f3238327640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f32301a8170 con 0x7f32301a7cf0 2026-03-09T20:49:05.729 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.727+0000 7f3238327640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f32301a82e0 con 0x7f3230071d40 2026-03-09T20:49:05.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.727+0000 7f323609c640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3230071d40 0x7f32301a77b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:05.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.727+0000 7f323609c640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3230071d40 0x7f32301a77b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:54428/0 (socket says 192.168.123.107:54428) 2026-03-09T20:49:05.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.727+0000 7f323609c640 1 -- 192.168.123.107:0/1373146218 learned_addr learned my addr 192.168.123.107:0/1373146218 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:05.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.728+0000 7f323609c640 1 -- 192.168.123.107:0/1373146218 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32301a7cf0 msgr2=0x7f32301acd60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:05.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.728+0000 7f323609c640 1 --2- 192.168.123.107:0/1373146218 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32301a7cf0 0x7f32301acd60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:05.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.728+0000 7f323609c640 1 -- 192.168.123.107:0/1373146218 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3224009660 con 0x7f3230071d40 2026-03-09T20:49:05.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.728+0000 7f323609c640 1 --2- 192.168.123.107:0/1373146218 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3230071d40 0x7f32301a77b0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f322000d8d0 tx=0x7f322000dda0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:05.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.729+0000 7f321f7fe640 1 -- 192.168.123.107:0/1373146218 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3220004490 con 0x7f3230071d40 2026-03-09T20:49:05.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.729+0000 7f3238327640 1 -- 192.168.123.107:0/1373146218 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f32301ad300 con 0x7f3230071d40 2026-03-09T20:49:05.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.729+0000 7f3238327640 1 -- 192.168.123.107:0/1373146218 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f32301ad800 con 0x7f3230071d40 2026-03-09T20:49:05.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.729+0000 7f321f7fe640 1 -- 192.168.123.107:0/1373146218 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f322000bd00 con 0x7f3230071d40 2026-03-09T20:49:05.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.729+0000 7f321f7fe640 1 -- 192.168.123.107:0/1373146218 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3220010460 con 0x7f3230071d40 2026-03-09T20:49:05.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.730+0000 7f3238327640 1 -- 
192.168.123.107:0/1373146218 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f31f8005350 con 0x7f3230071d40 2026-03-09T20:49:05.735 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.733+0000 7f321f7fe640 1 -- 192.168.123.107:0/1373146218 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f32200027e0 con 0x7f3230071d40 2026-03-09T20:49:05.737 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.734+0000 7f321f7fe640 1 --2- 192.168.123.107:0/1373146218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f32040779b0 0x7f3204079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:05.737 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.734+0000 7f321f7fe640 1 -- 192.168.123.107:0/1373146218 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(49..49 src has 1..49) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f322009a2c0 con 0x7f3230071d40 2026-03-09T20:49:05.737 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.736+0000 7f323589b640 1 --2- 192.168.123.107:0/1373146218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f32040779b0 0x7f3204079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:05.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.737+0000 7f323589b640 1 --2- 192.168.123.107:0/1373146218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f32040779b0 0x7f3204079e70 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f32301a8cd0 tx=0x7f32240047c0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:05.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.738+0000 7f321f7fe640 1 
-- 192.168.123.107:0/1373146218 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f32200627c0 con 0x7f3230071d40 2026-03-09T20:49:05.911 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.909+0000 7f3238327640 1 -- 192.168.123.107:0/1373146218 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f31f8002bf0 con 0x7f32040779b0 2026-03-09T20:49:05.916 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.912+0000 7f321f7fe640 1 -- 192.168.123.107:0/1373146218 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f31f8002bf0 con 0x7f32040779b0 2026-03-09T20:49:05.917 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.916+0000 7f321d7fa640 1 -- 192.168.123.107:0/1373146218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f32040779b0 msgr2=0x7f3204079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:05.917 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.916+0000 7f321d7fa640 1 --2- 192.168.123.107:0/1373146218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f32040779b0 0x7f3204079e70 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f32301a8cd0 tx=0x7f32240047c0 comp rx=0 tx=0).stop 2026-03-09T20:49:05.917 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.916+0000 7f321d7fa640 1 -- 192.168.123.107:0/1373146218 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3230071d40 msgr2=0x7f32301a77b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:05.917 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.916+0000 7f321d7fa640 1 --2- 192.168.123.107:0/1373146218 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3230071d40 0x7f32301a77b0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f322000d8d0 tx=0x7f322000dda0 comp rx=0 tx=0).stop 2026-03-09T20:49:05.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.916+0000 7f321d7fa640 1 -- 192.168.123.107:0/1373146218 shutdown_connections 2026-03-09T20:49:05.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.916+0000 7f321d7fa640 1 --2- 192.168.123.107:0/1373146218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f32040779b0 0x7f3204079e70 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:05.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.916+0000 7f321d7fa640 1 --2- 192.168.123.107:0/1373146218 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f32301a7cf0 0x7f32301acd60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:05.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.916+0000 7f321d7fa640 1 --2- 192.168.123.107:0/1373146218 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3230071d40 0x7f32301a77b0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:05.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.916+0000 7f321d7fa640 1 -- 192.168.123.107:0/1373146218 >> 192.168.123.107:0/1373146218 conn(0x7f323006d660 msgr2=0x7f3230070ad0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:05.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.918+0000 7f321d7fa640 1 -- 192.168.123.107:0/1373146218 shutdown_connections 2026-03-09T20:49:05.920 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:05.919+0000 7f321d7fa640 1 -- 192.168.123.107:0/1373146218 wait complete. 
2026-03-09T20:49:05.936 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:49:06.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.043+0000 7f4ff1869640 1 -- 192.168.123.107:0/231996624 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fec072440 msgr2=0x7f4fec0771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.043+0000 7f4ff1869640 1 --2- 192.168.123.107:0/231996624 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fec072440 0x7f4fec0771b0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f4fe4009040 tx=0x7f4fe402fc10 comp rx=0 tx=0).stop 2026-03-09T20:49:06.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.043+0000 7f4ff1869640 1 -- 192.168.123.107:0/231996624 shutdown_connections 2026-03-09T20:49:06.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.043+0000 7f4ff1869640 1 --2- 192.168.123.107:0/231996624 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fec072440 0x7f4fec0771b0 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.043+0000 7f4ff1869640 1 --2- 192.168.123.107:0/231996624 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fec071a70 0x7f4fec071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.043+0000 7f4ff1869640 1 -- 192.168.123.107:0/231996624 >> 192.168.123.107:0/231996624 conn(0x7f4fec06d4f0 msgr2=0x7f4fec06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:06.045 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.043+0000 7f4ff1869640 1 -- 192.168.123.107:0/231996624 shutdown_connections 2026-03-09T20:49:06.046 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:05 
vm07.local ceph-mon[112105]: pgmap v25: 65 pgs: 34 peering, 31 active+clean; 290 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 584 KiB/s wr, 118 op/s 2026-03-09T20:49:06.046 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:05 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:06.046 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:05 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.043+0000 7f4ff1869640 1 -- 192.168.123.107:0/231996624 wait complete. 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.044+0000 7f4ff1869640 1 Processor -- start 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.044+0000 7f4ff1869640 1 -- start start 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.044+0000 7f4ff1869640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fec071a70 0x7f4fec0840d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.044+0000 7f4ff1869640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fec082720 0x7f4fec082ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.044+0000 7f4ff1869640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4fec084610 con 0x7f4fec071a70 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.044+0000 7f4ff1869640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4fec0830e0 con 0x7f4fec082720 
2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.047+0000 7f4fea7fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fec082720 0x7f4fec082ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.047+0000 7f4fea7fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fec082720 0x7f4fec082ba0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:54450/0 (socket says 192.168.123.107:54450) 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.047+0000 7f4fea7fc640 1 -- 192.168.123.107:0/2249818481 learned_addr learned my addr 192.168.123.107:0/2249818481 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.047+0000 7f4fea7fc640 1 -- 192.168.123.107:0/2249818481 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fec071a70 msgr2=0x7f4fec0840d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.047+0000 7f4fea7fc640 1 --2- 192.168.123.107:0/2249818481 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fec071a70 0x7f4fec0840d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.047+0000 7f4fea7fc640 1 -- 192.168.123.107:0/2249818481 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4fe4008cf0 con 0x7f4fec082720 2026-03-09T20:49:06.049 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.047+0000 
7f4fea7fc640 1 --2- 192.168.123.107:0/2249818481 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fec082720 0x7f4fec082ba0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f4fe4002fc0 tx=0x7f4fe4031440 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:06.049 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.048+0000 7f4ff0867640 1 -- 192.168.123.107:0/2249818481 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4fe4041070 con 0x7f4fec082720 2026-03-09T20:49:06.051 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.048+0000 7f4ff1869640 1 -- 192.168.123.107:0/2249818481 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4fec083330 con 0x7f4fec082720 2026-03-09T20:49:06.051 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.048+0000 7f4ff1869640 1 -- 192.168.123.107:0/2249818481 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4fec12ef70 con 0x7f4fec082720 2026-03-09T20:49:06.051 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.050+0000 7f4ff0867640 1 -- 192.168.123.107:0/2249818481 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f4fe40316f0 con 0x7f4fec082720 2026-03-09T20:49:06.051 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.050+0000 7f4ff0867640 1 -- 192.168.123.107:0/2249818481 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4fe403b7c0 con 0x7f4fec082720 2026-03-09T20:49:06.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.051+0000 7f4ff1869640 1 -- 192.168.123.107:0/2249818481 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4fc0005350 con 0x7f4fec082720 2026-03-09T20:49:06.055 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.054+0000 7f4ff0867640 1 -- 192.168.123.107:0/2249818481 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f4fe404d050 con 0x7f4fec082720 2026-03-09T20:49:06.056 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.055+0000 7f4ff0867640 1 --2- 192.168.123.107:0/2249818481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fd8077a00 0x7f4fd8079ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:06.056 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.055+0000 7f4ff0867640 1 -- 192.168.123.107:0/2249818481 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(49..49 src has 1..49) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f4fe40bf410 con 0x7f4fec082720 2026-03-09T20:49:06.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.059+0000 7f4ff0867640 1 -- 192.168.123.107:0/2249818481 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f4fe4087890 con 0x7f4fec082720 2026-03-09T20:49:06.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.067+0000 7f4feaffd640 1 --2- 192.168.123.107:0/2249818481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fd8077a00 0x7f4fd8079ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:06.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.081+0000 7f4feaffd640 1 --2- 192.168.123.107:0/2249818481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fd8077a00 0x7f4fd8079ec0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f4fdc00b440 tx=0x7f4fdc00d040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:49:06.247 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.245+0000 7f4ff1869640 1 -- 192.168.123.107:0/2249818481 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f4fc0002bf0 con 0x7f4fd8077a00 2026-03-09T20:49:06.248 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.247+0000 7f4ff0867640 1 -- 192.168.123.107:0/2249818481 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f4fc0002bf0 con 0x7f4fd8077a00 2026-03-09T20:49:06.252 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.251+0000 7f4ff1869640 1 -- 192.168.123.107:0/2249818481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fd8077a00 msgr2=0x7f4fd8079ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.253 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.251+0000 7f4ff1869640 1 --2- 192.168.123.107:0/2249818481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fd8077a00 0x7f4fd8079ec0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f4fdc00b440 tx=0x7f4fdc00d040 comp rx=0 tx=0).stop 2026-03-09T20:49:06.253 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.251+0000 7f4ff1869640 1 -- 192.168.123.107:0/2249818481 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fec082720 msgr2=0x7f4fec082ba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.253 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.252+0000 7f4ff1869640 1 --2- 192.168.123.107:0/2249818481 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fec082720 0x7f4fec082ba0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f4fe4002fc0 tx=0x7f4fe4031440 comp rx=0 tx=0).stop 2026-03-09T20:49:06.253 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.252+0000 7f4ff1869640 1 -- 192.168.123.107:0/2249818481 shutdown_connections 2026-03-09T20:49:06.254 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.252+0000 7f4ff1869640 1 --2- 192.168.123.107:0/2249818481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4fd8077a00 0x7f4fd8079ec0 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.254 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.252+0000 7f4ff1869640 1 --2- 192.168.123.107:0/2249818481 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4fec082720 0x7f4fec082ba0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.254 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.253+0000 7f4ff1869640 1 --2- 192.168.123.107:0/2249818481 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4fec071a70 0x7f4fec0840d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.255 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.253+0000 7f4ff1869640 1 -- 192.168.123.107:0/2249818481 >> 192.168.123.107:0/2249818481 conn(0x7f4fec06d4f0 msgr2=0x7f4fec073150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:06.255 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.254+0000 7f4ff1869640 1 -- 192.168.123.107:0/2249818481 shutdown_connections 2026-03-09T20:49:06.255 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.254+0000 7f4ff1869640 1 -- 192.168.123.107:0/2249818481 wait complete. 
2026-03-09T20:49:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:05 vm10.local ceph-mon[103526]: pgmap v25: 65 pgs: 34 peering, 31 active+clean; 290 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 584 KiB/s wr, 118 op/s 2026-03-09T20:49:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:05 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:05 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:06.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.367+0000 7f5497541640 1 -- 192.168.123.107:0/2723234890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5490071c20 msgr2=0x7f5490072020 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.367+0000 7f5497541640 1 --2- 192.168.123.107:0/2723234890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5490071c20 0x7f5490072020 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f548c007920 tx=0x7f548c031130 comp rx=0 tx=0).stop 2026-03-09T20:49:06.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.367+0000 7f5497541640 1 -- 192.168.123.107:0/2723234890 shutdown_connections 2026-03-09T20:49:06.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.367+0000 7f5497541640 1 --2- 192.168.123.107:0/2723234890 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f54900725f0 0x7f5490077360 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.367+0000 7f5497541640 1 --2- 192.168.123.107:0/2723234890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5490071c20 0x7f5490072020 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.367+0000 7f5497541640 1 -- 192.168.123.107:0/2723234890 >> 192.168.123.107:0/2723234890 conn(0x7f549006d660 msgr2=0x7f549006faa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.368+0000 7f5497541640 1 -- 192.168.123.107:0/2723234890 shutdown_connections 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.368+0000 7f5497541640 1 -- 192.168.123.107:0/2723234890 wait complete. 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.368+0000 7f5497541640 1 Processor -- start 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.368+0000 7f5497541640 1 -- start start 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.368+0000 7f5497541640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f54900725f0 0x7f54900828f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.368+0000 7f5497541640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f54900842a0 0x7f5490082e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.368+0000 7f5497541640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5490083370 con 0x7f54900725f0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.368+0000 7f5497541640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f54900834e0 con 0x7f54900842a0 2026-03-09T20:49:06.377 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.369+0000 7f5494ab5640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f54900842a0 0x7f5490082e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.369+0000 7f5494ab5640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f54900842a0 0x7f5490082e30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:54472/0 (socket says 192.168.123.107:54472) 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.369+0000 7f5494ab5640 1 -- 192.168.123.107:0/2013424705 learned_addr learned my addr 192.168.123.107:0/2013424705 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.369+0000 7f5494ab5640 1 -- 192.168.123.107:0/2013424705 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f54900725f0 msgr2=0x7f54900828f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.369+0000 7f5494ab5640 1 --2- 192.168.123.107:0/2013424705 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f54900725f0 0x7f54900828f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.369+0000 7f5494ab5640 1 -- 192.168.123.107:0/2013424705 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5488009c40 con 0x7f54900842a0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.369+0000 7f5494ab5640 1 --2- 
192.168.123.107:0/2013424705 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f54900842a0 0x7f5490082e30 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f5488007fc0 tx=0x7f548800e5e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.369+0000 7f54867fc640 1 -- 192.168.123.107:0/2013424705 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5488002c70 con 0x7f54900842a0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.369+0000 7f5497541640 1 -- 192.168.123.107:0/2013424705 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f548c0075d0 con 0x7f54900842a0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.369+0000 7f5497541640 1 -- 192.168.123.107:0/2013424705 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f549012efc0 con 0x7f54900842a0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.370+0000 7f54867fc640 1 -- 192.168.123.107:0/2013424705 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f548800f040 con 0x7f54900842a0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.370+0000 7f54867fc640 1 -- 192.168.123.107:0/2013424705 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5488014910 con 0x7f54900842a0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.372+0000 7f54867fc640 1 -- 192.168.123.107:0/2013424705 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f5488004690 con 0x7f54900842a0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.372+0000 
7f54867fc640 1 --2- 192.168.123.107:0/2013424705 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5474077a30 0x7f5474079ef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.372+0000 7f54867fc640 1 -- 192.168.123.107:0/2013424705 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(49..49 src has 1..49) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f5488099a10 con 0x7f54900842a0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.373+0000 7f5497541640 1 -- 192.168.123.107:0/2013424705 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5460005350 con 0x7f54900842a0 2026-03-09T20:49:06.377 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.374+0000 7f54952b6640 1 --2- 192.168.123.107:0/2013424705 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5474077a30 0x7f5474079ef0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:06.382 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.378+0000 7f54867fc640 1 -- 192.168.123.107:0/2013424705 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f5488061fd0 con 0x7f54900842a0 2026-03-09T20:49:06.382 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.378+0000 7f54952b6640 1 --2- 192.168.123.107:0/2013424705 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5474077a30 0x7f5474079ef0 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f548c004870 tx=0x7f548c0047e0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:06.575 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.571+0000 7f5497541640 1 -- 192.168.123.107:0/2013424705 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f5460002bf0 con 0x7f5474077a30 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (5m) 0s ago 6m 43.0M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (6m) 0s ago 6m 9345k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (5m) 13s ago 5m 9.90M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (17s) 0s ago 6m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 406c9c54f34a 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (15s) 13s ago 5m 7856k - 19.2.3-678-ge911bdeb 654f31e6858e 30eaebf5d733 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (5m) 0s ago 5m 160M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (4m) 0s ago 4m 30.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (4m) 0s ago 4m 238M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (4m) 13s ago 4m 151M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:49:06.596 
INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (4m) 13s ago 4m 27.0M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:49:06.596 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (84s) 0s ago 6m 608M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (58s) 13s ago 5m 489M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (49s) 0s ago 6m 56.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (34s) 13s ago 5m 45.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4428cf7f0607 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (6m) 0s ago 6m 16.0M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (5m) 13s ago 5m 15.4M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (3s) 0s ago 5m 29.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1da9d2cdbdc3 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (5m) 0s ago 5m 389M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 15564e5032c9 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (4m) 0s ago 4m 314M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (4m) 13s ago 4m 452M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (4m) 13s ago 4m 409M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:49:06.597 
INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (4m) 13s ago 4m 343M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (61s) 0s ago 5m 46.9M - 2.43.0 a07b618ecd1d 3f9c07cd3fe3 2026-03-09T20:49:06.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.594+0000 7f54867fc640 1 -- 192.168.123.107:0/2013424705 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f5460002bf0 con 0x7f5474077a30 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 7f545bfff640 1 -- 192.168.123.107:0/2013424705 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5474077a30 msgr2=0x7f5474079ef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 7f545bfff640 1 --2- 192.168.123.107:0/2013424705 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5474077a30 0x7f5474079ef0 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f548c004870 tx=0x7f548c0047e0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 7f545bfff640 1 -- 192.168.123.107:0/2013424705 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f54900842a0 msgr2=0x7f5490082e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 7f545bfff640 1 --2- 192.168.123.107:0/2013424705 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f54900842a0 0x7f5490082e30 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f5488007fc0 tx=0x7f548800e5e0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 
7f545bfff640 1 -- 192.168.123.107:0/2013424705 shutdown_connections 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 7f545bfff640 1 --2- 192.168.123.107:0/2013424705 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5474077a30 0x7f5474079ef0 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 7f545bfff640 1 --2- 192.168.123.107:0/2013424705 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f54900842a0 0x7f5490082e30 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 7f545bfff640 1 --2- 192.168.123.107:0/2013424705 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f54900725f0 0x7f54900828f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 7f545bfff640 1 -- 192.168.123.107:0/2013424705 >> 192.168.123.107:0/2013424705 conn(0x7f549006d660 msgr2=0x7f549007b5c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 7f545bfff640 1 -- 192.168.123.107:0/2013424705 shutdown_connections 2026-03-09T20:49:06.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.597+0000 7f545bfff640 1 -- 192.168.123.107:0/2013424705 wait complete. 
2026-03-09T20:49:06.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.686+0000 7f67b5df3640 1 -- 192.168.123.107:0/16857453 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67b0072390 msgr2=0x7f67b010c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.686+0000 7f67b5df3640 1 --2- 192.168.123.107:0/16857453 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67b0072390 0x7f67b010c590 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f67a800b0a0 tx=0x7f67a802f4c0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.688 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.687+0000 7f67b5df3640 1 -- 192.168.123.107:0/16857453 shutdown_connections 2026-03-09T20:49:06.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.687+0000 7f67b5df3640 1 --2- 192.168.123.107:0/16857453 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67b0072390 0x7f67b010c590 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.687+0000 7f67b5df3640 1 --2- 192.168.123.107:0/16857453 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67b00719c0 0x7f67b0071dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.687+0000 7f67b5df3640 1 -- 192.168.123.107:0/16857453 >> 192.168.123.107:0/16857453 conn(0x7f67b006d4f0 msgr2=0x7f67b006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:06.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.687+0000 7f67b5df3640 1 -- 192.168.123.107:0/16857453 shutdown_connections 2026-03-09T20:49:06.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.687+0000 7f67b5df3640 1 -- 192.168.123.107:0/16857453 wait complete. 
2026-03-09T20:49:06.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.687+0000 7f67b5df3640 1 Processor -- start 2026-03-09T20:49:06.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67b5df3640 1 -- start start 2026-03-09T20:49:06.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67b5df3640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67b00719c0 0x7f67b0115980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:06.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67b5df3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67b0072390 0x7f67b0115ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:06.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67b5df3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f67b01173c0 con 0x7f67b0072390 2026-03-09T20:49:06.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67b5df3640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f67b0117530 con 0x7f67b00719c0 2026-03-09T20:49:06.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67af7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67b00719c0 0x7f67b0115980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:06.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67af7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67b00719c0 0x7f67b0115980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:54484/0 (socket says 192.168.123.107:54484) 2026-03-09T20:49:06.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67af7fe640 1 -- 192.168.123.107:0/3065374048 learned_addr learned my addr 192.168.123.107:0/3065374048 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:06.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67aeffd640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67b0072390 0x7f67b0115ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:06.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67aeffd640 1 -- 192.168.123.107:0/3065374048 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67b00719c0 msgr2=0x7f67b0115980 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67aeffd640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67b00719c0 0x7f67b0115980 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.689+0000 7f67aeffd640 1 -- 192.168.123.107:0/3065374048 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f67a8009d00 con 0x7f67b0072390 2026-03-09T20:49:06.691 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.690+0000 7f67aeffd640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67b0072390 0x7f67b0115ec0 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f67a800b0a0 tx=0x7f67a8009800 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:49:06.694 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.691+0000 7f67acff9640 1 -- 192.168.123.107:0/3065374048 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f67a8004910 con 0x7f67b0072390 2026-03-09T20:49:06.694 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.691+0000 7f67acff9640 1 -- 192.168.123.107:0/3065374048 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f67a8007880 con 0x7f67b0072390 2026-03-09T20:49:06.694 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.691+0000 7f67acff9640 1 -- 192.168.123.107:0/3065374048 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f67a8040ad0 con 0x7f67b0072390 2026-03-09T20:49:06.694 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.691+0000 7f67b5df3640 1 -- 192.168.123.107:0/3065374048 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f67b01164c0 con 0x7f67b0072390 2026-03-09T20:49:06.694 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.691+0000 7f67b5df3640 1 -- 192.168.123.107:0/3065374048 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f67b01b58d0 con 0x7f67b0072390 2026-03-09T20:49:06.694 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.693+0000 7f67b5df3640 1 -- 192.168.123.107:0/3065374048 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f67b01183e0 con 0x7f67b0072390 2026-03-09T20:49:06.697 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.694+0000 7f67acff9640 1 -- 192.168.123.107:0/3065374048 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f67a80079f0 con 0x7f67b0072390 2026-03-09T20:49:06.697 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.694+0000 
7f67acff9640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6780077ad0 0x7f6780079f90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:06.697 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.694+0000 7f67acff9640 1 -- 192.168.123.107:0/3065374048 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(49..49 src has 1..49) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f67a80beda0 con 0x7f67b0072390 2026-03-09T20:49:06.697 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.696+0000 7f67af7fe640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6780077ad0 0x7f6780079f90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:06.698 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.697+0000 7f67af7fe640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6780077ad0 0x7f6780079f90 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f67a0005fd0 tx=0x7f67a00074e0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:06.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.701+0000 7f67acff9640 1 -- 192.168.123.107:0/3065374048 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f67a8087420 con 0x7f67b0072390 2026-03-09T20:49:06.884 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.883+0000 7f67b5df3640 1 -- 192.168.123.107:0/3065374048 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f67b01b5a60 con 0x7f67b0072390 2026-03-09T20:49:06.976 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.975+0000 7f67acff9640 1 -- 192.168.123.107:0/3065374048 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+738 (secure 0 0 0) 0x7f67a8086b70 con 0x7f67b0072390 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 5 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 9, 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 
2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:49:06.979 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:49:06.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.978+0000 7f67b5df3640 1 -- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6780077ad0 msgr2=0x7f6780079f90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.978+0000 7f67b5df3640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6780077ad0 0x7f6780079f90 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f67a0005fd0 tx=0x7f67a00074e0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.978+0000 7f67b5df3640 1 -- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67b0072390 msgr2=0x7f67b0115ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:06.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.978+0000 7f67b5df3640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67b0072390 0x7f67b0115ec0 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f67a800b0a0 tx=0x7f67a8009800 comp rx=0 tx=0).stop 2026-03-09T20:49:06.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.979+0000 7f67b5df3640 1 -- 192.168.123.107:0/3065374048 shutdown_connections 2026-03-09T20:49:06.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.979+0000 7f67b5df3640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6780077ad0 0x7f6780079f90 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.980 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.979+0000 7f67b5df3640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67b0072390 0x7f67b0115ec0 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.979+0000 7f67b5df3640 1 --2- 192.168.123.107:0/3065374048 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67b00719c0 0x7f67b0115980 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:06.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.979+0000 7f67b5df3640 1 -- 192.168.123.107:0/3065374048 >> 192.168.123.107:0/3065374048 conn(0x7f67b006d4f0 msgr2=0x7f67b010a760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:06.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.979+0000 7f67b5df3640 1 -- 192.168.123.107:0/3065374048 shutdown_connections 2026-03-09T20:49:06.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:06.979+0000 7f67b5df3640 1 -- 192.168.123.107:0/3065374048 wait complete. 
2026-03-09T20:49:07.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.106+0000 7f6d912fd640 1 -- 192.168.123.107:0/716484649 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d8c072370 msgr2=0x7f6d8c10c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:07.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.106+0000 7f6d912fd640 1 --2- 192.168.123.107:0/716484649 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d8c072370 0x7f6d8c10c590 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f6d7c00b0a0 tx=0x7f6d7c02f4c0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.106+0000 7f6d912fd640 1 -- 192.168.123.107:0/716484649 shutdown_connections 2026-03-09T20:49:07.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.106+0000 7f6d912fd640 1 --2- 192.168.123.107:0/716484649 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d8c072370 0x7f6d8c10c590 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.106+0000 7f6d912fd640 1 --2- 192.168.123.107:0/716484649 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d8c0719a0 0x7f6d8c071da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.106+0000 7f6d912fd640 1 -- 192.168.123.107:0/716484649 >> 192.168.123.107:0/716484649 conn(0x7f6d8c06d4f0 msgr2=0x7f6d8c06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:07.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.107+0000 7f6d912fd640 1 -- 192.168.123.107:0/716484649 shutdown_connections 2026-03-09T20:49:07.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.107+0000 7f6d912fd640 1 -- 192.168.123.107:0/716484649 wait 
complete. 2026-03-09T20:49:07.108 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.107+0000 7f6d912fd640 1 Processor -- start 2026-03-09T20:49:07.111 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.109+0000 7f6d912fd640 1 -- start start 2026-03-09T20:49:07.111 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.109+0000 7f6d912fd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d8c0719a0 0x7f6d8c115920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:07.111 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.109+0000 7f6d912fd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d8c1172d0 0x7f6d8c115e60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:07.111 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.109+0000 7f6d912fd640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6d8c116430 con 0x7f6d8c0719a0 2026-03-09T20:49:07.111 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.109+0000 7f6d912fd640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6d8c1165a0 con 0x7f6d8c1172d0 2026-03-09T20:49:07.111 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.110+0000 7f6d8bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d8c0719a0 0x7f6d8c115920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:07.111 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.110+0000 7f6d8bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d8c0719a0 0x7f6d8c115920 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I 
am v2:192.168.123.107:47504/0 (socket says 192.168.123.107:47504) 2026-03-09T20:49:07.111 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.110+0000 7f6d8bfff640 1 -- 192.168.123.107:0/1645408500 learned_addr learned my addr 192.168.123.107:0/1645408500 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:07.111 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.110+0000 7f6d8b7fe640 1 --2- 192.168.123.107:0/1645408500 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d8c1172d0 0x7f6d8c115e60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:07.112 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.110+0000 7f6d8b7fe640 1 -- 192.168.123.107:0/1645408500 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d8c0719a0 msgr2=0x7f6d8c115920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:07.112 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.110+0000 7f6d8b7fe640 1 --2- 192.168.123.107:0/1645408500 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d8c0719a0 0x7f6d8c115920 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.112 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.110+0000 7f6d8b7fe640 1 -- 192.168.123.107:0/1645408500 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6d7c009d00 con 0x7f6d8c1172d0 2026-03-09T20:49:07.112 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.111+0000 7f6d8b7fe640 1 --2- 192.168.123.107:0/1645408500 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d8c1172d0 0x7f6d8c115e60 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f6d7c009fd0 tx=0x7f6d7c009300 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:49:07.112 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.111+0000 7f6d897fa640 1 -- 192.168.123.107:0/1645408500 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6d7c0048f0 con 0x7f6d8c1172d0 2026-03-09T20:49:07.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.111+0000 7f6d912fd640 1 -- 192.168.123.107:0/1645408500 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6d8c1b5700 con 0x7f6d8c1172d0 2026-03-09T20:49:07.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.111+0000 7f6d912fd640 1 -- 192.168.123.107:0/1645408500 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6d8c1b5b10 con 0x7f6d8c1172d0 2026-03-09T20:49:07.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.111+0000 7f6d897fa640 1 -- 192.168.123.107:0/1645408500 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f6d7c0093f0 con 0x7f6d8c1172d0 2026-03-09T20:49:07.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.111+0000 7f6d897fa640 1 -- 192.168.123.107:0/1645408500 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6d7c040430 con 0x7f6d8c1172d0 2026-03-09T20:49:07.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.112+0000 7f6d912fd640 1 -- 192.168.123.107:0/1645408500 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6d8c072370 con 0x7f6d8c1172d0 2026-03-09T20:49:07.114 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.113+0000 7f6d897fa640 1 -- 192.168.123.107:0/1645408500 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f6d7c0077e0 con 0x7f6d8c1172d0 2026-03-09T20:49:07.114 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.113+0000 
7f6d897fa640 1 --2- 192.168.123.107:0/1645408500 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d60077a00 0x7f6d60079ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:07.115 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.114+0000 7f6d897fa640 1 -- 192.168.123.107:0/1645408500 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(49..49 src has 1..49) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f6d7c0be5c0 con 0x7f6d8c1172d0 2026-03-09T20:49:07.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.115+0000 7f6d8bfff640 1 --2- 192.168.123.107:0/1645408500 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d60077a00 0x7f6d60079ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:07.118 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.117+0000 7f6d8bfff640 1 --2- 192.168.123.107:0/1645408500 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d60077a00 0x7f6d60079ec0 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f6d800059c0 tx=0x7f6d80009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:07.121 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.117+0000 7f6d897fa640 1 -- 192.168.123.107:0/1645408500 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f6d7c086ad0 con 0x7f6d8c1172d0 2026-03-09T20:49:07.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:06 vm10.local ceph-mon[103526]: from='client.44101 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:07.337 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:06 vm07.local 
ceph-mon[112105]: from='client.44101 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:07.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.334+0000 7f6d912fd640 1 -- 192.168.123.107:0/1645408500 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f6d8c118640 con 0x7f6d8c1172d0 2026-03-09T20:49:07.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.336+0000 7f6d897fa640 1 -- 192.168.123.107:0/1645408500 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1937 (secure 0 0 0) 0x7f6d7c086220 con 0x7f6d8c1172d0 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 
2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:49:07.345 
INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 0 members: 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:49:07.345 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:49:07.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.343+0000 7f6d6affd640 1 -- 192.168.123.107:0/1645408500 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d60077a00 msgr2=0x7f6d60079ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:07.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.343+0000 7f6d6affd640 1 --2- 192.168.123.107:0/1645408500 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d60077a00 0x7f6d60079ec0 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f6d800059c0 tx=0x7f6d80009290 comp rx=0 tx=0).stop 2026-03-09T20:49:07.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.343+0000 7f6d6affd640 1 -- 192.168.123.107:0/1645408500 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d8c1172d0 msgr2=0x7f6d8c115e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:07.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.343+0000 7f6d6affd640 1 --2- 192.168.123.107:0/1645408500 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d8c1172d0 0x7f6d8c115e60 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f6d7c009fd0 tx=0x7f6d7c009300 comp rx=0 tx=0).stop 2026-03-09T20:49:07.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.345+0000 7f6d6affd640 1 -- 192.168.123.107:0/1645408500 shutdown_connections 2026-03-09T20:49:07.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.345+0000 7f6d6affd640 1 --2- 192.168.123.107:0/1645408500 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d60077a00 0x7f6d60079ec0 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.345+0000 7f6d6affd640 1 --2- 192.168.123.107:0/1645408500 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d8c1172d0 0x7f6d8c115e60 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.345+0000 7f6d6affd640 1 --2- 192.168.123.107:0/1645408500 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d8c0719a0 0x7f6d8c115920 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:49:07.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.345+0000 7f6d6affd640 1 -- 192.168.123.107:0/1645408500 >> 192.168.123.107:0/1645408500 conn(0x7f6d8c06d4f0 msgr2=0x7f6d8c070380 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:07.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.347+0000 7f6d6affd640 1 -- 192.168.123.107:0/1645408500 shutdown_connections 2026-03-09T20:49:07.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.348+0000 7f6d6affd640 1 -- 192.168.123.107:0/1645408500 wait complete. 2026-03-09T20:49:07.450 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.447+0000 7fe91fcc8640 1 -- 192.168.123.107:0/1414317578 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe918072440 msgr2=0x7fe9180771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:07.450 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.447+0000 7fe91fcc8640 1 --2- 192.168.123.107:0/1414317578 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe918072440 0x7fe9180771b0 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7fe910009040 tx=0x7fe91002fc10 comp rx=0 tx=0).stop 2026-03-09T20:49:07.450 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.447+0000 7fe91fcc8640 1 -- 192.168.123.107:0/1414317578 shutdown_connections 2026-03-09T20:49:07.450 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.447+0000 7fe91fcc8640 1 --2- 192.168.123.107:0/1414317578 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe918072440 0x7fe9180771b0 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.447+0000 7fe91fcc8640 1 --2- 192.168.123.107:0/1414317578 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe918071a70 0x7fe918071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.447+0000 7fe91fcc8640 1 -- 192.168.123.107:0/1414317578 >> 192.168.123.107:0/1414317578 conn(0x7fe91806d4f0 msgr2=0x7fe91806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.447+0000 7fe91fcc8640 1 -- 192.168.123.107:0/1414317578 shutdown_connections 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.447+0000 7fe91fcc8640 1 -- 192.168.123.107:0/1414317578 wait complete. 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.448+0000 7fe91fcc8640 1 Processor -- start 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.448+0000 7fe91fcc8640 1 -- start start 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.448+0000 7fe91fcc8640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe918071a70 0x7fe9180840d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.448+0000 7fe91fcc8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe918082720 0x7fe918082ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.448+0000 7fe91fcc8640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe918084610 con 0x7fe918082720 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.448+0000 7fe91fcc8640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe9180830e0 con 0x7fe918071a70 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.449+0000 7fe91da3d640 
1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe918071a70 0x7fe9180840d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.449+0000 7fe91da3d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe918071a70 0x7fe9180840d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:54530/0 (socket says 192.168.123.107:54530) 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.449+0000 7fe91da3d640 1 -- 192.168.123.107:0/3505368765 learned_addr learned my addr 192.168.123.107:0/3505368765 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.449+0000 7fe91da3d640 1 -- 192.168.123.107:0/3505368765 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe918082720 msgr2=0x7fe918082ba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.449+0000 7fe91da3d640 1 --2- 192.168.123.107:0/3505368765 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe918082720 0x7fe918082ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.449+0000 7fe91da3d640 1 -- 192.168.123.107:0/3505368765 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe910008cf0 con 0x7fe918071a70 2026-03-09T20:49:07.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.449+0000 7fe91da3d640 1 --2- 192.168.123.107:0/3505368765 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 
conn(0x7fe918071a70 0x7fe9180840d0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fe914009870 tx=0x7fe914009d40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:07.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.451+0000 7fe90effd640 1 -- 192.168.123.107:0/3505368765 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe914010040 con 0x7fe918071a70 2026-03-09T20:49:07.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.451+0000 7fe90effd640 1 -- 192.168.123.107:0/3505368765 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fe91400ecf0 con 0x7fe918071a70 2026-03-09T20:49:07.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.451+0000 7fe90effd640 1 -- 192.168.123.107:0/3505368765 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe914002cf0 con 0x7fe918071a70 2026-03-09T20:49:07.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.451+0000 7fe91fcc8640 1 -- 192.168.123.107:0/3505368765 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe918083390 con 0x7fe918071a70 2026-03-09T20:49:07.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.451+0000 7fe91fcc8640 1 -- 192.168.123.107:0/3505368765 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe91812ef70 con 0x7fe918071a70 2026-03-09T20:49:07.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.452+0000 7fe90cff9640 1 -- 192.168.123.107:0/3505368765 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe8e8005350 con 0x7fe918071a70 2026-03-09T20:49:07.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.453+0000 7fe90effd640 1 -- 192.168.123.107:0/3505368765 <== mon.1 
v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fe91401c020 con 0x7fe918071a70 2026-03-09T20:49:07.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.454+0000 7fe90effd640 1 --2- 192.168.123.107:0/3505368765 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe8ec0779b0 0x7fe8ec079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:07.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.454+0000 7fe90effd640 1 -- 192.168.123.107:0/3505368765 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(49..49 src has 1..49) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fe914024080 con 0x7fe918071a70 2026-03-09T20:49:07.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.455+0000 7fe91d23c640 1 --2- 192.168.123.107:0/3505368765 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe8ec0779b0 0x7fe8ec079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:07.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.456+0000 7fe90effd640 1 -- 192.168.123.107:0/3505368765 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe9140618b0 con 0x7fe918071a70 2026-03-09T20:49:07.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.470+0000 7fe91d23c640 1 --2- 192.168.123.107:0/3505368765 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe8ec0779b0 0x7fe8ec079e70 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7fe918083e50 tx=0x7fe910007480 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:07.709 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.708+0000 7fe90cff9640 1 -- 
192.168.123.107:0/3505368765 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fe8e8002bf0 con 0x7fe8ec0779b0 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: "mon", 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: "crash", 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: "mgr" 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "7/23 daemons upgraded", 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:49:07.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.714+0000 7fe90effd640 1 -- 192.168.123.107:0/3505368765 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7fe8e8002bf0 con 0x7fe8ec0779b0 2026-03-09T20:49:07.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.719+0000 7fe91fcc8640 1 -- 192.168.123.107:0/3505368765 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe8ec0779b0 msgr2=0x7fe8ec079e70 secure 
:-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:07.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.719+0000 7fe91fcc8640 1 --2- 192.168.123.107:0/3505368765 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe8ec0779b0 0x7fe8ec079e70 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7fe918083e50 tx=0x7fe910007480 comp rx=0 tx=0).stop 2026-03-09T20:49:07.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.719+0000 7fe91fcc8640 1 -- 192.168.123.107:0/3505368765 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe918071a70 msgr2=0x7fe9180840d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:07.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.719+0000 7fe91fcc8640 1 --2- 192.168.123.107:0/3505368765 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe918071a70 0x7fe9180840d0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fe914009870 tx=0x7fe914009d40 comp rx=0 tx=0).stop 2026-03-09T20:49:07.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.720+0000 7fe91fcc8640 1 -- 192.168.123.107:0/3505368765 shutdown_connections 2026-03-09T20:49:07.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.720+0000 7fe91fcc8640 1 --2- 192.168.123.107:0/3505368765 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe8ec0779b0 0x7fe8ec079e70 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.720+0000 7fe91fcc8640 1 --2- 192.168.123.107:0/3505368765 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe918082720 0x7fe918082ba0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.720+0000 7fe91fcc8640 1 --2- 
192.168.123.107:0/3505368765 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe918071a70 0x7fe9180840d0 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.720+0000 7fe91fcc8640 1 -- 192.168.123.107:0/3505368765 >> 192.168.123.107:0/3505368765 conn(0x7fe91806d4f0 msgr2=0x7fe918073150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:07.730 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.724+0000 7fe91fcc8640 1 -- 192.168.123.107:0/3505368765 shutdown_connections 2026-03-09T20:49:07.730 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.724+0000 7fe91fcc8640 1 -- 192.168.123.107:0/3505368765 wait complete. 2026-03-09T20:49:07.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.810+0000 7f37fffff640 1 -- 192.168.123.107:0/638769252 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3800071a50 msgr2=0x7f3800071e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:07.818 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.810+0000 7f37fffff640 1 --2- 192.168.123.107:0/638769252 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3800071a50 0x7f3800071e50 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f37f000bb70 tx=0x7f37f0030ff0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 -- 192.168.123.107:0/638769252 shutdown_connections 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 --2- 192.168.123.107:0/638769252 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3800072420 0x7f3800077190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 
7f37fffff640 1 --2- 192.168.123.107:0/638769252 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3800071a50 0x7f3800071e50 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 -- 192.168.123.107:0/638769252 >> 192.168.123.107:0/638769252 conn(0x7f380006d4f0 msgr2=0x7f380006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 -- 192.168.123.107:0/638769252 shutdown_connections 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 -- 192.168.123.107:0/638769252 wait complete. 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 Processor -- start 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 -- start start 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3800072420 0x7f3800084060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f38000826b0 0x7f3800082b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f38000845a0 con 0x7f38000826b0 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.811+0000 7f37fffff640 1 -- --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3800083070 con 0x7f3800072420 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.812+0000 7f37feffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3800072420 0x7f3800084060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.812+0000 7f37feffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3800072420 0x7f3800084060 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:45450/0 (socket says 192.168.123.107:45450) 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.812+0000 7f37feffd640 1 -- 192.168.123.107:0/2001687634 learned_addr learned my addr 192.168.123.107:0/2001687634 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.812+0000 7f37feffd640 1 -- 192.168.123.107:0/2001687634 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f38000826b0 msgr2=0x7f3800082b30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.812+0000 7f37feffd640 1 --2- 192.168.123.107:0/2001687634 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f38000826b0 0x7f3800082b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.812+0000 7f37feffd640 1 -- 192.168.123.107:0/2001687634 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 
0x7f37f000b820 con 0x7f3800072420 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.813+0000 7f37feffd640 1 --2- 192.168.123.107:0/2001687634 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3800072420 0x7f3800084060 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f37f0004840 tx=0x7f37f0004690 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.815+0000 7f37dffff640 1 -- 192.168.123.107:0/2001687634 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f37f00317d0 con 0x7f3800072420 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.815+0000 7f37fffff640 1 -- 192.168.123.107:0/2001687634 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f38000832f0 con 0x7f3800072420 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.815+0000 7f37fffff640 1 -- 192.168.123.107:0/2001687634 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f380012ef70 con 0x7f3800072420 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.816+0000 7f37dffff640 1 -- 192.168.123.107:0/2001687634 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f37f0031df0 con 0x7f3800072420 2026-03-09T20:49:07.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.816+0000 7f37dffff640 1 -- 192.168.123.107:0/2001687634 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f37f003acb0 con 0x7f3800072420 2026-03-09T20:49:07.820 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.819+0000 7f37dffff640 1 -- 192.168.123.107:0/2001687634 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 
(secure 0 0 0) 0x7f37f003a640 con 0x7f3800072420 2026-03-09T20:49:07.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.820+0000 7f37dffff640 1 --2- 192.168.123.107:0/2001687634 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f37d0077ad0 0x7f37d0079f90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:07.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.820+0000 7f37dffff640 1 -- 192.168.123.107:0/2001687634 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(49..49 src has 1..49) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f37f00be4f0 con 0x7f3800072420 2026-03-09T20:49:07.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.819+0000 7f37fffff640 1 -- 192.168.123.107:0/2001687634 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f37cc005350 con 0x7f3800072420 2026-03-09T20:49:07.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.822+0000 7f37fe7fc640 1 --2- 192.168.123.107:0/2001687634 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f37d0077ad0 0x7f37d0079f90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:07.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.823+0000 7f37fe7fc640 1 --2- 192.168.123.107:0/2001687634 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f37d0077ad0 0x7f37d0079f90 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f37f80062a0 tx=0x7f37f8006210 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:07.830 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:07.827+0000 7f37dffff640 1 -- 192.168.123.107:0/2001687634 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f37f0086b00 con 0x7f3800072420 2026-03-09T20:49:08.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:08 vm07.local ceph-mon[112105]: from='client.44105 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:08.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:08 vm07.local ceph-mon[112105]: pgmap v26: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 293 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 472 KiB/s rd, 1.1 MiB/s wr, 271 op/s; 8417/56880 objects degraded (14.798%) 2026-03-09T20:49:08.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:08 vm07.local ceph-mon[112105]: from='client.44109 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:08.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:08 vm07.local ceph-mon[112105]: Health check failed: Degraded data redundancy: 8417/56880 objects degraded (14.798%), 34 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:08.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:08 vm07.local ceph-mon[112105]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T20:49:08.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:08 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3065374048' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:08.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:08.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:08.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:08 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/1645408500' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:49:08.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.183+0000 7f37fffff640 1 -- 192.168.123.107:0/2001687634 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f37cc0051c0 con 0x7f3800072420 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.187+0000 7f37dffff640 1 -- 192.168.123.107:0/2001687634 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+2164 (secure 0 0 0) 0x7f37f0086250 con 0x7f3800072420 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_WARN 1 osds down; Degraded data redundancy: 8417/56880 objects degraded (14.798%), 34 pgs degraded 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout:[WRN] OSD_DOWN: 1 osds down 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: osd.0 (root=default,host=vm07) is down 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 8417/56880 objects degraded (14.798%), 34 pgs degraded 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 1.0 is active+undersized+degraded, acting [3,1] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.0 is active+undersized+degraded, acting [3,1] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.1 is active+undersized+degraded, acting [2,1] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.2 is active+undersized+degraded, acting [5,1] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.4 is active+undersized+degraded, acting [1,4] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.5 is active+undersized+degraded, acting [3,4] 2026-03-09T20:49:08.188 
INFO:teuthology.orchestra.run.vm07.stdout: pg 2.8 is active+undersized+degraded, acting [3,5] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.9 is active+undersized+degraded, acting [1,4] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.c is active+undersized+degraded, acting [2,3] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.e is active+undersized+degraded, acting [2,3] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.f is active+undersized+degraded, acting [4,5] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.10 is active+undersized+degraded, acting [2,1] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.12 is active+undersized+degraded, acting [3,1] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.13 is active+undersized+degraded, acting [4,2] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.15 is active+undersized+degraded, acting [1,3] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.19 is active+undersized+degraded, acting [4,2] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.1b is active+undersized+degraded, acting [1,5] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.1d is active+undersized+degraded, acting [3,5] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.1e is active+undersized+degraded, acting [2,5] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.1f is active+undersized+degraded, acting [3,4] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1 is active+undersized+degraded, acting [4,2] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.3 is active+undersized+degraded, acting [4,3] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.6 is active+undersized+degraded, acting [1,4] 
2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.b is active+undersized+degraded, acting [1,4] 2026-03-09T20:49:08.188 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.c is active+undersized+degraded, acting [5,3] 2026-03-09T20:49:08.189 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.f is active+undersized+degraded, acting [5,3] 2026-03-09T20:49:08.189 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.10 is active+undersized+degraded, acting [5,1] 2026-03-09T20:49:08.189 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.11 is active+undersized+degraded, acting [3,4] 2026-03-09T20:49:08.189 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.12 is active+undersized+degraded, acting [3,1] 2026-03-09T20:49:08.189 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.15 is active+undersized+degraded, acting [3,4] 2026-03-09T20:49:08.189 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.17 is active+undersized+degraded, acting [5,2] 2026-03-09T20:49:08.189 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.18 is active+undersized+degraded, acting [2,1] 2026-03-09T20:49:08.189 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1b is active+undersized+degraded, acting [4,3] 2026-03-09T20:49:08.189 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1f is active+undersized+degraded, acting [3,2] 2026-03-09T20:49:08.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.190+0000 7f37ddffb640 1 -- 192.168.123.107:0/2001687634 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f37d0077ad0 msgr2=0x7f37d0079f90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:08.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.190+0000 7f37ddffb640 1 --2- 192.168.123.107:0/2001687634 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f37d0077ad0 0x7f37d0079f90 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f37f80062a0 tx=0x7f37f8006210 comp rx=0 tx=0).stop 
2026-03-09T20:49:08.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.190+0000 7f37ddffb640 1 -- 192.168.123.107:0/2001687634 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3800072420 msgr2=0x7f3800084060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:08.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.190+0000 7f37ddffb640 1 --2- 192.168.123.107:0/2001687634 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3800072420 0x7f3800084060 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f37f0004840 tx=0x7f37f0004690 comp rx=0 tx=0).stop 2026-03-09T20:49:08.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.190+0000 7f37ddffb640 1 -- 192.168.123.107:0/2001687634 shutdown_connections 2026-03-09T20:49:08.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.190+0000 7f37ddffb640 1 --2- 192.168.123.107:0/2001687634 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f37d0077ad0 0x7f37d0079f90 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:08.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.190+0000 7f37ddffb640 1 --2- 192.168.123.107:0/2001687634 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f38000826b0 0x7f3800082b30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:08.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.190+0000 7f37ddffb640 1 --2- 192.168.123.107:0/2001687634 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3800072420 0x7f3800084060 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:08.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.190+0000 7f37ddffb640 1 -- 192.168.123.107:0/2001687634 >> 192.168.123.107:0/2001687634 conn(0x7f380006d4f0 msgr2=0x7f380007b340 unknown :-1 
s=STATE_NONE l=0).mark_down 2026-03-09T20:49:08.192 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.191+0000 7f37ddffb640 1 -- 192.168.123.107:0/2001687634 shutdown_connections 2026-03-09T20:49:08.192 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:08.191+0000 7f37ddffb640 1 -- 192.168.123.107:0/2001687634 wait complete. 2026-03-09T20:49:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:08 vm10.local ceph-mon[103526]: from='client.44105 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:08 vm10.local ceph-mon[103526]: pgmap v26: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 293 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 472 KiB/s rd, 1.1 MiB/s wr, 271 op/s; 8417/56880 objects degraded (14.798%) 2026-03-09T20:49:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:08 vm10.local ceph-mon[103526]: from='client.44109 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:08 vm10.local ceph-mon[103526]: Health check failed: Degraded data redundancy: 8417/56880 objects degraded (14.798%), 34 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:08 vm10.local ceph-mon[103526]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T20:49:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:08 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/3065374048' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:08 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/1645408500' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:49:09.029 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:09 vm10.local ceph-mon[103526]: from='client.44121 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:09.029 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:09 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/2001687634' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:49:09.029 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:09.029 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:09.029 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:49:09.029 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:49:09.029 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:09.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:09 vm07.local ceph-mon[112105]: from='client.44121 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:09.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:09 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/2001687634' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:49:09.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:09.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:09.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:49:09.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:49:09.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:10.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:10 vm10.local ceph-mon[103526]: pgmap v27: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 293 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 449 KiB/s rd, 1.0 MiB/s wr, 258 op/s; 8417/56880 objects degraded (14.798%) 2026-03-09T20:49:10.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:10.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:10.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:10 vm10.local 
ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:10.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:10.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:10.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:10 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:10.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:10 vm10.local ceph-mon[103526]: Upgrade: unsafe to stop osd(s) at this time (15 PGs are or would become offline) 2026-03-09T20:49:10.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:10.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:10.335 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:10 vm07.local ceph-mon[112105]: pgmap v27: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 293 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 449 KiB/s rd, 1.0 MiB/s wr, 258 op/s; 8417/56880 objects degraded (14.798%) 2026-03-09T20:49:10.335 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:10.335 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:10.335 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:10.335 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:10.335 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:10.335 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:10 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:10.335 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:10 vm07.local ceph-mon[112105]: Upgrade: unsafe to stop osd(s) at this time (15 PGs are or would become offline) 2026-03-09T20:49:10.335 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:10.335 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:12.134 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:11 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[120060]: 2026-03-09T20:49:11.673+0000 7ffbd0828740 -1 osd.0 47 log_to_monitors true 2026-03-09T20:49:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:12 vm10.local ceph-mon[103526]: pgmap v28: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 293 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 378 KiB/s rd, 879 KiB/s wr, 217 op/s; 8417/56880 objects degraded (14.798%) 2026-03-09T20:49:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:12 vm10.local ceph-mon[103526]: from='osd.0 [v2:192.168.123.107:6802/3962739801,v1:192.168.123.107:6803/3962739801]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:49:12.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:12 vm07.local ceph-mon[112105]: pgmap v28: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 293 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 378 KiB/s rd, 879 KiB/s wr, 217 op/s; 8417/56880 objects degraded (14.798%) 2026-03-09T20:49:12.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:12 vm07.local ceph-mon[112105]: from='osd.0 
[v2:192.168.123.107:6802/3962739801,v1:192.168.123.107:6803/3962739801]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:49:12.634 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 20:49:12 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[120060]: 2026-03-09T20:49:12.212+0000 7ffbc7dc1640 -1 osd.0 47 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:49:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:13 vm10.local ceph-mon[103526]: from='osd.0 [v2:192.168.123.107:6802/3962739801,v1:192.168.123.107:6803/3962739801]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:49:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:13 vm10.local ceph-mon[103526]: osdmap e50: 6 total, 5 up, 6 in 2026-03-09T20:49:13.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:13 vm10.local ceph-mon[103526]: from='osd.0 [v2:192.168.123.107:6802/3962739801,v1:192.168.123.107:6803/3962739801]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T20:49:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:13 vm07.local ceph-mon[112105]: from='osd.0 [v2:192.168.123.107:6802/3962739801,v1:192.168.123.107:6803/3962739801]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:49:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:13 vm07.local ceph-mon[112105]: osdmap e50: 6 total, 5 up, 6 in 2026-03-09T20:49:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:13 vm07.local ceph-mon[112105]: from='osd.0 [v2:192.168.123.107:6802/3962739801,v1:192.168.123.107:6803/3962739801]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", 
"root=default"]}]: dispatch 2026-03-09T20:49:14.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:14 vm10.local ceph-mon[103526]: pgmap v30: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 294 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 862 KiB/s rd, 721 KiB/s wr, 206 op/s; 8140/55002 objects degraded (14.799%) 2026-03-09T20:49:14.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:14 vm10.local ceph-mon[103526]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T20:49:14.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:14 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 8140/55002 objects degraded (14.799%), 34 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:14.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:14 vm10.local ceph-mon[103526]: osd.0 [v2:192.168.123.107:6802/3962739801,v1:192.168.123.107:6803/3962739801] boot 2026-03-09T20:49:14.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:14 vm10.local ceph-mon[103526]: osdmap e51: 6 total, 6 up, 6 in 2026-03-09T20:49:14.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:14 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:49:14.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:14 vm10.local ceph-mon[103526]: osdmap e52: 6 total, 6 up, 6 in 2026-03-09T20:49:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:14 vm07.local ceph-mon[112105]: pgmap v30: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 294 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 862 KiB/s rd, 721 KiB/s wr, 206 op/s; 8140/55002 objects degraded (14.799%) 2026-03-09T20:49:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:14 vm07.local ceph-mon[112105]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T20:49:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:14 vm07.local 
ceph-mon[112105]: Health check update: Degraded data redundancy: 8140/55002 objects degraded (14.799%), 34 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:14 vm07.local ceph-mon[112105]: osd.0 [v2:192.168.123.107:6802/3962739801,v1:192.168.123.107:6803/3962739801] boot 2026-03-09T20:49:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:14 vm07.local ceph-mon[112105]: osdmap e51: 6 total, 6 up, 6 in 2026-03-09T20:49:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:14 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:49:14.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:14 vm07.local ceph-mon[112105]: osdmap e52: 6 total, 6 up, 6 in 2026-03-09T20:49:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:15 vm10.local ceph-mon[103526]: pgmap v32: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 294 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 405 KiB/s rd, 387 KiB/s wr, 104 op/s; 8140/55002 objects degraded (14.799%) 2026-03-09T20:49:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:15 vm10.local ceph-mon[103526]: osdmap e53: 6 total, 6 up, 6 in 2026-03-09T20:49:15.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:15 vm07.local ceph-mon[112105]: pgmap v32: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 294 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 405 KiB/s rd, 387 KiB/s wr, 104 op/s; 8140/55002 objects degraded (14.799%) 2026-03-09T20:49:15.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:15 vm07.local ceph-mon[112105]: osdmap e53: 6 total, 6 up, 6 in 2026-03-09T20:49:18.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:17 vm10.local ceph-mon[103526]: pgmap v35: 65 pgs: 7 active+recovery_wait+degraded, 1 active+recovering+undersized+degraded+remapped, 9 active+undersized+degraded, 48 
active+clean; 297 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 278 op/s; 3971/51975 objects degraded (7.640%); 57 KiB/s, 17 objects/s recovering 2026-03-09T20:49:18.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:17 vm10.local ceph-mon[103526]: osdmap e54: 6 total, 6 up, 6 in 2026-03-09T20:49:18.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:17 vm07.local ceph-mon[112105]: pgmap v35: 65 pgs: 7 active+recovery_wait+degraded, 1 active+recovering+undersized+degraded+remapped, 9 active+undersized+degraded, 48 active+clean; 297 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 278 op/s; 3971/51975 objects degraded (7.640%); 57 KiB/s, 17 objects/s recovering 2026-03-09T20:49:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:17 vm07.local ceph-mon[112105]: osdmap e54: 6 total, 6 up, 6 in 2026-03-09T20:49:18.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:18 vm07.local ceph-mon[112105]: osdmap e55: 6 total, 6 up, 6 in 2026-03-09T20:49:19.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:18 vm10.local ceph-mon[103526]: osdmap e55: 6 total, 6 up, 6 in 2026-03-09T20:49:20.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:19 vm10.local ceph-mon[103526]: pgmap v38: 65 pgs: 9 active+recovery_wait+degraded, 1 active+recovering+undersized+degraded+remapped, 55 active+clean; 297 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 377 op/s; 644/51009 objects degraded (1.263%); 61 KiB/s, 89 objects/s recovering 2026-03-09T20:49:20.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:19 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 644/51009 objects degraded (1.263%), 10 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:20.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:19 vm07.local ceph-mon[112105]: pgmap v38: 65 pgs: 9 active+recovery_wait+degraded, 1 
active+recovering+undersized+degraded+remapped, 55 active+clean; 297 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 377 op/s; 644/51009 objects degraded (1.263%); 61 KiB/s, 89 objects/s recovering 2026-03-09T20:49:20.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:19 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 644/51009 objects degraded (1.263%), 10 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:22 vm07.local ceph-mon[112105]: pgmap v39: 65 pgs: 9 active+recovery_wait+degraded, 1 active+recovering+undersized+degraded+remapped, 55 active+clean; 297 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 837 KiB/s rd, 875 KiB/s wr, 253 op/s; 644/51009 objects degraded (1.263%); 41 KiB/s, 60 objects/s recovering 2026-03-09T20:49:22.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:22 vm10.local ceph-mon[103526]: pgmap v39: 65 pgs: 9 active+recovery_wait+degraded, 1 active+recovering+undersized+degraded+remapped, 55 active+clean; 297 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 837 KiB/s rd, 875 KiB/s wr, 253 op/s; 644/51009 objects degraded (1.263%); 41 KiB/s, 60 objects/s recovering 2026-03-09T20:49:24.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:24 vm07.local ceph-mon[112105]: pgmap v40: 65 pgs: 7 active+recovery_wait+degraded, 1 active+recovering, 57 active+clean; 301 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 442 op/s; 501/47253 objects degraded (1.060%); 374 KiB/s, 68 objects/s recovering 2026-03-09T20:49:24.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:24 vm10.local ceph-mon[103526]: pgmap v40: 65 pgs: 7 active+recovery_wait+degraded, 1 active+recovering, 57 active+clean; 301 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 442 op/s; 501/47253 objects degraded (1.060%); 374 KiB/s, 68 objects/s recovering 2026-03-09T20:49:25.787 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:25 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:25.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:25 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:25.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:25 vm10.local ceph-mon[103526]: Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline) 2026-03-09T20:49:25.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:25 vm10.local ceph-mon[103526]: pgmap v41: 65 pgs: 7 active+recovery_wait+degraded, 1 active+recovering, 57 active+clean; 301 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 878 KiB/s rd, 978 KiB/s wr, 236 op/s; 501/47253 objects degraded (1.060%); 295 KiB/s, 50 objects/s recovering 2026-03-09T20:49:25.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:25 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:25.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:25 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:25.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:25 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:25.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:25 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:25.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:25 vm07.local ceph-mon[112105]: Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline) 2026-03-09T20:49:25.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:25 vm07.local ceph-mon[112105]: pgmap v41: 65 pgs: 7 active+recovery_wait+degraded, 1 active+recovering, 57 active+clean; 301 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 878 KiB/s rd, 978 KiB/s wr, 236 op/s; 501/47253 objects degraded (1.060%); 295 KiB/s, 50 objects/s recovering 2026-03-09T20:49:25.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:25 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:25.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:25 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:26.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:26 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 299/43272 objects degraded (0.691%), 4 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:27.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:26 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 299/43272 objects degraded (0.691%), 4 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:28.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:27 vm10.local ceph-mon[103526]: pgmap v42: 65 pgs: 4 active+recovery_wait+degraded, 1 active+recovering, 60 active+clean; 298 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 350 op/s; 299/43272 objects degraded (0.691%); 239 KiB/s, 58 objects/s recovering 2026-03-09T20:49:28.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:27 vm07.local ceph-mon[112105]: pgmap v42: 65 pgs: 4 
active+recovery_wait+degraded, 1 active+recovering, 60 active+clean; 298 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 350 op/s; 299/43272 objects degraded (0.691%); 239 KiB/s, 58 objects/s recovering 2026-03-09T20:49:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:29 vm10.local ceph-mon[103526]: pgmap v43: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 296 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 346 op/s; 155/42606 objects degraded (0.364%); 219 KiB/s, 40 objects/s recovering 2026-03-09T20:49:30.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:29 vm07.local ceph-mon[112105]: pgmap v43: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 296 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 346 op/s; 155/42606 objects degraded (0.364%); 219 KiB/s, 40 objects/s recovering 2026-03-09T20:49:32.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:31 vm10.local ceph-mon[103526]: pgmap v44: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 296 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1021 KiB/s rd, 1.1 MiB/s wr, 311 op/s; 155/42606 objects degraded (0.364%); 197 KiB/s, 36 objects/s recovering 2026-03-09T20:49:32.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:31 vm07.local ceph-mon[112105]: pgmap v44: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 296 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1021 KiB/s rd, 1.1 MiB/s wr, 311 op/s; 155/42606 objects degraded (0.364%); 197 KiB/s, 36 objects/s recovering 2026-03-09T20:49:33.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:33 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 155/42606 objects degraded (0.364%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:33 
vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 155/42606 objects degraded (0.364%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:34.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:34 vm07.local ceph-mon[112105]: pgmap v45: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 293 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.7 MiB/s wr, 819 op/s; 155/39561 objects degraded (0.392%); 197 KiB/s, 39 objects/s recovering 2026-03-09T20:49:34.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:34 vm10.local ceph-mon[103526]: pgmap v45: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 293 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.7 MiB/s wr, 819 op/s; 155/39561 objects degraded (0.392%); 197 KiB/s, 39 objects/s recovering 2026-03-09T20:49:36.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:35 vm10.local ceph-mon[103526]: pgmap v46: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 293 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 903 KiB/s rd, 1.1 MiB/s wr, 688 op/s; 155/39561 objects degraded (0.392%); 0 B/s, 29 objects/s recovering 2026-03-09T20:49:36.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:35 vm07.local ceph-mon[112105]: pgmap v46: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 293 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 903 KiB/s rd, 1.1 MiB/s wr, 688 op/s; 155/39561 objects degraded (0.392%); 0 B/s, 29 objects/s recovering 2026-03-09T20:49:37.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:36 vm07.local ceph-mon[112105]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 155/39561 objects degraded (0.392%), 2 pgs degraded) 2026-03-09T20:49:37.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:36 vm07.local ceph-mon[112105]: Cluster is now healthy 2026-03-09T20:49:37.287 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:36 vm10.local ceph-mon[103526]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 155/39561 objects degraded (0.392%), 2 pgs degraded) 2026-03-09T20:49:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:36 vm10.local ceph-mon[103526]: Cluster is now healthy 2026-03-09T20:49:38.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:37 vm10.local ceph-mon[103526]: pgmap v47: 65 pgs: 65 active+clean; 285 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 1.09k op/s; 0 B/s, 43 objects/s recovering 2026-03-09T20:49:38.311 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:37 vm07.local ceph-mon[112105]: pgmap v47: 65 pgs: 65 active+clean; 285 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 1.09k op/s; 0 B/s, 43 objects/s recovering 2026-03-09T20:49:38.311 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.309+0000 7feab3316640 1 -- 192.168.123.107:0/2666830458 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7feaac072440 msgr2=0x7feaac0771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.311 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.309+0000 7feab3316640 1 --2- 192.168.123.107:0/2666830458 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7feaac072440 0x7feaac0771b0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7feaa4008030 tx=0x7feaa4030dc0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.310+0000 7feab3316640 1 -- 192.168.123.107:0/2666830458 shutdown_connections 2026-03-09T20:49:38.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.310+0000 7feab3316640 1 --2- 192.168.123.107:0/2666830458 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7feaac072440 0x7feaac0771b0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T20:49:38.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.310+0000 7feab3316640 1 --2- 192.168.123.107:0/2666830458 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feaac071a70 0x7feaac071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.310+0000 7feab3316640 1 -- 192.168.123.107:0/2666830458 >> 192.168.123.107:0/2666830458 conn(0x7feaac06d4f0 msgr2=0x7feaac06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:38.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.311+0000 7feab3316640 1 -- 192.168.123.107:0/2666830458 shutdown_connections 2026-03-09T20:49:38.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.311+0000 7feab3316640 1 -- 192.168.123.107:0/2666830458 wait complete. 2026-03-09T20:49:38.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.311+0000 7feab3316640 1 Processor -- start 2026-03-09T20:49:38.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.311+0000 7feab3316640 1 -- start start 2026-03-09T20:49:38.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.311+0000 7feab3316640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7feaac071a70 0x7feaac1319e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:38.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.311+0000 7feab3316640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feaac133390 0x7feaac131f20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:38.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.311+0000 7feab3316640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7feaac1324f0 con 0x7feaac133390 2026-03-09T20:49:38.313 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.311+0000 7feab3316640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7feaac132660 con 0x7feaac071a70 2026-03-09T20:49:38.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.312+0000 7feab108b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7feaac071a70 0x7feaac1319e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:38.314 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.312+0000 7feab108b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7feaac071a70 0x7feaac1319e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:34638/0 (socket says 192.168.123.107:34638) 2026-03-09T20:49:38.314 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.312+0000 7feab108b640 1 -- 192.168.123.107:0/4081431842 learned_addr learned my addr 192.168.123.107:0/4081431842 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:38.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.312+0000 7feab108b640 1 -- 192.168.123.107:0/4081431842 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feaac133390 msgr2=0x7feaac131f20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.312+0000 7feab108b640 1 --2- 192.168.123.107:0/4081431842 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feaac133390 0x7feaac131f20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.312+0000 7feab108b640 1 -- 192.168.123.107:0/4081431842 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7feaa4007ce0 con 0x7feaac071a70 2026-03-09T20:49:38.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.314+0000 7feab108b640 1 --2- 192.168.123.107:0/4081431842 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7feaac071a70 0x7feaac1319e0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7feaa800b700 tx=0x7feaa800bbd0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:38.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.314+0000 7feaa27fc640 1 -- 192.168.123.107:0/4081431842 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7feaa80042e0 con 0x7feaac071a70 2026-03-09T20:49:38.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.314+0000 7feab3316640 1 -- 192.168.123.107:0/4081431842 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7feaac07fb50 con 0x7feaac071a70 2026-03-09T20:49:38.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.314+0000 7feab3316640 1 -- 192.168.123.107:0/4081431842 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7feaac080120 con 0x7feaac071a70 2026-03-09T20:49:38.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.315+0000 7feaa27fc640 1 -- 192.168.123.107:0/4081431842 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7feaa8009450 con 0x7feaac071a70 2026-03-09T20:49:38.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.315+0000 7feaa27fc640 1 -- 192.168.123.107:0/4081431842 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7feaa800cae0 con 0x7feaac071a70 2026-03-09T20:49:38.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.316+0000 7feaa27fc640 1 -- 
192.168.123.107:0/4081431842 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7feaa801e3e0 con 0x7feaac071a70 2026-03-09T20:49:38.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.320+0000 7feaa27fc640 1 --2- 192.168.123.107:0/4081431842 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea90077a30 0x7fea90079ef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:38.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.320+0000 7feaa27fc640 1 -- 192.168.123.107:0/4081431842 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(55..55 src has 1..55) v4 ==== 6394+0+0 (secure 0 0 0) 0x7feaa809a310 con 0x7feaac071a70 2026-03-09T20:49:38.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.321+0000 7feab3316640 1 -- 192.168.123.107:0/4081431842 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fea7c005350 con 0x7feaac071a70 2026-03-09T20:49:38.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.321+0000 7feab088a640 1 --2- 192.168.123.107:0/4081431842 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea90077a30 0x7fea90079ef0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:38.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.322+0000 7feab088a640 1 --2- 192.168.123.107:0/4081431842 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea90077a30 0x7fea90079ef0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7feaa4007cb0 tx=0x7feaa4007c40 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:38.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.325+0000 7feaa27fc640 1 
-- 192.168.123.107:0/4081431842 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7feaa8062930 con 0x7feaac071a70 2026-03-09T20:49:38.471 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.470+0000 7feab3316640 1 -- 192.168.123.107:0/4081431842 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fea7c002bf0 con 0x7fea90077a30 2026-03-09T20:49:38.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.472+0000 7feaa27fc640 1 -- 192.168.123.107:0/4081431842 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7fea7c002bf0 con 0x7fea90077a30 2026-03-09T20:49:38.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.475+0000 7fea77fff640 1 -- 192.168.123.107:0/4081431842 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea90077a30 msgr2=0x7fea90079ef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.475+0000 7fea77fff640 1 --2- 192.168.123.107:0/4081431842 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea90077a30 0x7fea90079ef0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7feaa4007cb0 tx=0x7feaa4007c40 comp rx=0 tx=0).stop 2026-03-09T20:49:38.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.475+0000 7fea77fff640 1 -- 192.168.123.107:0/4081431842 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7feaac071a70 msgr2=0x7feaac1319e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.475+0000 7fea77fff640 1 --2- 192.168.123.107:0/4081431842 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7feaac071a70 0x7feaac1319e0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7feaa800b700 tx=0x7feaa800bbd0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.475+0000 7fea77fff640 1 -- 192.168.123.107:0/4081431842 shutdown_connections 2026-03-09T20:49:38.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.475+0000 7fea77fff640 1 --2- 192.168.123.107:0/4081431842 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea90077a30 0x7fea90079ef0 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.475+0000 7fea77fff640 1 --2- 192.168.123.107:0/4081431842 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feaac133390 0x7feaac131f20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.476+0000 7fea77fff640 1 --2- 192.168.123.107:0/4081431842 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7feaac071a70 0x7feaac1319e0 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.476+0000 7fea77fff640 1 -- 192.168.123.107:0/4081431842 >> 192.168.123.107:0/4081431842 conn(0x7feaac06d4f0 msgr2=0x7feaac070460 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:38.477 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.476+0000 7fea77fff640 1 -- 192.168.123.107:0/4081431842 shutdown_connections 2026-03-09T20:49:38.477 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.476+0000 7fea77fff640 1 -- 192.168.123.107:0/4081431842 wait complete. 
2026-03-09T20:49:38.486 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:49:38.564 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.563+0000 7fc2b7dfc640 1 -- 192.168.123.107:0/1051172987 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2b0072440 msgr2=0x7fc2b00771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.564 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.563+0000 7fc2b7dfc640 1 --2- 192.168.123.107:0/1051172987 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2b0072440 0x7fc2b00771b0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7fc2a8009040 tx=0x7fc2a802fc10 comp rx=0 tx=0).stop 2026-03-09T20:49:38.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.563+0000 7fc2b7dfc640 1 -- 192.168.123.107:0/1051172987 shutdown_connections 2026-03-09T20:49:38.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.563+0000 7fc2b7dfc640 1 --2- 192.168.123.107:0/1051172987 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2b0072440 0x7fc2b00771b0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.563+0000 7fc2b7dfc640 1 --2- 192.168.123.107:0/1051172987 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2b0071a70 0x7fc2b0071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.563+0000 7fc2b7dfc640 1 -- 192.168.123.107:0/1051172987 >> 192.168.123.107:0/1051172987 conn(0x7fc2b006d4f0 msgr2=0x7fc2b006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:38.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b7dfc640 1 -- 192.168.123.107:0/1051172987 shutdown_connections 2026-03-09T20:49:38.565 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b7dfc640 1 -- 192.168.123.107:0/1051172987 wait complete. 2026-03-09T20:49:38.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b7dfc640 1 Processor -- start 2026-03-09T20:49:38.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b7dfc640 1 -- start start 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b7dfc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2b0071a70 0x7fc2b0084090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b7dfc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2b00826e0 0x7fc2b0082b60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b7dfc640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc2b00845d0 con 0x7fc2b0071a70 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b7dfc640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc2b00830a0 con 0x7fc2b00826e0 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b5370640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2b00826e0 0x7fc2b0082b60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b5370640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2b00826e0 0x7fc2b0082b60 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:34664/0 (socket says 192.168.123.107:34664) 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.564+0000 7fc2b5370640 1 -- 192.168.123.107:0/3703126340 learned_addr learned my addr 192.168.123.107:0/3703126340 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.565+0000 7fc2b5370640 1 -- 192.168.123.107:0/3703126340 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2b0071a70 msgr2=0x7fc2b0084090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.565+0000 7fc2b5370640 1 --2- 192.168.123.107:0/3703126340 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2b0071a70 0x7fc2b0084090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.565+0000 7fc2b5370640 1 -- 192.168.123.107:0/3703126340 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc2a8008cf0 con 0x7fc2b00826e0 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.565+0000 7fc2b5370640 1 --2- 192.168.123.107:0/3703126340 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2b00826e0 0x7fc2b0082b60 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7fc2a8031670 tx=0x7fc2a80316a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:38.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.565+0000 7fc2a6ffd640 1 -- 192.168.123.107:0/3703126340 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc2a8041070 con 
0x7fc2b00826e0 2026-03-09T20:49:38.567 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.566+0000 7fc2b7dfc640 1 -- 192.168.123.107:0/3703126340 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc2b0083320 con 0x7fc2b00826e0 2026-03-09T20:49:38.569 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.566+0000 7fc2b7dfc640 1 -- 192.168.123.107:0/3703126340 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc2b012ef70 con 0x7fc2b00826e0 2026-03-09T20:49:38.570 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.569+0000 7fc2a6ffd640 1 -- 192.168.123.107:0/3703126340 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc2a8004720 con 0x7fc2b00826e0 2026-03-09T20:49:38.570 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.569+0000 7fc2a6ffd640 1 -- 192.168.123.107:0/3703126340 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc2a803bd40 con 0x7fc2b00826e0 2026-03-09T20:49:38.570 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.570+0000 7fc2a6ffd640 1 -- 192.168.123.107:0/3703126340 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fc2a804d050 con 0x7fc2b00826e0 2026-03-09T20:49:38.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.570+0000 7fc2a6ffd640 1 --2- 192.168.123.107:0/3703126340 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc27c077ba0 0x7fc27c07a060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:38.571 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.570+0000 7fc2a6ffd640 1 -- 192.168.123.107:0/3703126340 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(55..55 src has 1..55) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fc2a80c04e0 con 0x7fc2b00826e0 2026-03-09T20:49:38.571 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.571+0000 7fc2b7dfc640 1 -- 192.168.123.107:0/3703126340 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc2b0072440 con 0x7fc2b00826e0 2026-03-09T20:49:38.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.572+0000 7fc2b5b71640 1 --2- 192.168.123.107:0/3703126340 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc27c077ba0 0x7fc27c07a060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:38.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.573+0000 7fc2b5b71640 1 --2- 192.168.123.107:0/3703126340 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc27c077ba0 0x7fc27c07a060 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fc2ac00b440 tx=0x7fc2ac00d040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:38.582 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.577+0000 7fc2a6ffd640 1 -- 192.168.123.107:0/3703126340 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc2a80889e0 con 0x7fc2b00826e0 2026-03-09T20:49:38.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.723+0000 7fc2b7dfc640 1 -- 192.168.123.107:0/3703126340 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fc2b0075a60 con 0x7fc27c077ba0 2026-03-09T20:49:38.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.726+0000 7fc2a6ffd640 1 -- 192.168.123.107:0/3703126340 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 
(secure 0 0 0) 0x7fc2b0075a60 con 0x7fc27c077ba0 2026-03-09T20:49:38.730 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.729+0000 7fc2a4ff9640 1 -- 192.168.123.107:0/3703126340 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc27c077ba0 msgr2=0x7fc27c07a060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.730 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.729+0000 7fc2a4ff9640 1 --2- 192.168.123.107:0/3703126340 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc27c077ba0 0x7fc27c07a060 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fc2ac00b440 tx=0x7fc2ac00d040 comp rx=0 tx=0).stop 2026-03-09T20:49:38.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.730+0000 7fc2a4ff9640 1 -- 192.168.123.107:0/3703126340 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2b00826e0 msgr2=0x7fc2b0082b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.730+0000 7fc2a4ff9640 1 --2- 192.168.123.107:0/3703126340 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2b00826e0 0x7fc2b0082b60 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7fc2a8031670 tx=0x7fc2a80316a0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.730+0000 7fc2a4ff9640 1 -- 192.168.123.107:0/3703126340 shutdown_connections 2026-03-09T20:49:38.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.730+0000 7fc2a4ff9640 1 --2- 192.168.123.107:0/3703126340 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc27c077ba0 0x7fc27c07a060 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.730+0000 7fc2a4ff9640 1 --2- 
192.168.123.107:0/3703126340 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2b00826e0 0x7fc2b0082b60 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.732 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.730+0000 7fc2a4ff9640 1 --2- 192.168.123.107:0/3703126340 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2b0071a70 0x7fc2b0084090 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.732 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.730+0000 7fc2a4ff9640 1 -- 192.168.123.107:0/3703126340 >> 192.168.123.107:0/3703126340 conn(0x7fc2b006d4f0 msgr2=0x7fc2b0073150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:38.732 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.731+0000 7fc2a4ff9640 1 -- 192.168.123.107:0/3703126340 shutdown_connections 2026-03-09T20:49:38.732 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.731+0000 7fc2a4ff9640 1 -- 192.168.123.107:0/3703126340 wait complete. 
2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.817+0000 7f70c7577640 1 -- 192.168.123.107:0/3030063204 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f70c80719a0 msgr2=0x7f70c8071da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.817+0000 7f70c7577640 1 --2- 192.168.123.107:0/3030063204 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f70c80719a0 0x7f70c8071da0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f70bc01c4b0 tx=0x7f70bc040860 comp rx=0 tx=0).stop 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.817+0000 7f70c7577640 1 -- 192.168.123.107:0/3030063204 shutdown_connections 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.817+0000 7f70c7577640 1 --2- 192.168.123.107:0/3030063204 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f70c8072370 0x7f70c810c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.817+0000 7f70c7577640 1 --2- 192.168.123.107:0/3030063204 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f70c80719a0 0x7f70c8071da0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.817+0000 7f70c7577640 1 -- 192.168.123.107:0/3030063204 >> 192.168.123.107:0/3030063204 conn(0x7f70c806d4f0 msgr2=0x7f70c806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.818+0000 7f70c7577640 1 -- 192.168.123.107:0/3030063204 shutdown_connections 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.818+0000 7f70c7577640 1 -- 192.168.123.107:0/3030063204 
wait complete. 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.818+0000 7f70c7577640 1 Processor -- start 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.818+0000 7f70c7577640 1 -- start start 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.818+0000 7f70c7577640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f70c8072370 0x7f70c819e690 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.818+0000 7f70c7577640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f70c819ebd0 0x7f70c81a3c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.818+0000 7f70c7577640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f70c819f050 con 0x7f70c8072370 2026-03-09T20:49:38.819 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.819+0000 7f70c7577640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f70c819f1c0 con 0x7f70c819ebd0 2026-03-09T20:49:38.820 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.819+0000 7f70c5d74640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f70c819ebd0 0x7f70c81a3c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:38.820 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.819+0000 7f70c5d74640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f70c819ebd0 0x7f70c81a3c40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 
says I am v2:192.168.123.107:34682/0 (socket says 192.168.123.107:34682) 2026-03-09T20:49:38.820 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.819+0000 7f70c5d74640 1 -- 192.168.123.107:0/2615798627 learned_addr learned my addr 192.168.123.107:0/2615798627 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:38.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.820+0000 7f70c5d74640 1 -- 192.168.123.107:0/2615798627 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f70c8072370 msgr2=0x7f70c819e690 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.820+0000 7f70c5d74640 1 --2- 192.168.123.107:0/2615798627 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f70c8072370 0x7f70c819e690 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.821 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.820+0000 7f70c5d74640 1 -- 192.168.123.107:0/2615798627 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f70bc009d00 con 0x7f70c819ebd0 2026-03-09T20:49:38.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.821+0000 7f70c5d74640 1 --2- 192.168.123.107:0/2615798627 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f70c819ebd0 0x7f70c81a3c40 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f70b000d8d0 tx=0x7f70b000dda0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:38.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.822+0000 7f70b77fe640 1 -- 192.168.123.107:0/2615798627 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f70b0004490 con 0x7f70c819ebd0 2026-03-09T20:49:38.822 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.822+0000 
7f70b77fe640 1 -- 192.168.123.107:0/2615798627 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f70b0004d60 con 0x7f70c819ebd0 2026-03-09T20:49:38.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.822+0000 7f70b77fe640 1 -- 192.168.123.107:0/2615798627 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f70b0005230 con 0x7f70c819ebd0 2026-03-09T20:49:38.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.822+0000 7f70c7577640 1 -- 192.168.123.107:0/2615798627 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f70c81a41e0 con 0x7f70c819ebd0 2026-03-09T20:49:38.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.822+0000 7f70c7577640 1 -- 192.168.123.107:0/2615798627 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f70c81a46e0 con 0x7f70c819ebd0 2026-03-09T20:49:38.823 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.823+0000 7f70c7577640 1 -- 192.168.123.107:0/2615798627 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f708c005350 con 0x7f70c819ebd0 2026-03-09T20:49:38.825 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.824+0000 7f70b77fe640 1 -- 192.168.123.107:0/2615798627 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f70b000b9d0 con 0x7f70c819ebd0 2026-03-09T20:49:38.826 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.824+0000 7f70b77fe640 1 --2- 192.168.123.107:0/2615798627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f70a0077ad0 0x7f70a0079f90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:38.826 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.824+0000 7f70b77fe640 1 -- 
192.168.123.107:0/2615798627 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(55..55 src has 1..55) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f70b009a110 con 0x7f70c819ebd0 2026-03-09T20:49:38.826 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.825+0000 7f70c6575640 1 --2- 192.168.123.107:0/2615798627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f70a0077ad0 0x7f70a0079f90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:38.826 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.825+0000 7f70c6575640 1 --2- 192.168.123.107:0/2615798627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f70a0077ad0 0x7f70a0079f90 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f70bc01c4b0 tx=0x7f70bc0096e0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:38.828 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.827+0000 7f70b77fe640 1 -- 192.168.123.107:0/2615798627 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f70b0062610 con 0x7f70c819ebd0 2026-03-09T20:49:38.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.953+0000 7f70c7577640 1 -- 192.168.123.107:0/2615798627 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f708c002bf0 con 0x7f70a0077ad0 2026-03-09T20:49:38.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.959+0000 7f70b77fe640 1 -- 192.168.123.107:0/2615798627 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3600 (secure 0 0 0) 0x7f708c002bf0 con 0x7f70a0077ad0 2026-03-09T20:49:38.960 INFO:teuthology.orchestra.run.vm07.stdout:NAME 
HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:49:38.960 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (5m) 33s ago 6m 43.0M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:49:38.960 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (6m) 33s ago 6m 9345k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:49:38.960 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (6m) 45s ago 6m 9.90M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:49:38.960 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (50s) 33s ago 6m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 406c9c54f34a 2026-03-09T20:49:38.960 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (47s) 45s ago 6m 7856k - 19.2.3-678-ge911bdeb 654f31e6858e 30eaebf5d733 2026-03-09T20:49:38.960 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (5m) 33s ago 6m 160M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:49:38.960 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (4m) 33s ago 4m 30.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:49:38.960 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (4m) 33s ago 4m 238M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (4m) 45s ago 4m 151M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (4m) 45s ago 4m 27.0M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (116s) 33s ago 7m 608M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:49:38.961 
INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (90s) 45s ago 6m 489M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (81s) 33s ago 7m 56.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (66s) 45s ago 6m 45.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4428cf7f0607 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (6m) 33s ago 6m 16.0M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (6m) 45s ago 6m 15.4M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (36s) 33s ago 5m 29.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1da9d2cdbdc3 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (5m) 33s ago 5m 389M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 15564e5032c9 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (5m) 33s ago 5m 314M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (5m) 45s ago 5m 452M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (5m) 45s ago 5m 409M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (5m) 45s ago 5m 343M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:49:38.961 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (93s) 33s ago 6m 46.9M - 2.43.0 a07b618ecd1d 3f9c07cd3fe3 2026-03-09T20:49:38.964 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.963+0000 7f70b57fa640 1 -- 192.168.123.107:0/2615798627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f70a0077ad0 msgr2=0x7f70a0079f90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.964 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.963+0000 7f70b57fa640 1 --2- 192.168.123.107:0/2615798627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f70a0077ad0 0x7f70a0079f90 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f70bc01c4b0 tx=0x7f70bc0096e0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.964 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.963+0000 7f70b57fa640 1 -- 192.168.123.107:0/2615798627 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f70c819ebd0 msgr2=0x7f70c81a3c40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:38.965 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.963+0000 7f70b57fa640 1 --2- 192.168.123.107:0/2615798627 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f70c819ebd0 0x7f70c81a3c40 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f70b000d8d0 tx=0x7f70b000dda0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.965 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.963+0000 7f70b57fa640 1 -- 192.168.123.107:0/2615798627 shutdown_connections 2026-03-09T20:49:38.966 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.963+0000 7f70b57fa640 1 --2- 192.168.123.107:0/2615798627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f70a0077ad0 0x7f70a0079f90 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.963+0000 7f70b57fa640 1 --2- 192.168.123.107:0/2615798627 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 
conn(0x7f70c819ebd0 0x7f70c81a3c40 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.963+0000 7f70b57fa640 1 --2- 192.168.123.107:0/2615798627 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f70c8072370 0x7f70c819e690 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:38.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.963+0000 7f70b57fa640 1 -- 192.168.123.107:0/2615798627 >> 192.168.123.107:0/2615798627 conn(0x7f70c806d4f0 msgr2=0x7f70c8070370 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:38.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.966+0000 7f70b57fa640 1 -- 192.168.123.107:0/2615798627 shutdown_connections 2026-03-09T20:49:38.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:38.966+0000 7f70b57fa640 1 -- 192.168.123.107:0/2615798627 wait complete. 
2026-03-09T20:49:39.032 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.031+0000 7fa57e624640 1 -- 192.168.123.107:0/3554458507 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa578072440 msgr2=0x7fa5780771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:39.032 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.031+0000 7fa57e624640 1 --2- 192.168.123.107:0/3554458507 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa578072440 0x7fa5780771b0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7fa570008030 tx=0x7fa570030dc0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.032 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.031+0000 7fa57e624640 1 -- 192.168.123.107:0/3554458507 shutdown_connections 2026-03-09T20:49:39.032 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.031+0000 7fa57e624640 1 --2- 192.168.123.107:0/3554458507 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa578072440 0x7fa5780771b0 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.032 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.031+0000 7fa57e624640 1 --2- 192.168.123.107:0/3554458507 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa578071a70 0x7fa578071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.031+0000 7fa57e624640 1 -- 192.168.123.107:0/3554458507 >> 192.168.123.107:0/3554458507 conn(0x7fa57806d4f0 msgr2=0x7fa57806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.031+0000 7fa57e624640 1 -- 192.168.123.107:0/3554458507 shutdown_connections 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.031+0000 7fa57e624640 1 -- 192.168.123.107:0/3554458507 
wait complete. 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.032+0000 7fa57e624640 1 Processor -- start 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.032+0000 7fa57e624640 1 -- start start 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.032+0000 7fa57e624640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa578071a70 0x7fa5781319a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.032+0000 7fa57e624640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa578133350 0x7fa578131ee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.032+0000 7fa57e624640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa5781324b0 con 0x7fa578133350 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.032+0000 7fa57e624640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa578132620 con 0x7fa578071a70 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.032+0000 7fa577fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa578071a70 0x7fa5781319a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.032+0000 7fa577fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa578071a70 0x7fa5781319a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 
says I am v2:192.168.123.107:34704/0 (socket says 192.168.123.107:34704) 2026-03-09T20:49:39.033 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.032+0000 7fa577fff640 1 -- 192.168.123.107:0/3982634151 learned_addr learned my addr 192.168.123.107:0/3982634151 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:39.034 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.032+0000 7fa5777fe640 1 --2- 192.168.123.107:0/3982634151 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa578133350 0x7fa578131ee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:39.034 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.033+0000 7fa577fff640 1 -- 192.168.123.107:0/3982634151 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa578133350 msgr2=0x7fa578131ee0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:39.034 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.033+0000 7fa577fff640 1 --2- 192.168.123.107:0/3982634151 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa578133350 0x7fa578131ee0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.034 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.033+0000 7fa577fff640 1 -- 192.168.123.107:0/3982634151 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa570007ce0 con 0x7fa578071a70 2026-03-09T20:49:39.034 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.033+0000 7fa577fff640 1 --2- 192.168.123.107:0/3982634151 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa578071a70 0x7fa5781319a0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fa56800b700 tx=0x7fa56800bbd0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:49:39.034 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.033+0000 7fa5757fa640 1 -- 192.168.123.107:0/3982634151 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa5680042e0 con 0x7fa578071a70 2026-03-09T20:49:39.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.033+0000 7fa57e624640 1 -- 192.168.123.107:0/3982634151 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa57807fb50 con 0x7fa578071a70 2026-03-09T20:49:39.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.033+0000 7fa57e624640 1 -- 192.168.123.107:0/3982634151 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa5780800a0 con 0x7fa578071a70 2026-03-09T20:49:39.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.034+0000 7fa5757fa640 1 -- 192.168.123.107:0/3982634151 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fa568009450 con 0x7fa578071a70 2026-03-09T20:49:39.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.034+0000 7fa5757fa640 1 -- 192.168.123.107:0/3982634151 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa56800cae0 con 0x7fa578071a70 2026-03-09T20:49:39.036 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.035+0000 7fa5757fa640 1 -- 192.168.123.107:0/3982634151 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fa56801e3e0 con 0x7fa578071a70 2026-03-09T20:49:39.037 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.036+0000 7fa5757fa640 1 --2- 192.168.123.107:0/3982634151 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa55c077aa0 0x7fa55c079f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:39.037 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.036+0000 7fa5757fa640 1 -- 192.168.123.107:0/3982634151 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(55..55 src has 1..55) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fa56809a270 con 0x7fa578071a70 2026-03-09T20:49:39.037 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.036+0000 7fa5777fe640 1 --2- 192.168.123.107:0/3982634151 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa55c077aa0 0x7fa55c079f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:39.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.037+0000 7fa5777fe640 1 --2- 192.168.123.107:0/3982634151 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa55c077aa0 0x7fa55c079f60 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fa570007530 tx=0x7fa57003c040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:39.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.037+0000 7fa57e624640 1 -- 192.168.123.107:0/3982634151 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa540005350 con 0x7fa578071a70 2026-03-09T20:49:39.042 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.041+0000 7fa5757fa640 1 -- 192.168.123.107:0/3982634151 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa568062890 con 0x7fa578071a70 2026-03-09T20:49:39.227 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.226+0000 7fa57e624640 1 -- 192.168.123.107:0/3982634151 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fa5400058d0 con 0x7fa578071a70 
2026-03-09T20:49:39.228 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.227+0000 7fa5757fa640 1 -- 192.168.123.107:0/3982634151 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+844 (secure 0 0 0) 0x7fa568061fe0 con 0x7fa578071a70 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 5, 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) 
reef (stable)": 9, 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:49:39.229 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:49:39.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.231+0000 7fa556ffd640 1 -- 192.168.123.107:0/3982634151 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa55c077aa0 msgr2=0x7fa55c079f60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:39.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.231+0000 7fa556ffd640 1 --2- 192.168.123.107:0/3982634151 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa55c077aa0 0x7fa55c079f60 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fa570007530 tx=0x7fa57003c040 comp rx=0 tx=0).stop 2026-03-09T20:49:39.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.231+0000 7fa556ffd640 1 -- 192.168.123.107:0/3982634151 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa578071a70 msgr2=0x7fa5781319a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:39.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.231+0000 7fa556ffd640 1 --2- 192.168.123.107:0/3982634151 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa578071a70 0x7fa5781319a0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fa56800b700 tx=0x7fa56800bbd0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.231+0000 7fa556ffd640 1 -- 192.168.123.107:0/3982634151 shutdown_connections 2026-03-09T20:49:39.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.231+0000 7fa556ffd640 1 --2- 192.168.123.107:0/3982634151 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa55c077aa0 0x7fa55c079f60 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.231+0000 7fa556ffd640 1 --2- 192.168.123.107:0/3982634151 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa578133350 0x7fa578131ee0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.231+0000 7fa556ffd640 1 --2- 192.168.123.107:0/3982634151 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa578071a70 0x7fa5781319a0 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.233 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.231+0000 7fa556ffd640 1 -- 192.168.123.107:0/3982634151 >> 192.168.123.107:0/3982634151 conn(0x7fa57806d4f0 msgr2=0x7fa578070440 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:39.233 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.231+0000 7fa556ffd640 1 -- 192.168.123.107:0/3982634151 shutdown_connections 2026-03-09T20:49:39.233 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.232+0000 7fa556ffd640 1 -- 192.168.123.107:0/3982634151 wait complete. 
2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.316+0000 7f136b577640 1 -- 192.168.123.107:0/279794897 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f136c0719a0 msgr2=0x7f136c071da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.316+0000 7f136b577640 1 --2- 192.168.123.107:0/279794897 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f136c0719a0 0x7f136c071da0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f1364009f90 tx=0x7f136402f440 comp rx=0 tx=0).stop 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 -- 192.168.123.107:0/279794897 shutdown_connections 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 --2- 192.168.123.107:0/279794897 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f136c072370 0x7f136c10c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 --2- 192.168.123.107:0/279794897 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f136c0719a0 0x7f136c071da0 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 -- 192.168.123.107:0/279794897 >> 192.168.123.107:0/279794897 conn(0x7f136c06d4f0 msgr2=0x7f136c06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 -- 192.168.123.107:0/279794897 shutdown_connections 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 -- 192.168.123.107:0/279794897 wait 
complete. 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 Processor -- start 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 -- start start 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f136c072370 0x7f136c115920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f136c117290 0x7f136c115e60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f136c116430 con 0x7f136c072370 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.317+0000 7f136b577640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f136c1165a0 con 0x7f136c117290 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.318+0000 7f1369d74640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f136c117290 0x7f136c115e60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.318+0000 7f1369d74640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f136c117290 0x7f136c115e60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I 
am v2:192.168.123.107:34718/0 (socket says 192.168.123.107:34718) 2026-03-09T20:49:39.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.318+0000 7f1369d74640 1 -- 192.168.123.107:0/3785110029 learned_addr learned my addr 192.168.123.107:0/3785110029 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:39.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.318+0000 7f1369d74640 1 -- 192.168.123.107:0/3785110029 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f136c072370 msgr2=0x7f136c115920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:39.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.318+0000 7f1369d74640 1 --2- 192.168.123.107:0/3785110029 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f136c072370 0x7f136c115920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.318+0000 7f1369d74640 1 -- 192.168.123.107:0/3785110029 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1364009c40 con 0x7f136c117290 2026-03-09T20:49:39.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.319+0000 7f1369d74640 1 --2- 192.168.123.107:0/3785110029 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f136c117290 0x7f136c115e60 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f136000e990 tx=0x7f136000ee60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:39.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.319+0000 7f135b7fe640 1 -- 192.168.123.107:0/3785110029 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f136000cd30 con 0x7f136c117290 2026-03-09T20:49:39.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.319+0000 7f136b577640 1 
-- 192.168.123.107:0/3785110029 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f136c1a42f0 con 0x7f136c117290 2026-03-09T20:49:39.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.319+0000 7f136b577640 1 -- 192.168.123.107:0/3785110029 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f136c1a4840 con 0x7f136c117290 2026-03-09T20:49:39.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.319+0000 7f135b7fe640 1 -- 192.168.123.107:0/3785110029 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f136000ce90 con 0x7f136c117290 2026-03-09T20:49:39.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.319+0000 7f135b7fe640 1 -- 192.168.123.107:0/3785110029 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1360010640 con 0x7f136c117290 2026-03-09T20:49:39.324 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.321+0000 7f135b7fe640 1 -- 192.168.123.107:0/3785110029 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f1360002990 con 0x7f136c117290 2026-03-09T20:49:39.324 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.322+0000 7f135b7fe640 1 --2- 192.168.123.107:0/3785110029 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f1348077a00 0x7f1348079ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:39.324 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.322+0000 7f135b7fe640 1 -- 192.168.123.107:0/3785110029 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(55..55 src has 1..55) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f1360014070 con 0x7f136c117290 2026-03-09T20:49:39.324 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.323+0000 7f136b577640 1 -- 192.168.123.107:0/3785110029 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f136c0719a0 con 0x7f136c117290 2026-03-09T20:49:39.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.325+0000 7f136a575640 1 --2- 192.168.123.107:0/3785110029 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f1348077a00 0x7f1348079ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:39.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.328+0000 7f136a575640 1 --2- 192.168.123.107:0/3785110029 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f1348077a00 0x7f1348079ec0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f1364009f60 tx=0x7f1364009310 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:39.334 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.329+0000 7f135b7fe640 1 -- 192.168.123.107:0/3785110029 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f1360062a70 con 0x7f136c117290 2026-03-09T20:49:39.562 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.555+0000 7f136b577640 1 -- 192.168.123.107:0/3785110029 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f136c1a4b20 con 0x7f136c117290 2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client 
writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1)
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:root 0
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {}
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291}
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:failed
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:damaged
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:stopped
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3]
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:balancer
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 0 members:
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:49:39.671 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr
[v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.666+0000 7f135b7fe640 1 -- 192.168.123.107:0/3785110029 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1937 (secure 0 0 0) 0x7f13600621c0 con 0x7f136c117290 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.668+0000 7f13597fa640 1 -- 192.168.123.107:0/3785110029 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f1348077a00 msgr2=0x7f1348079ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.668+0000 7f13597fa640 1 --2- 192.168.123.107:0/3785110029 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f1348077a00 0x7f1348079ec0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f1364009f60 tx=0x7f1364009310 comp rx=0 tx=0).stop 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.669+0000 7f13597fa640 1 -- 192.168.123.107:0/3785110029 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f136c117290 msgr2=0x7f136c115e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.669+0000 7f13597fa640 1 --2- 192.168.123.107:0/3785110029 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f136c117290 0x7f136c115e60 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f136000e990 tx=0x7f136000ee60 comp rx=0 tx=0).stop 2026-03-09T20:49:39.672 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.669+0000 7f13597fa640 1 -- 192.168.123.107:0/3785110029 shutdown_connections 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.669+0000 7f13597fa640 1 --2- 192.168.123.107:0/3785110029 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f1348077a00 0x7f1348079ec0 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.669+0000 7f13597fa640 1 --2- 192.168.123.107:0/3785110029 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f136c117290 0x7f136c115e60 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.669+0000 7f13597fa640 1 --2- 192.168.123.107:0/3785110029 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f136c072370 0x7f136c115920 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.669+0000 7f13597fa640 1 -- 192.168.123.107:0/3785110029 >> 192.168.123.107:0/3785110029 conn(0x7f136c06d4f0 msgr2=0x7f136c0703b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.669+0000 7f13597fa640 1 -- 192.168.123.107:0/3785110029 shutdown_connections 2026-03-09T20:49:39.672 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.669+0000 7f13597fa640 1 -- 192.168.123.107:0/3785110029 wait complete. 
2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.782+0000 7f4683fff640 1 -- 192.168.123.107:0/361794174 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4684072420 msgr2=0x7f4684077190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.782+0000 7f4683fff640 1 --2- 192.168.123.107:0/361794174 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4684072420 0x7f4684077190 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f4674007920 tx=0x7f467402ffe0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.782+0000 7f4683fff640 1 -- 192.168.123.107:0/361794174 shutdown_connections 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.782+0000 7f4683fff640 1 --2- 192.168.123.107:0/361794174 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4684072420 0x7f4684077190 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.782+0000 7f4683fff640 1 --2- 192.168.123.107:0/361794174 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4684071a50 0x7f4684071e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.782+0000 7f4683fff640 1 -- 192.168.123.107:0/361794174 >> 192.168.123.107:0/361794174 conn(0x7f468406d4f0 msgr2=0x7f468406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.782+0000 7f4683fff640 1 -- 192.168.123.107:0/361794174 shutdown_connections 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.782+0000 7f4683fff640 1 -- 192.168.123.107:0/361794174 wait 
complete. 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.783+0000 7f4683fff640 1 Processor -- start 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.783+0000 7f4683fff640 1 -- start start 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.783+0000 7f4683fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4684071a50 0x7f4684084100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.783+0000 7f4683fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4684072420 0x7f4684082750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.783+0000 7f4683fff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4684082c90 con 0x7f4684072420 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.783+0000 7f4683fff640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4684082e00 con 0x7f4684071a50 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.783+0000 7f46827fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4684072420 0x7f4684082750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.783+0000 7f4682ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4684071a50 0x7f4684084100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.783+0000 7f4682ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4684071a50 0x7f4684084100 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:34732/0 (socket says 192.168.123.107:34732) 2026-03-09T20:49:39.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.783+0000 7f4682ffd640 1 -- 192.168.123.107:0/2437514929 learned_addr learned my addr 192.168.123.107:0/2437514929 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:39.785 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.784+0000 7f4682ffd640 1 -- 192.168.123.107:0/2437514929 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4684072420 msgr2=0x7f4684082750 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:39.785 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.784+0000 7f4682ffd640 1 --2- 192.168.123.107:0/2437514929 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4684072420 0x7f4684082750 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:39.785 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.784+0000 7f4682ffd640 1 -- 192.168.123.107:0/2437514929 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f46740075d0 con 0x7f4684071a50 2026-03-09T20:49:39.785 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.784+0000 7f4682ffd640 1 --2- 192.168.123.107:0/2437514929 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4684071a50 0x7f4684084100 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f467c0049b0 tx=0x7f467c00d4a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:39.787 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.785+0000 7f46889dc640 1 -- 192.168.123.107:0/2437514929 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f467c00dbb0 con 0x7f4684071a50 2026-03-09T20:49:39.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.786+0000 7f46889dc640 1 -- 192.168.123.107:0/2437514929 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f467c00f040 con 0x7f4684071a50 2026-03-09T20:49:39.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.786+0000 7f46889dc640 1 -- 192.168.123.107:0/2437514929 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f467c013600 con 0x7f4684071a50 2026-03-09T20:49:39.788 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.786+0000 7f4683fff640 1 -- 192.168.123.107:0/2437514929 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f46840830e0 con 0x7f4684071a50 2026-03-09T20:49:39.788 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.786+0000 7f4683fff640 1 -- 192.168.123.107:0/2437514929 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f46840835b0 con 0x7f4684071a50 2026-03-09T20:49:39.788 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.787+0000 7f46627fc640 1 -- 192.168.123.107:0/2437514929 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4650005350 con 0x7f4684071a50 2026-03-09T20:49:39.790 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.789+0000 7f46889dc640 1 -- 192.168.123.107:0/2437514929 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f467c01a020 con 0x7f4684071a50 2026-03-09T20:49:39.790 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.789+0000 7f46889dc640 1 --2- 
192.168.123.107:0/2437514929 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46580778c0 0x7f4658079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:39.790 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.789+0000 7f46889dc640 1 -- 192.168.123.107:0/2437514929 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(55..55 src has 1..55) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f467c022080 con 0x7f4684071a50 2026-03-09T20:49:39.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.790+0000 7f46827fc640 1 --2- 192.168.123.107:0/2437514929 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46580778c0 0x7f4658079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:39.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.790+0000 7f46827fc640 1 --2- 192.168.123.107:0/2437514929 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46580778c0 0x7f4658079d80 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f4674002c30 tx=0x7f46740023d0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:39.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:39.792+0000 7f46889dc640 1 -- 192.168.123.107:0/2437514929 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f467c061a70 con 0x7f4684071a50 2026-03-09T20:49:40.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.044+0000 7f46627fc640 1 -- 192.168.123.107:0/2437514929 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f4650002bf0 con 0x7f46580778c0 
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.047+0000 7f46889dc640 1 -- 192.168.123.107:0/2437514929 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f4650002bf0 con 0x7f46580778c0
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true,
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: "mon",
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: "crash",
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: "mgr"
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: ],
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "7/23 daemons upgraded",
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons",
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false
2026-03-09T20:49:40.048 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T20:49:40.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.051+0000 7f4683fff640 1 -- 192.168.123.107:0/2437514929 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46580778c0 msgr2=0x7f4658079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:49:40.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.051+0000 7f4683fff640 1 --2- 192.168.123.107:0/2437514929 >>
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46580778c0 0x7f4658079d80 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f4674002c30 tx=0x7f46740023d0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.051+0000 7f4683fff640 1 -- 192.168.123.107:0/2437514929 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4684071a50 msgr2=0x7f4684084100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:40.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.051+0000 7f4683fff640 1 --2- 192.168.123.107:0/2437514929 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4684071a50 0x7f4684084100 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f467c0049b0 tx=0x7f467c00d4a0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.052+0000 7f4683fff640 1 -- 192.168.123.107:0/2437514929 shutdown_connections 2026-03-09T20:49:40.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.052+0000 7f4683fff640 1 --2- 192.168.123.107:0/2437514929 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46580778c0 0x7f4658079d80 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.052+0000 7f4683fff640 1 --2- 192.168.123.107:0/2437514929 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4684072420 0x7f4684082750 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.052+0000 7f4683fff640 1 --2- 192.168.123.107:0/2437514929 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4684071a50 0x7f4684084100 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:49:40.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.052+0000 7f4683fff640 1 -- 192.168.123.107:0/2437514929 >> 192.168.123.107:0/2437514929 conn(0x7f468406d4f0 msgr2=0x7f4684075410 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:49:40.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.053+0000 7f4683fff640 1 -- 192.168.123.107:0/2437514929 shutdown_connections
2026-03-09T20:49:40.054 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.053+0000 7f4683fff640 1 -- 192.168.123.107:0/2437514929 wait complete.
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: pgmap v48: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 934 KiB/s rd, 1.1 MiB/s wr, 1.01k op/s; 0 B/s, 28 objects/s recovering
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='client.44131 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='client.44135 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='client.44139 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3982634151' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3785110029' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T20:49:40.144 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.157+0000 7fceb3665640 1 -- 192.168.123.107:0/2817460384 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fceac0fead0 msgr2=0x7fceac0feed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:49:40.159
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.157+0000 7fceb3665640 1 --2- 192.168.123.107:0/2817460384 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fceac0fead0 0x7fceac0feed0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fce9c0099b0 tx=0x7fce9c02f220 comp rx=0 tx=0).stop 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 -- 192.168.123.107:0/2817460384 shutdown_connections 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 --2- 192.168.123.107:0/2817460384 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fceac0ff7b0 0x7fceac0ffc30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 --2- 192.168.123.107:0/2817460384 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fceac0fead0 0x7fceac0feed0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 -- 192.168.123.107:0/2817460384 >> 192.168.123.107:0/2817460384 conn(0x7fceac0fa5b0 msgr2=0x7fceac0fc9d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 -- 192.168.123.107:0/2817460384 shutdown_connections 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 -- 192.168.123.107:0/2817460384 wait complete. 
2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 Processor -- start 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 -- start start 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fceac0fead0 0x7fceac19e910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fceac0ff7b0 0x7fceac19ee50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fceac19f420 con 0x7fceac0ff7b0 2026-03-09T20:49:40.159 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.158+0000 7fceb3665640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fceac19f590 con 0x7fceac0fead0 2026-03-09T20:49:40.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.159+0000 7fceb13da640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fceac0fead0 0x7fceac19e910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:40.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.159+0000 7fceb13da640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fceac0fead0 0x7fceac19e910 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:34742/0 (socket says 192.168.123.107:34742) 2026-03-09T20:49:40.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.159+0000 7fceb13da640 1 -- 192.168.123.107:0/3630171024 learned_addr learned my addr 192.168.123.107:0/3630171024 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:49:40.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.159+0000 7fceb13da640 1 -- 192.168.123.107:0/3630171024 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fceac0ff7b0 msgr2=0x7fceac19ee50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:40.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.159+0000 7fceb13da640 1 --2- 192.168.123.107:0/3630171024 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fceac0ff7b0 0x7fceac19ee50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.159+0000 7fceb13da640 1 -- 192.168.123.107:0/3630171024 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fce9c009660 con 0x7fceac0fead0 2026-03-09T20:49:40.161 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.159+0000 7fceb13da640 1 --2- 192.168.123.107:0/3630171024 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fceac0fead0 0x7fceac19e910 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fce9c002fc0 tx=0x7fce9c0026e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:40.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.162+0000 7fce9a7fc640 1 -- 192.168.123.107:0/3630171024 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fce9c03d070 con 0x7fceac0fead0 2026-03-09T20:49:40.163 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.162+0000 7fceb3665640 1 -- 
192.168.123.107:0/3630171024 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fceac1a3fd0 con 0x7fceac0fead0 2026-03-09T20:49:40.166 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.162+0000 7fceb3665640 1 -- 192.168.123.107:0/3630171024 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fceac1a4490 con 0x7fceac0fead0 2026-03-09T20:49:40.166 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.162+0000 7fce9a7fc640 1 -- 192.168.123.107:0/3630171024 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fce9c0028c0 con 0x7fceac0fead0 2026-03-09T20:49:40.166 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.162+0000 7fce9a7fc640 1 -- 192.168.123.107:0/3630171024 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fce9c041920 con 0x7fceac0fead0 2026-03-09T20:49:40.166 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.164+0000 7fce9a7fc640 1 -- 192.168.123.107:0/3630171024 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fce9c038c80 con 0x7fceac0fead0 2026-03-09T20:49:40.166 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.164+0000 7fceb3665640 1 -- 192.168.123.107:0/3630171024 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fceac0feed0 con 0x7fceac0fead0 2026-03-09T20:49:40.166 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.164+0000 7fce9a7fc640 1 --2- 192.168.123.107:0/3630171024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fce80077a00 0x7fce80079ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:49:40.166 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.165+0000 7fce9a7fc640 1 -- 
192.168.123.107:0/3630171024 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(55..55 src has 1..55) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fce9c0be790 con 0x7fceac0fead0 2026-03-09T20:49:40.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.168+0000 7fceb0bd9640 1 --2- 192.168.123.107:0/3630171024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fce80077a00 0x7fce80079ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:49:40.177 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.169+0000 7fceb0bd9640 1 --2- 192.168.123.107:0/3630171024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fce80077a00 0x7fce80079ec0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fcea0005fd0 tx=0x7fcea0005950 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:49:40.177 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.170+0000 7fce9a7fc640 1 -- 192.168.123.107:0/3630171024 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fce9c086d20 con 0x7fceac0fead0 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: pgmap v48: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 934 KiB/s rd, 1.1 MiB/s wr, 1.01k op/s; 0 B/s, 28 objects/s recovering 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='client.44131 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='client.44135 -' entity='client.admin' cmd=[{"prefix": "orch upgrade 
status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='client.44139 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3982634151' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3785110029' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:49:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T20:49:40.422 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.418+0000 7fceb3665640 1 -- 192.168.123.107:0/3630171024 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fceac10fc30 con 0x7fceac0fead0 2026-03-09T20:49:40.425 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:49:40.425 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.420+0000 7fce9a7fc640 1 -- 192.168.123.107:0/3630171024 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7fce9c086470 con 0x7fceac0fead0 2026-03-09T20:49:40.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.435+0000 7fce7bfff640 1 -- 192.168.123.107:0/3630171024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fce80077a00 msgr2=0x7fce80079ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:40.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.435+0000 7fce7bfff640 1 --2- 192.168.123.107:0/3630171024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fce80077a00 0x7fce80079ec0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fcea0005fd0 tx=0x7fcea0005950 comp rx=0 tx=0).stop 2026-03-09T20:49:40.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.435+0000 7fce7bfff640 1 -- 192.168.123.107:0/3630171024 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fceac0fead0 msgr2=0x7fceac19e910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:49:40.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.435+0000 7fce7bfff640 1 --2- 192.168.123.107:0/3630171024 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fceac0fead0 0x7fceac19e910 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto 
rx=0x7fce9c002fc0 tx=0x7fce9c0026e0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.436+0000 7fce7bfff640 1 -- 192.168.123.107:0/3630171024 shutdown_connections 2026-03-09T20:49:40.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.436+0000 7fce7bfff640 1 --2- 192.168.123.107:0/3630171024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fce80077a00 0x7fce80079ec0 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.436+0000 7fce7bfff640 1 --2- 192.168.123.107:0/3630171024 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fceac0ff7b0 0x7fceac19ee50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.440 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.436+0000 7fce7bfff640 1 --2- 192.168.123.107:0/3630171024 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fceac0fead0 0x7fceac19e910 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:49:40.440 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.436+0000 7fce7bfff640 1 -- 192.168.123.107:0/3630171024 >> 192.168.123.107:0/3630171024 conn(0x7fceac0fa5b0 msgr2=0x7fceac0fbe40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:49:40.440 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.436+0000 7fce7bfff640 1 -- 192.168.123.107:0/3630171024 shutdown_connections 2026-03-09T20:49:40.440 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:49:40.436+0000 7fce7bfff640 1 -- 192.168.123.107:0/3630171024 wait complete. 2026-03-09T20:49:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:41 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:41 vm07.local ceph-mon[112105]: Upgrade: osd.1 is safe to restart 2026-03-09T20:49:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:41 vm07.local ceph-mon[112105]: Upgrade: Updating osd.1 2026-03-09T20:49:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:41 vm07.local ceph-mon[112105]: Deploying daemon osd.1 on vm07 2026-03-09T20:49:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:41 vm07.local ceph-mon[112105]: from='client.44149 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:41 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3630171024' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:49:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:41 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T20:49:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:41 vm10.local ceph-mon[103526]: Upgrade: osd.1 is safe to restart 2026-03-09T20:49:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:41 vm10.local ceph-mon[103526]: Upgrade: Updating osd.1 2026-03-09T20:49:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:41 vm10.local ceph-mon[103526]: Deploying daemon osd.1 on vm07 2026-03-09T20:49:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:41 vm10.local ceph-mon[103526]: from='client.44149 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:49:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:41 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/3630171024' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:49:41.634 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:41 vm07.local systemd[1]: Stopping Ceph osd.1 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T20:49:41.634 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:41 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[76030]: 2026-03-09T20:49:41.358+0000 7fbd689df640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:49:41.634 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:41 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[76030]: 2026-03-09T20:49:41.358+0000 7fbd689df640 -1 osd.1 55 *** Got signal Terminated *** 2026-03-09T20:49:41.634 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:41 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[76030]: 2026-03-09T20:49:41.358+0000 7fbd689df640 -1 osd.1 55 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:49:42.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local podman[125457]: 2026-03-09 20:49:42.198120368 +0000 UTC m=+0.853226154 container died 15564e5032c9e7b189365e7c671684489bb3817159026e1c8848eb7fae240566 (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, OSD_FLAVOR=default, 
org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T20:49:42.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local podman[125457]: 2026-03-09 20:49:42.243989496 +0000 UTC m=+0.899095282 container remove 15564e5032c9e7b189365e7c671684489bb3817159026e1c8848eb7fae240566 (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20260223, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3) 2026-03-09T20:49:42.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local bash[125457]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1 2026-03-09T20:49:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:42 vm07.local ceph-mon[112105]: pgmap v49: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 934 KiB/s rd, 1.0 MiB/s wr, 959 op/s; 0 B/s, 16 objects/s recovering 2026-03-09T20:49:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:42 vm07.local ceph-mon[112105]: osd.1 marked itself down and dead 2026-03-09T20:49:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:42 vm10.local ceph-mon[103526]: pgmap v49: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 
120 GiB avail; 934 KiB/s rd, 1.0 MiB/s wr, 959 op/s; 0 B/s, 16 objects/s recovering 2026-03-09T20:49:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:42 vm10.local ceph-mon[103526]: osd.1 marked itself down and dead 2026-03-09T20:49:42.721 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local podman[125520]: 2026-03-09 20:49:42.467222202 +0000 UTC m=+0.026738640 container create 28354892348feed1e6bc764f1bc8513c613d22b32c3ec1a35aab486a2d21005f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-deactivate, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:49:42.721 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local podman[125520]: 2026-03-09 20:49:42.522547479 +0000 UTC m=+0.082063928 container init 28354892348feed1e6bc764f1bc8513c613d22b32c3ec1a35aab486a2d21005f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-deactivate, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, 
org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default) 2026-03-09T20:49:42.721 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local podman[125520]: 2026-03-09 20:49:42.526313829 +0000 UTC m=+0.085830268 container start 28354892348feed1e6bc764f1bc8513c613d22b32c3ec1a35aab486a2d21005f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:49:42.721 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local podman[125520]: 2026-03-09 20:49:42.530353931 +0000 UTC m=+0.089870389 container attach 28354892348feed1e6bc764f1bc8513c613d22b32c3ec1a35aab486a2d21005f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223) 2026-03-09T20:49:42.721 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local podman[125520]: 2026-03-09 20:49:42.453528279 +0000 UTC m=+0.013044738 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:49:42.721 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local podman[125520]: 2026-03-09 20:49:42.663144537 +0000 UTC m=+0.222660986 container died 28354892348feed1e6bc764f1bc8513c613d22b32c3ec1a35aab486a2d21005f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:49:42.721 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local podman[125520]: 2026-03-09 20:49:42.716418676 +0000 UTC 
m=+0.275935136 container remove 28354892348feed1e6bc764f1bc8513c613d22b32c3ec1a35aab486a2d21005f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:49:43.075 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.1.service: Deactivated successfully. 2026-03-09T20:49:43.075 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local systemd[1]: Stopped Ceph osd.1 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:49:43.075 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.1.service: Consumed 41.527s CPU time. 2026-03-09T20:49:43.075 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:42 vm07.local systemd[1]: Starting Ceph osd.1 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 
2026-03-09T20:49:43.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:43 vm07.local ceph-mon[112105]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T20:49:43.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:43 vm07.local ceph-mon[112105]: osdmap e56: 6 total, 5 up, 6 in 2026-03-09T20:49:43.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:43 vm07.local podman[125624]: 2026-03-09 20:49:43.075587817 +0000 UTC m=+0.018776727 container create c1f191ce8c4737eaaae2a833712af5e53700a649511f090ee50403ead1fb3700 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:49:43.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:43 vm07.local podman[125624]: 2026-03-09 20:49:43.138496538 +0000 UTC m=+0.081685459 container init c1f191ce8c4737eaaae2a833712af5e53700a649511f090ee50403ead1fb3700 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS) 2026-03-09T20:49:43.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:43 vm07.local podman[125624]: 2026-03-09 20:49:43.145360657 +0000 UTC m=+0.088549567 container start c1f191ce8c4737eaaae2a833712af5e53700a649511f090ee50403ead1fb3700 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2) 2026-03-09T20:49:43.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:43 vm07.local podman[125624]: 2026-03-09 20:49:43.152749016 +0000 UTC m=+0.095937937 container attach c1f191ce8c4737eaaae2a833712af5e53700a649511f090ee50403ead1fb3700 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=squid) 2026-03-09T20:49:43.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:43 vm07.local podman[125624]: 2026-03-09 20:49:43.068454905 +0000 UTC m=+0.011643824 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:49:43.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:43 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:43.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:43 vm07.local bash[125624]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:43.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:43 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:43.384 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:43 vm07.local bash[125624]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:43.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:43 vm10.local ceph-mon[103526]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T20:49:43.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:43 vm10.local ceph-mon[103526]: osdmap e56: 6 total, 5 up, 6 in 2026-03-09T20:49:44.347 
INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local bash[125624]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local bash[125624]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local bash[125624]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local bash[125624]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-4fc5ff30-2353-4581-94b8-143801418cb5/osd-block-7431b664-9dad-4df6-ac1e-d480eeb7d102 --path /var/lib/ceph/osd/ceph-1 --no-mon-config 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local 
bash[125624]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-4fc5ff30-2353-4581-94b8-143801418cb5/osd-block-7431b664-9dad-4df6-ac1e-d480eeb7d102 --path /var/lib/ceph/osd/ceph-1 --no-mon-config 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: Running command: /usr/bin/ln -snf /dev/ceph-4fc5ff30-2353-4581-94b8-143801418cb5/osd-block-7431b664-9dad-4df6-ac1e-d480eeb7d102 /var/lib/ceph/osd/ceph-1/block 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local bash[125624]: Running command: /usr/bin/ln -snf /dev/ceph-4fc5ff30-2353-4581-94b8-143801418cb5/osd-block-7431b664-9dad-4df6-ac1e-d480eeb7d102 /var/lib/ceph/osd/ceph-1/block 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local bash[125624]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local bash[125624]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local bash[125624]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 
2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate[125635]: --> ceph-volume lvm activate successful for osd ID: 1 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local bash[125624]: --> ceph-volume lvm activate successful for osd ID: 1 2026-03-09T20:49:44.347 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local podman[125847]: 2026-03-09 20:49:44.31077384 +0000 UTC m=+0.045305694 container died c1f191ce8c4737eaaae2a833712af5e53700a649511f090ee50403ead1fb3700 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20260223, io.buildah.version=1.41.3) 2026-03-09T20:49:44.347 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-mon[112105]: pgmap v51: 65 pgs: 5 peering, 9 stale+active+clean, 51 active+clean; 273 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 920 KiB/s rd, 952 KiB/s wr, 748 op/s; 0 B/s, 16 objects/s recovering 2026-03-09T20:49:44.347 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-mon[112105]: osdmap e57: 6 total, 5 up, 6 in 2026-03-09T20:49:44.413 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:44 vm10.local ceph-mon[103526]: pgmap v51: 65 pgs: 5 peering, 9 
stale+active+clean, 51 active+clean; 273 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 920 KiB/s rd, 952 KiB/s wr, 748 op/s; 0 B/s, 16 objects/s recovering 2026-03-09T20:49:44.414 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:44 vm10.local ceph-mon[103526]: osdmap e57: 6 total, 5 up, 6 in 2026-03-09T20:49:44.604 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local podman[125847]: 2026-03-09 20:49:44.359712433 +0000 UTC m=+0.094244287 container remove c1f191ce8c4737eaaae2a833712af5e53700a649511f090ee50403ead1fb3700 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T20:49:44.884 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local podman[125885]: 2026-03-09 20:49:44.604396301 +0000 UTC m=+0.052412617 container create 95f518bf664f65fc3388230a6cd58163a3d87db7bb0ddb79b98abeec73692ba7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20260223) 2026-03-09T20:49:44.884 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local podman[125885]: 2026-03-09 20:49:44.657937569 +0000 UTC m=+0.105953895 container init 95f518bf664f65fc3388230a6cd58163a3d87db7bb0ddb79b98abeec73692ba7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:49:44.884 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local podman[125885]: 2026-03-09 20:49:44.662499348 +0000 UTC m=+0.110515664 container start 95f518bf664f65fc3388230a6cd58163a3d87db7bb0ddb79b98abeec73692ba7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T20:49:44.884 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local bash[125885]: 95f518bf664f65fc3388230a6cd58163a3d87db7bb0ddb79b98abeec73692ba7 2026-03-09T20:49:44.884 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local podman[125885]: 2026-03-09 20:49:44.582008556 +0000 UTC m=+0.030024882 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:49:44.884 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local systemd[1]: Started Ceph osd.1 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 
2026-03-09T20:49:44.884 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:44 vm07.local ceph-osd[125899]: -- 192.168.123.107:0/1281979113 <== mon.0 v2:192.168.123.107:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55e6606d4960 con 0x55e6606b3c00 2026-03-09T20:49:45.520 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:45 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[125895]: 2026-03-09T20:49:45.254+0000 7f8493bc8740 -1 Falling back to public interface 2026-03-09T20:49:45.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:45 vm10.local ceph-mon[103526]: pgmap v53: 65 pgs: 5 peering, 9 stale+active+clean, 51 active+clean; 273 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 450 KiB/s rd, 485 KiB/s wr, 327 op/s 2026-03-09T20:49:45.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:45.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:45.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:45.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:45.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:45 vm07.local ceph-mon[112105]: pgmap v53: 65 pgs: 5 peering, 9 stale+active+clean, 51 active+clean; 273 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 450 KiB/s rd, 485 KiB/s wr, 327 op/s 2026-03-09T20:49:45.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' 2026-03-09T20:49:45.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:45.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:45.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:46.604 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:46 vm07.local ceph-mon[112105]: Health check failed: Degraded data redundancy: 4583/30246 objects degraded (15.152%), 29 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:47.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:46 vm10.local ceph-mon[103526]: Health check failed: Degraded data redundancy: 4583/30246 objects degraded (15.152%), 29 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:47.886 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:47 vm07.local ceph-mon[112105]: pgmap v54: 65 pgs: 5 peering, 29 active+undersized+degraded, 31 active+clean; 271 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 405 op/s; 4583/30246 objects degraded (15.152%) 2026-03-09T20:49:48.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:47 vm10.local ceph-mon[103526]: pgmap v54: 65 pgs: 5 peering, 29 active+undersized+degraded, 31 active+clean; 271 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 405 op/s; 4583/30246 objects degraded (15.152%) 2026-03-09T20:49:48.987 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:48.987 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:48 vm07.local 
ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:48.987 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:48.987 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:49.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:50.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:49 vm07.local ceph-mon[112105]: pgmap v55: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 266 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 441 op/s; 5449/29397 objects degraded (18.536%) 2026-03-09T20:49:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:49 vm10.local ceph-mon[103526]: pgmap v55: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 266 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 441 op/s; 5449/29397 objects degraded (18.536%) 2026-03-09T20:49:51.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:51.788 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: pgmap v56: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 266 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 552 KiB/s rd, 601 KiB/s wr, 198 op/s; 5449/29397 objects degraded (18.536%) 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:49:51.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:51 vm10.local ceph-mon[103526]: Upgrade: unsafe to stop osd(s) at this time (12 PGs are or would become offline) 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: pgmap v56: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 266 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 552 KiB/s rd, 601 KiB/s wr, 198 op/s; 
5449/29397 objects degraded (18.536%) 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:49:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:51 vm07.local ceph-mon[112105]: Upgrade: unsafe to stop osd(s) at this time (12 PGs are or would become offline) 2026-03-09T20:49:52.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:52 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 4935/26538 objects degraded (18.596%), 34 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:53.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:52 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 4935/26538 objects degraded (18.596%), 34 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:53.383 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:52 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[125895]: 2026-03-09T20:49:52.907+0000 7f8493bc8740 -1 osd.1 55 log_to_monitors true 2026-03-09T20:49:54.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:54 vm10.local ceph-mon[103526]: pgmap v57: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 265 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 825 KiB/s rd, 858 KiB/s wr, 265 op/s; 4935/26538 objects degraded (18.596%) 2026-03-09T20:49:54.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:54 vm10.local ceph-mon[103526]: from='osd.1 [v2:192.168.123.107:6810/460543480,v1:192.168.123.107:6811/460543480]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:49:54.635 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 20:49:54 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[125895]: 2026-03-09T20:49:54.235+0000 7f848b161640 -1 osd.1 55 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:49:54.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:54 vm07.local ceph-mon[112105]: pgmap v57: 65 pgs: 34 
active+undersized+degraded, 31 active+clean; 265 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 825 KiB/s rd, 858 KiB/s wr, 265 op/s; 4935/26538 objects degraded (18.596%) 2026-03-09T20:49:54.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:54 vm07.local ceph-mon[112105]: from='osd.1 [v2:192.168.123.107:6810/460543480,v1:192.168.123.107:6811/460543480]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:49:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:55 vm10.local ceph-mon[103526]: from='osd.1 [v2:192.168.123.107:6810/460543480,v1:192.168.123.107:6811/460543480]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:49:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:55 vm10.local ceph-mon[103526]: osdmap e58: 6 total, 5 up, 6 in 2026-03-09T20:49:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:55 vm10.local ceph-mon[103526]: from='osd.1 [v2:192.168.123.107:6810/460543480,v1:192.168.123.107:6811/460543480]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T20:49:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:55.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:55 vm07.local ceph-mon[112105]: from='osd.1 [v2:192.168.123.107:6810/460543480,v1:192.168.123.107:6811/460543480]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:49:55.884 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:55 vm07.local ceph-mon[112105]: osdmap e58: 6 total, 5 up, 6 in 2026-03-09T20:49:55.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:55 vm07.local ceph-mon[112105]: from='osd.1 [v2:192.168.123.107:6810/460543480,v1:192.168.123.107:6811/460543480]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T20:49:55.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:49:55.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:56.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:56 vm07.local ceph-mon[112105]: pgmap v59: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 265 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 825 KiB/s rd, 858 KiB/s wr, 265 op/s; 4935/26538 objects degraded (18.596%) 2026-03-09T20:49:56.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:56 vm07.local ceph-mon[112105]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T20:49:56.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:56 vm07.local ceph-mon[112105]: osd.1 [v2:192.168.123.107:6810/460543480,v1:192.168.123.107:6811/460543480] boot 2026-03-09T20:49:56.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:56 vm07.local ceph-mon[112105]: osdmap e59: 6 total, 6 up, 6 in 2026-03-09T20:49:56.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:49:56.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:56 
vm07.local ceph-mon[112105]: osdmap e60: 6 total, 6 up, 6 in 2026-03-09T20:49:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:56 vm10.local ceph-mon[103526]: pgmap v59: 65 pgs: 34 active+undersized+degraded, 31 active+clean; 265 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 825 KiB/s rd, 858 KiB/s wr, 265 op/s; 4935/26538 objects degraded (18.596%) 2026-03-09T20:49:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:56 vm10.local ceph-mon[103526]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T20:49:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:56 vm10.local ceph-mon[103526]: osd.1 [v2:192.168.123.107:6810/460543480,v1:192.168.123.107:6811/460543480] boot 2026-03-09T20:49:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:56 vm10.local ceph-mon[103526]: osdmap e59: 6 total, 6 up, 6 in 2026-03-09T20:49:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:49:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:56 vm10.local ceph-mon[103526]: osdmap e60: 6 total, 6 up, 6 in 2026-03-09T20:49:57.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:57 vm07.local ceph-mon[112105]: pgmap v62: 65 pgs: 20 peering, 14 active+undersized+degraded, 31 active+clean; 264 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 366 op/s; 1863/24657 objects degraded (7.556%) 2026-03-09T20:49:58.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:57 vm10.local ceph-mon[103526]: pgmap v62: 65 pgs: 20 peering, 14 active+undersized+degraded, 31 active+clean; 264 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 366 op/s; 1863/24657 objects degraded (7.556%) 2026-03-09T20:49:59.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:58 vm10.local ceph-mon[103526]: mgrmap e38: 
vm07.xjrvch(active, since 93s), standbys: vm10.byqahe 2026-03-09T20:49:59.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:58 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 1863/24657 objects degraded (7.556%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T20:49:59.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:58 vm07.local ceph-mon[112105]: mgrmap e38: vm07.xjrvch(active, since 93s), standbys: vm10.byqahe 2026-03-09T20:49:59.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:58 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 1863/24657 objects degraded (7.556%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:49:59 vm10.local ceph-mon[103526]: pgmap v63: 65 pgs: 2 active+recovery_wait+degraded, 20 peering, 12 active+undersized+degraded, 31 active+clean; 263 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 729 KiB/s rd, 903 KiB/s wr, 200 op/s; 1531/24096 objects degraded (6.354%) 2026-03-09T20:50:00.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:49:59 vm07.local ceph-mon[112105]: pgmap v63: 65 pgs: 2 active+recovery_wait+degraded, 20 peering, 12 active+undersized+degraded, 31 active+clean; 263 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 729 KiB/s rd, 903 KiB/s wr, 200 op/s; 1531/24096 objects degraded (6.354%) 2026-03-09T20:50:01.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: Health detail: HEALTH_WARN Degraded data redundancy: 1531/24096 objects degraded (6.354%), 14 pgs degraded 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: [WRN] PG_DEGRADED: Degraded data redundancy: 1531/24096 objects degraded (6.354%), 14 pgs degraded 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 2.4 is active+undersized+degraded, acting [0,4] 2026-03-09T20:50:01.134 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 2.6 is active+undersized+degraded, acting [3,4] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 2.9 is active+undersized+degraded, acting [4,0] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 2.a is active+undersized+degraded, acting [4,3] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 2.d is active+undersized+degraded, acting [3,2] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 2.15 is active+undersized+degraded, acting [3,0] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 2.1b is active+undersized+degraded, acting [0,5] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 3.0 is active+undersized+degraded, acting [2,4] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 3.2 is active+undersized+degraded, acting [5,3] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 3.4 is active+undersized+degraded, acting [2,3] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 3.6 is active+recovery_wait+degraded, acting [0,1,4] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 3.b is active+undersized+degraded, acting [0,4] 2026-03-09T20:50:01.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 3.12 is active+recovery_wait+degraded, acting [0,3,1] 2026-03-09T20:50:01.134 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:00 vm07.local ceph-mon[112105]: pg 3.19 is active+undersized+degraded, acting [4,3] 2026-03-09T20:50:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: Health detail: HEALTH_WARN Degraded data redundancy: 1531/24096 objects degraded (6.354%), 14 pgs degraded 2026-03-09T20:50:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: [WRN] PG_DEGRADED: Degraded data redundancy: 1531/24096 objects degraded (6.354%), 14 pgs degraded 2026-03-09T20:50:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 2.4 is active+undersized+degraded, acting [0,4] 2026-03-09T20:50:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 2.6 is active+undersized+degraded, acting [3,4] 2026-03-09T20:50:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 2.9 is active+undersized+degraded, acting [4,0] 2026-03-09T20:50:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 2.a is active+undersized+degraded, acting [4,3] 2026-03-09T20:50:01.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 2.d is active+undersized+degraded, acting [3,2] 2026-03-09T20:50:01.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 2.15 is active+undersized+degraded, acting [3,0] 2026-03-09T20:50:01.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 2.1b is active+undersized+degraded, acting [0,5] 2026-03-09T20:50:01.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 3.0 is active+undersized+degraded, acting [2,4] 2026-03-09T20:50:01.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 3.2 is 
active+undersized+degraded, acting [5,3] 2026-03-09T20:50:01.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 3.4 is active+undersized+degraded, acting [2,3] 2026-03-09T20:50:01.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 3.6 is active+recovery_wait+degraded, acting [0,1,4] 2026-03-09T20:50:01.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 3.b is active+undersized+degraded, acting [0,4] 2026-03-09T20:50:01.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 3.12 is active+recovery_wait+degraded, acting [0,3,1] 2026-03-09T20:50:01.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:00 vm10.local ceph-mon[103526]: pg 3.19 is active+undersized+degraded, acting [4,3] 2026-03-09T20:50:02.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:02 vm07.local ceph-mon[112105]: pgmap v64: 65 pgs: 11 active+recovery_wait+degraded, 20 peering, 1 active+recovering, 33 active+clean; 262 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 699 KiB/s rd, 876 KiB/s wr, 246 op/s; 616/23136 objects degraded (2.663%); 59 B/s, 242 keys/s, 3 objects/s recovering 2026-03-09T20:50:02.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:02 vm10.local ceph-mon[103526]: pgmap v64: 65 pgs: 11 active+recovery_wait+degraded, 20 peering, 1 active+recovering, 33 active+clean; 262 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 699 KiB/s rd, 876 KiB/s wr, 246 op/s; 616/23136 objects degraded (2.663%); 59 B/s, 242 keys/s, 3 objects/s recovering 2026-03-09T20:50:03.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:03 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 616/23136 objects degraded (2.663%), 11 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:03.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:03 vm10.local ceph-mon[103526]: Health check update: Degraded 
data redundancy: 616/23136 objects degraded (2.663%), 11 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:05.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:04 vm10.local ceph-mon[103526]: pgmap v65: 65 pgs: 19 active+recovery_wait+degraded, 2 active+recovering, 44 active+clean; 254 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 372 op/s; 1233/19701 objects degraded (6.259%); 37 KiB/s, 189 keys/s, 33 objects/s recovering 2026-03-09T20:50:05.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:04 vm07.local ceph-mon[112105]: pgmap v65: 65 pgs: 19 active+recovery_wait+degraded, 2 active+recovering, 44 active+clean; 254 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 372 op/s; 1233/19701 objects degraded (6.259%); 37 KiB/s, 189 keys/s, 33 objects/s recovering 2026-03-09T20:50:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:06 vm10.local ceph-mon[103526]: pgmap v66: 65 pgs: 10 active+recovery_wait+degraded, 1 active+recovering, 54 active+clean; 258 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 367 op/s; 796/19287 objects degraded (4.127%); 349 KiB/s, 166 keys/s, 78 objects/s recovering 2026-03-09T20:50:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:06.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:05 vm07.local ceph-mon[112105]: pgmap v66: 65 pgs: 10 active+recovery_wait+degraded, 1 active+recovering, 54 active+clean; 258 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 367 op/s; 796/19287 objects degraded (4.127%); 349 KiB/s, 166 keys/s, 78 objects/s recovering 2026-03-09T20:50:06.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:07 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:07 vm10.local ceph-mon[103526]: Upgrade: unsafe to stop osd(s) at this time (4 PGs are or would become offline) 2026-03-09T20:50:07.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:07 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:07.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:07 vm07.local ceph-mon[112105]: Upgrade: unsafe to stop osd(s) at this time (4 PGs are or would become offline) 2026-03-09T20:50:08.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:08 vm07.local ceph-mon[112105]: pgmap v67: 65 pgs: 10 active+recovery_wait+degraded, 1 active+recovering, 54 active+clean; 253 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 312 op/s; 796/17196 objects degraded (4.629%); 315 KiB/s, 150 keys/s, 73 objects/s recovering 2026-03-09T20:50:08.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:08 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 796/17196 objects degraded (4.629%), 10 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:08.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:08 vm10.local ceph-mon[103526]: pgmap v67: 65 pgs: 10 active+recovery_wait+degraded, 1 active+recovering, 54 active+clean; 253 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 312 op/s; 796/17196 objects degraded (4.629%); 315 KiB/s, 150 keys/s, 73 objects/s recovering 2026-03-09T20:50:08.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:08 vm10.local ceph-mon[103526]: Health check 
update: Degraded data redundancy: 796/17196 objects degraded (4.629%), 10 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:09.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:09 vm07.local ceph-mon[112105]: pgmap v68: 65 pgs: 10 active+recovery_wait+degraded, 1 active+recovering, 54 active+clean; 252 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 928 KiB/s rd, 889 KiB/s wr, 281 op/s; 796/16545 objects degraded (4.811%); 265 KiB/s, 126 keys/s, 62 objects/s recovering 2026-03-09T20:50:09.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:09 vm10.local ceph-mon[103526]: pgmap v68: 65 pgs: 10 active+recovery_wait+degraded, 1 active+recovering, 54 active+clean; 252 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 928 KiB/s rd, 889 KiB/s wr, 281 op/s; 796/16545 objects degraded (4.811%); 265 KiB/s, 126 keys/s, 62 objects/s recovering 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 -- 192.168.123.107:0/3249836555 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6458071a70 msgr2=0x7f6458071e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 --2- 192.168.123.107:0/3249836555 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6458071a70 0x7f6458071e70 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f6454008880 tx=0x7f645402eeb0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 -- 192.168.123.107:0/3249836555 shutdown_connections 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 --2- 192.168.123.107:0/3249836555 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6458072440 0x7f64580771b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.559 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 --2- 192.168.123.107:0/3249836555 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6458071a70 0x7f6458071e70 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 -- 192.168.123.107:0/3249836555 >> 192.168.123.107:0/3249836555 conn(0x7f645806d4f0 msgr2=0x7f645806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 -- 192.168.123.107:0/3249836555 shutdown_connections 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 -- 192.168.123.107:0/3249836555 wait complete. 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 Processor -- start 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 -- start start 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6458072440 0x7f6458084140 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6458082790 0x7f6458082c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6458083150 con 0x7f6458082790 2026-03-09T20:50:10.559 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.557+0000 7f645fa7b640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f64580832c0 con 0x7f6458072440 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.558+0000 7f645d7f0640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6458072440 0x7f6458084140 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.558+0000 7f645d7f0640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6458072440 0x7f6458084140 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:47998/0 (socket says 192.168.123.107:47998) 2026-03-09T20:50:10.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.558+0000 7f645d7f0640 1 -- 192.168.123.107:0/2003032234 learned_addr learned my addr 192.168.123.107:0/2003032234 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:10.560 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.559+0000 7f645d7f0640 1 -- 192.168.123.107:0/2003032234 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6458082790 msgr2=0x7f6458082c10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:10.560 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.559+0000 7f645d7f0640 1 --2- 192.168.123.107:0/2003032234 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6458082790 0x7f6458082c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.560 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.559+0000 7f645d7f0640 1 -- 192.168.123.107:0/2003032234 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6454008530 con 0x7f6458072440 2026-03-09T20:50:10.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.559+0000 7f645d7f0640 1 --2- 192.168.123.107:0/2003032234 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6458072440 0x7f6458084140 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f6454004870 tx=0x7f64540028f0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:10.564 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.562+0000 7f644e7fc640 1 -- 192.168.123.107:0/2003032234 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f645402fa30 con 0x7f6458072440 2026-03-09T20:50:10.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.562+0000 7f645fa7b640 1 -- 192.168.123.107:0/2003032234 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6458083540 con 0x7f6458072440 2026-03-09T20:50:10.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.562+0000 7f645fa7b640 1 -- 192.168.123.107:0/2003032234 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f64581b5bc0 con 0x7f6458072440 2026-03-09T20:50:10.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.563+0000 7f644e7fc640 1 -- 192.168.123.107:0/2003032234 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f6454002c70 con 0x7f6458072440 2026-03-09T20:50:10.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.563+0000 7f644e7fc640 1 -- 192.168.123.107:0/2003032234 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6454040730 con 0x7f6458072440 2026-03-09T20:50:10.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.564+0000 7f645fa7b640 1 -- 
192.168.123.107:0/2003032234 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6428005350 con 0x7f6458072440 2026-03-09T20:50:10.565 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.565+0000 7f644e7fc640 1 -- 192.168.123.107:0/2003032234 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f6454048050 con 0x7f6458072440 2026-03-09T20:50:10.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.565+0000 7f644e7fc640 1 --2- 192.168.123.107:0/2003032234 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f643c077910 0x7f643c079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:10.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.565+0000 7f645cfef640 1 --2- 192.168.123.107:0/2003032234 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f643c077910 0x7f643c079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:10.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.565+0000 7f644e7fc640 1 -- 192.168.123.107:0/2003032234 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f64540bdf60 con 0x7f6458072440 2026-03-09T20:50:10.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.567+0000 7f645cfef640 1 --2- 192.168.123.107:0/2003032234 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f643c077910 0x7f643c079dd0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f645000bfd0 tx=0x7f6450012040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:10.569 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.568+0000 7f644e7fc640 1 
-- 192.168.123.107:0/2003032234 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f6454086600 con 0x7f6458072440 2026-03-09T20:50:10.764 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.762+0000 7f645fa7b640 1 -- 192.168.123.107:0/2003032234 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f6428002bf0 con 0x7f643c077910 2026-03-09T20:50:10.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.765+0000 7f644e7fc640 1 -- 192.168.123.107:0/2003032234 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f6428002bf0 con 0x7f643c077910 2026-03-09T20:50:10.771 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.770+0000 7f6423fff640 1 -- 192.168.123.107:0/2003032234 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f643c077910 msgr2=0x7f643c079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:10.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.770+0000 7f6423fff640 1 --2- 192.168.123.107:0/2003032234 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f643c077910 0x7f643c079dd0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f645000bfd0 tx=0x7f6450012040 comp rx=0 tx=0).stop 2026-03-09T20:50:10.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.770+0000 7f6423fff640 1 -- 192.168.123.107:0/2003032234 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6458072440 msgr2=0x7f6458084140 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:10.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.770+0000 7f6423fff640 1 --2- 192.168.123.107:0/2003032234 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6458072440 0x7f6458084140 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f6454004870 tx=0x7f64540028f0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.770+0000 7f6423fff640 1 -- 192.168.123.107:0/2003032234 shutdown_connections 2026-03-09T20:50:10.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.770+0000 7f6423fff640 1 --2- 192.168.123.107:0/2003032234 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f643c077910 0x7f643c079dd0 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.770+0000 7f6423fff640 1 --2- 192.168.123.107:0/2003032234 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6458082790 0x7f6458082c10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.770+0000 7f6423fff640 1 --2- 192.168.123.107:0/2003032234 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6458072440 0x7f6458084140 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.770+0000 7f6423fff640 1 -- 192.168.123.107:0/2003032234 >> 192.168.123.107:0/2003032234 conn(0x7f645806d4f0 msgr2=0x7f64580754b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:10.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.771+0000 7f6423fff640 1 -- 192.168.123.107:0/2003032234 shutdown_connections 2026-03-09T20:50:10.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.771+0000 7f6423fff640 1 -- 192.168.123.107:0/2003032234 wait complete. 
2026-03-09T20:50:10.782 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:50:10.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:10.904 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.903+0000 7f74e4c0d640 1 -- 192.168.123.107:0/104390693 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74e0071a50 msgr2=0x7f74e0071e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.903+0000 7f74e4c0d640 1 --2- 192.168.123.107:0/104390693 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74e0071a50 0x7f74e0071e50 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f74d0007920 tx=0x7f74d0031130 comp rx=0 tx=0).stop 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.903+0000 7f74e4c0d640 1 -- 192.168.123.107:0/104390693 shutdown_connections 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.903+0000 7f74e4c0d640 1 --2- 192.168.123.107:0/104390693 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f74e0072420 0x7f74e0077190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.903+0000 7f74e4c0d640 1 --2- 192.168.123.107:0/104390693 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74e0071a50 0x7f74e0071e50 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.905 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.903+0000 7f74e4c0d640 1 -- 192.168.123.107:0/104390693 >> 192.168.123.107:0/104390693 conn(0x7f74e006d4f0 msgr2=0x7f74e006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.903+0000 7f74e4c0d640 1 -- 192.168.123.107:0/104390693 shutdown_connections 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74e4c0d640 1 -- 192.168.123.107:0/104390693 wait complete. 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74e4c0d640 1 Processor -- start 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74e4c0d640 1 -- start start 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74e4c0d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f74e0072420 0x7f74e00840f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74e4c0d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74e0082740 0x7f74e0082bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74e4c0d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f74e0084630 con 0x7f74e0082740 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74e4c0d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f74e0083100 con 0x7f74e0072420 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74deffd640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74e0082740 0x7f74e0082bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74deffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74e0082740 0x7f74e0082bc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:42702/0 (socket says 192.168.123.107:42702) 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74deffd640 1 -- 192.168.123.107:0/3521772872 learned_addr learned my addr 192.168.123.107:0/3521772872 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:10.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.904+0000 7f74df7fe640 1 --2- 192.168.123.107:0/3521772872 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f74e0072420 0x7f74e00840f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:10.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.905+0000 7f74deffd640 1 -- 192.168.123.107:0/3521772872 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f74e0072420 msgr2=0x7f74e00840f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:10.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.905+0000 7f74deffd640 1 --2- 192.168.123.107:0/3521772872 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f74e0072420 0x7f74e00840f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:10.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.905+0000 7f74deffd640 1 -- 
192.168.123.107:0/3521772872 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f74d00075d0 con 0x7f74e0082740 2026-03-09T20:50:10.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.905+0000 7f74deffd640 1 --2- 192.168.123.107:0/3521772872 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74e0082740 0x7f74e0082bc0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f74d800efc0 tx=0x7f74d800c490 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:10.907 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.906+0000 7f74dcff9640 1 -- 192.168.123.107:0/3521772872 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f74d8009280 con 0x7f74e0082740 2026-03-09T20:50:10.907 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.906+0000 7f74e4c0d640 1 -- 192.168.123.107:0/3521772872 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f74e0083380 con 0x7f74e0082740 2026-03-09T20:50:10.907 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.906+0000 7f74e4c0d640 1 -- 192.168.123.107:0/3521772872 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f74e01be5d0 con 0x7f74e0082740 2026-03-09T20:50:10.907 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.907+0000 7f74dcff9640 1 -- 192.168.123.107:0/3521772872 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f74d800f040 con 0x7f74e0082740 2026-03-09T20:50:10.908 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.907+0000 7f74dcff9640 1 -- 192.168.123.107:0/3521772872 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f74d8004940 con 0x7f74e0082740 2026-03-09T20:50:10.908 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.907+0000 7f74e4c0d640 1 -- 192.168.123.107:0/3521772872 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f74e0079e40 con 0x7f74e0082740 2026-03-09T20:50:10.909 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.908+0000 7f74dcff9640 1 -- 192.168.123.107:0/3521772872 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f74d8004b40 con 0x7f74e0082740 2026-03-09T20:50:10.910 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.909+0000 7f74dcff9640 1 --2- 192.168.123.107:0/3521772872 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f74cc0779e0 0x7f74cc079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:10.912 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.911+0000 7f74df7fe640 1 --2- 192.168.123.107:0/3521772872 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f74cc0779e0 0x7f74cc079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:10.912 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.911+0000 7f74dcff9640 1 -- 192.168.123.107:0/3521772872 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f74d80995f0 con 0x7f74e0082740 2026-03-09T20:50:10.912 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.912+0000 7f74dcff9640 1 -- 192.168.123.107:0/3521772872 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f74d8099920 con 0x7f74e0082740 2026-03-09T20:50:10.913 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:10.912+0000 7f74df7fe640 1 --2- 
192.168.123.107:0/3521772872 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f74cc0779e0 0x7f74cc079ea0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f74d0002410 tx=0x7f74d0033040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:11.088 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.087+0000 7f74e4c0d640 1 -- 192.168.123.107:0/3521772872 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f74e007bb20 con 0x7f74cc0779e0 2026-03-09T20:50:11.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.088+0000 7f74dcff9640 1 -- 192.168.123.107:0/3521772872 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f74e007bb20 con 0x7f74cc0779e0 2026-03-09T20:50:11.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.091+0000 7f74be7fc640 1 -- 192.168.123.107:0/3521772872 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f74cc0779e0 msgr2=0x7f74cc079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.091+0000 7f74be7fc640 1 --2- 192.168.123.107:0/3521772872 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f74cc0779e0 0x7f74cc079ea0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto 
rx=0x7f74d0002410 tx=0x7f74d0033040 comp rx=0 tx=0).stop 2026-03-09T20:50:11.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.091+0000 7f74be7fc640 1 -- 192.168.123.107:0/3521772872 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74e0082740 msgr2=0x7f74e0082bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.091+0000 7f74be7fc640 1 --2- 192.168.123.107:0/3521772872 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74e0082740 0x7f74e0082bc0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f74d800efc0 tx=0x7f74d800c490 comp rx=0 tx=0).stop 2026-03-09T20:50:11.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.092+0000 7f74be7fc640 1 -- 192.168.123.107:0/3521772872 shutdown_connections 2026-03-09T20:50:11.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.092+0000 7f74be7fc640 1 --2- 192.168.123.107:0/3521772872 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f74cc0779e0 0x7f74cc079ea0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.092+0000 7f74be7fc640 1 --2- 192.168.123.107:0/3521772872 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74e0082740 0x7f74e0082bc0 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.092+0000 7f74be7fc640 1 --2- 192.168.123.107:0/3521772872 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f74e0072420 0x7f74e00840f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.092+0000 7f74be7fc640 1 -- 192.168.123.107:0/3521772872 >> 
192.168.123.107:0/3521772872 conn(0x7f74e006d4f0 msgr2=0x7f74e007b410 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:11.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.092+0000 7f74be7fc640 1 -- 192.168.123.107:0/3521772872 shutdown_connections 2026-03-09T20:50:11.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.092+0000 7f74be7fc640 1 -- 192.168.123.107:0/3521772872 wait complete. 2026-03-09T20:50:11.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.199+0000 7f5bd4218640 1 -- 192.168.123.107:0/2278437930 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5bcc103c80 msgr2=0x7f5bcc104100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.199+0000 7f5bd4218640 1 --2- 192.168.123.107:0/2278437930 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5bcc103c80 0x7f5bcc104100 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f5bc00099b0 tx=0x7f5bc002f240 comp rx=0 tx=0).stop 2026-03-09T20:50:11.204 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.202+0000 7f5bd4218640 1 -- 192.168.123.107:0/2278437930 shutdown_connections 2026-03-09T20:50:11.204 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.202+0000 7f5bd4218640 1 --2- 192.168.123.107:0/2278437930 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5bcc103c80 0x7f5bcc104100 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.204 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.202+0000 7f5bd4218640 1 --2- 192.168.123.107:0/2278437930 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5bcc102a80 0x7f5bcc102e80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.204 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.202+0000 7f5bd4218640 1 -- 
192.168.123.107:0/2278437930 >> 192.168.123.107:0/2278437930 conn(0x7f5bcc0fe250 msgr2=0x7f5bcc100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:11.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.203+0000 7f5bd4218640 1 -- 192.168.123.107:0/2278437930 shutdown_connections 2026-03-09T20:50:11.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.204+0000 7f5bd4218640 1 -- 192.168.123.107:0/2278437930 wait complete. 2026-03-09T20:50:11.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.205+0000 7f5bd4218640 1 Processor -- start 2026-03-09T20:50:11.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.206+0000 7f5bd4218640 1 -- start start 2026-03-09T20:50:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.207+0000 7f5bd4218640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5bcc102a80 0x7f5bcc19a420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.207+0000 7f5bd4218640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5bcc103c80 0x7f5bcc19a960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.207+0000 7f5bd4218640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5bcc19af30 con 0x7f5bcc103c80 2026-03-09T20:50:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.207+0000 7f5bd4218640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5bcc19b0a0 con 0x7f5bcc102a80 2026-03-09T20:50:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.207+0000 7f5bd178c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5bcc103c80 0x7f5bcc19a960 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.207+0000 7f5bd178c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5bcc103c80 0x7f5bcc19a960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:42722/0 (socket says 192.168.123.107:42722) 2026-03-09T20:50:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.207+0000 7f5bd178c640 1 -- 192.168.123.107:0/4004959791 learned_addr learned my addr 192.168.123.107:0/4004959791 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.207+0000 7f5bd178c640 1 -- 192.168.123.107:0/4004959791 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5bcc102a80 msgr2=0x7f5bcc19a420 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.207+0000 7f5bd178c640 1 --2- 192.168.123.107:0/4004959791 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5bcc102a80 0x7f5bcc19a420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.207+0000 7f5bd178c640 1 -- 192.168.123.107:0/4004959791 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5bc0009660 con 0x7f5bcc103c80 2026-03-09T20:50:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.208+0000 7f5bd178c640 1 --2- 192.168.123.107:0/4004959791 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5bcc103c80 0x7f5bcc19a960 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f5bc0004290 
tx=0x7f5bc0038620 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.208+0000 7f5bbaffd640 1 -- 192.168.123.107:0/4004959791 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5bc003d070 con 0x7f5bcc103c80 2026-03-09T20:50:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.208+0000 7f5bd4218640 1 -- 192.168.123.107:0/4004959791 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5bcc19fae0 con 0x7f5bcc103c80 2026-03-09T20:50:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.208+0000 7f5bd4218640 1 -- 192.168.123.107:0/4004959791 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5bcc19ffd0 con 0x7f5bcc103c80 2026-03-09T20:50:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.209+0000 7f5bbaffd640 1 -- 192.168.123.107:0/4004959791 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5bc0038730 con 0x7f5bcc103c80 2026-03-09T20:50:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.210+0000 7f5bbaffd640 1 -- 192.168.123.107:0/4004959791 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5bc004b6d0 con 0x7f5bcc103c80 2026-03-09T20:50:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.210+0000 7f5bbaffd640 1 -- 192.168.123.107:0/4004959791 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f5bc0049050 con 0x7f5bcc103c80 2026-03-09T20:50:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.211+0000 7f5bbaffd640 1 --2- 192.168.123.107:0/4004959791 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b9c077750 0x7f5b9c079c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:11.212 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.211+0000 7f5bd1f8d640 1 --2- 192.168.123.107:0/4004959791 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b9c077750 0x7f5b9c079c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:11.212 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.211+0000 7f5bbaffd640 1 -- 192.168.123.107:0/4004959791 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f5bc00bf4d0 con 0x7f5bcc103c80 2026-03-09T20:50:11.212 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.211+0000 7f5bd1f8d640 1 --2- 192.168.123.107:0/4004959791 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b9c077750 0x7f5b9c079c10 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f5bcc103ae0 tx=0x7f5bbc009730 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:11.212 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.211+0000 7f5bd4218640 1 -- 192.168.123.107:0/4004959791 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5bcc102e80 con 0x7f5bcc103c80 2026-03-09T20:50:11.220 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.214+0000 7f5bbaffd640 1 -- 192.168.123.107:0/4004959791 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f5bc0087c80 con 0x7f5bcc103c80 2026-03-09T20:50:11.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.339+0000 7f5bd4218640 1 -- 192.168.123.107:0/4004959791 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- 
mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f5bcc107ec0 con 0x7f5b9c077750 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.345+0000 7f5bbaffd640 1 -- 192.168.123.107:0/4004959791 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3600 (secure 0 0 0) 0x7f5bcc107ec0 con 0x7f5b9c077750 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (6m) 23s ago 7m 43.0M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (7m) 23s ago 7m 9701k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (6m) 78s ago 6m 9.90M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (82s) 23s ago 7m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 406c9c54f34a 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (80s) 78s ago 6m 7856k - 19.2.3-678-ge911bdeb 654f31e6858e 30eaebf5d733 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (6m) 23s ago 7m 160M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (5m) 23s ago 5m 30.2M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (5m) 23s ago 5m 226M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (5m) 78s ago 5m 
151M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (5m) 78s ago 5m 27.0M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (2m) 23s ago 7m 611M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (2m) 78s ago 6m 489M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (114s) 23s ago 7m 56.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (99s) 78s ago 6m 45.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4428cf7f0607 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (7m) 23s ago 7m 16.0M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:50:11.346 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (6m) 78s ago 6m 15.4M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:50:11.347 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (68s) 23s ago 6m 189M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1da9d2cdbdc3 2026-03-09T20:50:11.347 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (26s) 23s ago 6m 11.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 95f518bf664f 2026-03-09T20:50:11.347 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (6m) 23s ago 6m 341M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:50:11.347 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (5m) 78s ago 5m 452M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:50:11.347 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (5m) 78s ago 5m 409M 4096M 
18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:50:11.347 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (5m) 78s ago 5m 343M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:50:11.347 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (2m) 23s ago 6m 47.3M - 2.43.0 a07b618ecd1d 3f9c07cd3fe3 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 -- 192.168.123.107:0/4004959791 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b9c077750 msgr2=0x7f5b9c079c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 --2- 192.168.123.107:0/4004959791 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b9c077750 0x7f5b9c079c10 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f5bcc103ae0 tx=0x7f5bbc009730 comp rx=0 tx=0).stop 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 -- 192.168.123.107:0/4004959791 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5bcc103c80 msgr2=0x7f5bcc19a960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 --2- 192.168.123.107:0/4004959791 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5bcc103c80 0x7f5bcc19a960 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f5bc0004290 tx=0x7f5bc0038620 comp rx=0 tx=0).stop 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 -- 192.168.123.107:0/4004959791 shutdown_connections 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 --2- 192.168.123.107:0/4004959791 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5b9c077750 0x7f5b9c079c10 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 --2- 192.168.123.107:0/4004959791 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5bcc103c80 0x7f5bcc19a960 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 --2- 192.168.123.107:0/4004959791 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5bcc102a80 0x7f5bcc19a420 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 -- 192.168.123.107:0/4004959791 >> 192.168.123.107:0/4004959791 conn(0x7f5bcc0fe250 msgr2=0x7f5bcc0ffd10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 -- 192.168.123.107:0/4004959791 shutdown_connections 2026-03-09T20:50:11.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.352+0000 7f5bd4218640 1 -- 192.168.123.107:0/4004959791 wait complete. 
2026-03-09T20:50:11.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.429+0000 7f9a5124a640 1 -- 192.168.123.107:0/3288521897 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a4c071c20 msgr2=0x7f9a4c072020 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.429+0000 7f9a5124a640 1 --2- 192.168.123.107:0/3288521897 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a4c071c20 0x7f9a4c072020 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f9a3c007920 tx=0x7f9a3c031130 comp rx=0 tx=0).stop 2026-03-09T20:50:11.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 -- 192.168.123.107:0/3288521897 shutdown_connections 2026-03-09T20:50:11.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 --2- 192.168.123.107:0/3288521897 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a4c0725f0 0x7f9a4c077360 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 --2- 192.168.123.107:0/3288521897 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a4c071c20 0x7f9a4c072020 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 -- 192.168.123.107:0/3288521897 >> 192.168.123.107:0/3288521897 conn(0x7f9a4c06d660 msgr2=0x7f9a4c06faa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 -- 192.168.123.107:0/3288521897 shutdown_connections 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 -- 192.168.123.107:0/3288521897 
wait complete. 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 Processor -- start 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 -- start start 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a4c071c20 0x7f9a4c0828e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a4c0725f0 0x7f9a4c082e20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9a4c084320 con 0x7f9a4c071c20 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.430+0000 7f9a5124a640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9a4c084490 con 0x7f9a4c0725f0 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.431+0000 7f9a4ad76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a4c071c20 0x7f9a4c0828e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.431+0000 7f9a4ad76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a4c071c20 0x7f9a4c0828e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:42738/0 (socket says 192.168.123.107:42738) 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.431+0000 7f9a4ad76640 1 -- 192.168.123.107:0/3137136474 learned_addr learned my addr 192.168.123.107:0/3137136474 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.431+0000 7f9a4a575640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a4c0725f0 0x7f9a4c082e20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.431+0000 7f9a4ad76640 1 -- 192.168.123.107:0/3137136474 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a4c0725f0 msgr2=0x7f9a4c082e20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.431+0000 7f9a4ad76640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a4c0725f0 0x7f9a4c082e20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.431+0000 7f9a4ad76640 1 -- 192.168.123.107:0/3137136474 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9a3c0075d0 con 0x7f9a4c071c20 2026-03-09T20:50:11.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.432+0000 7f9a4ad76640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a4c071c20 0x7f9a4c0828e0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f9a3c002410 tx=0x7f9a3c002910 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:50:11.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.432+0000 7f9a2bfff640 1 -- 192.168.123.107:0/3137136474 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9a3c00f040 con 0x7f9a4c071c20 2026-03-09T20:50:11.434 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.432+0000 7f9a5124a640 1 -- 192.168.123.107:0/3137136474 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9a4c083360 con 0x7f9a4c071c20 2026-03-09T20:50:11.434 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.432+0000 7f9a5124a640 1 -- 192.168.123.107:0/3137136474 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9a4c12eda0 con 0x7f9a4c071c20 2026-03-09T20:50:11.434 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.433+0000 7f9a2bfff640 1 -- 192.168.123.107:0/3137136474 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9a3c010210 con 0x7f9a4c071c20 2026-03-09T20:50:11.434 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.433+0000 7f9a2bfff640 1 -- 192.168.123.107:0/3137136474 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9a3c0416d0 con 0x7f9a4c071c20 2026-03-09T20:50:11.435 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.434+0000 7f9a5124a640 1 -- 192.168.123.107:0/3137136474 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9a4c072020 con 0x7f9a4c071c20 2026-03-09T20:50:11.435 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.435+0000 7f9a2bfff640 1 -- 192.168.123.107:0/3137136474 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9a3c049050 con 0x7f9a4c071c20 2026-03-09T20:50:11.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.435+0000 
7f9a2bfff640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9a380778e0 0x7f9a38079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:11.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.437+0000 7f9a4a575640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9a380778e0 0x7f9a38079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:11.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.437+0000 7f9a2bfff640 1 -- 192.168.123.107:0/3137136474 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f9a3c0be3e0 con 0x7f9a4c071c20 2026-03-09T20:50:11.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.437+0000 7f9a4a575640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9a380778e0 0x7f9a38079da0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f9a4c084010 tx=0x7f9a4400b040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:11.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.438+0000 7f9a2bfff640 1 -- 192.168.123.107:0/3137136474 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9a3c086b90 con 0x7f9a4c071c20 2026-03-09T20:50:11.608 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.607+0000 7f9a5124a640 1 -- 192.168.123.107:0/3137136474 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f9a4c06fd00 con 0x7f9a4c071c20 2026-03-09T20:50:11.612 
INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4, 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 8, 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stdout:} 
2026-03-09T20:50:11.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.608+0000 7f9a2bfff640 1 -- 192.168.123.107:0/3137136474 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+844 (secure 0 0 0) 0x7f9a3c0862e0 con 0x7f9a4c071c20 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 -- 192.168.123.107:0/3137136474 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9a380778e0 msgr2=0x7f9a38079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9a380778e0 0x7f9a38079da0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f9a4c084010 tx=0x7f9a4400b040 comp rx=0 tx=0).stop 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 -- 192.168.123.107:0/3137136474 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a4c071c20 msgr2=0x7f9a4c0828e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a4c071c20 0x7f9a4c0828e0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f9a3c002410 tx=0x7f9a3c002910 comp rx=0 tx=0).stop 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 -- 192.168.123.107:0/3137136474 shutdown_connections 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] 
conn(0x7f9a380778e0 0x7f9a38079da0 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9a4c0725f0 0x7f9a4c082e20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 --2- 192.168.123.107:0/3137136474 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9a4c071c20 0x7f9a4c0828e0 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 -- 192.168.123.107:0/3137136474 >> 192.168.123.107:0/3137136474 conn(0x7f9a4c06d660 msgr2=0x7f9a4c07c070 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 -- 192.168.123.107:0/3137136474 shutdown_connections 2026-03-09T20:50:11.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.612+0000 7f9a29ffb640 1 -- 192.168.123.107:0/3137136474 wait complete. 
2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.708+0000 7fc87bfff640 1 -- 192.168.123.107:0/2996902958 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc87c072370 msgr2=0x7fc87c10c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.708+0000 7fc87bfff640 1 --2- 192.168.123.107:0/2996902958 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc87c072370 0x7fc87c10c590 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fc86c00b0a0 tx=0x7fc86c02f4c0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.708+0000 7fc87bfff640 1 -- 192.168.123.107:0/2996902958 shutdown_connections 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.708+0000 7fc87bfff640 1 --2- 192.168.123.107:0/2996902958 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc87c072370 0x7fc87c10c590 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.708+0000 7fc87bfff640 1 --2- 192.168.123.107:0/2996902958 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc87c0719a0 0x7fc87c071da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.708+0000 7fc87bfff640 1 -- 192.168.123.107:0/2996902958 >> 192.168.123.107:0/2996902958 conn(0x7fc87c06d4f0 msgr2=0x7fc87c06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.708+0000 7fc87bfff640 1 -- 192.168.123.107:0/2996902958 shutdown_connections 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.708+0000 7fc87bfff640 1 -- 192.168.123.107:0/2996902958 
wait complete. 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.709+0000 7fc87bfff640 1 Processor -- start 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.709+0000 7fc87bfff640 1 -- start start 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.709+0000 7fc87bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc87c0719a0 0x7fc87c1158b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.709+0000 7fc87bfff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc87c117260 0x7fc87c115df0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.709+0000 7fc87bfff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc87c116330 con 0x7fc87c0719a0 2026-03-09T20:50:11.710 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.709+0000 7fc87bfff640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc87c1164a0 con 0x7fc87c117260 2026-03-09T20:50:11.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.714+0000 7fc87a7fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc87c117260 0x7fc87c115df0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:11.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.714+0000 7fc87a7fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc87c117260 0x7fc87c115df0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 
says I am v2:192.168.123.107:48094/0 (socket says 192.168.123.107:48094) 2026-03-09T20:50:11.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.714+0000 7fc87a7fc640 1 -- 192.168.123.107:0/2409467107 learned_addr learned my addr 192.168.123.107:0/2409467107 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:11.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.714+0000 7fc87affd640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc87c0719a0 0x7fc87c1158b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:11.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.714+0000 7fc87a7fc640 1 -- 192.168.123.107:0/2409467107 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc87c0719a0 msgr2=0x7fc87c1158b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.714+0000 7fc87a7fc640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc87c0719a0 0x7fc87c1158b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.714+0000 7fc87a7fc640 1 -- 192.168.123.107:0/2409467107 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc86c009d00 con 0x7fc87c117260 2026-03-09T20:50:11.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.714+0000 7fc87a7fc640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc87c117260 0x7fc87c115df0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fc86c009fd0 tx=0x7fc86c009300 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:50:11.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.715+0000 7fc85bfff640 1 -- 192.168.123.107:0/2409467107 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc86c0048f0 con 0x7fc87c117260 2026-03-09T20:50:11.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.715+0000 7fc87bfff640 1 -- 192.168.123.107:0/2409467107 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc87c116720 con 0x7fc87c117260 2026-03-09T20:50:11.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.715+0000 7fc87bfff640 1 -- 192.168.123.107:0/2409467107 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc87c1b5a90 con 0x7fc87c117260 2026-03-09T20:50:11.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.715+0000 7fc85bfff640 1 -- 192.168.123.107:0/2409467107 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc86c007ca0 con 0x7fc87c117260 2026-03-09T20:50:11.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.715+0000 7fc85bfff640 1 -- 192.168.123.107:0/2409467107 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc86c0408d0 con 0x7fc87c117260 2026-03-09T20:50:11.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.716+0000 7fc87bfff640 1 -- 192.168.123.107:0/2409467107 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc87c072370 con 0x7fc87c117260 2026-03-09T20:50:11.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.716+0000 7fc85bfff640 1 -- 192.168.123.107:0/2409467107 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc86c04a430 con 0x7fc87c117260 2026-03-09T20:50:11.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.717+0000 
7fc85bfff640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc8440778e0 0x7fc844079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:11.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.717+0000 7fc87affd640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc8440778e0 0x7fc844079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:11.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.717+0000 7fc85bfff640 1 -- 192.168.123.107:0/2409467107 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fc86c0be3d0 con 0x7fc87c117260 2026-03-09T20:50:11.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.717+0000 7fc87affd640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc8440778e0 0x7fc844079da0 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fc870005fd0 tx=0x7fc870004380 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:11.723 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.720+0000 7fc85bfff640 1 -- 192.168.123.107:0/2409467107 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc86c086a70 con 0x7fc87c117260 2026-03-09T20:50:11.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:11 vm07.local ceph-mon[112105]: pgmap v69: 65 pgs: 10 active+recovery_wait+degraded, 1 active+recovering, 54 active+clean; 251 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 928 KiB/s rd, 890 KiB/s wr, 281 op/s; 796/16005 objects degraded (4.973%); 265 
KiB/s, 126 keys/s, 62 objects/s recovering 2026-03-09T20:50:11.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:11 vm07.local ceph-mon[112105]: from='client.44161 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:11.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:11 vm07.local ceph-mon[112105]: from='client.34196 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:11.911 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.910+0000 7fc87bfff640 1 -- 192.168.123.107:0/2409467107 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fc87c1183e0 con 0x7fc87c117260 2026-03-09T20:50:11.912 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.911+0000 7fc85bfff640 1 -- 192.168.123.107:0/2409467107 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1937 (secure 0 0 0) 0x7fc86c0861c0 con 0x7fc87c117260 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:50:11.914 
INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:50:11.914 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:damaged 
2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 0 members: 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:50:11.915 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.921+0000 7fc859ffb640 1 -- 
192.168.123.107:0/2409467107 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc8440778e0 msgr2=0x7fc844079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.921+0000 7fc859ffb640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc8440778e0 0x7fc844079da0 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fc870005fd0 tx=0x7fc870004380 comp rx=0 tx=0).stop 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.921+0000 7fc859ffb640 1 -- 192.168.123.107:0/2409467107 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc87c117260 msgr2=0x7fc87c115df0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.921+0000 7fc859ffb640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc87c117260 0x7fc87c115df0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fc86c009fd0 tx=0x7fc86c009300 comp rx=0 tx=0).stop 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.922+0000 7fc859ffb640 1 -- 192.168.123.107:0/2409467107 shutdown_connections 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.922+0000 7fc859ffb640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc8440778e0 0x7fc844079da0 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.922+0000 7fc859ffb640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc87c117260 0x7fc87c115df0 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.922+0000 7fc859ffb640 1 --2- 192.168.123.107:0/2409467107 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc87c0719a0 0x7fc87c1158b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.922+0000 7fc859ffb640 1 -- 192.168.123.107:0/2409467107 >> 192.168.123.107:0/2409467107 conn(0x7fc87c06d4f0 msgr2=0x7fc87c070300 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.922+0000 7fc859ffb640 1 -- 192.168.123.107:0/2409467107 shutdown_connections 2026-03-09T20:50:11.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:11.922+0000 7fc859ffb640 1 -- 192.168.123.107:0/2409467107 wait complete. 2026-03-09T20:50:12.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.013+0000 7fed53478640 1 -- 192.168.123.107:0/1369319852 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed4c072420 msgr2=0x7fed4c077190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:12.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.013+0000 7fed53478640 1 --2- 192.168.123.107:0/1369319852 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed4c072420 0x7fed4c077190 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7fed44009040 tx=0x7fed4402fc10 comp rx=0 tx=0).stop 2026-03-09T20:50:12.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.013+0000 7fed53478640 1 -- 192.168.123.107:0/1369319852 shutdown_connections 2026-03-09T20:50:12.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.013+0000 7fed53478640 1 --2- 192.168.123.107:0/1369319852 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed4c072420 0x7fed4c077190 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.013+0000 7fed53478640 1 --2- 192.168.123.107:0/1369319852 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed4c071a50 0x7fed4c071e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.013+0000 7fed53478640 1 -- 192.168.123.107:0/1369319852 >> 192.168.123.107:0/1369319852 conn(0x7fed4c06d4f0 msgr2=0x7fed4c06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:12.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.014+0000 7fed53478640 1 -- 192.168.123.107:0/1369319852 shutdown_connections 2026-03-09T20:50:12.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.014+0000 7fed53478640 1 -- 192.168.123.107:0/1369319852 wait complete. 2026-03-09T20:50:12.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.014+0000 7fed53478640 1 Processor -- start 2026-03-09T20:50:12.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.014+0000 7fed53478640 1 -- start start 2026-03-09T20:50:12.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.014+0000 7fed53478640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed4c071a50 0x7fed4c084010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.014+0000 7fed53478640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed4c082660 0x7fed4c082ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.014+0000 7fed53478640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fed4c084550 con 0x7fed4c071a50 
2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.014+0000 7fed53478640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fed4c083020 con 0x7fed4c082660 2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.015+0000 7fed52476640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed4c071a50 0x7fed4c084010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.015+0000 7fed52476640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed4c071a50 0x7fed4c084010 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:42782/0 (socket says 192.168.123.107:42782) 2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.015+0000 7fed52476640 1 -- 192.168.123.107:0/3193473545 learned_addr learned my addr 192.168.123.107:0/3193473545 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.015+0000 7fed51c75640 1 --2- 192.168.123.107:0/3193473545 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed4c082660 0x7fed4c082ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.015+0000 7fed52476640 1 -- 192.168.123.107:0/3193473545 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed4c082660 msgr2=0x7fed4c082ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.015+0000 
7fed52476640 1 --2- 192.168.123.107:0/3193473545 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed4c082660 0x7fed4c082ae0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.015+0000 7fed52476640 1 -- 192.168.123.107:0/3193473545 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fed44008cf0 con 0x7fed4c071a50 2026-03-09T20:50:12.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.016+0000 7fed52476640 1 --2- 192.168.123.107:0/3193473545 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed4c071a50 0x7fed4c084010 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fed48009870 tx=0x7fed48009d40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:12.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.017+0000 7fed437fe640 1 -- 192.168.123.107:0/3193473545 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fed48010040 con 0x7fed4c071a50 2026-03-09T20:50:12.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.017+0000 7fed53478640 1 -- 192.168.123.107:0/3193473545 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fed4c083300 con 0x7fed4c071a50 2026-03-09T20:50:12.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.017+0000 7fed53478640 1 -- 192.168.123.107:0/3193473545 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fed4c12ef70 con 0x7fed4c071a50 2026-03-09T20:50:12.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.019+0000 7fed437fe640 1 -- 192.168.123.107:0/3193473545 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fed4800ecf0 con 0x7fed4c071a50 
2026-03-09T20:50:12.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.019+0000 7fed437fe640 1 -- 192.168.123.107:0/3193473545 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fed48002cf0 con 0x7fed4c071a50 2026-03-09T20:50:12.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.019+0000 7fed437fe640 1 -- 192.168.123.107:0/3193473545 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fed4801e3b0 con 0x7fed4c071a50 2026-03-09T20:50:12.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.020+0000 7fed437fe640 1 --2- 192.168.123.107:0/3193473545 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed34077a80 0x7fed34079f40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:12.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.022+0000 7fed437fe640 1 -- 192.168.123.107:0/3193473545 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fed4809a1e0 con 0x7fed4c071a50 2026-03-09T20:50:12.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.022+0000 7fed53478640 1 -- 192.168.123.107:0/3193473545 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fed10005350 con 0x7fed4c071a50 2026-03-09T20:50:12.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.023+0000 7fed51c75640 1 --2- 192.168.123.107:0/3193473545 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed34077a80 0x7fed34079f40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:12.026 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.025+0000 7fed51c75640 1 --2- 192.168.123.107:0/3193473545 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed34077a80 0x7fed34079f40 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fed44002790 tx=0x7fed44007480 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:12.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.026+0000 7fed437fe640 1 -- 192.168.123.107:0/3193473545 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fed480629b0 con 0x7fed4c071a50 2026-03-09T20:50:12.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:11 vm10.local ceph-mon[103526]: pgmap v69: 65 pgs: 10 active+recovery_wait+degraded, 1 active+recovering, 54 active+clean; 251 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 928 KiB/s rd, 890 KiB/s wr, 281 op/s; 796/16005 objects degraded (4.973%); 265 KiB/s, 126 keys/s, 62 objects/s recovering 2026-03-09T20:50:12.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:11 vm10.local ceph-mon[103526]: from='client.44161 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:12.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:11 vm10.local ceph-mon[103526]: from='client.34196 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:12.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.175+0000 7fed53478640 1 -- 192.168.123.107:0/3193473545 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fed10002bf0 con 0x7fed34077a80 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": 
"quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: "mon", 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: "crash", 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: "mgr" 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "8/23 daemons upgraded", 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:50:12.187 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.184+0000 7fed437fe640 1 -- 192.168.123.107:0/3193473545 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7fed10002bf0 con 0x7fed34077a80 2026-03-09T20:50:12.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.187+0000 7fed53478640 1 -- 192.168.123.107:0/3193473545 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed34077a80 msgr2=0x7fed34079f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:12.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.187+0000 7fed53478640 1 --2- 192.168.123.107:0/3193473545 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed34077a80 0x7fed34079f40 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fed44002790 
tx=0x7fed44007480 comp rx=0 tx=0).stop 2026-03-09T20:50:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.188+0000 7fed53478640 1 -- 192.168.123.107:0/3193473545 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed4c071a50 msgr2=0x7fed4c084010 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.188+0000 7fed53478640 1 --2- 192.168.123.107:0/3193473545 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed4c071a50 0x7fed4c084010 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fed48009870 tx=0x7fed48009d40 comp rx=0 tx=0).stop 2026-03-09T20:50:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.188+0000 7fed53478640 1 -- 192.168.123.107:0/3193473545 shutdown_connections 2026-03-09T20:50:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.188+0000 7fed53478640 1 --2- 192.168.123.107:0/3193473545 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fed34077a80 0x7fed34079f40 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.188+0000 7fed53478640 1 --2- 192.168.123.107:0/3193473545 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fed4c082660 0x7fed4c082ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.188+0000 7fed53478640 1 --2- 192.168.123.107:0/3193473545 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fed4c071a50 0x7fed4c084010 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.188+0000 7fed53478640 1 -- 192.168.123.107:0/3193473545 >> 192.168.123.107:0/3193473545 
conn(0x7fed4c06d4f0 msgr2=0x7fed4c073130 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.188+0000 7fed53478640 1 -- 192.168.123.107:0/3193473545 shutdown_connections 2026-03-09T20:50:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.188+0000 7fed53478640 1 -- 192.168.123.107:0/3193473545 wait complete. 2026-03-09T20:50:12.287 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.285+0000 7fea7faab640 1 -- 192.168.123.107:0/3058537334 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fea78102a80 msgr2=0x7fea78102e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:12.287 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.285+0000 7fea7faab640 1 --2- 192.168.123.107:0/3058537334 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fea78102a80 0x7fea78102e80 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fea680099b0 tx=0x7fea6802f240 comp rx=0 tx=0).stop 2026-03-09T20:50:12.287 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.286+0000 7fea7faab640 1 -- 192.168.123.107:0/3058537334 shutdown_connections 2026-03-09T20:50:12.287 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.286+0000 7fea7faab640 1 --2- 192.168.123.107:0/3058537334 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fea78103c80 0x7fea78104100 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.287 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.286+0000 7fea7faab640 1 --2- 192.168.123.107:0/3058537334 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fea78102a80 0x7fea78102e80 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.287 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.286+0000 7fea7faab640 1 -- 192.168.123.107:0/3058537334 >> 
192.168.123.107:0/3058537334 conn(0x7fea780fe250 msgr2=0x7fea78100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:12.287 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.286+0000 7fea7faab640 1 -- 192.168.123.107:0/3058537334 shutdown_connections 2026-03-09T20:50:12.287 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.287+0000 7fea7faab640 1 -- 192.168.123.107:0/3058537334 wait complete. 2026-03-09T20:50:12.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.287+0000 7fea7faab640 1 Processor -- start 2026-03-09T20:50:12.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.288+0000 7fea7faab640 1 -- start start 2026-03-09T20:50:12.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.288+0000 7fea7faab640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fea78102a80 0x7fea7819e960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.288+0000 7fea7faab640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fea78103c80 0x7fea7819eea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.288+0000 7fea7faab640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fea7819f470 con 0x7fea78102a80 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.288+0000 7fea7faab640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fea7819f5e0 con 0x7fea78103c80 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.289+0000 7fea7d01f640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fea78103c80 0x7fea7819eea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto 
rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.289+0000 7fea7d01f640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fea78103c80 0x7fea7819eea0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:48144/0 (socket says 192.168.123.107:48144) 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.289+0000 7fea7d01f640 1 -- 192.168.123.107:0/2510356893 learned_addr learned my addr 192.168.123.107:0/2510356893 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.289+0000 7fea7d820640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fea78102a80 0x7fea7819e960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.289+0000 7fea7d01f640 1 -- 192.168.123.107:0/2510356893 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fea78102a80 msgr2=0x7fea7819e960 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.289+0000 7fea7d01f640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fea78102a80 0x7fea7819e960 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.289+0000 7fea7d01f640 1 -- 192.168.123.107:0/2510356893 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fea68009660 con 
0x7fea78103c80 2026-03-09T20:50:12.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.289+0000 7fea7d01f640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fea78103c80 0x7fea7819eea0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fea6c00d8d0 tx=0x7fea6c00dda0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:12.291 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.290+0000 7fea66ffd640 1 -- 192.168.123.107:0/2510356893 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fea6c004490 con 0x7fea78103c80 2026-03-09T20:50:12.291 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.290+0000 7fea7faab640 1 -- 192.168.123.107:0/2510356893 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fea781a4080 con 0x7fea78103c80 2026-03-09T20:50:12.291 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.290+0000 7fea7faab640 1 -- 192.168.123.107:0/2510356893 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fea781a45d0 con 0x7fea78103c80 2026-03-09T20:50:12.292 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.290+0000 7fea66ffd640 1 -- 192.168.123.107:0/2510356893 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fea6c00bd00 con 0x7fea78103c80 2026-03-09T20:50:12.292 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.291+0000 7fea66ffd640 1 -- 192.168.123.107:0/2510356893 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fea6c010460 con 0x7fea78103c80 2026-03-09T20:50:12.292 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.291+0000 7fea7faab640 1 -- 192.168.123.107:0/2510356893 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7fea780767e0 con 0x7fea78103c80 2026-03-09T20:50:12.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.294+0000 7fea66ffd640 1 -- 192.168.123.107:0/2510356893 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fea6c00b840 con 0x7fea78103c80 2026-03-09T20:50:12.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.295+0000 7fea66ffd640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea540778e0 0x7fea54079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:12.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.295+0000 7fea66ffd640 1 -- 192.168.123.107:0/2510356893 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fea6c099ca0 con 0x7fea78103c80 2026-03-09T20:50:12.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.295+0000 7fea7d820640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea540778e0 0x7fea54079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:12.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.296+0000 7fea66ffd640 1 -- 192.168.123.107:0/2510356893 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fea6c062350 con 0x7fea78103c80 2026-03-09T20:50:12.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.296+0000 7fea7d820640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea540778e0 0x7fea54079da0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7fea680040c0 
tx=0x7fea6803a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:12.468 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.467+0000 7fea7faab640 1 -- 192.168.123.107:0/2510356893 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fea780769f0 con 0x7fea78103c80 2026-03-09T20:50:12.471 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.470+0000 7fea66ffd640 1 -- 192.168.123.107:0/2510356893 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+796 (secure 0 0 0) 0x7fea6c061aa0 con 0x7fea78103c80 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_WARN Degraded data redundancy: 796/16005 objects degraded (4.973%), 10 pgs degraded 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 796/16005 objects degraded (4.973%), 10 pgs degraded 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.6 is active+recovery_wait+degraded, acting [0,1,4] 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.7 is active+recovery_wait+degraded, acting [2,1,4] 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.8 is active+recovery_wait+degraded, acting [2,1,5] 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.10 is active+recovery_wait+degraded, acting [5,0,1] 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.12 is active+recovery_wait+degraded, acting [0,3,1] 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.16 is active+recovery_wait+degraded, acting [5,3,1] 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1a is active+recovery_wait+degraded, acting [4,1,2] 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1c is 
active+recovery_wait+degraded, acting [5,4,1] 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1d is active+recovery_wait+degraded, acting [5,4,1] 2026-03-09T20:50:12.472 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1e is active+recovery_wait+degraded, acting [2,3,1] 2026-03-09T20:50:12.475 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.474+0000 7fea64f79640 1 -- 192.168.123.107:0/2510356893 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea540778e0 msgr2=0x7fea54079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:12.475 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.474+0000 7fea64f79640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fea540778e0 0x7fea54079da0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7fea680040c0 tx=0x7fea6803a040 comp rx=0 tx=0).stop 2026-03-09T20:50:12.475 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.474+0000 7fea64f79640 1 -- 192.168.123.107:0/2510356893 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fea78103c80 msgr2=0x7fea7819eea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:12.475 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.474+0000 7fea64f79640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fea78103c80 0x7fea7819eea0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fea6c00d8d0 tx=0x7fea6c00dda0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.475 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.475+0000 7fea64f79640 1 -- 192.168.123.107:0/2510356893 shutdown_connections 2026-03-09T20:50:12.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.475+0000 7fea64f79640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] 
conn(0x7fea540778e0 0x7fea54079da0 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.475+0000 7fea64f79640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fea78103c80 0x7fea7819eea0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.475+0000 7fea64f79640 1 --2- 192.168.123.107:0/2510356893 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fea78102a80 0x7fea7819e960 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:12.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.475+0000 7fea64f79640 1 -- 192.168.123.107:0/2510356893 >> 192.168.123.107:0/2510356893 conn(0x7fea780fe250 msgr2=0x7fea78104ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:12.478 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.477+0000 7fea64f79640 1 -- 192.168.123.107:0/2510356893 shutdown_connections 2026-03-09T20:50:12.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:12.477+0000 7fea64f79640 1 -- 192.168.123.107:0/2510356893 wait complete. 2026-03-09T20:50:12.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:12 vm07.local ceph-mon[112105]: from='client.34200 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:12 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3137136474' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:12 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/2409467107' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:50:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:12 vm07.local ceph-mon[112105]: from='client.34212 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:12 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/2510356893' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:50:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:12 vm10.local ceph-mon[103526]: from='client.34200 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:12 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3137136474' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:12 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/2409467107' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:50:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:12 vm10.local ceph-mon[103526]: from='client.34212 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:12 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/2510356893' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:50:14.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:13 vm10.local ceph-mon[103526]: pgmap v70: 65 pgs: 9 active+recovery_wait+degraded, 56 active+clean; 243 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 353 op/s; 719/13482 objects degraded (5.333%); 265 KiB/s, 70 objects/s recovering 2026-03-09T20:50:14.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:13 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 719/13482 objects degraded (5.333%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:14.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:13 vm07.local ceph-mon[112105]: pgmap v70: 65 pgs: 9 active+recovery_wait+degraded, 56 active+clean; 243 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 353 op/s; 719/13482 objects degraded (5.333%); 265 KiB/s, 70 objects/s recovering 2026-03-09T20:50:14.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:13 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 719/13482 objects degraded (5.333%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:17.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:16 vm07.local ceph-mon[112105]: pgmap v71: 65 pgs: 9 active+recovery_wait+degraded, 56 active+clean; 242 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 834 KiB/s rd, 963 KiB/s wr, 248 op/s; 719/12996 objects degraded (5.532%); 240 KiB/s, 49 objects/s recovering 2026-03-09T20:50:17.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:16 vm10.local ceph-mon[103526]: pgmap v71: 65 pgs: 9 active+recovery_wait+degraded, 56 active+clean; 242 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 834 KiB/s rd, 963 KiB/s wr, 248 op/s; 719/12996 objects degraded (5.532%); 240 KiB/s, 49 objects/s recovering 2026-03-09T20:50:18.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 
09 20:50:18 vm07.local ceph-mon[112105]: pgmap v72: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 242 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 891 KiB/s rd, 974 KiB/s wr, 305 op/s; 645/10878 objects degraded (5.929%); 0 B/s, 15 objects/s recovering 2026-03-09T20:50:18.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:18 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 645/10878 objects degraded (5.929%), 8 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:19.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:18 vm10.local ceph-mon[103526]: pgmap v72: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 242 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 891 KiB/s rd, 974 KiB/s wr, 305 op/s; 645/10878 objects degraded (5.929%); 0 B/s, 15 objects/s recovering 2026-03-09T20:50:19.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:18 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 645/10878 objects degraded (5.929%), 8 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:20.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:19 vm07.local ceph-mon[112105]: pgmap v73: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 240 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 712 KiB/s rd, 784 KiB/s wr, 248 op/s; 645/10434 objects degraded (6.182%); 0 B/s, 12 objects/s recovering 2026-03-09T20:50:20.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:19 vm10.local ceph-mon[103526]: pgmap v73: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 240 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 712 KiB/s rd, 784 KiB/s wr, 248 op/s; 645/10434 objects degraded (6.182%); 0 B/s, 12 objects/s recovering 2026-03-09T20:50:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:21 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:22.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:22 vm10.local ceph-mon[103526]: pgmap v74: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 240 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 712 KiB/s rd, 819 KiB/s wr, 243 op/s; 645/10080 objects degraded (6.399%); 0 B/s, 12 objects/s recovering 2026-03-09T20:50:22.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:22 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:22.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:22 vm10.local ceph-mon[103526]: Upgrade: unsafe to stop osd(s) at this time (3 PGs are or would become offline) 2026-03-09T20:50:22.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:22 vm07.local ceph-mon[112105]: pgmap v74: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 240 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 712 KiB/s rd, 819 KiB/s wr, 243 op/s; 645/10080 objects degraded (6.399%); 0 B/s, 12 objects/s recovering 2026-03-09T20:50:22.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:22 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:22.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:22 vm07.local ceph-mon[112105]: Upgrade: unsafe to stop osd(s) at this time (3 PGs are or would become offline) 2026-03-09T20:50:23.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:23 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 645/10080 objects degraded (6.399%), 8 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:23.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:23 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 645/10080 objects degraded (6.399%), 8 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:24.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:24 vm10.local ceph-mon[103526]: pgmap v75: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 231 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 888 KiB/s rd, 946 KiB/s wr, 294 op/s; 645/8094 objects degraded (7.969%); 0 B/s, 15 objects/s recovering 2026-03-09T20:50:24.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:24 vm07.local ceph-mon[112105]: pgmap v75: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 231 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 888 KiB/s rd, 946 KiB/s wr, 294 op/s; 645/8094 objects degraded (7.969%); 0 B/s, 15 objects/s recovering 2026-03-09T20:50:25.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:25 vm07.local ceph-mon[112105]: pgmap v76: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 234 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 915 KiB/s rd, 845 KiB/s wr, 231 op/s; 645/7725 objects degraded (8.350%); 0 B/s, 5 objects/s recovering 2026-03-09T20:50:25.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:25 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist 
ls", "format": "json"}]: dispatch 2026-03-09T20:50:26.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:25 vm10.local ceph-mon[103526]: pgmap v76: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 234 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 915 KiB/s rd, 845 KiB/s wr, 231 op/s; 645/7725 objects degraded (8.350%); 0 B/s, 5 objects/s recovering 2026-03-09T20:50:26.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:25 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:26.271 INFO:tasks.workunit:Stopping ['suites/fsstress.sh'] on client.0... 2026-03-09T20:50:26.271 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-09T20:50:26.691 DEBUG:teuthology.parallel:result is None 2026-03-09T20:50:28.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:27 vm07.local ceph-mon[112105]: pgmap v77: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 229 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 990 KiB/s rd, 1.0 MiB/s wr, 278 op/s; 483/6018 objects degraded (8.026%); 0 B/s, 15 objects/s recovering 2026-03-09T20:50:28.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:27 vm10.local ceph-mon[103526]: pgmap v77: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 229 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 990 KiB/s rd, 1.0 MiB/s wr, 278 op/s; 483/6018 objects degraded (8.026%); 0 B/s, 15 objects/s recovering 2026-03-09T20:50:29.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:28 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 483/6018 objects degraded (8.026%), 6 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:29.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:28 vm10.local 
ceph-mon[103526]: Health check update: Degraded data redundancy: 483/6018 objects degraded (8.026%), 6 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:30.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:29 vm10.local ceph-mon[103526]: pgmap v78: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 224 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 592 KiB/s rd, 684 KiB/s wr, 205 op/s; 483/5538 objects degraded (8.722%); 0 B/s, 12 objects/s recovering 2026-03-09T20:50:30.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:29 vm07.local ceph-mon[112105]: pgmap v78: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 224 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 592 KiB/s rd, 684 KiB/s wr, 205 op/s; 483/5538 objects degraded (8.722%); 0 B/s, 12 objects/s recovering 2026-03-09T20:50:31.690 INFO:tasks.workunit:Stopping ['suites/fsstress.sh'] on client.1... 2026-03-09T20:50:31.690 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.1 /home/ubuntu/cephtest/clone.client.1 2026-03-09T20:50:32.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:31 vm10.local ceph-mon[103526]: pgmap v79: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 223 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 592 KiB/s rd, 685 KiB/s wr, 206 op/s; 483/5094 objects degraded (9.482%); 0 B/s, 12 objects/s recovering 2026-03-09T20:50:32.132 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:31 vm07.local ceph-mon[112105]: pgmap v79: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 223 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 592 KiB/s rd, 685 KiB/s wr, 206 op/s; 483/5094 objects degraded (9.482%); 0 B/s, 12 objects/s recovering 2026-03-09T20:50:32.157 DEBUG:teuthology.parallel:result is None 2026-03-09T20:50:32.158 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 
2026-03-09T20:50:32.201 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T20:50:32.202 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.1/client.1 2026-03-09T20:50:32.264 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.1/client.1 2026-03-09T20:50:32.264 DEBUG:teuthology.parallel:result is None 2026-03-09T20:50:34.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:33 vm07.local ceph-mon[112105]: pgmap v80: 65 pgs: 5 active+recovery_wait+degraded, 2 active+recovering, 58 active+clean; 221 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 295 op/s; 407/2583 objects degraded (15.757%); 0 B/s, 20 objects/s recovering 2026-03-09T20:50:34.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:33 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 407/2583 objects degraded (15.757%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:34.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:33 vm10.local ceph-mon[103526]: pgmap v80: 65 pgs: 5 active+recovery_wait+degraded, 2 active+recovering, 58 active+clean; 221 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 295 op/s; 407/2583 objects degraded (15.757%); 0 B/s, 20 objects/s recovering 2026-03-09T20:50:34.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:33 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 407/2583 objects degraded (15.757%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:36.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:35 vm10.local ceph-mon[103526]: pgmap v81: 65 pgs: 5 active+recovery_wait+degraded, 2 active+recovering, 58 active+clean; 219 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 874 KiB/s rd, 957 KiB/s wr, 241 op/s; 407/2157 objects degraded (18.869%); 0 B/s, 17 objects/s recovering 2026-03-09T20:50:36.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:35 vm10.local 
ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:36.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:35 vm07.local ceph-mon[112105]: pgmap v81: 65 pgs: 5 active+recovery_wait+degraded, 2 active+recovering, 58 active+clean; 219 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 874 KiB/s rd, 957 KiB/s wr, 241 op/s; 407/2157 objects degraded (18.869%); 0 B/s, 17 objects/s recovering 2026-03-09T20:50:36.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:35 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:37.276 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:36 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:37.276 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:36 vm07.local ceph-mon[112105]: Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline) 2026-03-09T20:50:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:36 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:36 vm10.local ceph-mon[103526]: Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline) 2026-03-09T20:50:38.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:37 vm07.local ceph-mon[112105]: pgmap v82: 65 pgs: 3 active+recovery_wait+degraded, 2 active+recovering, 60 active+clean; 212 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 679 KiB/s rd, 704 KiB/s wr, 252 op/s; 239/627 objects degraded (38.118%); 0 B/s, 28 objects/s recovering 2026-03-09T20:50:38.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:37 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 239/627 objects degraded (38.118%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:38.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:37 vm10.local ceph-mon[103526]: pgmap v82: 65 pgs: 3 active+recovery_wait+degraded, 2 active+recovering, 60 active+clean; 212 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 679 KiB/s rd, 704 KiB/s wr, 252 op/s; 239/627 objects degraded (38.118%); 0 B/s, 28 objects/s recovering 2026-03-09T20:50:38.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:37 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 239/627 objects degraded (38.118%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:40.188 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:39 vm10.local ceph-mon[103526]: pgmap v83: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 604 KiB/s rd, 506 KiB/s wr, 199 op/s; 239/288 objects degraded (82.986%); 0 B/s, 20 objects/s recovering 2026-03-09T20:50:40.188 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist 
ls", "format": "json"}]: dispatch 2026-03-09T20:50:40.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:39 vm07.local ceph-mon[112105]: pgmap v83: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 604 KiB/s rd, 506 KiB/s wr, 199 op/s; 239/288 objects degraded (82.986%); 0 B/s, 20 objects/s recovering 2026-03-09T20:50:40.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:41 vm10.local ceph-mon[103526]: pgmap v84: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 604 KiB/s rd, 506 KiB/s wr, 187 op/s; 239/288 objects degraded (82.986%); 0 B/s, 20 objects/s recovering 2026-03-09T20:50:42.342 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:41 vm07.local ceph-mon[112105]: pgmap v84: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 604 KiB/s rd, 506 KiB/s wr, 187 op/s; 239/288 objects degraded (82.986%); 0 B/s, 20 objects/s recovering 2026-03-09T20:50:42.553 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.550+0000 7f78a350f640 1 -- 192.168.123.107:0/2852027219 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f789c103c60 msgr2=0x7f789c1040e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:42.553 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.550+0000 7f78a350f640 1 --2- 192.168.123.107:0/2852027219 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f789c103c60 0x7f789c1040e0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f788c0099b0 tx=0x7f788c02f220 comp rx=0 tx=0).stop 
2026-03-09T20:50:42.553 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.552+0000 7f78a350f640 1 -- 192.168.123.107:0/2852027219 shutdown_connections 2026-03-09T20:50:42.553 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.552+0000 7f78a350f640 1 --2- 192.168.123.107:0/2852027219 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f789c103c60 0x7f789c1040e0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.553 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.552+0000 7f78a350f640 1 --2- 192.168.123.107:0/2852027219 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f789c102a60 0x7f789c102e60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.553 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.552+0000 7f78a350f640 1 -- 192.168.123.107:0/2852027219 >> 192.168.123.107:0/2852027219 conn(0x7f789c0fe250 msgr2=0x7f789c100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:42.553 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.552+0000 7f78a350f640 1 -- 192.168.123.107:0/2852027219 shutdown_connections 2026-03-09T20:50:42.553 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.552+0000 7f78a350f640 1 -- 192.168.123.107:0/2852027219 wait complete. 
2026-03-09T20:50:42.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.553+0000 7f78a350f640 1 Processor -- start 2026-03-09T20:50:42.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.553+0000 7f78a350f640 1 -- start start 2026-03-09T20:50:42.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.553+0000 7f78a350f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f789c102a60 0x7f789c19a460 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:42.554 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.553+0000 7f78a350f640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f789c103c60 0x7f789c19a9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:42.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.553+0000 7f78a350f640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f789c19aee0 con 0x7f789c102a60 2026-03-09T20:50:42.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.553+0000 7f78a350f640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f789c19b050 con 0x7f789c103c60 2026-03-09T20:50:42.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.554+0000 7f78a0a83640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f789c103c60 0x7f789c19a9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:42.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.554+0000 7f78a0a83640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f789c103c60 0x7f789c19a9a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:37724/0 (socket says 192.168.123.107:37724) 2026-03-09T20:50:42.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.554+0000 7f78a0a83640 1 -- 192.168.123.107:0/403442724 learned_addr learned my addr 192.168.123.107:0/403442724 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:42.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.554+0000 7f78a1284640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f789c102a60 0x7f789c19a460 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:42.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.554+0000 7f78a0a83640 1 -- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f789c102a60 msgr2=0x7f789c19a460 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:42.555 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.555+0000 7f78a0a83640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f789c102a60 0x7f789c19a460 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.555+0000 7f78a0a83640 1 -- 192.168.123.107:0/403442724 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f788c009660 con 0x7f789c103c60 2026-03-09T20:50:42.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.555+0000 7f78a1284640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f789c102a60 0x7f789c19a460 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T20:50:42.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.555+0000 7f78a0a83640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f789c103c60 0x7f789c19a9a0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f788c002410 tx=0x7f788c002980 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:42.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.555+0000 7f78927fc640 1 -- 192.168.123.107:0/403442724 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f788c03d070 con 0x7f789c103c60 2026-03-09T20:50:42.556 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.555+0000 7f78a350f640 1 -- 192.168.123.107:0/403442724 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f789c19fad0 con 0x7f789c103c60 2026-03-09T20:50:42.557 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.555+0000 7f78a350f640 1 -- 192.168.123.107:0/403442724 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f789c19ff70 con 0x7f789c103c60 2026-03-09T20:50:42.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.555+0000 7f78927fc640 1 -- 192.168.123.107:0/403442724 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f788c02fd50 con 0x7f789c103c60 2026-03-09T20:50:42.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.555+0000 7f78927fc640 1 -- 192.168.123.107:0/403442724 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f788c041aa0 con 0x7f789c103c60 2026-03-09T20:50:42.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.557+0000 7f78927fc640 1 -- 192.168.123.107:0/403442724 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f788c041c00 con 
0x7f789c103c60 2026-03-09T20:50:42.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.558+0000 7f78a350f640 1 -- 192.168.123.107:0/403442724 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7864005350 con 0x7f789c103c60 2026-03-09T20:50:42.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.558+0000 7f78927fc640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7878077890 0x7f7878079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:42.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.558+0000 7f78927fc640 1 -- 192.168.123.107:0/403442724 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f788c0be4e0 con 0x7f789c103c60 2026-03-09T20:50:42.559 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.558+0000 7f78a1284640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7878077890 0x7f7878079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:42.560 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.559+0000 7f78a1284640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7878077890 0x7f7878079d50 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f789c103ac0 tx=0x7f788400a430 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:42.563 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.562+0000 7f78927fc640 1 -- 192.168.123.107:0/403442724 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 
==== 72+0+195034 (secure 0 0 0) 0x7f788c0c3050 con 0x7f789c103c60 2026-03-09T20:50:42.679 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.678+0000 7f78a350f640 1 -- 192.168.123.107:0/403442724 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f7864002bf0 con 0x7f7878077890 2026-03-09T20:50:42.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.680+0000 7f78927fc640 1 -- 192.168.123.107:0/403442724 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f7864002bf0 con 0x7f7878077890 2026-03-09T20:50:42.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.682+0000 7f78a350f640 1 -- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7878077890 msgr2=0x7f7878079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:42.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.683+0000 7f78a350f640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7878077890 0x7f7878079d50 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f789c103ac0 tx=0x7f788400a430 comp rx=0 tx=0).stop 2026-03-09T20:50:42.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.683+0000 7f78a350f640 1 -- 192.168.123.107:0/403442724 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f789c103c60 msgr2=0x7f789c19a9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:42.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.683+0000 7f78a350f640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f789c103c60 0x7f789c19a9a0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f788c002410 tx=0x7f788c002980 comp rx=0 
tx=0).stop 2026-03-09T20:50:42.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.683+0000 7f78a350f640 1 -- 192.168.123.107:0/403442724 shutdown_connections 2026-03-09T20:50:42.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.683+0000 7f78a350f640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7878077890 0x7f7878079d50 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.683+0000 7f78a350f640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f789c103c60 0x7f789c19a9a0 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.684+0000 7f78a350f640 1 --2- 192.168.123.107:0/403442724 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f789c102a60 0x7f789c19a460 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.684+0000 7f78a350f640 1 -- 192.168.123.107:0/403442724 >> 192.168.123.107:0/403442724 conn(0x7f789c0fe250 msgr2=0x7f789c0ffd70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:42.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.684+0000 7f78a350f640 1 -- 192.168.123.107:0/403442724 shutdown_connections 2026-03-09T20:50:42.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.684+0000 7f78a350f640 1 -- 192.168.123.107:0/403442724 wait complete. 
2026-03-09T20:50:42.694 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:50:42.746 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.744+0000 7f6d31acc640 1 -- 192.168.123.107:0/473313101 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d2c103c60 msgr2=0x7f6d2c1040e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:42.746 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.744+0000 7f6d31acc640 1 --2- 192.168.123.107:0/473313101 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d2c103c60 0x7f6d2c1040e0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f6d180099b0 tx=0x7f6d1802f220 comp rx=0 tx=0).stop 2026-03-09T20:50:42.746 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.745+0000 7f6d31acc640 1 -- 192.168.123.107:0/473313101 shutdown_connections 2026-03-09T20:50:42.746 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.745+0000 7f6d31acc640 1 --2- 192.168.123.107:0/473313101 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d2c103c60 0x7f6d2c1040e0 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.746 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.745+0000 7f6d31acc640 1 --2- 192.168.123.107:0/473313101 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d2c102a60 0x7f6d2c102e60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.746 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.745+0000 7f6d31acc640 1 -- 192.168.123.107:0/473313101 >> 192.168.123.107:0/473313101 conn(0x7f6d2c0fe250 msgr2=0x7f6d2c100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:42.746 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.745+0000 7f6d31acc640 1 -- 192.168.123.107:0/473313101 shutdown_connections 2026-03-09T20:50:42.746 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.745+0000 7f6d31acc640 1 -- 192.168.123.107:0/473313101 wait complete. 2026-03-09T20:50:42.746 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.745+0000 7f6d31acc640 1 Processor -- start 2026-03-09T20:50:42.746 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.745+0000 7f6d31acc640 1 -- start start 2026-03-09T20:50:42.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d31acc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d2c102a60 0x7f6d2c19e930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:42.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d2b7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d2c102a60 0x7f6d2c19e930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:42.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d2b7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d2c102a60 0x7f6d2c19e930 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:48632/0 (socket says 192.168.123.107:48632) 2026-03-09T20:50:42.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d31acc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d2c103c60 0x7f6d2c19ee70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:42.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d2b7fe640 1 -- 192.168.123.107:0/1260402916 learned_addr learned my addr 192.168.123.107:0/1260402916 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:42.747 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6d2c19f440 con 0x7f6d2c102a60 2026-03-09T20:50:42.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6d2c19f5b0 con 0x7f6d2c103c60 2026-03-09T20:50:42.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d2b7fe640 1 -- 192.168.123.107:0/1260402916 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d2c103c60 msgr2=0x7f6d2c19ee70 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:50:42.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d2b7fe640 1 --2- 192.168.123.107:0/1260402916 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d2c103c60 0x7f6d2c19ee70 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.747 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d2b7fe640 1 -- 192.168.123.107:0/1260402916 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6d18009660 con 0x7f6d2c102a60 2026-03-09T20:50:42.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.746+0000 7f6d2b7fe640 1 --2- 192.168.123.107:0/1260402916 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d2c102a60 0x7f6d2c19e930 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f6d1c00d8d0 tx=0x7f6d1c00dda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:42.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.747+0000 7f6d28ff9640 1 -- 192.168.123.107:0/1260402916 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 
0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6d1c004490 con 0x7f6d2c102a60 2026-03-09T20:50:42.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.747+0000 7f6d28ff9640 1 -- 192.168.123.107:0/1260402916 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f6d1c00bd00 con 0x7f6d2c102a60 2026-03-09T20:50:42.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.747+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6d2c1a3ff0 con 0x7f6d2c102a60 2026-03-09T20:50:42.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.747+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6d2c1a4540 con 0x7f6d2c102a60 2026-03-09T20:50:42.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.749+0000 7f6d28ff9640 1 -- 192.168.123.107:0/1260402916 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6d1c010460 con 0x7f6d2c102a60 2026-03-09T20:50:42.750 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.749+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6cf0005350 con 0x7f6d2c102a60 2026-03-09T20:50:42.752 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.749+0000 7f6d28ff9640 1 -- 192.168.123.107:0/1260402916 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f6d1c010600 con 0x7f6d2c102a60 2026-03-09T20:50:42.752 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.749+0000 7f6d28ff9640 1 --2- 192.168.123.107:0/1260402916 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d00077820 0x7f6d00079ce0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:42.752 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.750+0000 7f6d2affd640 1 --2- 192.168.123.107:0/1260402916 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d00077820 0x7f6d00079ce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:42.752 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.750+0000 7f6d2affd640 1 --2- 192.168.123.107:0/1260402916 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d00077820 0x7f6d00079ce0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f6d18002c20 tx=0x7f6d180023d0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:42.752 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.750+0000 7f6d28ff9640 1 -- 192.168.123.107:0/1260402916 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f6d1c09ac90 con 0x7f6d2c102a60 2026-03-09T20:50:42.753 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.752+0000 7f6d28ff9640 1 -- 192.168.123.107:0/1260402916 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f6d1c063440 con 0x7f6d2c102a60 2026-03-09T20:50:42.860 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.858+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f6cf0002bf0 con 0x7f6d00077820 2026-03-09T20:50:42.860 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.859+0000 7f6d28ff9640 1 -- 192.168.123.107:0/1260402916 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== 
mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f6cf0002bf0 con 0x7f6d00077820 2026-03-09T20:50:42.863 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.862+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d00077820 msgr2=0x7f6d00079ce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:42.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.862+0000 7f6d31acc640 1 --2- 192.168.123.107:0/1260402916 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d00077820 0x7f6d00079ce0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f6d18002c20 tx=0x7f6d180023d0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.862+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d2c102a60 msgr2=0x7f6d2c19e930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:42.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.862+0000 7f6d31acc640 1 --2- 192.168.123.107:0/1260402916 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d2c102a60 0x7f6d2c19e930 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f6d1c00d8d0 tx=0x7f6d1c00dda0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.863+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 shutdown_connections 2026-03-09T20:50:42.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.863+0000 7f6d31acc640 1 --2- 192.168.123.107:0/1260402916 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6d00077820 0x7f6d00079ce0 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.864 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.863+0000 7f6d31acc640 1 --2- 192.168.123.107:0/1260402916 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6d2c103c60 0x7f6d2c19ee70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.863+0000 7f6d31acc640 1 --2- 192.168.123.107:0/1260402916 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d2c102a60 0x7f6d2c19e930 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.863+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 >> 192.168.123.107:0/1260402916 conn(0x7f6d2c0fe250 msgr2=0x7f6d2c0ffd50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:42.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.863+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 shutdown_connections 2026-03-09T20:50:42.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.863+0000 7f6d31acc640 1 -- 192.168.123.107:0/1260402916 wait complete. 
2026-03-09T20:50:42.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.922+0000 7ffbb14e8640 1 -- 192.168.123.107:0/1630994182 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffbac0719c0 msgr2=0x7ffbac071dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:42.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.922+0000 7ffbb14e8640 1 --2- 192.168.123.107:0/1630994182 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffbac0719c0 0x7ffbac071dc0 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7ffba00099b0 tx=0x7ffba002f240 comp rx=0 tx=0).stop 2026-03-09T20:50:42.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.923+0000 7ffbb14e8640 1 -- 192.168.123.107:0/1630994182 shutdown_connections 2026-03-09T20:50:42.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.923+0000 7ffbb14e8640 1 --2- 192.168.123.107:0/1630994182 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffbac072390 0x7ffbac10c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.923+0000 7ffbb14e8640 1 --2- 192.168.123.107:0/1630994182 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffbac0719c0 0x7ffbac071dc0 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.923+0000 7ffbb14e8640 1 -- 192.168.123.107:0/1630994182 >> 192.168.123.107:0/1630994182 conn(0x7ffbac06d4f0 msgr2=0x7ffbac06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:42.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.923+0000 7ffbb14e8640 1 -- 192.168.123.107:0/1630994182 shutdown_connections 2026-03-09T20:50:42.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.923+0000 7ffbb14e8640 1 -- 192.168.123.107:0/1630994182 
wait complete. 2026-03-09T20:50:42.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.924+0000 7ffbb14e8640 1 Processor -- start 2026-03-09T20:50:42.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.924+0000 7ffbb14e8640 1 -- start start 2026-03-09T20:50:42.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.924+0000 7ffbb14e8640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffbac0719c0 0x7ffbac1a7360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:42.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.924+0000 7ffbb14e8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffbac072390 0x7ffbac1a78a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:42.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.924+0000 7ffbb14e8640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ffbac1a7e70 con 0x7ffbac072390 2026-03-09T20:50:42.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.924+0000 7ffbb14e8640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ffbac1a7fe0 con 0x7ffbac0719c0 2026-03-09T20:50:42.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.924+0000 7ffbaa7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffbac072390 0x7ffbac1a78a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:42.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.924+0000 7ffbaa7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffbac072390 0x7ffbac1a78a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:48648/0 (socket says 192.168.123.107:48648) 2026-03-09T20:50:42.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.924+0000 7ffbaa7fc640 1 -- 192.168.123.107:0/3886645086 learned_addr learned my addr 192.168.123.107:0/3886645086 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:42.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.925+0000 7ffbaaffd640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffbac0719c0 0x7ffbac1a7360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:42.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.925+0000 7ffbaa7fc640 1 -- 192.168.123.107:0/3886645086 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffbac0719c0 msgr2=0x7ffbac1a7360 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:42.927 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.925+0000 7ffbaa7fc640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffbac0719c0 0x7ffbac1a7360 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:42.927 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.925+0000 7ffbaa7fc640 1 -- 192.168.123.107:0/3886645086 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ffba0009660 con 0x7ffbac072390 2026-03-09T20:50:42.927 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.925+0000 7ffbaa7fc640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffbac072390 0x7ffbac1a78a0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7ffb9400d8d0 tx=0x7ffb9400dda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:50:42.927 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.926+0000 7ffb8bfff640 1 -- 192.168.123.107:0/3886645086 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ffb94004490 con 0x7ffbac072390 2026-03-09T20:50:42.927 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.926+0000 7ffbb14e8640 1 -- 192.168.123.107:0/3886645086 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ffbac10eeb0 con 0x7ffbac072390 2026-03-09T20:50:42.927 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.926+0000 7ffbb14e8640 1 -- 192.168.123.107:0/3886645086 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ffbac10f400 con 0x7ffbac072390 2026-03-09T20:50:42.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.927+0000 7ffb8bfff640 1 -- 192.168.123.107:0/3886645086 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7ffb9400bd00 con 0x7ffbac072390 2026-03-09T20:50:42.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.928+0000 7ffb8bfff640 1 -- 192.168.123.107:0/3886645086 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ffb94010460 con 0x7ffbac072390 2026-03-09T20:50:42.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.928+0000 7ffbb14e8640 1 -- 192.168.123.107:0/3886645086 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ffb78005350 con 0x7ffbac072390 2026-03-09T20:50:42.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.932+0000 7ffb8bfff640 1 -- 192.168.123.107:0/3886645086 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ffb940105c0 con 0x7ffbac072390 2026-03-09T20:50:42.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.933+0000 
7ffb8bfff640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ffb840779b0 0x7ffb84079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:42.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.933+0000 7ffb8bfff640 1 -- 192.168.123.107:0/3886645086 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7ffb94099fe0 con 0x7ffbac072390 2026-03-09T20:50:42.937 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.936+0000 7ffbaaffd640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ffb840779b0 0x7ffb84079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:42.937 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.936+0000 7ffb8bfff640 1 -- 192.168.123.107:0/3886645086 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ffb94062790 con 0x7ffbac072390 2026-03-09T20:50:42.938 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:42.937+0000 7ffbaaffd640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ffb840779b0 0x7ffb84079e70 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7ffba0002410 tx=0x7ffba003a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:43.050 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.049+0000 7ffbb14e8640 1 -- 192.168.123.107:0/3886645086 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7ffb78002bf0 con 0x7ffb840779b0 
2026-03-09T20:50:43.055 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.054+0000 7ffb8bfff640 1 -- 192.168.123.107:0/3886645086 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3600 (secure 0 0 0) 0x7ffb78002bf0 con 0x7ffb840779b0 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (7m) 55s ago 7m 43.0M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (7m) 55s ago 7m 9701k - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (7m) 109s ago 7m 9.90M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (114s) 55s ago 7m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 406c9c54f34a 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (112s) 109s ago 7m 7856k - 19.2.3-678-ge911bdeb 654f31e6858e 30eaebf5d733 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (7m) 55s ago 7m 160M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (5m) 55s ago 5m 30.2M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (5m) 55s ago 5m 226M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (5m) 109s ago 5m 151M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:50:43.056 
INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (5m) 109s ago 5m 27.0M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (3m) 55s ago 8m 611M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (2m) 109s ago 7m 489M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (2m) 55s ago 8m 56.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (2m) 109s ago 7m 45.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4428cf7f0607 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (7m) 55s ago 7m 16.0M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (7m) 109s ago 7m 15.4M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (100s) 55s ago 6m 189M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1da9d2cdbdc3 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (58s) 55s ago 6m 11.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 95f518bf664f 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (6m) 55s ago 6m 341M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 a2ad523a264c 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (6m) 109s ago 6m 452M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (6m) 109s ago 6m 409M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:50:43.056 
INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (6m) 109s ago 6m 343M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:50:43.056 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (2m) 55s ago 7m 47.3M - 2.43.0 a07b618ecd1d 3f9c07cd3fe3 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 -- 192.168.123.107:0/3886645086 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ffb840779b0 msgr2=0x7ffb84079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ffb840779b0 0x7ffb84079e70 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7ffba0002410 tx=0x7ffba003a040 comp rx=0 tx=0).stop 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 -- 192.168.123.107:0/3886645086 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffbac072390 msgr2=0x7ffbac1a78a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffbac072390 0x7ffbac1a78a0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7ffb9400d8d0 tx=0x7ffb9400dda0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 -- 192.168.123.107:0/3886645086 shutdown_connections 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] 
conn(0x7ffb840779b0 0x7ffb84079e70 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffbac072390 0x7ffbac1a78a0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 --2- 192.168.123.107:0/3886645086 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffbac0719c0 0x7ffbac1a7360 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 -- 192.168.123.107:0/3886645086 >> 192.168.123.107:0/3886645086 conn(0x7ffbac06d4f0 msgr2=0x7ffbac0707b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 -- 192.168.123.107:0/3886645086 shutdown_connections 2026-03-09T20:50:43.058 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.057+0000 7ffbb14e8640 1 -- 192.168.123.107:0/3886645086 wait complete. 
2026-03-09T20:50:43.115 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.114+0000 7f883ffff640 1 -- 192.168.123.107:0/108992564 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f88401028b0 msgr2=0x7f8840102cb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.114+0000 7f883ffff640 1 --2- 192.168.123.107:0/108992564 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f88401028b0 0x7f8840102cb0 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f882c0099b0 tx=0x7f882c02f220 comp rx=0 tx=0).stop 2026-03-09T20:50:43.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.114+0000 7f883ffff640 1 -- 192.168.123.107:0/108992564 shutdown_connections 2026-03-09T20:50:43.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.114+0000 7f883ffff640 1 --2- 192.168.123.107:0/108992564 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8840103ab0 0x7f8840103f30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.114+0000 7f883ffff640 1 --2- 192.168.123.107:0/108992564 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f88401028b0 0x7f8840102cb0 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.114+0000 7f883ffff640 1 -- 192.168.123.107:0/108992564 >> 192.168.123.107:0/108992564 conn(0x7f88400fe060 msgr2=0x7f8840100480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:43.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.115+0000 7f883ffff640 1 -- 192.168.123.107:0/108992564 shutdown_connections 2026-03-09T20:50:43.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.115+0000 7f883ffff640 1 -- 192.168.123.107:0/108992564 wait 
complete. 2026-03-09T20:50:43.116 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.115+0000 7f883ffff640 1 Processor -- start 2026-03-09T20:50:43.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.116+0000 7f883ffff640 1 -- start start 2026-03-09T20:50:43.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.116+0000 7f883ffff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f88401028b0 0x7f884019a370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.116+0000 7f883effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f88401028b0 0x7f884019a370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.116+0000 7f883effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f88401028b0 0x7f884019a370 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:48672/0 (socket says 192.168.123.107:48672) 2026-03-09T20:50:43.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.116+0000 7f883ffff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8840103ab0 0x7f884019a8b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.116+0000 7f883effd640 1 -- 192.168.123.107:0/1321161128 learned_addr learned my addr 192.168.123.107:0/1321161128 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:43.117 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.116+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f884019ae80 con 0x7f88401028b0 2026-03-09T20:50:43.118 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.117+0000 7f883e7fc640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8840103ab0 0x7f884019a8b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.118 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.117+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f884019aff0 con 0x7f8840103ab0 2026-03-09T20:50:43.118 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.117+0000 7f883effd640 1 -- 192.168.123.107:0/1321161128 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8840103ab0 msgr2=0x7f884019a8b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.118 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.117+0000 7f883effd640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8840103ab0 0x7f884019a8b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.118 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.117+0000 7f883effd640 1 -- 192.168.123.107:0/1321161128 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f882c009660 con 0x7f88401028b0 2026-03-09T20:50:43.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.118+0000 7f883effd640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f88401028b0 0x7f884019a370 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f882c002940 tx=0x7f882c002970 comp rx=0 
tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:43.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.118+0000 7f881bfff640 1 -- 192.168.123.107:0/1321161128 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f882c03d070 con 0x7f88401028b0 2026-03-09T20:50:43.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.118+0000 7f881bfff640 1 -- 192.168.123.107:0/1321161128 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f882c02fd50 con 0x7f88401028b0 2026-03-09T20:50:43.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.118+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f884019fa30 con 0x7f88401028b0 2026-03-09T20:50:43.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.119+0000 7f881bfff640 1 -- 192.168.123.107:0/1321161128 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f882c041a50 con 0x7f88401028b0 2026-03-09T20:50:43.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.119+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f884019ffa0 con 0x7f88401028b0 2026-03-09T20:50:43.121 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.120+0000 7f881bfff640 1 -- 192.168.123.107:0/1321161128 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f882c049050 con 0x7f88401028b0 2026-03-09T20:50:43.121 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.120+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f884010b560 con 0x7f88401028b0 
2026-03-09T20:50:43.125 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.122+0000 7f881bfff640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f88140776d0 0x7f8814079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.125 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.122+0000 7f881bfff640 1 -- 192.168.123.107:0/1321161128 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f882c0bded0 con 0x7f88401028b0 2026-03-09T20:50:43.125 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.123+0000 7f883e7fc640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f88140776d0 0x7f8814079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.125 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.124+0000 7f883e7fc640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f88140776d0 0x7f8814079b90 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f884019b890 tx=0x7f8834008040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:43.125 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.124+0000 7f881bfff640 1 -- 192.168.123.107:0/1321161128 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f882c0c3050 con 0x7f88401028b0 2026-03-09T20:50:43.271 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.270+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) 
v1 -- 0x7f884010b750 con 0x7f88401028b0 2026-03-09T20:50:43.271 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:42 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 239/288 objects degraded (82.986%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:43.273 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.272+0000 7f881bfff640 1 -- 192.168.123.107:0/1321161128 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+844 (secure 0 0 0) 0x7f882c086690 con 0x7f88401028b0 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4, 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:50:43.275 
INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 8, 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-09T20:50:43.275 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:50:43.276 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:50:43.278 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.277+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f88140776d0 msgr2=0x7f8814079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.278 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.277+0000 7f883ffff640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f88140776d0 0x7f8814079b90 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f884019b890 tx=0x7f8834008040 comp rx=0 tx=0).stop 2026-03-09T20:50:43.278 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.277+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f88401028b0 msgr2=0x7f884019a370 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.278 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.277+0000 7f883ffff640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f88401028b0 0x7f884019a370 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f882c002940 tx=0x7f882c002970 comp rx=0 tx=0).stop 2026-03-09T20:50:43.279 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.278+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 shutdown_connections 2026-03-09T20:50:43.279 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.278+0000 7f883ffff640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f88140776d0 0x7f8814079b90 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.279 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.278+0000 7f883ffff640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8840103ab0 0x7f884019a8b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.279 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.278+0000 7f883ffff640 1 --2- 192.168.123.107:0/1321161128 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f88401028b0 0x7f884019a370 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.279 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.278+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 >> 192.168.123.107:0/1321161128 conn(0x7f88400fe060 msgr2=0x7f88400ffbc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:43.279 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.279+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 shutdown_connections 2026-03-09T20:50:43.279 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.279+0000 7f883ffff640 1 -- 192.168.123.107:0/1321161128 wait complete. 
2026-03-09T20:50:43.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:42 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 239/288 objects degraded (82.986%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:43.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.338+0000 7f72eeda4640 1 -- 192.168.123.107:0/3247390285 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f72e8102a80 msgr2=0x7f72e8102e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.338+0000 7f72eeda4640 1 --2- 192.168.123.107:0/3247390285 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f72e8102a80 0x7f72e8102e80 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f72d00099b0 tx=0x7f72d002f240 comp rx=0 tx=0).stop 2026-03-09T20:50:43.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.338+0000 7f72eeda4640 1 -- 192.168.123.107:0/3247390285 shutdown_connections 2026-03-09T20:50:43.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.338+0000 7f72eeda4640 1 --2- 192.168.123.107:0/3247390285 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f72e8103c80 0x7f72e8104100 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.338+0000 7f72eeda4640 1 --2- 192.168.123.107:0/3247390285 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f72e8102a80 0x7f72e8102e80 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.338+0000 7f72eeda4640 1 -- 192.168.123.107:0/3247390285 >> 192.168.123.107:0/3247390285 conn(0x7f72e80fe250 msgr2=0x7f72e8100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:43.340 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.339+0000 7f72eeda4640 1 -- 192.168.123.107:0/3247390285 shutdown_connections 2026-03-09T20:50:43.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.339+0000 7f72eeda4640 1 -- 192.168.123.107:0/3247390285 wait complete. 2026-03-09T20:50:43.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.339+0000 7f72eeda4640 1 Processor -- start 2026-03-09T20:50:43.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.339+0000 7f72eeda4640 1 -- start start 2026-03-09T20:50:43.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.340+0000 7f72eeda4640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f72e8102a80 0x7f72e819a480 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.340+0000 7f72eeda4640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f72e8103c80 0x7f72e819a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.340+0000 7f72eeda4640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f72e819af90 con 0x7f72e8103c80 2026-03-09T20:50:43.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.340+0000 7f72eeda4640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f72e819b100 con 0x7f72e8102a80 2026-03-09T20:50:43.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.340+0000 7f72ecb19640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f72e8102a80 0x7f72e819a480 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.341 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.340+0000 7f72dffff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f72e8103c80 0x7f72e819a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.340+0000 7f72ecb19640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f72e8102a80 0x7f72e819a480 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:37792/0 (socket says 192.168.123.107:37792) 2026-03-09T20:50:43.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.340+0000 7f72ecb19640 1 -- 192.168.123.107:0/550541683 learned_addr learned my addr 192.168.123.107:0/550541683 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:43.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.341+0000 7f72dffff640 1 -- 192.168.123.107:0/550541683 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f72e8102a80 msgr2=0x7f72e819a480 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.341+0000 7f72dffff640 1 --2- 192.168.123.107:0/550541683 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f72e8102a80 0x7f72e819a480 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.341+0000 7f72dffff640 1 -- 192.168.123.107:0/550541683 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f72d0009660 con 0x7f72e8103c80 2026-03-09T20:50:43.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.341+0000 7f72dffff640 1 --2- 
192.168.123.107:0/550541683 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f72e8103c80 0x7f72e819a9c0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f72d800d6e0 tx=0x7f72d800dbb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:43.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.342+0000 7f72ddffb640 1 -- 192.168.123.107:0/550541683 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f72d8004280 con 0x7f72e8103c80 2026-03-09T20:50:43.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.342+0000 7f72ddffb640 1 -- 192.168.123.107:0/550541683 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f72d8004d60 con 0x7f72e8103c80 2026-03-09T20:50:43.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.342+0000 7f72ddffb640 1 -- 192.168.123.107:0/550541683 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f72d8005020 con 0x7f72e8103c80 2026-03-09T20:50:43.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.342+0000 7f72eeda4640 1 -- 192.168.123.107:0/550541683 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f72e819fba0 con 0x7f72e8103c80 2026-03-09T20:50:43.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.342+0000 7f72eeda4640 1 -- 192.168.123.107:0/550541683 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f72e8075800 con 0x7f72e8103c80 2026-03-09T20:50:43.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.344+0000 7f72ddffb640 1 -- 192.168.123.107:0/550541683 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f72d800b7c0 con 0x7f72e8103c80 2026-03-09T20:50:43.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.344+0000 7f72eeda4640 
1 -- 192.168.123.107:0/550541683 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f72b0005350 con 0x7f72e8103c80 2026-03-09T20:50:43.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.345+0000 7f72ddffb640 1 --2- 192.168.123.107:0/550541683 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f72c00778e0 0x7f72c0079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.345+0000 7f72ddffb640 1 -- 192.168.123.107:0/550541683 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f72d8099540 con 0x7f72e8103c80 2026-03-09T20:50:43.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.346+0000 7f72ecb19640 1 --2- 192.168.123.107:0/550541683 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f72c00778e0 0x7f72c0079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.346+0000 7f72ecb19640 1 --2- 192.168.123.107:0/550541683 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f72c00778e0 0x7f72c0079da0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f72d0002c30 tx=0x7f72d003a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:43.349 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.348+0000 7f72ddffb640 1 -- 192.168.123.107:0/550541683 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f72d8061da0 con 0x7f72e8103c80 2026-03-09T20:50:43.468 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.467+0000 7f72eeda4640 1 -- 192.168.123.107:0/550541683 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f72b0005e10 con 0x7f72e8103c80
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.468+0000 7f72ddffb640 1 -- 192.168.123.107:0/550541683 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1937 (secure 0 0 0) 0x7f72d80614f0 con 0x7f72e8103c80
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:e11
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:btime 1970-01-01T00:00:00:000000+0000
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1)
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000
2026-03-09T20:50:43.469 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:root 0
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {}
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291}
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:failed
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:damaged
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:stopped
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3]
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:balancer
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 0 members:
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}]
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T20:50:43.470 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11
2026-03-09T20:50:43.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.471+0000 7f72eeda4640 1 -- 192.168.123.107:0/550541683 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f72c00778e0 msgr2=0x7f72c0079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:50:43.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.471+0000 7f72eeda4640 1 --2- 192.168.123.107:0/550541683 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f72c00778e0 0x7f72c0079da0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f72d0002c30
tx=0x7f72d003a040 comp rx=0 tx=0).stop 2026-03-09T20:50:43.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.471+0000 7f72eeda4640 1 -- 192.168.123.107:0/550541683 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f72e8103c80 msgr2=0x7f72e819a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.471+0000 7f72eeda4640 1 --2- 192.168.123.107:0/550541683 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f72e8103c80 0x7f72e819a9c0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f72d800d6e0 tx=0x7f72d800dbb0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.471+0000 7f72eeda4640 1 -- 192.168.123.107:0/550541683 shutdown_connections 2026-03-09T20:50:43.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.471+0000 7f72eeda4640 1 --2- 192.168.123.107:0/550541683 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f72c00778e0 0x7f72c0079da0 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.471+0000 7f72eeda4640 1 --2- 192.168.123.107:0/550541683 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f72e8103c80 0x7f72e819a9c0 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.471+0000 7f72eeda4640 1 --2- 192.168.123.107:0/550541683 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f72e8102a80 0x7f72e819a480 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.471+0000 7f72eeda4640 1 -- 192.168.123.107:0/550541683 >> 192.168.123.107:0/550541683 conn(0x7f72e80fe250 
msgr2=0x7f72e80ffd70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:43.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.472+0000 7f72eeda4640 1 -- 192.168.123.107:0/550541683 shutdown_connections 2026-03-09T20:50:43.473 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.472+0000 7f72eeda4640 1 -- 192.168.123.107:0/550541683 wait complete. 2026-03-09T20:50:43.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.530+0000 7f46cb05f640 1 -- 192.168.123.107:0/4026220999 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46c41006e0 msgr2=0x7f46c4100b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.530+0000 7f46cb05f640 1 --2- 192.168.123.107:0/4026220999 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46c41006e0 0x7f46c4100b60 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f46ac0099b0 tx=0x7f46ac02f220 comp rx=0 tx=0).stop 2026-03-09T20:50:43.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.530+0000 7f46cb05f640 1 -- 192.168.123.107:0/4026220999 shutdown_connections 2026-03-09T20:50:43.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.530+0000 7f46cb05f640 1 --2- 192.168.123.107:0/4026220999 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46c41006e0 0x7f46c4100b60 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.530+0000 7f46cb05f640 1 --2- 192.168.123.107:0/4026220999 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f46c40ff4e0 0x7f46c40ff8e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.532 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.530+0000 7f46cb05f640 1 -- 192.168.123.107:0/4026220999 >> 192.168.123.107:0/4026220999 
conn(0x7f46c40fac90 msgr2=0x7f46c40fd0b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:43.536 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.535+0000 7f46cb05f640 1 -- 192.168.123.107:0/4026220999 shutdown_connections 2026-03-09T20:50:43.536 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.535+0000 7f46cb05f640 1 -- 192.168.123.107:0/4026220999 wait complete. 2026-03-09T20:50:43.536 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.535+0000 7f46cb05f640 1 Processor -- start 2026-03-09T20:50:43.536 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.536+0000 7f46cb05f640 1 -- start start 2026-03-09T20:50:43.537 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.536+0000 7f46cb05f640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f46c40ff4e0 0x7f46c406d070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.537 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.536+0000 7f46cb05f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46c41006e0 0x7f46c406d5b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.537 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.536+0000 7f46cb05f640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f46c406db80 con 0x7f46c41006e0 2026-03-09T20:50:43.537 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.536+0000 7f46cb05f640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f46c406dcf0 con 0x7f46c40ff4e0 2026-03-09T20:50:43.537 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.536+0000 7f46c985c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46c41006e0 0x7f46c406d5b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.537 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.537+0000 7f46c985c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46c41006e0 0x7f46c406d5b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:48696/0 (socket says 192.168.123.107:48696) 2026-03-09T20:50:43.538 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.537+0000 7f46c985c640 1 -- 192.168.123.107:0/2336716471 learned_addr learned my addr 192.168.123.107:0/2336716471 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:43.538 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.537+0000 7f46c985c640 1 -- 192.168.123.107:0/2336716471 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f46c40ff4e0 msgr2=0x7f46c406d070 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:50:43.538 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.537+0000 7f46c985c640 1 --2- 192.168.123.107:0/2336716471 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f46c40ff4e0 0x7f46c406d070 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.538 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.537+0000 7f46c985c640 1 -- 192.168.123.107:0/2336716471 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f46ac009660 con 0x7f46c41006e0 2026-03-09T20:50:43.538 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.538+0000 7f46c985c640 1 --2- 192.168.123.107:0/2336716471 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46c41006e0 0x7f46c406d5b0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f46ac002c20 tx=0x7f46ac031d20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T20:50:43.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.538+0000 7f46bb7fe640 1 -- 192.168.123.107:0/2336716471 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f46ac03d070 con 0x7f46c41006e0 2026-03-09T20:50:43.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.538+0000 7f46cb05f640 1 -- 192.168.123.107:0/2336716471 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f46c4072760 con 0x7f46c41006e0 2026-03-09T20:50:43.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.538+0000 7f46cb05f640 1 -- 192.168.123.107:0/2336716471 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f46c4072c50 con 0x7f46c41006e0 2026-03-09T20:50:43.539 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.539+0000 7f46bb7fe640 1 -- 192.168.123.107:0/2336716471 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f46ac031ed0 con 0x7f46c41006e0 2026-03-09T20:50:43.540 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.539+0000 7f46bb7fe640 1 -- 192.168.123.107:0/2336716471 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f46ac038680 con 0x7f46c41006e0 2026-03-09T20:50:43.541 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.539+0000 7f46b97fa640 1 -- 192.168.123.107:0/2336716471 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f46c4072f30 con 0x7f46c41006e0 2026-03-09T20:50:43.544 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.543+0000 7f46bb7fe640 1 -- 192.168.123.107:0/2336716471 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f46ac031320 con 0x7f46c41006e0 2026-03-09T20:50:43.544 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.543+0000 7f46bb7fe640 1 --2- 192.168.123.107:0/2336716471 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46980779b0 0x7f4698079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.544 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.543+0000 7f46bb7fe640 1 -- 192.168.123.107:0/2336716471 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f46ac0be930 con 0x7f46c41006e0 2026-03-09T20:50:43.544 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.544+0000 7f46bb7fe640 1 -- 192.168.123.107:0/2336716471 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f46ac0eea90 con 0x7f46c41006e0 2026-03-09T20:50:43.545 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.544+0000 7f46ca05d640 1 --2- 192.168.123.107:0/2336716471 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46980779b0 0x7f4698079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.546 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.545+0000 7f46ca05d640 1 --2- 192.168.123.107:0/2336716471 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46980779b0 0x7f4698079e70 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f46b4005fd0 tx=0x7f46b4005950 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:43.656 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.654+0000 7f46b97fa640 1 -- 192.168.123.107:0/2336716471 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade 
status", "target": ["mon-mgr", ""]}) v1 -- 0x7f46c40611c0 con 0x7f46980779b0
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.656+0000 7f46bb7fe640 1 -- 192.168.123.107:0/2336716471 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f46c40611c0 con 0x7f46980779b0
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true,
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: "mon",
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: "crash",
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: "mgr"
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: ],
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "8/23 daemons upgraded",
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons",
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false
2026-03-09T20:50:43.657 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T20:50:43.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.659+0000 7f46b97fa640 1 -- 192.168.123.107:0/2336716471 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46980779b0 msgr2=0x7f4698079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:50:43.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.659+0000
7f46b97fa640 1 --2- 192.168.123.107:0/2336716471 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46980779b0 0x7f4698079e70 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f46b4005fd0 tx=0x7f46b4005950 comp rx=0 tx=0).stop 2026-03-09T20:50:43.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.659+0000 7f46b97fa640 1 -- 192.168.123.107:0/2336716471 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46c41006e0 msgr2=0x7f46c406d5b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.660+0000 7f46b97fa640 1 --2- 192.168.123.107:0/2336716471 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46c41006e0 0x7f46c406d5b0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f46ac002c20 tx=0x7f46ac031d20 comp rx=0 tx=0).stop 2026-03-09T20:50:43.661 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.660+0000 7f46b97fa640 1 -- 192.168.123.107:0/2336716471 shutdown_connections 2026-03-09T20:50:43.661 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.661+0000 7f46b97fa640 1 --2- 192.168.123.107:0/2336716471 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f46980779b0 0x7f4698079e70 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.661 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.661+0000 7f46b97fa640 1 --2- 192.168.123.107:0/2336716471 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f46c41006e0 0x7f46c406d5b0 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.662 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.661+0000 7f46b97fa640 1 --2- 192.168.123.107:0/2336716471 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f46c40ff4e0 0x7f46c406d070 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.662 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.661+0000 7f46b97fa640 1 -- 192.168.123.107:0/2336716471 >> 192.168.123.107:0/2336716471 conn(0x7f46c40fac90 msgr2=0x7f46c40fc750 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:43.663 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.662+0000 7f46b97fa640 1 -- 192.168.123.107:0/2336716471 shutdown_connections 2026-03-09T20:50:43.663 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.662+0000 7f46b97fa640 1 -- 192.168.123.107:0/2336716471 wait complete. 2026-03-09T20:50:43.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.723+0000 7fe451974640 1 -- 192.168.123.107:0/2628562201 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe44c101820 msgr2=0x7fe44c101ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.723+0000 7fe451974640 1 --2- 192.168.123.107:0/2628562201 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe44c101820 0x7fe44c101ca0 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7fe43c0099b0 tx=0x7fe43c02f220 comp rx=0 tx=0).stop 2026-03-09T20:50:43.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.723+0000 7fe451974640 1 -- 192.168.123.107:0/2628562201 shutdown_connections 2026-03-09T20:50:43.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.723+0000 7fe451974640 1 --2- 192.168.123.107:0/2628562201 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe44c101820 0x7fe44c101ca0 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.723+0000 7fe451974640 1 --2- 192.168.123.107:0/2628562201 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe44c100620 0x7fe44c100a20 unknown :-1 s=CLOSED pgs=0 
cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.723+0000 7fe451974640 1 -- 192.168.123.107:0/2628562201 >> 192.168.123.107:0/2628562201 conn(0x7fe44c0fbdb0 msgr2=0x7fe44c0fe1f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:43.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.723+0000 7fe451974640 1 -- 192.168.123.107:0/2628562201 shutdown_connections 2026-03-09T20:50:43.724 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.724+0000 7fe451974640 1 -- 192.168.123.107:0/2628562201 wait complete. 2026-03-09T20:50:43.725 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.724+0000 7fe451974640 1 Processor -- start 2026-03-09T20:50:43.725 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.724+0000 7fe451974640 1 -- start start 2026-03-09T20:50:43.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.725+0000 7fe451974640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe44c100620 0x7fe44c19a300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.725+0000 7fe450972640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe44c100620 0x7fe44c19a300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.725+0000 7fe450972640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe44c100620 0x7fe44c19a300 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:48728/0 (socket says 192.168.123.107:48728) 2026-03-09T20:50:43.726 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.725+0000 7fe451974640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe44c101820 0x7fe44c19a840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.726+0000 7fe450972640 1 -- 192.168.123.107:0/612204753 learned_addr learned my addr 192.168.123.107:0/612204753 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:50:43.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.726+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe44c19ae10 con 0x7fe44c100620 2026-03-09T20:50:43.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.726+0000 7fe443fff640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe44c101820 0x7fe44c19a840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.726+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe44c19af80 con 0x7fe44c101820 2026-03-09T20:50:43.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.726+0000 7fe443fff640 1 -- 192.168.123.107:0/612204753 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe44c100620 msgr2=0x7fe44c19a300 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.727+0000 7fe443fff640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe44c100620 0x7fe44c19a300 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 
tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.727+0000 7fe443fff640 1 -- 192.168.123.107:0/612204753 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe434009590 con 0x7fe44c101820 2026-03-09T20:50:43.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.727+0000 7fe450972640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe44c100620 0x7fe44c19a300 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:50:43.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.727+0000 7fe443fff640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe44c101820 0x7fe44c19a840 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7fe43c02f730 tx=0x7fe43c004290 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:43.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.727+0000 7fe441ffb640 1 -- 192.168.123.107:0/612204753 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe43c03d070 con 0x7fe44c101820 2026-03-09T20:50:43.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.727+0000 7fe441ffb640 1 -- 192.168.123.107:0/612204753 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fe43c038730 con 0x7fe44c101820 2026-03-09T20:50:43.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.727+0000 7fe441ffb640 1 -- 192.168.123.107:0/612204753 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe43c0415e0 con 0x7fe44c101820 2026-03-09T20:50:43.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.727+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe43c009660 con 0x7fe44c101820 2026-03-09T20:50:43.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.727+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe44c19fd20 con 0x7fe44c101820 2026-03-09T20:50:43.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.728+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe418005350 con 0x7fe44c101820 2026-03-09T20:50:43.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.729+0000 7fe441ffb640 1 -- 192.168.123.107:0/612204753 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fe43c0388a0 con 0x7fe44c101820 2026-03-09T20:50:43.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.730+0000 7fe441ffb640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe41c0778e0 0x7fe41c079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:50:43.731 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.730+0000 7fe441ffb640 1 -- 192.168.123.107:0/612204753 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(60..60 src has 1..60) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fe43c0be930 con 0x7fe44c101820 2026-03-09T20:50:43.732 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.730+0000 7fe450972640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe41c0778e0 0x7fe41c079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:50:43.732 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.731+0000 7fe450972640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe41c0778e0 0x7fe41c079da0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7fe434004750 tx=0x7fe434009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:50:43.733 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.732+0000 7fe441ffb640 1 -- 192.168.123.107:0/612204753 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe43c087190 con 0x7fe44c101820 2026-03-09T20:50:43.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.879+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fe4180051c0 con 0x7fe44c101820 2026-03-09T20:50:43.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.880+0000 7fe441ffb640 1 -- 192.168.123.107:0/612204753 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+367 (secure 0 0 0) 0x7fe43c0868e0 con 0x7fe44c101820 2026-03-09T20:50:43.884 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_WARN Degraded data redundancy: 239/285 objects degraded (83.860%), 3 pgs degraded 2026-03-09T20:50:43.884 INFO:teuthology.orchestra.run.vm07.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 239/285 objects degraded (83.860%), 3 pgs degraded 2026-03-09T20:50:43.884 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.7 is active+recovery_wait+degraded, acting [2,1,4] 2026-03-09T20:50:43.884 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.16 is active+recovery_wait+degraded, acting [5,3,1] 2026-03-09T20:50:43.884 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1d is 
active+recovery_wait+degraded, acting [5,4,1] 2026-03-09T20:50:43.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.885+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe41c0778e0 msgr2=0x7fe41c079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.885+0000 7fe451974640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe41c0778e0 0x7fe41c079da0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7fe434004750 tx=0x7fe434009290 comp rx=0 tx=0).stop 2026-03-09T20:50:43.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.885+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe44c101820 msgr2=0x7fe44c19a840 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:50:43.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.886+0000 7fe451974640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe44c101820 0x7fe44c19a840 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7fe43c02f730 tx=0x7fe43c004290 comp rx=0 tx=0).stop 2026-03-09T20:50:43.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.886+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 shutdown_connections 2026-03-09T20:50:43.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.886+0000 7fe451974640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe41c0778e0 0x7fe41c079da0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.886+0000 7fe451974640 1 --2- 192.168.123.107:0/612204753 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe44c101820 0x7fe44c19a840 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.886+0000 7fe451974640 1 --2- 192.168.123.107:0/612204753 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe44c100620 0x7fe44c19a300 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:50:43.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.886+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 >> 192.168.123.107:0/612204753 conn(0x7fe44c0fbdb0 msgr2=0x7fe44c0fd980 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:50:43.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.887+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 shutdown_connections 2026-03-09T20:50:43.888 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:50:43.887+0000 7fe451974640 1 -- 192.168.123.107:0/612204753 wait complete. 
2026-03-09T20:50:44.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:43 vm07.local ceph-mon[112105]: pgmap v85: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 631 KiB/s rd, 531 KiB/s wr, 175 op/s; 239/285 objects degraded (83.860%); 0 B/s, 24 objects/s recovering 2026-03-09T20:50:44.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:43 vm07.local ceph-mon[112105]: from='client.44185 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:44.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:43 vm07.local ceph-mon[112105]: from='client.34224 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:44.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:43 vm07.local ceph-mon[112105]: from='client.34228 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:44.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:43 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/1321161128' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:44.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:43 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/550541683' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:50:44.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:43 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/612204753' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:50:44.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:43 vm10.local ceph-mon[103526]: pgmap v85: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 631 KiB/s rd, 531 KiB/s wr, 175 op/s; 239/285 objects degraded (83.860%); 0 B/s, 24 objects/s recovering 2026-03-09T20:50:44.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:43 vm10.local ceph-mon[103526]: from='client.44185 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:44.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:43 vm10.local ceph-mon[103526]: from='client.34224 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:44.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:43 vm10.local ceph-mon[103526]: from='client.34228 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:44.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:43 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/1321161128' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:44.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:43 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/550541683' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:50:44.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:43 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/612204753' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:50:45.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:44 vm10.local ceph-mon[103526]: from='client.34240 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:45.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:44 vm07.local ceph-mon[112105]: from='client.34240 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:50:46.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:45 vm10.local ceph-mon[103526]: pgmap v86: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 173 KiB/s rd, 97 KiB/s wr, 73 op/s; 239/285 objects degraded (83.860%); 0 B/s, 15 objects/s recovering 2026-03-09T20:50:46.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:45 vm07.local ceph-mon[112105]: pgmap v86: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 173 KiB/s rd, 97 KiB/s wr, 73 op/s; 239/285 objects degraded (83.860%); 0 B/s, 15 objects/s recovering 2026-03-09T20:50:48.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:47 vm07.local ceph-mon[112105]: pgmap v87: 65 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 63 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 174 KiB/s rd, 97 KiB/s wr, 62 op/s; 97/285 objects degraded (34.035%); 0 B/s, 24 objects/s recovering 2026-03-09T20:50:48.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:47 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 97/285 objects degraded (34.035%), 1 pg degraded (PG_DEGRADED) 2026-03-09T20:50:48.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:47 vm10.local ceph-mon[103526]: pgmap v87: 65 pgs: 1 
active+recovery_wait+degraded, 1 active+recovering, 63 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 174 KiB/s rd, 97 KiB/s wr, 62 op/s; 97/285 objects degraded (34.035%); 0 B/s, 24 objects/s recovering 2026-03-09T20:50:48.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:47 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 97/285 objects degraded (34.035%), 1 pg degraded (PG_DEGRADED) 2026-03-09T20:50:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:50 vm10.local ceph-mon[103526]: pgmap v88: 65 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 63 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 28 KiB/s rd, 26 KiB/s wr, 14 op/s; 97/285 objects degraded (34.035%); 0 B/s, 14 objects/s recovering 2026-03-09T20:50:50.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:50 vm07.local ceph-mon[112105]: pgmap v88: 65 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 63 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 28 KiB/s rd, 26 KiB/s wr, 14 op/s; 97/285 objects degraded (34.035%); 0 B/s, 14 objects/s recovering 2026-03-09T20:50:51.273 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:51.274 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:50:51.274 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:50:51.274 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:51 vm07.local ceph-mon[112105]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:51.274 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:51.274 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:51.274 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:51.274 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:51.274 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:50:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:50:51.287 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:51.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:52.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:52 vm07.local ceph-mon[112105]: pgmap v89: 65 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 63 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 28 KiB/s rd, 26 KiB/s wr, 5 op/s; 97/285 objects degraded (34.035%); 0 B/s, 13 objects/s recovering 2026-03-09T20:50:52.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:52 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:52.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:52 vm07.local ceph-mon[112105]: Upgrade: osd.2 is safe to restart 2026-03-09T20:50:52.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:52.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:50:52.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:50:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:52 vm10.local ceph-mon[103526]: pgmap v89: 65 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 63 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 28 KiB/s rd, 26 KiB/s wr, 5 op/s; 97/285 objects degraded (34.035%); 0 B/s, 13 objects/s recovering 2026-03-09T20:50:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:52 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T20:50:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:52 vm10.local ceph-mon[103526]: Upgrade: osd.2 is safe to restart 2026-03-09T20:50:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:50:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:50:52.465 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:52 vm07.local systemd[1]: Stopping Ceph osd.2 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 
2026-03-09T20:50:52.465 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:52 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[83454]: 2026-03-09T20:50:52.310+0000 7face3762640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:50:52.465 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:52 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[83454]: 2026-03-09T20:50:52.311+0000 7face3762640 -1 osd.2 60 *** Got signal Terminated *** 2026-03-09T20:50:52.465 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:52 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[83454]: 2026-03-09T20:50:52.311+0000 7face3762640 -1 osd.2 60 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:50:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:53 vm10.local ceph-mon[103526]: Upgrade: Updating osd.2 2026-03-09T20:50:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:53 vm10.local ceph-mon[103526]: Deploying daemon osd.2 on vm07 2026-03-09T20:50:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:53 vm10.local ceph-mon[103526]: osd.2 marked itself down and dead 2026-03-09T20:50:53.384 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local podman[131976]: 2026-03-09 20:50:53.073126468 +0000 UTC m=+0.778044694 container died a2ad523a264c4bc8bd2c0a0e95c1295a80616b9b7f4b63db6916d0c6b1f4dd1a (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:50:53.384 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local podman[131976]: 2026-03-09 20:50:53.098918276 +0000 UTC m=+0.803836491 container remove a2ad523a264c4bc8bd2c0a0e95c1295a80616b9b7f4b63db6916d0c6b1f4dd1a (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True) 2026-03-09T20:50:53.384 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local bash[131976]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2 2026-03-09T20:50:53.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:53 vm07.local ceph-mon[112105]: Upgrade: Updating osd.2 2026-03-09T20:50:53.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:53 vm07.local ceph-mon[112105]: Deploying daemon osd.2 on vm07 2026-03-09T20:50:53.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:53 vm07.local ceph-mon[112105]: osd.2 marked itself down and dead 2026-03-09T20:50:53.646 
INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local podman[132039]: 2026-03-09 20:50:53.383438894 +0000 UTC m=+0.108687851 container create 43a240f0bb3efffcc726297405357b5066996e8a2c369b3c6b13a1a2885e8439 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid) 2026-03-09T20:50:53.646 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local podman[132039]: 2026-03-09 20:50:53.288082128 +0000 UTC m=+0.013331095 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:50:53.646 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local podman[132039]: 2026-03-09 20:50:53.500027422 +0000 UTC m=+0.225276379 container init 43a240f0bb3efffcc726297405357b5066996e8a2c369b3c6b13a1a2885e8439 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, 
OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20260223) 2026-03-09T20:50:53.646 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local podman[132039]: 2026-03-09 20:50:53.506247063 +0000 UTC m=+0.231496009 container start 43a240f0bb3efffcc726297405357b5066996e8a2c369b3c6b13a1a2885e8439 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True) 2026-03-09T20:50:53.646 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local podman[132039]: 2026-03-09 20:50:53.515755129 +0000 UTC m=+0.241004086 container attach 43a240f0bb3efffcc726297405357b5066996e8a2c369b3c6b13a1a2885e8439 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_REF=squid, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default) 2026-03-09T20:50:53.646 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local podman[132039]: 2026-03-09 20:50:53.644571916 +0000 UTC m=+0.369820873 container died 43a240f0bb3efffcc726297405357b5066996e8a2c369b3c6b13a1a2885e8439 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:50:53.919 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local podman[132039]: 2026-03-09 20:50:53.690393774 +0000 UTC m=+0.415642731 container remove 43a240f0bb3efffcc726297405357b5066996e8a2c369b3c6b13a1a2885e8439 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-deactivate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:50:53.919 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.2.service: Deactivated successfully. 2026-03-09T20:50:53.919 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.2.service: Unit process 132058 (conmon) remains running after unit stopped. 2026-03-09T20:50:53.919 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local systemd[1]: Stopped Ceph osd.2 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:50:53.919 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.2.service: Consumed 42.717s CPU time, 552.4M memory peak. 
2026-03-09T20:50:54.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-mon[112105]: pgmap v90: 65 pgs: 65 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 28 KiB/s rd, 26 KiB/s wr, 5 op/s; 0 B/s, 24 objects/s recovering 2026-03-09T20:50:54.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-mon[112105]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 97/285 objects degraded (34.035%), 1 pg degraded) 2026-03-09T20:50:54.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-mon[112105]: Cluster is now healthy 2026-03-09T20:50:54.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-mon[112105]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T20:50:54.196 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-mon[112105]: osdmap e61: 6 total, 5 up, 6 in 2026-03-09T20:50:54.196 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:53 vm07.local systemd[1]: Starting Ceph osd.2 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 
2026-03-09T20:50:54.197 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local podman[132150]: 2026-03-09 20:50:54.060236527 +0000 UTC m=+0.052538121 container create 01d4b43b24f994011fcffacabc16eb71a198db1586b7418e350ac10d5368548f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS) 2026-03-09T20:50:54.197 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local podman[132150]: 2026-03-09 20:50:54.108870127 +0000 UTC m=+0.101171721 container init 01d4b43b24f994011fcffacabc16eb71a198db1586b7418e350ac10d5368548f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, ceph=True, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T20:50:54.197 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local podman[132150]: 2026-03-09 20:50:54.112481437 +0000 UTC m=+0.104783031 container start 01d4b43b24f994011fcffacabc16eb71a198db1586b7418e350ac10d5368548f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:50:54.197 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local podman[132150]: 2026-03-09 20:50:54.114420306 +0000 UTC m=+0.106721900 container attach 01d4b43b24f994011fcffacabc16eb71a198db1586b7418e350ac10d5368548f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_REF=squid, 
io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:50:54.197 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local podman[132150]: 2026-03-09 20:50:54.02163303 +0000 UTC m=+0.013934633 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:50:54.197 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:50:54.197 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local bash[132150]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:50:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:54 vm10.local ceph-mon[103526]: pgmap v90: 65 pgs: 65 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 28 KiB/s rd, 26 KiB/s wr, 5 op/s; 0 B/s, 24 objects/s recovering 2026-03-09T20:50:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:54 vm10.local ceph-mon[103526]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 97/285 objects degraded (34.035%), 1 pg degraded) 2026-03-09T20:50:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:54 vm10.local ceph-mon[103526]: Cluster is now healthy 2026-03-09T20:50:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:54 vm10.local ceph-mon[103526]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T20:50:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:54 vm10.local ceph-mon[103526]: osdmap e61: 6 total, 5 up, 6 in 2026-03-09T20:50:54.633 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: Running 
command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:50:54.634 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local bash[132150]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:50:55.017 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T20:50:55.017 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:50:55.017 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local bash[132150]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T20:50:55.017 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local bash[132150]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:50:55.017 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:50:55.017 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local bash[132150]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:50:55.017 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T20:50:55.017 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local bash[132150]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T20:50:55.017 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev 
/dev/ceph-ded824b4-1556-4628-b9be-ff83179557c3/osd-block-91efe4fd-879b-433f-ab7e-d98ab2676ea3 --path /var/lib/ceph/osd/ceph-2 --no-mon-config 2026-03-09T20:50:55.017 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:54 vm07.local bash[132150]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-ded824b4-1556-4628-b9be-ff83179557c3/osd-block-91efe4fd-879b-433f-ab7e-d98ab2676ea3 --path /var/lib/ceph/osd/ceph-2 --no-mon-config 2026-03-09T20:50:55.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:55 vm10.local ceph-mon[103526]: osdmap e62: 6 total, 5 up, 6 in 2026-03-09T20:50:55.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:55.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: Running command: /usr/bin/ln -snf /dev/ceph-ded824b4-1556-4628-b9be-ff83179557c3/osd-block-91efe4fd-879b-433f-ab7e-d98ab2676ea3 /var/lib/ceph/osd/ceph-2/block 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local bash[132150]: Running command: /usr/bin/ln -snf /dev/ceph-ded824b4-1556-4628-b9be-ff83179557c3/osd-block-91efe4fd-879b-433f-ab7e-d98ab2676ea3 /var/lib/ceph/osd/ceph-2/block 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local bash[132150]: Running command: /usr/bin/chown -h ceph:ceph 
/var/lib/ceph/osd/ceph-2/block 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local bash[132150]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local bash[132150]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate[132161]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local bash[132150]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local conmon[132161]: conmon 01d4b43b24f994011fcf : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01d4b43b24f994011fcffacabc16eb71a198db1586b7418e350ac10d5368548f.scope/container/memory.events 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local podman[132150]: 2026-03-09 20:50:55.06243752 +0000 UTC m=+1.054739134 container died 01d4b43b24f994011fcffacabc16eb71a198db1586b7418e350ac10d5368548f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0) 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local podman[132150]: 2026-03-09 20:50:55.080930525 +0000 UTC m=+1.073232119 container remove 01d4b43b24f994011fcffacabc16eb71a198db1586b7418e350ac10d5368548f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-activate, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local podman[132421]: 2026-03-09 20:50:55.174439282 +0000 UTC m=+0.016644626 container create 0d3aa63353bb720a934b8ab3b3781c190354c49c4674e62a7b7553aa98e4161f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local podman[132421]: 2026-03-09 20:50:55.214407543 +0000 UTC m=+0.056612887 container init 0d3aa63353bb720a934b8ab3b3781c190354c49c4674e62a7b7553aa98e4161f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default) 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local podman[132421]: 2026-03-09 20:50:55.217834769 +0000 UTC m=+0.060040113 container start 0d3aa63353bb720a934b8ab3b3781c190354c49c4674e62a7b7553aa98e4161f 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223) 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local bash[132421]: 0d3aa63353bb720a934b8ab3b3781c190354c49c4674e62a7b7553aa98e4161f 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local podman[132421]: 2026-03-09 20:50:55.168492982 +0000 UTC m=+0.010698337 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:50:55.385 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local systemd[1]: Started Ceph osd.2 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 
2026-03-09T20:50:55.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:55 vm07.local ceph-mon[112105]: osdmap e62: 6 total, 5 up, 6 in 2026-03-09T20:50:55.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:55.385 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:55.817 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:55 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[132432]: 2026-03-09T20:50:55.545+0000 7f242df26740 -1 Falling back to public interface 2026-03-09T20:50:56.127 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:56 vm07.local ceph-mon[112105]: pgmap v93: 65 pgs: 4 active+undersized, 5 stale+active+clean, 3 active+undersized+degraded, 53 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.1 KiB/s rd, 2 op/s; 12/285 objects degraded (4.211%) 2026-03-09T20:50:56.127 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:56 vm07.local ceph-mon[112105]: Health check failed: Degraded data redundancy: 12/285 objects degraded (4.211%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:56.127 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:56.127 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:56.127 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:56.287 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:56 vm10.local ceph-mon[103526]: pgmap v93: 65 pgs: 4 active+undersized, 5 stale+active+clean, 3 active+undersized+degraded, 53 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.1 KiB/s rd, 2 op/s; 12/285 objects degraded (4.211%) 2026-03-09T20:50:56.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:56 vm10.local ceph-mon[103526]: Health check failed: Degraded data redundancy: 12/285 objects degraded (4.211%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T20:50:56.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:56.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:56.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:57.824 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:57 vm07.local ceph-mon[112105]: pgmap v94: 65 pgs: 10 active+undersized, 3 stale+active+clean, 9 active+undersized+degraded, 43 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s; 28/285 objects degraded (9.825%) 2026-03-09T20:50:57.824 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:57 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:57.824 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:57 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:57.824 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:57 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 
2026-03-09T20:50:57.824 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:57 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:57.825 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:57 vm10.local ceph-mon[103526]: pgmap v94: 65 pgs: 10 active+undersized, 3 stale+active+clean, 9 active+undersized+degraded, 43 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s; 28/285 objects degraded (9.825%) 2026-03-09T20:50:57.825 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:57 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:57.825 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:57 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:57.825 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:57 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:57.829 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:57 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:59.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:59.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:59.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:50:59.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:50:59.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:59.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: pgmap v95: 65 pgs: 14 active+undersized, 13 active+undersized+degraded, 38 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s; 37/285 objects degraded (12.982%) 2026-03-09T20:50:59.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:59.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:59.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:59.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:59.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T20:50:59.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T20:50:59.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-mon[112105]: Upgrade: unsafe to stop osd(s) at this time (11 PGs are or would become offline) 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: pgmap v95: 65 pgs: 14 active+undersized, 13 active+undersized+degraded, 38 active+clean; 211 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s; 37/285 objects degraded (12.982%) 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T20:50:59.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:50:59 vm10.local ceph-mon[103526]: Upgrade: unsafe to stop osd(s) at this time (11 PGs are or would become offline) 2026-03-09T20:51:00.134 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:50:59 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[132432]: 2026-03-09T20:50:59.735+0000 7f242df26740 -1 osd.2 60 log_to_monitors true 2026-03-09T20:51:00.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:00 vm10.local ceph-mon[103526]: from='osd.2 [v2:192.168.123.107:6818/4133405570,v1:192.168.123.107:6819/4133405570]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:51:00.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:00 vm07.local ceph-mon[112105]: from='osd.2 [v2:192.168.123.107:6818/4133405570,v1:192.168.123.107:6819/4133405570]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:51:01.634 
INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 20:51:01 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[132432]: 2026-03-09T20:51:01.331+0000 7f24254bf640 -1 osd.2 60 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:51:01.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:01 vm07.local ceph-mon[112105]: from='osd.2 [v2:192.168.123.107:6818/4133405570,v1:192.168.123.107:6819/4133405570]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:51:01.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:01 vm07.local ceph-mon[112105]: osdmap e63: 6 total, 5 up, 6 in 2026-03-09T20:51:01.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:01 vm07.local ceph-mon[112105]: from='osd.2 [v2:192.168.123.107:6818/4133405570,v1:192.168.123.107:6819/4133405570]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T20:51:01.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:01 vm07.local ceph-mon[112105]: pgmap v97: 65 pgs: 14 active+undersized, 13 active+undersized+degraded, 38 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 958 B/s rd, 2 op/s; 37/285 objects degraded (12.982%) 2026-03-09T20:51:01.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:01 vm07.local ceph-mon[112105]: from='osd.2 [v2:192.168.123.107:6818/4133405570,v1:192.168.123.107:6819/4133405570]' entity='osd.2' 2026-03-09T20:51:01.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:01 vm10.local ceph-mon[103526]: from='osd.2 [v2:192.168.123.107:6818/4133405570,v1:192.168.123.107:6819/4133405570]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:51:01.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:01 vm10.local ceph-mon[103526]: osdmap e63: 6 total, 
5 up, 6 in 2026-03-09T20:51:01.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:01 vm10.local ceph-mon[103526]: from='osd.2 [v2:192.168.123.107:6818/4133405570,v1:192.168.123.107:6819/4133405570]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T20:51:01.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:01 vm10.local ceph-mon[103526]: pgmap v97: 65 pgs: 14 active+undersized, 13 active+undersized+degraded, 38 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 958 B/s rd, 2 op/s; 37/285 objects degraded (12.982%) 2026-03-09T20:51:01.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:01 vm10.local ceph-mon[103526]: from='osd.2 [v2:192.168.123.107:6818/4133405570,v1:192.168.123.107:6819/4133405570]' entity='osd.2' 2026-03-09T20:51:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:02 vm10.local ceph-mon[103526]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T20:51:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:02 vm10.local ceph-mon[103526]: osd.2 [v2:192.168.123.107:6818/4133405570,v1:192.168.123.107:6819/4133405570] boot 2026-03-09T20:51:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:02 vm10.local ceph-mon[103526]: osdmap e64: 6 total, 6 up, 6 in 2026-03-09T20:51:02.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:51:02.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:02 vm07.local ceph-mon[112105]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T20:51:02.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:02 vm07.local ceph-mon[112105]: osd.2 [v2:192.168.123.107:6818/4133405570,v1:192.168.123.107:6819/4133405570] boot 2026-03-09T20:51:02.884 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:02 vm07.local ceph-mon[112105]: osdmap e64: 6 total, 6 up, 6 in 2026-03-09T20:51:02.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:51:03.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:03 vm10.local ceph-mon[103526]: pgmap v99: 65 pgs: 5 peering, 11 active+undersized, 11 active+undersized+degraded, 38 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 2.2 KiB/s rd, 3 op/s; 33/285 objects degraded (11.579%) 2026-03-09T20:51:03.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:03 vm10.local ceph-mon[103526]: osdmap e65: 6 total, 6 up, 6 in 2026-03-09T20:51:03.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:03 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 33/285 objects degraded (11.579%), 11 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:03.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:03 vm07.local ceph-mon[112105]: pgmap v99: 65 pgs: 5 peering, 11 active+undersized, 11 active+undersized+degraded, 38 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 2.2 KiB/s rd, 3 op/s; 33/285 objects degraded (11.579%) 2026-03-09T20:51:03.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:03 vm07.local ceph-mon[112105]: osdmap e65: 6 total, 6 up, 6 in 2026-03-09T20:51:03.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:03 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 33/285 objects degraded (11.579%), 11 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:05.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:05 vm10.local ceph-mon[103526]: pgmap v101: 65 pgs: 5 peering, 9 active+undersized, 10 active+undersized+degraded, 41 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 1.7 KiB/s 
rd, 3 op/s; 30/285 objects degraded (10.526%) 2026-03-09T20:51:05.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:05 vm07.local ceph-mon[112105]: pgmap v101: 65 pgs: 5 peering, 9 active+undersized, 10 active+undersized+degraded, 41 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s; 30/285 objects degraded (10.526%) 2026-03-09T20:51:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:06 vm07.local ceph-mon[112105]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 30/285 objects degraded (10.526%), 10 pgs degraded) 2026-03-09T20:51:06.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:06 vm07.local ceph-mon[112105]: Cluster is now healthy 2026-03-09T20:51:07.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:06 vm10.local ceph-mon[103526]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 30/285 objects degraded (10.526%), 10 pgs degraded) 2026-03-09T20:51:07.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:06 vm10.local ceph-mon[103526]: Cluster is now healthy 2026-03-09T20:51:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:07 vm07.local ceph-mon[112105]: pgmap v102: 65 pgs: 5 peering, 60 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 3.2 KiB/s rd, 5 op/s 2026-03-09T20:51:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:07 vm10.local ceph-mon[103526]: pgmap v102: 65 pgs: 5 peering, 60 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 3.2 KiB/s rd, 5 op/s 2026-03-09T20:51:09.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:09 vm07.local ceph-mon[112105]: pgmap v103: 65 pgs: 65 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 2.4 KiB/s rd, 4 op/s 2026-03-09T20:51:09.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:09.883 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:09 vm10.local ceph-mon[103526]: pgmap v103: 65 pgs: 65 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 2.4 KiB/s rd, 4 op/s 2026-03-09T20:51:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:11.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:11 vm07.local ceph-mon[112105]: pgmap v104: 65 pgs: 65 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1021 B/s rd, 2 op/s 2026-03-09T20:51:12.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:11 vm10.local ceph-mon[103526]: pgmap v104: 65 pgs: 65 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1021 B/s rd, 2 op/s 2026-03-09T20:51:13.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:13 vm07.local ceph-mon[112105]: pgmap v105: 65 pgs: 65 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.9 KiB/s rd, 3 op/s 2026-03-09T20:51:13.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:13 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T20:51:13.952 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.950+0000 7fbf9e00d640 1 -- 192.168.123.107:0/921812096 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7fbf98102a60 msgr2=0x7fbf98102e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:13.952 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.950+0000 7fbf9e00d640 1 --2- 192.168.123.107:0/921812096 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf98102a60 0x7fbf98102e60 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fbf880099b0 tx=0x7fbf8802f220 comp rx=0 tx=0).stop 2026-03-09T20:51:13.952 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.951+0000 7fbf9e00d640 1 -- 192.168.123.107:0/921812096 shutdown_connections 2026-03-09T20:51:13.952 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.951+0000 7fbf9e00d640 1 --2- 192.168.123.107:0/921812096 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbf98103c60 0x7fbf981040e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:13.952 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.951+0000 7fbf9e00d640 1 --2- 192.168.123.107:0/921812096 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf98102a60 0x7fbf98102e60 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:13.952 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.951+0000 7fbf9e00d640 1 -- 192.168.123.107:0/921812096 >> 192.168.123.107:0/921812096 conn(0x7fbf980fe250 msgr2=0x7fbf98100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:13.952 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.951+0000 7fbf9e00d640 1 -- 192.168.123.107:0/921812096 shutdown_connections 2026-03-09T20:51:13.952 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.951+0000 7fbf9e00d640 1 -- 192.168.123.107:0/921812096 wait complete. 
2026-03-09T20:51:13.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.952+0000 7fbf9e00d640 1 Processor -- start 2026-03-09T20:51:13.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.952+0000 7fbf9e00d640 1 -- start start 2026-03-09T20:51:13.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.952+0000 7fbf9e00d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf98102a60 0x7fbf9819a4e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:13.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.952+0000 7fbf977fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf98102a60 0x7fbf9819a4e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:13.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.952+0000 7fbf9e00d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbf98103c60 0x7fbf9819aa20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:13.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.952+0000 7fbf9e00d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbf9819aff0 con 0x7fbf98102a60 2026-03-09T20:51:13.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.952+0000 7fbf9e00d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbf9819b160 con 0x7fbf98103c60 2026-03-09T20:51:13.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.952+0000 7fbf977fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf98102a60 0x7fbf9819a4e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:44744/0 (socket says 192.168.123.107:44744) 2026-03-09T20:51:13.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.952+0000 7fbf977fe640 1 -- 192.168.123.107:0/3722677627 learned_addr learned my addr 192.168.123.107:0/3722677627 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:13.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.953+0000 7fbf96ffd640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbf98103c60 0x7fbf9819aa20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:13.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.953+0000 7fbf977fe640 1 -- 192.168.123.107:0/3722677627 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbf98103c60 msgr2=0x7fbf9819aa20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:13.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.953+0000 7fbf977fe640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbf98103c60 0x7fbf9819aa20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:13.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.953+0000 7fbf977fe640 1 -- 192.168.123.107:0/3722677627 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbf88009660 con 0x7fbf98102a60 2026-03-09T20:51:13.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.953+0000 7fbf977fe640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf98102a60 0x7fbf9819a4e0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7fbf88002af0 tx=0x7fbf88042c90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:51:13.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.954+0000 7fbf94ff9640 1 -- 192.168.123.107:0/3722677627 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbf8802fd50 con 0x7fbf98102a60 2026-03-09T20:51:13.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.954+0000 7fbf94ff9640 1 -- 192.168.123.107:0/3722677627 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fbf88038930 con 0x7fbf98102a60 2026-03-09T20:51:13.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.954+0000 7fbf9e00d640 1 -- 192.168.123.107:0/3722677627 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbf98075630 con 0x7fbf98102a60 2026-03-09T20:51:13.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.955+0000 7fbf94ff9640 1 -- 192.168.123.107:0/3722677627 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbf8804a890 con 0x7fbf98102a60 2026-03-09T20:51:13.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.955+0000 7fbf9e00d640 1 -- 192.168.123.107:0/3722677627 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbf98075ba0 con 0x7fbf98102a60 2026-03-09T20:51:13.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.955+0000 7fbf94ff9640 1 -- 192.168.123.107:0/3722677627 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fbf88052050 con 0x7fbf98102a60 2026-03-09T20:51:13.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.955+0000 7fbf7a7fc640 1 -- 192.168.123.107:0/3722677627 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbf98102e60 con 0x7fbf98102a60 2026-03-09T20:51:13.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.957+0000 
7fbf94ff9640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbf70077720 0x7fbf70079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:13.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.957+0000 7fbf94ff9640 1 -- 192.168.123.107:0/3722677627 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(65..65 src has 1..65) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fbf880c6d90 con 0x7fbf98102a60 2026-03-09T20:51:13.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.959+0000 7fbf96ffd640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbf70077720 0x7fbf70079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:13.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.959+0000 7fbf96ffd640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbf70077720 0x7fbf70079be0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fbf9819ba00 tx=0x7fbf84008040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:13.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:13.959+0000 7fbf94ff9640 1 -- 192.168.123.107:0/3722677627 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fbf880cc050 con 0x7fbf98102a60 2026-03-09T20:51:14.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:13 vm10.local ceph-mon[103526]: pgmap v105: 65 pgs: 65 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.9 KiB/s rd, 3 op/s 2026-03-09T20:51:14.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:13 vm10.local ceph-mon[103526]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T20:51:14.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.068+0000 7fbf7a7fc640 1 -- 192.168.123.107:0/3722677627 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fbf98061bd0 con 0x7fbf70077720 2026-03-09T20:51:14.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.069+0000 7fbf94ff9640 1 -- 192.168.123.107:0/3722677627 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7fbf98061bd0 con 0x7fbf70077720 2026-03-09T20:51:14.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.071+0000 7fbf7a7fc640 1 -- 192.168.123.107:0/3722677627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbf70077720 msgr2=0x7fbf70079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.071+0000 7fbf7a7fc640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbf70077720 0x7fbf70079be0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fbf9819ba00 tx=0x7fbf84008040 comp rx=0 tx=0).stop 2026-03-09T20:51:14.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.072+0000 7fbf7a7fc640 1 -- 192.168.123.107:0/3722677627 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf98102a60 msgr2=0x7fbf9819a4e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.072+0000 7fbf7a7fc640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf98102a60 0x7fbf9819a4e0 secure :-1 
s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7fbf88002af0 tx=0x7fbf88042c90 comp rx=0 tx=0).stop 2026-03-09T20:51:14.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.072+0000 7fbf7a7fc640 1 -- 192.168.123.107:0/3722677627 shutdown_connections 2026-03-09T20:51:14.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.072+0000 7fbf7a7fc640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbf70077720 0x7fbf70079be0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.072+0000 7fbf7a7fc640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbf98103c60 0x7fbf9819aa20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.072+0000 7fbf7a7fc640 1 --2- 192.168.123.107:0/3722677627 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf98102a60 0x7fbf9819a4e0 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.072+0000 7fbf7a7fc640 1 -- 192.168.123.107:0/3722677627 >> 192.168.123.107:0/3722677627 conn(0x7fbf980fe250 msgr2=0x7fbf980ffd90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:14.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.072+0000 7fbf7a7fc640 1 -- 192.168.123.107:0/3722677627 shutdown_connections 2026-03-09T20:51:14.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.072+0000 7fbf7a7fc640 1 -- 192.168.123.107:0/3722677627 wait complete. 
2026-03-09T20:51:14.081 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:51:14.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.134+0000 7f7561ac2640 1 -- 192.168.123.107:0/985653552 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f755c102a70 msgr2=0x7f755c102e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.134+0000 7f7561ac2640 1 --2- 192.168.123.107:0/985653552 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f755c102a70 0x7f755c102e70 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f754c0099b0 tx=0x7f754c02f240 comp rx=0 tx=0).stop 2026-03-09T20:51:14.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.134+0000 7f7561ac2640 1 -- 192.168.123.107:0/985653552 shutdown_connections 2026-03-09T20:51:14.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.134+0000 7f7561ac2640 1 --2- 192.168.123.107:0/985653552 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f755c103c70 0x7f755c1040f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.134+0000 7f7561ac2640 1 --2- 192.168.123.107:0/985653552 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f755c102a70 0x7f755c102e70 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.134+0000 7f7561ac2640 1 -- 192.168.123.107:0/985653552 >> 192.168.123.107:0/985653552 conn(0x7f755c0fe220 msgr2=0x7f755c100640 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:14.135 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.134+0000 7f7561ac2640 1 -- 192.168.123.107:0/985653552 shutdown_connections 2026-03-09T20:51:14.135 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.135+0000 7f7561ac2640 1 -- 192.168.123.107:0/985653552 wait complete. 2026-03-09T20:51:14.136 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.135+0000 7f7561ac2640 1 Processor -- start 2026-03-09T20:51:14.136 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.135+0000 7f7561ac2640 1 -- start start 2026-03-09T20:51:14.136 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.136+0000 7f7561ac2640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f755c102a70 0x7f755c19a4c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.136+0000 7f7561ac2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f755c103c70 0x7f755c19aa00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.136+0000 7f7561ac2640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f755c19afd0 con 0x7f755c103c70 2026-03-09T20:51:14.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.136+0000 7f755b7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f755c102a70 0x7f755c19a4c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.136+0000 7f755b7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f755c102a70 0x7f755c19a4c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:39198/0 (socket says 192.168.123.107:39198) 2026-03-09T20:51:14.137 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.136+0000 7f755b7fe640 1 -- 192.168.123.107:0/2572700460 learned_addr learned my addr 192.168.123.107:0/2572700460 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:14.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.136+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f755c19b140 con 0x7f755c102a70 2026-03-09T20:51:14.137 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.136+0000 7f755affd640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f755c103c70 0x7f755c19aa00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.138 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.137+0000 7f755affd640 1 -- 192.168.123.107:0/2572700460 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f755c102a70 msgr2=0x7f755c19a4c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.138 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.137+0000 7f755affd640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f755c102a70 0x7f755c19a4c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.138 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.137+0000 7f755affd640 1 -- 192.168.123.107:0/2572700460 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f754c009660 con 0x7f755c103c70 2026-03-09T20:51:14.138 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.137+0000 7f755affd640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f755c103c70 0x7f755c19aa00 
secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f754800e9a0 tx=0x7f754800ee70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:14.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.137+0000 7f7558ff9640 1 -- 192.168.123.107:0/2572700460 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f754800cd30 con 0x7f755c103c70 2026-03-09T20:51:14.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.137+0000 7f7558ff9640 1 -- 192.168.123.107:0/2572700460 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f7548004590 con 0x7f755c103c70 2026-03-09T20:51:14.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.137+0000 7f7558ff9640 1 -- 192.168.123.107:0/2572700460 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7548010640 con 0x7f755c103c70 2026-03-09T20:51:14.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.137+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f755c19fb70 con 0x7f755c103c70 2026-03-09T20:51:14.140 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.138+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f755c075990 con 0x7f755c103c70 2026-03-09T20:51:14.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.140+0000 7f7558ff9640 1 -- 192.168.123.107:0/2572700460 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f75480107a0 con 0x7f755c103c70 2026-03-09T20:51:14.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.140+0000 7f7558ff9640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] 
conn(0x7f7530077890 0x7f7530079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.140+0000 7f7558ff9640 1 -- 192.168.123.107:0/2572700460 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(65..65 src has 1..65) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f7548014070 con 0x7f755c103c70 2026-03-09T20:51:14.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.140+0000 7f755b7fe640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7530077890 0x7f7530079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.141+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7520005350 con 0x7f755c103c70 2026-03-09T20:51:14.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.141+0000 7f755b7fe640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7530077890 0x7f7530079d50 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f754c002410 tx=0x7f754c03a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:14.145 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.144+0000 7f7558ff9640 1 -- 192.168.123.107:0/2572700460 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f754806c030 con 0x7f755c103c70 2026-03-09T20:51:14.253 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.251+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 --> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f7520002bf0 con 0x7f7530077890 2026-03-09T20:51:14.256 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.255+0000 7f7558ff9640 1 -- 192.168.123.107:0/2572700460 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f7520002bf0 con 0x7f7530077890 2026-03-09T20:51:14.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.257+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7530077890 msgr2=0x7f7530079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.257+0000 7f7561ac2640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7530077890 0x7f7530079d50 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f754c002410 tx=0x7f754c03a040 comp rx=0 tx=0).stop 2026-03-09T20:51:14.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.257+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f755c103c70 msgr2=0x7f755c19aa00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.258+0000 7f7561ac2640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f755c103c70 0x7f755c19aa00 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f754800e9a0 tx=0x7f754800ee70 comp rx=0 tx=0).stop 2026-03-09T20:51:14.259 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.258+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 shutdown_connections 2026-03-09T20:51:14.259 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.258+0000 7f7561ac2640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7530077890 0x7f7530079d50 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.259 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.258+0000 7f7561ac2640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f755c103c70 0x7f755c19aa00 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.259 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.258+0000 7f7561ac2640 1 --2- 192.168.123.107:0/2572700460 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f755c102a70 0x7f755c19a4c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.259 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.258+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 >> 192.168.123.107:0/2572700460 conn(0x7f755c0fe220 msgr2=0x7f755c0ffa40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:14.259 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.258+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 shutdown_connections 2026-03-09T20:51:14.259 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.259+0000 7f7561ac2640 1 -- 192.168.123.107:0/2572700460 wait complete. 
2026-03-09T20:51:14.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.314+0000 7fd2710a3640 1 -- 192.168.123.107:0/3655144946 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd26c102a60 msgr2=0x7fd26c102e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.314+0000 7fd2710a3640 1 --2- 192.168.123.107:0/3655144946 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd26c102a60 0x7fd26c102e60 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7fd2540099b0 tx=0x7fd25402f220 comp rx=0 tx=0).stop 2026-03-09T20:51:14.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.314+0000 7fd2710a3640 1 -- 192.168.123.107:0/3655144946 shutdown_connections 2026-03-09T20:51:14.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.314+0000 7fd2710a3640 1 --2- 192.168.123.107:0/3655144946 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd26c103c60 0x7fd26c1040e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.314+0000 7fd2710a3640 1 --2- 192.168.123.107:0/3655144946 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd26c102a60 0x7fd26c102e60 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.314+0000 7fd2710a3640 1 -- 192.168.123.107:0/3655144946 >> 192.168.123.107:0/3655144946 conn(0x7fd26c0fe250 msgr2=0x7fd26c100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:14.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.314+0000 7fd2710a3640 1 -- 192.168.123.107:0/3655144946 shutdown_connections 2026-03-09T20:51:14.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.314+0000 7fd2710a3640 1 -- 192.168.123.107:0/3655144946 
wait complete. 2026-03-09T20:51:14.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.315+0000 7fd2710a3640 1 Processor -- start 2026-03-09T20:51:14.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.315+0000 7fd2710a3640 1 -- start start 2026-03-09T20:51:14.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.315+0000 7fd2710a3640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd26c102a60 0x7fd26c19a440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.315+0000 7fd2710a3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd26c103c60 0x7fd26c19a980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.315+0000 7fd2710a3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd26c19af50 con 0x7fd26c103c60 2026-03-09T20:51:14.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.315+0000 7fd2710a3640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd26c19b0c0 con 0x7fd26c102a60 2026-03-09T20:51:14.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.316+0000 7fd26a575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd26c103c60 0x7fd26c19a980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.316+0000 7fd26a575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd26c103c60 0x7fd26c19a980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:44764/0 (socket says 192.168.123.107:44764) 2026-03-09T20:51:14.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.316+0000 7fd26a575640 1 -- 192.168.123.107:0/3322231836 learned_addr learned my addr 192.168.123.107:0/3322231836 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:14.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.316+0000 7fd26a575640 1 -- 192.168.123.107:0/3322231836 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd26c102a60 msgr2=0x7fd26c19a440 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.316+0000 7fd26ad76640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd26c102a60 0x7fd26c19a440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.316+0000 7fd26a575640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd26c102a60 0x7fd26c19a440 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.316+0000 7fd26a575640 1 -- 192.168.123.107:0/3322231836 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd254009660 con 0x7fd26c103c60 2026-03-09T20:51:14.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.316+0000 7fd26a575640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd26c103c60 0x7fd26c19a980 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fd26000c940 tx=0x7fd26000ce10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:14.318 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.317+0000 7fd24bfff640 1 -- 192.168.123.107:0/3322231836 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd260007c20 con 0x7fd26c103c60 2026-03-09T20:51:14.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.317+0000 7fd24bfff640 1 -- 192.168.123.107:0/3322231836 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fd260007d80 con 0x7fd26c103c60 2026-03-09T20:51:14.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.317+0000 7fd24bfff640 1 -- 192.168.123.107:0/3322231836 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd26000f450 con 0x7fd26c103c60 2026-03-09T20:51:14.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.317+0000 7fd2710a3640 1 -- 192.168.123.107:0/3322231836 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd26c19fb60 con 0x7fd26c103c60 2026-03-09T20:51:14.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.317+0000 7fd26ad76640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd26c102a60 0x7fd26c19a440 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T20:51:14.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.317+0000 7fd2710a3640 1 -- 192.168.123.107:0/3322231836 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd26c075990 con 0x7fd26c103c60 2026-03-09T20:51:14.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.318+0000 7fd2710a3640 1 -- 192.168.123.107:0/3322231836 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd238005350 con 0x7fd26c103c60 2026-03-09T20:51:14.323 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.321+0000 7fd24bfff640 1 -- 192.168.123.107:0/3322231836 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fd260016030 con 0x7fd26c103c60 2026-03-09T20:51:14.323 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.322+0000 7fd24bfff640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fd2440779b0 0x7fd244079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.323 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.322+0000 7fd24bfff640 1 -- 192.168.123.107:0/3322231836 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(65..65 src has 1..65) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fd260099ea0 con 0x7fd26c103c60 2026-03-09T20:51:14.323 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.322+0000 7fd24bfff640 1 -- 192.168.123.107:0/3322231836 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fd26009a280 con 0x7fd26c103c60 2026-03-09T20:51:14.323 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.322+0000 7fd26ad76640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] 
conn(0x7fd2440779b0 0x7fd244079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.324 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.323+0000 7fd26ad76640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fd2440779b0 0x7fd244079e70 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7fd254002c20 tx=0x7fd254002da0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:14.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.427+0000 7fd2710a3640 1 -- 192.168.123.107:0/3322231836 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fd238002bf0 con 0x7fd2440779b0 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.433+0000 7fd24bfff640 1 -- 192.168.123.107:0/3322231836 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7fd238002bf0 con 0x7fd2440779b0 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (7m) 17s ago 8m 43.7M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (8m) 17s ago 8m 9.82M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (7m) 2m ago 7m 9.90M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (2m) 17s ago 8m 7834k - 
19.2.3-678-ge911bdeb 654f31e6858e 406c9c54f34a 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (2m) 2m ago 7m 7856k - 19.2.3-678-ge911bdeb 654f31e6858e 30eaebf5d733 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (7m) 17s ago 8m 160M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (6m) 17s ago 6m 30.4M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (6m) 17s ago 6m 176M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (6m) 2m ago 6m 151M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (6m) 2m ago 6m 27.0M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (3m) 17s ago 8m 615M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (3m) 2m ago 7m 489M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (2m) 17s ago 8m 58.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (2m) 2m ago 7m 45.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4428cf7f0607 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (8m) 17s ago 8m 15.5M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 
running (7m) 2m ago 7m 15.4M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (2m) 17s ago 7m 232M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1da9d2cdbdc3 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (89s) 17s ago 7m 171M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 95f518bf664f 2026-03-09T20:51:14.434 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (19s) 17s ago 7m 11.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0d3aa63353bb 2026-03-09T20:51:14.435 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (6m) 2m ago 6m 452M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c4d7e2279ba1 2026-03-09T20:51:14.435 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (6m) 2m ago 6m 409M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 37651efc9a7d 2026-03-09T20:51:14.435 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (6m) 2m ago 6m 343M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:51:14.435 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (3m) 17s ago 7m 49.8M - 2.43.0 a07b618ecd1d 3f9c07cd3fe3 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 -- 192.168.123.107:0/3322231836 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fd2440779b0 msgr2=0x7fd244079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fd2440779b0 0x7fd244079e70 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7fd254002c20 tx=0x7fd254002da0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 -- 192.168.123.107:0/3322231836 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd26c103c60 msgr2=0x7fd26c19a980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd26c103c60 0x7fd26c19a980 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fd26000c940 tx=0x7fd26000ce10 comp rx=0 tx=0).stop 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 -- 192.168.123.107:0/3322231836 shutdown_connections 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fd2440779b0 0x7fd244079e70 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd26c103c60 0x7fd26c19a980 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 --2- 192.168.123.107:0/3322231836 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fd26c102a60 0x7fd26c19a440 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 -- 192.168.123.107:0/3322231836 >> 192.168.123.107:0/3322231836 conn(0x7fd26c0fe250 msgr2=0x7fd26c0ffd70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 -- 
192.168.123.107:0/3322231836 shutdown_connections 2026-03-09T20:51:14.436 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.435+0000 7fd2710a3640 1 -- 192.168.123.107:0/3322231836 wait complete. 2026-03-09T20:51:14.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.489+0000 7f9c42acc640 1 -- 192.168.123.107:0/520023732 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9c3c102a60 msgr2=0x7f9c3c102e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.489+0000 7f9c42acc640 1 --2- 192.168.123.107:0/520023732 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9c3c102a60 0x7f9c3c102e60 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f9c240099b0 tx=0x7f9c2402f220 comp rx=0 tx=0).stop 2026-03-09T20:51:14.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.489+0000 7f9c42acc640 1 -- 192.168.123.107:0/520023732 shutdown_connections 2026-03-09T20:51:14.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.489+0000 7f9c42acc640 1 --2- 192.168.123.107:0/520023732 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9c3c103c60 0x7f9c3c1040e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.489+0000 7f9c42acc640 1 --2- 192.168.123.107:0/520023732 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9c3c102a60 0x7f9c3c102e60 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.491 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.490+0000 7f9c42acc640 1 -- 192.168.123.107:0/520023732 >> 192.168.123.107:0/520023732 conn(0x7f9c3c0fe250 msgr2=0x7f9c3c100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:14.491 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.490+0000 7f9c42acc640 
1 -- 192.168.123.107:0/520023732 shutdown_connections 2026-03-09T20:51:14.491 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.490+0000 7f9c42acc640 1 -- 192.168.123.107:0/520023732 wait complete. 2026-03-09T20:51:14.491 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c42acc640 1 Processor -- start 2026-03-09T20:51:14.492 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c42acc640 1 -- start start 2026-03-09T20:51:14.492 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c42acc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9c3c102a60 0x7f9c3c19a4c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.492 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c42acc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9c3c103c60 0x7f9c3c19aa00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.492 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c42acc640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9c3c19afd0 con 0x7f9c3c103c60 2026-03-09T20:51:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c42acc640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9c3c19b140 con 0x7f9c3c102a60 2026-03-09T20:51:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c40841640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9c3c102a60 0x7f9c3c19a4c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c33fff640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9c3c103c60 0x7f9c3c19aa00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c33fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9c3c103c60 0x7f9c3c19aa00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:44772/0 (socket says 192.168.123.107:44772) 2026-03-09T20:51:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c33fff640 1 -- 192.168.123.107:0/1415496042 learned_addr learned my addr 192.168.123.107:0/1415496042 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.492+0000 7f9c33fff640 1 -- 192.168.123.107:0/1415496042 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9c3c102a60 msgr2=0x7f9c3c19a4c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.491+0000 7f9c40841640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9c3c102a60 0x7f9c3c19a4c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:39222/0 (socket says 192.168.123.107:39222) 2026-03-09T20:51:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.492+0000 7f9c33fff640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9c3c102a60 0x7f9c3c19a4c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.493 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.492+0000 7f9c33fff640 1 -- 192.168.123.107:0/1415496042 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9c24009660 con 0x7f9c3c103c60 2026-03-09T20:51:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.492+0000 7f9c33fff640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9c3c103c60 0x7f9c3c19aa00 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f9c2c00b730 tx=0x7f9c2c00bc00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.493+0000 7f9c31ffb640 1 -- 192.168.123.107:0/1415496042 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9c2c004280 con 0x7f9c3c103c60 2026-03-09T20:51:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.493+0000 7f9c31ffb640 1 -- 192.168.123.107:0/1415496042 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9c2c0043e0 con 0x7f9c3c103c60 2026-03-09T20:51:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.493+0000 7f9c31ffb640 1 -- 192.168.123.107:0/1415496042 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9c2c00ca90 con 0x7f9c3c103c60 2026-03-09T20:51:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.493+0000 7f9c42acc640 1 -- 192.168.123.107:0/1415496042 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9c3c19fbe0 con 0x7f9c3c103c60 2026-03-09T20:51:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.493+0000 7f9c40841640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9c3c102a60 0x7f9c3c19a4c0 unknown :-1 s=CLOSED pgs=0 
cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:51:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.493+0000 7f9c42acc640 1 -- 192.168.123.107:0/1415496042 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9c3c075a10 con 0x7f9c3c103c60 2026-03-09T20:51:14.495 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.494+0000 7f9c42acc640 1 -- 192.168.123.107:0/1415496042 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9c04005350 con 0x7f9c3c103c60 2026-03-09T20:51:14.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.497+0000 7f9c31ffb640 1 -- 192.168.123.107:0/1415496042 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9c2c00cbf0 con 0x7f9c3c103c60 2026-03-09T20:51:14.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.497+0000 7f9c31ffb640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9c180779b0 0x7f9c18079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.497+0000 7f9c40841640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9c180779b0 0x7f9c18079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.498+0000 7f9c31ffb640 1 -- 192.168.123.107:0/1415496042 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(65..65 src has 1..65) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f9c2c0991d0 con 0x7f9c3c103c60 2026-03-09T20:51:14.499 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.498+0000 7f9c40841640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9c180779b0 0x7f9c18079e70 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f9c24002c20 tx=0x7f9c24002da0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:14.499 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.498+0000 7f9c31ffb640 1 -- 192.168.123.107:0/1415496042 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9c2c061a90 con 0x7f9c3c103c60 2026-03-09T20:51:14.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.639+0000 7f9c42acc640 1 -- 192.168.123.107:0/1415496042 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f9c040058d0 con 0x7f9c3c103c60 2026-03-09T20:51:14.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.639+0000 7f9c31ffb640 1 -- 192.168.123.107:0/1415496042 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+844 (secure 0 0 0) 0x7f9c2c0611e0 con 0x7f9c3c103c60 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: }, 
2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 3, 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 7, 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:51:14.641 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:51:14.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.642+0000 7f9c42acc640 1 -- 192.168.123.107:0/1415496042 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9c180779b0 msgr2=0x7f9c18079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.642+0000 7f9c42acc640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9c180779b0 0x7f9c18079e70 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f9c24002c20 tx=0x7f9c24002da0 comp rx=0 
tx=0).stop 2026-03-09T20:51:14.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.642+0000 7f9c42acc640 1 -- 192.168.123.107:0/1415496042 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9c3c103c60 msgr2=0x7f9c3c19aa00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.642+0000 7f9c42acc640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9c3c103c60 0x7f9c3c19aa00 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f9c2c00b730 tx=0x7f9c2c00bc00 comp rx=0 tx=0).stop 2026-03-09T20:51:14.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.642+0000 7f9c42acc640 1 -- 192.168.123.107:0/1415496042 shutdown_connections 2026-03-09T20:51:14.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.642+0000 7f9c42acc640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9c180779b0 0x7f9c18079e70 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.642+0000 7f9c42acc640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9c3c103c60 0x7f9c3c19aa00 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.642+0000 7f9c42acc640 1 --2- 192.168.123.107:0/1415496042 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9c3c102a60 0x7f9c3c19a4c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.642+0000 7f9c42acc640 1 -- 192.168.123.107:0/1415496042 >> 192.168.123.107:0/1415496042 conn(0x7f9c3c0fe250 msgr2=0x7f9c3c0ffd50 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:14.643 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.643+0000 7f9c42acc640 1 -- 192.168.123.107:0/1415496042 shutdown_connections 2026-03-09T20:51:14.644 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.643+0000 7f9c42acc640 1 -- 192.168.123.107:0/1415496042 wait complete. 2026-03-09T20:51:14.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.699+0000 7f5c61988640 1 -- 192.168.123.107:0/1723756330 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5c5c102a80 msgr2=0x7f5c5c102e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.699+0000 7f5c61988640 1 --2- 192.168.123.107:0/1723756330 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5c5c102a80 0x7f5c5c102e80 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f5c500099b0 tx=0x7f5c5002f220 comp rx=0 tx=0).stop 2026-03-09T20:51:14.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.699+0000 7f5c61988640 1 -- 192.168.123.107:0/1723756330 shutdown_connections 2026-03-09T20:51:14.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.699+0000 7f5c61988640 1 --2- 192.168.123.107:0/1723756330 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5c5c103c80 0x7f5c5c104100 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.699+0000 7f5c61988640 1 --2- 192.168.123.107:0/1723756330 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5c5c102a80 0x7f5c5c102e80 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.699+0000 7f5c61988640 1 -- 192.168.123.107:0/1723756330 >> 192.168.123.107:0/1723756330 conn(0x7f5c5c0fe250 
msgr2=0x7f5c5c100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:14.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.700+0000 7f5c61988640 1 -- 192.168.123.107:0/1723756330 shutdown_connections 2026-03-09T20:51:14.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.700+0000 7f5c61988640 1 -- 192.168.123.107:0/1723756330 wait complete. 2026-03-09T20:51:14.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.700+0000 7f5c61988640 1 Processor -- start 2026-03-09T20:51:14.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.700+0000 7f5c61988640 1 -- start start 2026-03-09T20:51:14.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.701+0000 7f5c61988640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5c5c102a80 0x7f5c5c19a450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.701+0000 7f5c5affd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5c5c102a80 0x7f5c5c19a450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.701+0000 7f5c5affd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5c5c102a80 0x7f5c5c19a450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:44786/0 (socket says 192.168.123.107:44786) 2026-03-09T20:51:14.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.701+0000 7f5c61988640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5c5c103c80 0x7f5c5c19a990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.702 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.701+0000 7f5c61988640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5c5c19aed0 con 0x7f5c5c102a80 2026-03-09T20:51:14.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.701+0000 7f5c5affd640 1 -- 192.168.123.107:0/3921607432 learned_addr learned my addr 192.168.123.107:0/3921607432 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:14.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.701+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5c5c19b040 con 0x7f5c5c103c80 2026-03-09T20:51:14.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.701+0000 7f5c5a7fc640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5c5c103c80 0x7f5c5c19a990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.702+0000 7f5c5a7fc640 1 -- 192.168.123.107:0/3921607432 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5c5c102a80 msgr2=0x7f5c5c19a450 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.702+0000 7f5c5a7fc640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5c5c102a80 0x7f5c5c19a450 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.702+0000 7f5c5a7fc640 1 -- 192.168.123.107:0/3921607432 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5c50009660 con 0x7f5c5c103c80 
2026-03-09T20:51:14.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.702+0000 7f5c5affd640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5c5c102a80 0x7f5c5c19a450 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T20:51:14.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.702+0000 7f5c5a7fc640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5c5c103c80 0x7f5c5c19a990 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f5c4400c8f0 tx=0x7f5c4400cdc0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:14.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.702+0000 7f5c60986640 1 -- 192.168.123.107:0/3921607432 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5c44004430 con 0x7f5c5c103c80 2026-03-09T20:51:14.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.702+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5c5c19fb20 con 0x7f5c5c103c80 2026-03-09T20:51:14.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.702+0000 7f5c60986640 1 -- 192.168.123.107:0/3921607432 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5c44004590 con 0x7f5c5c103c80 2026-03-09T20:51:14.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.702+0000 7f5c60986640 1 -- 192.168.123.107:0/3921607432 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5c4400f450 con 0x7f5c5c103c80 2026-03-09T20:51:14.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.702+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5c5c1a0020 con 0x7f5c5c103c80 2026-03-09T20:51:14.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.704+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5c28005350 con 0x7f5c5c103c80 2026-03-09T20:51:14.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.705+0000 7f5c60986640 1 -- 192.168.123.107:0/3921607432 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f5c44002770 con 0x7f5c5c103c80 2026-03-09T20:51:14.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.705+0000 7f5c60986640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5c340778e0 0x7f5c34079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.705+0000 7f5c60986640 1 -- 192.168.123.107:0/3921607432 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(65..65 src has 1..65) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f5c44099e30 con 0x7f5c5c103c80 2026-03-09T20:51:14.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.706+0000 7f5c5affd640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5c340778e0 0x7f5c34079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.706+0000 7f5c5affd640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5c340778e0 0x7f5c34079da0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto 
rx=0x7f5c50002c20 tx=0x7f5c5003a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:14.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.707+0000 7f5c60986640 1 -- 192.168.123.107:0/3921607432 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f5c440625e0 con 0x7f5c5c103c80 2026-03-09T20:51:14.828 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.826+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f5c280058d0 con 0x7f5c5c103c80 2026-03-09T20:51:14.829 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.827+0000 7f5c60986640 1 -- 192.168.123.107:0/3921607432 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1937 (secure 0 0 0) 0x7f5c4400d340 con 0x7f5c5c103c80 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 
2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:stopped 
2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:51:14.830 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 0 members: 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.830+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5c340778e0 msgr2=0x7f5c34079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.831 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.830+0000 7f5c61988640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5c340778e0 0x7f5c34079da0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f5c50002c20 tx=0x7f5c5003a040 comp rx=0 tx=0).stop 2026-03-09T20:51:14.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.831+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5c5c103c80 msgr2=0x7f5c5c19a990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.831+0000 7f5c61988640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5c5c103c80 0x7f5c5c19a990 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f5c4400c8f0 tx=0x7f5c4400cdc0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.831+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 shutdown_connections 2026-03-09T20:51:14.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.831+0000 7f5c61988640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5c340778e0 0x7f5c34079da0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.831+0000 7f5c61988640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5c5c103c80 0x7f5c5c19a990 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:51:14.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.831+0000 7f5c61988640 1 --2- 192.168.123.107:0/3921607432 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5c5c102a80 0x7f5c5c19a450 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.831+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 >> 192.168.123.107:0/3921607432 conn(0x7f5c5c0fe250 msgr2=0x7f5c5c0ffa00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:14.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.831+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 shutdown_connections 2026-03-09T20:51:14.832 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.831+0000 7f5c61988640 1 -- 192.168.123.107:0/3921607432 wait complete. 2026-03-09T20:51:14.880 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:14 vm10.local systemd[1]: Stopping Ceph osd.3 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 
2026-03-09T20:51:14.880 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[62912]: 2026-03-09T20:51:14.733+0000 7f124bf5f640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:51:14.880 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[62912]: 2026-03-09T20:51:14.733+0000 7f124bf5f640 -1 osd.3 65 *** Got signal Terminated *** 2026-03-09T20:51:14.880 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[62912]: 2026-03-09T20:51:14.733+0000 7f124bf5f640 -1 osd.3 65 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: Upgrade: osd.3 is safe to restart 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: Upgrade: Updating osd.3 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: Deploying daemon osd.3 on vm10 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: from='client.34252 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: from='client.34256 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/1415496042' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:14.890 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: osd.3 marked itself down and dead 2026-03-09T20:51:14.891 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:14 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3921607432' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:51:14.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.889+0000 7ff5b2421640 1 -- 192.168.123.107:0/1343515518 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff5ac0fead0 msgr2=0x7ff5ac0feed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:14.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.889+0000 7ff5b2421640 1 --2- 192.168.123.107:0/1343515518 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff5ac0fead0 0x7ff5ac0feed0 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7ff59c0099b0 tx=0x7ff59c02f220 comp rx=0 tx=0).stop 2026-03-09T20:51:14.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.893+0000 7ff5b2421640 1 -- 192.168.123.107:0/1343515518 shutdown_connections 2026-03-09T20:51:14.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.893+0000 7ff5b2421640 1 --2- 192.168.123.107:0/1343515518 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff5ac0ff7b0 0x7ff5ac0ffc30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.893+0000 7ff5b2421640 1 --2- 192.168.123.107:0/1343515518 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff5ac0fead0 0x7ff5ac0feed0 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.893+0000 7ff5b2421640 1 
-- 192.168.123.107:0/1343515518 >> 192.168.123.107:0/1343515518 conn(0x7ff5ac0fa5b0 msgr2=0x7ff5ac0fc9d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:14.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.894+0000 7ff5b2421640 1 -- 192.168.123.107:0/1343515518 shutdown_connections 2026-03-09T20:51:14.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.894+0000 7ff5b2421640 1 -- 192.168.123.107:0/1343515518 wait complete. 2026-03-09T20:51:14.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.895+0000 7ff5b2421640 1 Processor -- start 2026-03-09T20:51:14.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.895+0000 7ff5b2421640 1 -- start start 2026-03-09T20:51:14.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.895+0000 7ff5b2421640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff5ac0fead0 0x7ff5ac19e960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.895+0000 7ff5b2421640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff5ac0ff7b0 0x7ff5ac19eea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.895+0000 7ff5b2421640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff5ac19f470 con 0x7ff5ac0ff7b0 2026-03-09T20:51:14.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.895+0000 7ff5b2421640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff5ac19f5e0 con 0x7ff5ac0fead0 2026-03-09T20:51:14.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.895+0000 7ff5ab7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff5ac0ff7b0 0x7ff5ac19eea0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.895+0000 7ff5ab7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff5ac0ff7b0 0x7ff5ac19eea0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:44802/0 (socket says 192.168.123.107:44802) 2026-03-09T20:51:14.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.895+0000 7ff5ab7fe640 1 -- 192.168.123.107:0/2880218643 learned_addr learned my addr 192.168.123.107:0/2880218643 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:14.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.896+0000 7ff5ab7fe640 1 -- 192.168.123.107:0/2880218643 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff5ac0fead0 msgr2=0x7ff5ac19e960 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T20:51:14.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.896+0000 7ff5ab7fe640 1 --2- 192.168.123.107:0/2880218643 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff5ac0fead0 0x7ff5ac19e960 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:14.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.896+0000 7ff5ab7fe640 1 -- 192.168.123.107:0/2880218643 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff59c009660 con 0x7ff5ac0ff7b0 2026-03-09T20:51:14.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.896+0000 7ff5ab7fe640 1 --2- 192.168.123.107:0/2880218643 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff5ac0ff7b0 0x7ff5ac19eea0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7ff59800d900 tx=0x7ff59800ddd0 comp 
rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:14.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.896+0000 7ff5a97fa640 1 -- 192.168.123.107:0/2880218643 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff598004490 con 0x7ff5ac0ff7b0 2026-03-09T20:51:14.898 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.896+0000 7ff5a97fa640 1 -- 192.168.123.107:0/2880218643 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7ff59800bd00 con 0x7ff5ac0ff7b0 2026-03-09T20:51:14.898 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.897+0000 7ff5a97fa640 1 -- 192.168.123.107:0/2880218643 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff598010460 con 0x7ff5ac0ff7b0 2026-03-09T20:51:14.899 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.897+0000 7ff5b2421640 1 -- 192.168.123.107:0/2880218643 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff5ac1a4080 con 0x7ff5ac0ff7b0 2026-03-09T20:51:14.900 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.898+0000 7ff5b2421640 1 -- 192.168.123.107:0/2880218643 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff5ac1a4550 con 0x7ff5ac0ff7b0 2026-03-09T20:51:14.900 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.899+0000 7ff5a97fa640 1 -- 192.168.123.107:0/2880218643 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ff59800b840 con 0x7ff5ac0ff7b0 2026-03-09T20:51:14.901 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.899+0000 7ff5b2421640 1 -- 192.168.123.107:0/2880218643 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff578005350 con 0x7ff5ac0ff7b0 
2026-03-09T20:51:14.901 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.900+0000 7ff5a97fa640 1 --2- 192.168.123.107:0/2880218643 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff5800776d0 0x7ff580079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:14.901 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.900+0000 7ff5a97fa640 1 -- 192.168.123.107:0/2880218643 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(65..65 src has 1..65) v4 ==== 6394+0+0 (secure 0 0 0) 0x7ff598099310 con 0x7ff5ac0ff7b0 2026-03-09T20:51:14.903 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.902+0000 7ff5a97fa640 1 -- 192.168.123.107:0/2880218643 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff59809e050 con 0x7ff5ac0ff7b0 2026-03-09T20:51:14.903 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.903+0000 7ff5abfff640 1 --2- 192.168.123.107:0/2880218643 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff5800776d0 0x7ff580079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:14.907 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:14.903+0000 7ff5abfff640 1 --2- 192.168.123.107:0/2880218643 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff5800776d0 0x7ff580079b90 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7ff59c02f730 tx=0x7ff59c0023d0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:15.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.009+0000 7ff5b2421640 1 -- 192.168.123.107:0/2880218643 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: 
{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7ff578002bf0 con 0x7ff5800776d0 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.011+0000 7ff5a97fa640 1 -- 192.168.123.107:0/2880218643 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7ff578002bf0 con 0x7ff5800776d0 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: "mon", 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: "crash", 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: "mgr" 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "9/23 daemons upgraded", 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:51:15.012 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:51:15.013 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.012+0000 7ff5b2421640 1 -- 192.168.123.107:0/2880218643 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff5800776d0 msgr2=0x7ff580079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:15.013 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.012+0000 7ff5b2421640 1 --2- 192.168.123.107:0/2880218643 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff5800776d0 0x7ff580079b90 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7ff59c02f730 tx=0x7ff59c0023d0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.013 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.013+0000 7ff5b2421640 1 -- 192.168.123.107:0/2880218643 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff5ac0ff7b0 msgr2=0x7ff5ac19eea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:15.013 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.013+0000 7ff5b2421640 1 --2- 192.168.123.107:0/2880218643 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff5ac0ff7b0 0x7ff5ac19eea0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7ff59800d900 tx=0x7ff59800ddd0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.014 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.013+0000 7ff5b2421640 1 -- 192.168.123.107:0/2880218643 shutdown_connections 2026-03-09T20:51:15.014 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.013+0000 7ff5b2421640 1 --2- 192.168.123.107:0/2880218643 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff5800776d0 0x7ff580079b90 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.014 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.013+0000 7ff5b2421640 1 --2- 192.168.123.107:0/2880218643 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff5ac0ff7b0 0x7ff5ac19eea0 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.014 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.013+0000 7ff5b2421640 1 --2- 192.168.123.107:0/2880218643 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 
conn(0x7ff5ac0fead0 0x7ff5ac19e960 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.014 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.013+0000 7ff5b2421640 1 -- 192.168.123.107:0/2880218643 >> 192.168.123.107:0/2880218643 conn(0x7ff5ac0fa5b0 msgr2=0x7ff5ac0fbd90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:15.014 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.013+0000 7ff5b2421640 1 -- 192.168.123.107:0/2880218643 shutdown_connections 2026-03-09T20:51:15.014 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.013+0000 7ff5b2421640 1 -- 192.168.123.107:0/2880218643 wait complete. 2026-03-09T20:51:15.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.072+0000 7f52e1688640 1 -- 192.168.123.107:0/2212227487 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f52dc075ba0 msgr2=0x7f52dc075fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:15.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.072+0000 7f52e1688640 1 --2- 192.168.123.107:0/2212227487 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f52dc075ba0 0x7f52dc075fa0 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f52c4009a00 tx=0x7f52c402f280 comp rx=0 tx=0).stop 2026-03-09T20:51:15.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.072+0000 7f52e1688640 1 -- 192.168.123.107:0/2212227487 shutdown_connections 2026-03-09T20:51:15.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.072+0000 7f52e1688640 1 --2- 192.168.123.107:0/2212227487 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f52dc076df0 0x7f52dc077250 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.072+0000 7f52e1688640 1 --2- 192.168.123.107:0/2212227487 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f52dc075ba0 0x7f52dc075fa0 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.072+0000 7f52e1688640 1 -- 192.168.123.107:0/2212227487 >> 192.168.123.107:0/2212227487 conn(0x7f52dc0fe250 msgr2=0x7f52dc100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:15.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.073+0000 7f52e1688640 1 -- 192.168.123.107:0/2212227487 shutdown_connections 2026-03-09T20:51:15.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.073+0000 7f52e1688640 1 -- 192.168.123.107:0/2212227487 wait complete. 2026-03-09T20:51:15.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.073+0000 7f52e1688640 1 Processor -- start 2026-03-09T20:51:15.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.073+0000 7f52e1688640 1 -- start start 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.073+0000 7f52e1688640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f52dc075ba0 0x7f52dc071610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.073+0000 7f52e1688640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f52dc076df0 0x7f52dc071b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.073+0000 7f52e1688640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f52dc073050 con 0x7f52dc076df0 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.073+0000 7f52e1688640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f52dc0731c0 con 0x7f52dc075ba0 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.074+0000 7f52daffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f52dc075ba0 0x7f52dc071610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.074+0000 7f52daffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f52dc075ba0 0x7f52dc071610 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:39290/0 (socket says 192.168.123.107:39290) 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.074+0000 7f52daffd640 1 -- 192.168.123.107:0/1526261781 learned_addr learned my addr 192.168.123.107:0/1526261781 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.074+0000 7f52da7fc640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f52dc076df0 0x7f52dc071b50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.074+0000 7f52daffd640 1 -- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f52dc076df0 msgr2=0x7f52dc071b50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.074+0000 7f52daffd640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f52dc076df0 0x7f52dc071b50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.074+0000 7f52daffd640 1 -- 192.168.123.107:0/1526261781 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f52c4009660 con 0x7f52dc075ba0 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.074+0000 7f52da7fc640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f52dc076df0 0x7f52dc071b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:51:15.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.075+0000 7f52daffd640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f52dc075ba0 0x7f52dc071610 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f52c4005bb0 tx=0x7f52c4004300 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:15.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.075+0000 7f52bbfff640 1 -- 192.168.123.107:0/1526261781 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f52c402fcf0 con 0x7f52dc075ba0 2026-03-09T20:51:15.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.075+0000 7f52e1688640 1 -- 192.168.123.107:0/1526261781 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f52dc072150 con 0x7f52dc075ba0 2026-03-09T20:51:15.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.075+0000 7f52bbfff640 1 -- 192.168.123.107:0/1526261781 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f52c402fe50 con 0x7f52dc075ba0 2026-03-09T20:51:15.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.075+0000 7f52e1688640 1 -- 
192.168.123.107:0/1526261781 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f52dc072470 con 0x7f52dc075ba0 2026-03-09T20:51:15.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.076+0000 7f52bbfff640 1 -- 192.168.123.107:0/1526261781 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f52c402fcf0 con 0x7f52dc075ba0 2026-03-09T20:51:15.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.076+0000 7f52e1688640 1 -- 192.168.123.107:0/1526261781 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f52a8005350 con 0x7f52dc075ba0 2026-03-09T20:51:15.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.077+0000 7f52bbfff640 1 -- 192.168.123.107:0/1526261781 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f52c403f070 con 0x7f52dc075ba0 2026-03-09T20:51:15.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.078+0000 7f52bbfff640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f52b40778f0 0x7f52b4079db0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:15.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.078+0000 7f52bbfff640 1 -- 192.168.123.107:0/1526261781 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(65..65 src has 1..65) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f52c40901a0 con 0x7f52dc075ba0 2026-03-09T20:51:15.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.078+0000 7f52da7fc640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f52b40778f0 0x7f52b4079db0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T20:51:15.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.078+0000 7f52da7fc640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f52b40778f0 0x7f52b4079db0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f52dc072d40 tx=0x7f52d0005f50 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:15.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.079+0000 7f52bbfff640 1 -- 192.168.123.107:0/1526261781 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f52c40834e0 con 0x7f52dc075ba0 2026-03-09T20:51:15.215 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.214+0000 7f52e1688640 1 -- 192.168.123.107:0/1526261781 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f52a80058d0 con 0x7f52dc075ba0 2026-03-09T20:51:15.220 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.215+0000 7f52bbfff640 1 -- 192.168.123.107:0/1526261781 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f52c40834e0 con 0x7f52dc075ba0 2026-03-09T20:51:15.220 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:51:15.222 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.222+0000 7f52e1688640 1 -- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f52b40778f0 msgr2=0x7f52b4079db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:15.222 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.222+0000 7f52e1688640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f52b40778f0 
0x7f52b4079db0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f52dc072d40 tx=0x7f52d0005f50 comp rx=0 tx=0).stop 2026-03-09T20:51:15.223 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.222+0000 7f52e1688640 1 -- 192.168.123.107:0/1526261781 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f52dc075ba0 msgr2=0x7f52dc071610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:15.223 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.222+0000 7f52e1688640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f52dc075ba0 0x7f52dc071610 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f52c4005bb0 tx=0x7f52c4004300 comp rx=0 tx=0).stop 2026-03-09T20:51:15.223 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.222+0000 7f52e1688640 1 -- 192.168.123.107:0/1526261781 shutdown_connections 2026-03-09T20:51:15.223 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.222+0000 7f52e1688640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f52b40778f0 0x7f52b4079db0 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.223 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.222+0000 7f52e1688640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f52dc076df0 0x7f52dc071b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.223 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.222+0000 7f52e1688640 1 --2- 192.168.123.107:0/1526261781 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f52dc075ba0 0x7f52dc071610 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:15.223 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.222+0000 
7f52e1688640 1 -- 192.168.123.107:0/1526261781 >> 192.168.123.107:0/1526261781 conn(0x7f52dc0fe250 msgr2=0x7f52dc0ffd90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:15.223 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.223+0000 7f52e1688640 1 -- 192.168.123.107:0/1526261781 shutdown_connections 2026-03-09T20:51:15.223 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:15.223+0000 7f52e1688640 1 -- 192.168.123.107:0/1526261781 wait complete. 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: Upgrade: osd.3 is safe to restart 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: Upgrade: Updating osd.3 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: Deploying daemon osd.3 on vm10 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: from='client.34252 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: 
dispatch 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: from='client.34256 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/1415496042' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:15.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: osd.3 marked itself down and dead 2026-03-09T20:51:15.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:14 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3921607432' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:51:15.787 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:15 vm10.local podman[113553]: 2026-03-09 20:51:15.624739006 +0000 UTC m=+0.907943172 container died c4d7e2279ba1d43f24e1e30d18a3b133bc5d0857f7f68db37ae92f4e63cc8fe7 (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, 
org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:51:15.787 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:15 vm10.local podman[113553]: 2026-03-09 20:51:15.649403432 +0000 UTC m=+0.932607598 container remove c4d7e2279ba1d43f24e1e30d18a3b133bc5d0857f7f68db37ae92f4e63cc8fe7 (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) 2026-03-09T20:51:15.787 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:15 vm10.local bash[113553]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:15 vm07.local ceph-mon[112105]: from='client.34260 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:15 vm07.local ceph-mon[112105]: pgmap v106: 65 pgs: 65 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.6 KiB/s rd, 3 op/s 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:15 vm07.local ceph-mon[112105]: from='client.34272 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:51:15 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/1526261781' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:15 vm07.local ceph-mon[112105]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:15 vm07.local ceph-mon[112105]: osdmap e66: 6 total, 5 up, 6 in 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:15 vm10.local ceph-mon[103526]: from='client.34260 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:15 vm10.local ceph-mon[103526]: pgmap v106: 65 pgs: 65 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.6 KiB/s rd, 3 op/s 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:15 vm10.local ceph-mon[103526]: from='client.34272 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:15 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/1526261781' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:15 vm10.local ceph-mon[103526]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T20:51:16.134 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:15 vm10.local ceph-mon[103526]: osdmap e66: 6 total, 5 up, 6 in 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:15 vm10.local podman[113621]: 2026-03-09 20:51:15.808201095 +0000 UTC m=+0.020779706 container create 97f82298103b7736507caa610d41541208774f3b94548092455803c3d6d2ea65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.vendor=CentOS) 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:15 vm10.local podman[113621]: 2026-03-09 20:51:15.860139632 +0000 UTC m=+0.072718243 container init 97f82298103b7736507caa610d41541208774f3b94548092455803c3d6d2ea65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, 
CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default) 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:15 vm10.local podman[113621]: 2026-03-09 20:51:15.863293424 +0000 UTC m=+0.075872035 container start 97f82298103b7736507caa610d41541208774f3b94548092455803c3d6d2ea65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-deactivate, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:15 vm10.local podman[113621]: 2026-03-09 20:51:15.864176257 +0000 UTC m=+0.076754868 container attach 97f82298103b7736507caa610d41541208774f3b94548092455803c3d6d2ea65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-deactivate, 
org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default) 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:15 vm10.local podman[113621]: 2026-03-09 20:51:15.801169043 +0000 UTC m=+0.013747664 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:15 vm10.local conmon[113632]: conmon 97f82298103b7736507c : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97f82298103b7736507caa610d41541208774f3b94548092455803c3d6d2ea65.scope/container/memory.events 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:15 vm10.local podman[113621]: 2026-03-09 20:51:15.999881166 +0000 UTC m=+0.212459777 container died 97f82298103b7736507caa610d41541208774f3b94548092455803c3d6d2ea65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default) 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local podman[113621]: 2026-03-09 20:51:16.025124516 +0000 UTC m=+0.237703127 container remove 97f82298103b7736507caa610d41541208774f3b94548092455803c3d6d2ea65 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-deactivate, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.3.service: Deactivated successfully. 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local systemd[1]: Stopped Ceph osd.3 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:51:16.134 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.3.service: Consumed 58.274s CPU time. 
2026-03-09T20:51:16.486 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local systemd[1]: Starting Ceph osd.3 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T20:51:16.486 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local podman[113725]: 2026-03-09 20:51:16.333546581 +0000 UTC m=+0.020874053 container create 545877e83094dc4c6d3b97843a917a7e83140cbf5133d95ed660e40529dca82d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0) 2026-03-09T20:51:16.486 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local podman[113725]: 2026-03-09 20:51:16.381012055 +0000 UTC m=+0.068339537 container init 545877e83094dc4c6d3b97843a917a7e83140cbf5133d95ed660e40529dca82d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T20:51:16.486 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local podman[113725]: 2026-03-09 20:51:16.384205914 +0000 UTC m=+0.071533396 container start 545877e83094dc4c6d3b97843a917a7e83140cbf5133d95ed660e40529dca82d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3) 2026-03-09T20:51:16.486 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local podman[113725]: 2026-03-09 20:51:16.385118101 +0000 UTC m=+0.072445583 container attach 545877e83094dc4c6d3b97843a917a7e83140cbf5133d95ed660e40529dca82d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release 
Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True) 2026-03-09T20:51:16.486 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local podman[113725]: 2026-03-09 20:51:16.321555456 +0000 UTC m=+0.008882948 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:51:16.486 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:16.486 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local bash[113725]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:16.486 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:16.486 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:16 vm10.local bash[113725]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:17.287 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T20:51:17.287 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113725]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T20:51:17.287 
INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113725]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:17.287 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:17.287 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:17.287 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113725]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:17.287 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-09T20:51:17.287 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113725]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-09T20:51:17.287 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-4c2c98ad-125f-43ab-a0b5-1b383eac9886/osd-block-5baaeff4-3fa0-43d6-81ca-ff28de0673a4 --path /var/lib/ceph/osd/ceph-3 --no-mon-config 2026-03-09T20:51:17.287 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113725]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-4c2c98ad-125f-43ab-a0b5-1b383eac9886/osd-block-5baaeff4-3fa0-43d6-81ca-ff28de0673a4 --path /var/lib/ceph/osd/ceph-3 --no-mon-config 2026-03-09T20:51:17.703 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-mon[103526]: pgmap v108: 65 pgs: 18 peering, 6 stale+active+clean, 41 
active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail 2026-03-09T20:51:17.703 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-mon[103526]: osdmap e67: 6 total, 5 up, 6 in 2026-03-09T20:51:17.703 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-mon[103526]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: Running command: /usr/bin/ln -snf /dev/ceph-4c2c98ad-125f-43ab-a0b5-1b383eac9886/osd-block-5baaeff4-3fa0-43d6-81ca-ff28de0673a4 /var/lib/ceph/osd/ceph-3/block 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113725]: Running command: /usr/bin/ln -snf /dev/ceph-4c2c98ad-125f-43ab-a0b5-1b383eac9886/osd-block-5baaeff4-3fa0-43d6-81ca-ff28de0673a4 /var/lib/ceph/osd/ceph-3/block 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113725]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113725]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 
2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113725]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate[113740]: --> ceph-volume lvm activate successful for osd ID: 3 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113725]: --> ceph-volume lvm activate successful for osd ID: 3 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local podman[113725]: 2026-03-09 20:51:17.478106664 +0000 UTC m=+1.165434146 container died 545877e83094dc4c6d3b97843a917a7e83140cbf5133d95ed660e40529dca82d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local podman[113725]: 2026-03-09 20:51:17.498228487 +0000 UTC m=+1.185555969 container remove 545877e83094dc4c6d3b97843a917a7e83140cbf5133d95ed660e40529dca82d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-activate, 
org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2) 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local podman[113987]: 2026-03-09 20:51:17.622884265 +0000 UTC m=+0.034656159 container create c8d2b453e9e22355c1b85786b88570c44183fda41bdcad0752dc98ae5026bf72 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2) 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local podman[113987]: 2026-03-09 20:51:17.661445543 +0000 UTC m=+0.073217437 container init c8d2b453e9e22355c1b85786b88570c44183fda41bdcad0752dc98ae5026bf72 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local podman[113987]: 2026-03-09 20:51:17.666058858 +0000 UTC m=+0.077830752 container start c8d2b453e9e22355c1b85786b88570c44183fda41bdcad0752dc98ae5026bf72 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local bash[113987]: c8d2b453e9e22355c1b85786b88570c44183fda41bdcad0752dc98ae5026bf72 
2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local podman[113987]: 2026-03-09 20:51:17.600703408 +0000 UTC m=+0.012475312 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local systemd[1]: Started Ceph osd.3 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:51:17.704 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:17 vm10.local ceph-osd[114003]: -- 192.168.123.110:0/3355201175 <== mon.1 v2:192.168.123.110:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55cb82376960 con 0x55cb83168000 2026-03-09T20:51:17.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:17 vm07.local ceph-mon[112105]: pgmap v108: 65 pgs: 18 peering, 6 stale+active+clean, 41 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail 2026-03-09T20:51:17.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:17 vm07.local ceph-mon[112105]: osdmap e67: 6 total, 5 up, 6 in 2026-03-09T20:51:17.795 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:17 vm07.local ceph-mon[112105]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-09T20:51:18.704 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:18 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:18.704 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:18 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:18.704 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:18 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:18.704 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:18 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:18.704 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:18 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:18.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:18 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:18.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:18 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:18.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:18 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:18.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:18 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:18.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:18 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:19.037 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:18 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[113999]: 2026-03-09T20:51:18.764+0000 7fce35a87740 -1 Falling back to public interface 2026-03-09T20:51:19.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:19 vm10.local ceph-mon[103526]: pgmap v110: 65 pgs: 4 active+undersized, 18 peering, 3 stale+active+clean, 3 active+undersized+degraded, 37 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 9/285 objects degraded (3.158%) 2026-03-09T20:51:19.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:19 vm10.local ceph-mon[103526]: Health check failed: Degraded data 
redundancy: 9/285 objects degraded (3.158%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:19.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:19.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:20.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:19 vm07.local ceph-mon[112105]: pgmap v110: 65 pgs: 4 active+undersized, 18 peering, 3 stale+active+clean, 3 active+undersized+degraded, 37 active+clean; 211 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 9/285 objects degraded (3.158%) 2026-03-09T20:51:20.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:19 vm07.local ceph-mon[112105]: Health check failed: Degraded data redundancy: 9/285 objects degraded (3.158%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:20.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:20.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:21.537 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:51:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:21 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T20:51:21.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:21 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:21.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:21 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:21.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:21 
vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:21.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:21 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:51:21.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:21 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:21.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:21 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:21.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:21 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:21.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:21 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:21.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:21 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:21.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:21 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T20:51:22.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:22 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T20:51:22.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:22 vm10.local ceph-mon[103526]: Upgrade: unsafe to stop osd(s) at this time (14 PGs are or would become offline) 2026-03-09T20:51:22.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:22 vm10.local ceph-mon[103526]: pgmap v111: 65 pgs: 9 active+undersized, 18 peering, 11 active+undersized+degraded, 27 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 30/285 objects degraded (10.526%) 2026-03-09T20:51:22.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:22 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T20:51:22.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:22 vm07.local ceph-mon[112105]: Upgrade: unsafe to stop osd(s) at this time (14 PGs are or would become offline) 2026-03-09T20:51:22.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:22 vm07.local ceph-mon[112105]: pgmap v111: 65 pgs: 9 active+undersized, 18 peering, 11 active+undersized+degraded, 27 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 30/285 objects degraded (10.526%) 2026-03-09T20:51:23.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:23 vm10.local ceph-mon[103526]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering) 2026-03-09T20:51:23.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:23 vm07.local ceph-mon[112105]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering) 2026-03-09T20:51:24.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:24 vm10.local ceph-mon[103526]: pgmap v112: 65 pgs: 19 active+undersized, 19 active+undersized+degraded, 27 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 63/285 objects degraded (22.105%) 2026-03-09T20:51:24.537 
INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:24 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[113999]: 2026-03-09T20:51:24.352+0000 7fce35a87740 -1 osd.3 65 log_to_monitors true 2026-03-09T20:51:24.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:24 vm07.local ceph-mon[112105]: pgmap v112: 65 pgs: 19 active+undersized, 19 active+undersized+degraded, 27 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 63/285 objects degraded (22.105%) 2026-03-09T20:51:25.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:25 vm10.local ceph-mon[103526]: from='osd.3 [v2:192.168.123.110:6800/1140204796,v1:192.168.123.110:6801/1140204796]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:51:25.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:25 vm10.local ceph-mon[103526]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:51:25.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:25 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:25.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:25 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:25.537 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 20:51:25 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[113999]: 2026-03-09T20:51:25.280+0000 7fce2d020640 -1 osd.3 65 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:51:25.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:25 vm07.local ceph-mon[112105]: from='osd.3 [v2:192.168.123.110:6800/1140204796,v1:192.168.123.110:6801/1140204796]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", 
"class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:51:25.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:25 vm07.local ceph-mon[112105]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:51:25.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:25 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:25.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:25 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:26.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:26 vm07.local ceph-mon[112105]: pgmap v113: 65 pgs: 19 active+undersized, 19 active+undersized+degraded, 27 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 63/285 objects degraded (22.105%) 2026-03-09T20:51:26.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:26 vm07.local ceph-mon[112105]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T20:51:26.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:26 vm07.local ceph-mon[112105]: osdmap e68: 6 total, 5 up, 6 in 2026-03-09T20:51:26.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:26 vm07.local ceph-mon[112105]: from='osd.3 [v2:192.168.123.110:6800/1140204796,v1:192.168.123.110:6801/1140204796]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:51:26.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:26 vm07.local ceph-mon[112105]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:51:27.037 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:26 vm10.local ceph-mon[103526]: pgmap v113: 65 pgs: 19 active+undersized, 19 active+undersized+degraded, 27 active+clean; 211 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 63/285 objects degraded (22.105%) 2026-03-09T20:51:27.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:26 vm10.local ceph-mon[103526]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T20:51:27.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:26 vm10.local ceph-mon[103526]: osdmap e68: 6 total, 5 up, 6 in 2026-03-09T20:51:27.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:26 vm10.local ceph-mon[103526]: from='osd.3 [v2:192.168.123.110:6800/1140204796,v1:192.168.123.110:6801/1140204796]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:51:27.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:26 vm10.local ceph-mon[103526]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:51:27.915 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:27 vm07.local ceph-mon[112105]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T20:51:27.915 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:27 vm07.local ceph-mon[112105]: pgmap v115: 65 pgs: 19 active+undersized, 19 active+undersized+degraded, 27 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 822 B/s rd, 2 op/s; 63/285 objects degraded (22.105%) 2026-03-09T20:51:27.916 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:27 vm07.local ceph-mon[112105]: osd.3 [v2:192.168.123.110:6800/1140204796,v1:192.168.123.110:6801/1140204796] boot 2026-03-09T20:51:27.916 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:27 vm07.local 
ceph-mon[112105]: osdmap e69: 6 total, 6 up, 6 in 2026-03-09T20:51:27.916 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:27 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:51:28.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:27 vm10.local ceph-mon[103526]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T20:51:28.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:27 vm10.local ceph-mon[103526]: pgmap v115: 65 pgs: 19 active+undersized, 19 active+undersized+degraded, 27 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 822 B/s rd, 2 op/s; 63/285 objects degraded (22.105%) 2026-03-09T20:51:28.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:27 vm10.local ceph-mon[103526]: osd.3 [v2:192.168.123.110:6800/1140204796,v1:192.168.123.110:6801/1140204796] boot 2026-03-09T20:51:28.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:27 vm10.local ceph-mon[103526]: osdmap e69: 6 total, 6 up, 6 in 2026-03-09T20:51:28.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:27 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:51:29.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:29 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 63/285 objects degraded (22.105%), 19 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:29.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:29 vm10.local ceph-mon[103526]: osdmap e70: 6 total, 6 up, 6 in 2026-03-09T20:51:29.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:28 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 63/285 objects degraded (22.105%), 19 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:29.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:28 
vm07.local ceph-mon[112105]: osdmap e70: 6 total, 6 up, 6 in 2026-03-09T20:51:30.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:30 vm10.local ceph-mon[103526]: pgmap v118: 65 pgs: 4 peering, 16 active+undersized, 18 active+undersized+degraded, 27 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s; 60/285 objects degraded (21.053%) 2026-03-09T20:51:30.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:30 vm07.local ceph-mon[112105]: pgmap v118: 65 pgs: 4 peering, 16 active+undersized, 18 active+undersized+degraded, 27 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s; 60/285 objects degraded (21.053%) 2026-03-09T20:51:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:32 vm10.local ceph-mon[103526]: pgmap v119: 65 pgs: 4 peering, 6 active+undersized, 2 active+undersized+degraded, 53 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 2.0 KiB/s rd, 3 op/s; 9/285 objects degraded (3.158%) 2026-03-09T20:51:32.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:32 vm07.local ceph-mon[112105]: pgmap v119: 65 pgs: 4 peering, 6 active+undersized, 2 active+undersized+degraded, 53 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 2.0 KiB/s rd, 3 op/s; 9/285 objects degraded (3.158%) 2026-03-09T20:51:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:33 vm10.local ceph-mon[103526]: pgmap v120: 65 pgs: 65 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 2.3 KiB/s rd, 4 op/s 2026-03-09T20:51:33.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:33 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 9/285 objects degraded (3.158%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:33.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:33 vm07.local ceph-mon[112105]: pgmap v120: 65 pgs: 65 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 2.3 KiB/s rd, 4 
op/s 2026-03-09T20:51:33.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:33 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 9/285 objects degraded (3.158%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:34.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:34 vm10.local ceph-mon[103526]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 9/285 objects degraded (3.158%), 2 pgs degraded) 2026-03-09T20:51:34.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:34 vm10.local ceph-mon[103526]: Cluster is now healthy 2026-03-09T20:51:34.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:34 vm07.local ceph-mon[112105]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 9/285 objects degraded (3.158%), 2 pgs degraded) 2026-03-09T20:51:34.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:34 vm07.local ceph-mon[112105]: Cluster is now healthy 2026-03-09T20:51:35.565 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:35 vm10.local ceph-mon[103526]: pgmap v121: 65 pgs: 65 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 895 B/s rd, 2 op/s 2026-03-09T20:51:35.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:35 vm07.local ceph-mon[112105]: pgmap v121: 65 pgs: 65 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 895 B/s rd, 2 op/s 2026-03-09T20:51:36.429 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T20:51:36.429 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T20:51:36.429 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-mon[103526]: Upgrade: osd.4 is safe to restart 2026-03-09T20:51:36.429 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-mon[103526]: Upgrade: Updating osd.4 2026-03-09T20:51:36.429 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:36.429 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T20:51:36.429 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:36.429 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-mon[103526]: Deploying daemon osd.4 on vm10 2026-03-09T20:51:36.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:36 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T20:51:36.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:36 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T20:51:36.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:36 vm07.local ceph-mon[112105]: Upgrade: osd.4 is safe to restart 2026-03-09T20:51:36.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:36 vm07.local ceph-mon[112105]: Upgrade: Updating osd.4 2026-03-09T20:51:36.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:36 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:36.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:36 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T20:51:36.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:36 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:36.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:36 vm07.local ceph-mon[112105]: Deploying daemon osd.4 on vm10 2026-03-09T20:51:36.704 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:36 vm10.local systemd[1]: Stopping Ceph osd.4 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 
2026-03-09T20:51:37.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[69362]: 2026-03-09T20:51:36.833+0000 7f6756bc6640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:51:37.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[69362]: 2026-03-09T20:51:36.833+0000 7f6756bc6640 -1 osd.4 70 *** Got signal Terminated *** 2026-03-09T20:51:37.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:36 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[69362]: 2026-03-09T20:51:36.833+0000 7f6756bc6640 -1 osd.4 70 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:51:37.562 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:37 vm10.local ceph-mon[103526]: pgmap v122: 65 pgs: 65 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.8 KiB/s rd, 3 op/s 2026-03-09T20:51:37.562 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:37 vm10.local ceph-mon[103526]: osd.4 marked itself down and dead 2026-03-09T20:51:37.562 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local podman[118930]: 2026-03-09 20:51:37.38674369 +0000 UTC m=+0.616131433 container died 37651efc9a7d6802d7af5e99beda74973f792c7966f9fc2b578f4600c282f207 (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, 
CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:51:37.562 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local podman[118930]: 2026-03-09 20:51:37.405818923 +0000 UTC m=+0.635206666 container remove 37651efc9a7d6802d7af5e99beda74973f792c7966f9fc2b578f4600c282f207 (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:51:37.562 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local bash[118930]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4 2026-03-09T20:51:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:37 vm07.local ceph-mon[112105]: pgmap v122: 65 pgs: 65 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.8 KiB/s rd, 3 op/s 2026-03-09T20:51:37.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:37 vm07.local ceph-mon[112105]: osd.4 marked itself down and dead 2026-03-09T20:51:37.884 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local podman[118997]: 2026-03-09 20:51:37.562412436 
+0000 UTC m=+0.021268871 container create 36c86d4460b89d5e4b970da553774a74a4dbd99afd1124ffca089d029ae89553 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T20:51:37.884 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local podman[118997]: 2026-03-09 20:51:37.598339804 +0000 UTC m=+0.057196239 container init 36c86d4460b89d5e4b970da553774a74a4dbd99afd1124ffca089d029ae89553 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:51:37.884 
INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local podman[118997]: 2026-03-09 20:51:37.604680632 +0000 UTC m=+0.063537067 container start 36c86d4460b89d5e4b970da553774a74a4dbd99afd1124ffca089d029ae89553 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:51:37.884 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local podman[118997]: 2026-03-09 20:51:37.609665994 +0000 UTC m=+0.068522429 container attach 36c86d4460b89d5e4b970da553774a74a4dbd99afd1124ffca089d029ae89553 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20260223, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:51:37.884 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local podman[118997]: 2026-03-09 20:51:37.5541011 +0000 UTC m=+0.012957546 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:51:37.884 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local podman[119017]: 2026-03-09 20:51:37.762519968 +0000 UTC m=+0.011559107 container died 36c86d4460b89d5e4b970da553774a74a4dbd99afd1124ffca089d029ae89553 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2) 2026-03-09T20:51:37.884 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local podman[119017]: 2026-03-09 20:51:37.780897706 +0000 UTC m=+0.029936855 container remove 36c86d4460b89d5e4b970da553774a74a4dbd99afd1124ffca089d029ae89553 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2) 2026-03-09T20:51:37.884 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.4.service: Deactivated successfully. 2026-03-09T20:51:37.884 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local systemd[1]: Stopped Ceph osd.4 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 2026-03-09T20:51:37.884 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.4.service: Consumed 50.297s CPU time. 2026-03-09T20:51:38.246 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:37 vm10.local systemd[1]: Starting Ceph osd.4 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 
2026-03-09T20:51:38.247 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local podman[119103]: 2026-03-09 20:51:38.098534081 +0000 UTC m=+0.022889014 container create 6920f9a04d92bd6c417065a39bcf77ec2ca0eef653306d93c7528878454cb06d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:51:38.247 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local podman[119103]: 2026-03-09 20:51:38.150685445 +0000 UTC m=+0.075040388 container init 6920f9a04d92bd6c417065a39bcf77ec2ca0eef653306d93c7528878454cb06d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_REF=squid) 2026-03-09T20:51:38.247 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local podman[119103]: 2026-03-09 20:51:38.154019486 +0000 UTC m=+0.078374419 container start 6920f9a04d92bd6c417065a39bcf77ec2ca0eef653306d93c7528878454cb06d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3) 2026-03-09T20:51:38.247 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local podman[119103]: 2026-03-09 20:51:38.155642132 +0000 UTC m=+0.079997065 container attach 6920f9a04d92bd6c417065a39bcf77ec2ca0eef653306d93c7528878454cb06d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , 
org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0) 2026-03-09T20:51:38.247 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local podman[119103]: 2026-03-09 20:51:38.090001209 +0000 UTC m=+0.014356153 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:51:38.247 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:38.247 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local bash[119103]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:38.247 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:38.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:38 vm07.local ceph-mon[112105]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T20:51:38.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:38 vm07.local ceph-mon[112105]: osdmap e71: 6 total, 5 up, 6 in 2026-03-09T20:51:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:38 vm10.local ceph-mon[103526]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T20:51:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:38 vm10.local ceph-mon[103526]: osdmap e71: 6 total, 5 up, 6 in 2026-03-09T20:51:38.537 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local bash[119103]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local 
ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T20:51:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local bash[119103]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T20:51:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local bash[119103]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local bash[119103]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T20:51:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-09T20:51:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local bash[119103]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-09T20:51:39.038 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-b6130be3-c748-4df2-ad02-05a15e8118b2/osd-block-5ced8315-7f95-41be-88c5-e29628c579a6 --path /var/lib/ceph/osd/ceph-4 --no-mon-config 2026-03-09T20:51:39.038 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:38 vm10.local bash[119103]: Running command: /usr/bin/ceph-bluestore-tool 
--cluster=ceph prime-osd-dir --dev /dev/ceph-b6130be3-c748-4df2-ad02-05a15e8118b2/osd-block-5ced8315-7f95-41be-88c5-e29628c579a6 --path /var/lib/ceph/osd/ceph-4 --no-mon-config 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: Running command: /usr/bin/ln -snf /dev/ceph-b6130be3-c748-4df2-ad02-05a15e8118b2/osd-block-5ced8315-7f95-41be-88c5-e29628c579a6 /var/lib/ceph/osd/ceph-4/block 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local bash[119103]: Running command: /usr/bin/ln -snf /dev/ceph-b6130be3-c748-4df2-ad02-05a15e8118b2/osd-block-5ced8315-7f95-41be-88c5-e29628c579a6 /var/lib/ceph/osd/ceph-4/block 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local bash[119103]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local bash[119103]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local bash[119103]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-09T20:51:39.330 
INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate[119114]: --> ceph-volume lvm activate successful for osd ID: 4 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local bash[119103]: --> ceph-volume lvm activate successful for osd ID: 4 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local podman[119103]: 2026-03-09 20:51:39.111428393 +0000 UTC m=+1.035783326 container died 6920f9a04d92bd6c417065a39bcf77ec2ca0eef653306d93c7528878454cb06d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.license=GPLv2) 2026-03-09T20:51:39.330 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local podman[119103]: 2026-03-09 20:51:39.13213986 +0000 UTC m=+1.056494793 container remove 6920f9a04d92bd6c417065a39bcf77ec2ca0eef653306d93c7528878454cb06d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T20:51:39.607 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:39 vm10.local ceph-mon[103526]: osdmap e72: 6 total, 5 up, 6 in 2026-03-09T20:51:39.608 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:39 vm10.local ceph-mon[103526]: pgmap v125: 65 pgs: 6 stale+active+clean, 59 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.9 KiB/s rd, 3 op/s 2026-03-09T20:51:39.608 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local podman[119355]: 2026-03-09 20:51:39.231268112 +0000 UTC m=+0.012039264 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:51:39.608 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local podman[119355]: 2026-03-09 20:51:39.333890646 +0000 UTC m=+0.114661798 container create d0231a0cf2beb1439d52638e900a919bf5408d14564808ad1a36ef0067ef9297 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) 2026-03-09T20:51:39.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:39 vm07.local ceph-mon[112105]: osdmap e72: 6 total, 5 up, 6 in 2026-03-09T20:51:39.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:39 vm07.local ceph-mon[112105]: pgmap v125: 65 pgs: 6 stale+active+clean, 59 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.9 KiB/s rd, 3 op/s 2026-03-09T20:51:40.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local podman[119355]: 2026-03-09 20:51:39.608169286 +0000 UTC m=+0.388940428 container init d0231a0cf2beb1439d52638e900a919bf5408d14564808ad1a36ef0067ef9297 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) 2026-03-09T20:51:40.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local podman[119355]: 2026-03-09 20:51:39.611870254 +0000 UTC m=+0.392641406 container start d0231a0cf2beb1439d52638e900a919bf5408d14564808ad1a36ef0067ef9297 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:51:40.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local ceph-osd[119369]: -- 192.168.123.110:0/488727267 <== mon.1 v2:192.168.123.110:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55f27613c960 con 0x55f27611bc00 2026-03-09T20:51:40.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local bash[119355]: d0231a0cf2beb1439d52638e900a919bf5408d14564808ad1a36ef0067ef9297 2026-03-09T20:51:40.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:39 vm10.local systemd[1]: Started Ceph osd.4 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4. 
2026-03-09T20:51:41.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:40 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[119365]: 2026-03-09T20:51:40.689+0000 7f20d9535740 -1 Falling back to public interface 2026-03-09T20:51:41.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:41.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:41.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:41.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:41 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:41.520 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:41.520 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:41.520 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:41 vm10.local ceph-mon[103526]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:41.520 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:41.520 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:41.520 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:41 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:42.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:42 vm07.local ceph-mon[112105]: pgmap v126: 65 pgs: 12 active+undersized, 3 stale+active+clean, 9 active+undersized+degraded, 41 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.5 KiB/s rd, 3 op/s; 32/285 objects degraded (11.228%) 2026-03-09T20:51:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:42 vm07.local ceph-mon[112105]: Health check failed: Degraded data redundancy: 32/285 objects degraded (11.228%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:42.517 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:42 vm10.local ceph-mon[103526]: pgmap v126: 65 pgs: 12 active+undersized, 3 stale+active+clean, 9 active+undersized+degraded, 41 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.5 KiB/s rd, 3 op/s; 32/285 objects degraded (11.228%) 2026-03-09T20:51:42.517 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:42 vm10.local 
ceph-mon[103526]: Health check failed: Degraded data redundancy: 32/285 objects degraded (11.228%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:42.517 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:42.517 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:43.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:43.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:43 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:43.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:43 vm10.local ceph-mon[103526]: pgmap v127: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 48/285 objects degraded (16.842%) 2026-03-09T20:51:43.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:43.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:43 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:43.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:43 vm07.local ceph-mon[112105]: pgmap v127: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 48/285 objects degraded (16.842%) 2026-03-09T20:51:45.282 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.280+0000 7fbcc8a59640 1 -- 192.168.123.107:0/2565903332 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7fbcc41001c0 msgr2=0x7fbcc40fe2a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.282 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.280+0000 7fbcc8a59640 1 --2- 192.168.123.107:0/2565903332 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcc41001c0 0x7fbcc40fe2a0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7fbcac0099b0 tx=0x7fbcac02f220 comp rx=0 tx=0).stop 2026-03-09T20:51:45.282 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.281+0000 7fbcc8a59640 1 -- 192.168.123.107:0/2565903332 shutdown_connections 2026-03-09T20:51:45.282 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.281+0000 7fbcc8a59640 1 --2- 192.168.123.107:0/2565903332 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcc41001c0 0x7fbcc40fe2a0 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.282 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.281+0000 7fbcc8a59640 1 --2- 192.168.123.107:0/2565903332 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcc40ff810 0x7fbcc40ffbf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.282 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.281+0000 7fbcc8a59640 1 -- 192.168.123.107:0/2565903332 >> 192.168.123.107:0/2565903332 conn(0x7fbcc40f9f80 msgr2=0x7fbcc40fc3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:45.282 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.281+0000 7fbcc8a59640 1 -- 192.168.123.107:0/2565903332 shutdown_connections 2026-03-09T20:51:45.282 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.281+0000 7fbcc8a59640 1 -- 192.168.123.107:0/2565903332 wait complete. 
2026-03-09T20:51:45.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.282+0000 7fbcc8a59640 1 Processor -- start 2026-03-09T20:51:45.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.282+0000 7fbcc8a59640 1 -- start start 2026-03-09T20:51:45.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.282+0000 7fbcc8a59640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcc40ff810 0x7fbcc4195db0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.282+0000 7fbcc8a59640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcc41001c0 0x7fbcc41962f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.282+0000 7fbcc8a59640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbcc41968c0 con 0x7fbcc41001c0 2026-03-09T20:51:45.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.282+0000 7fbcc8a59640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbcc4196a30 con 0x7fbcc40ff810 2026-03-09T20:51:45.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.282+0000 7fbcc2575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcc41001c0 0x7fbcc41962f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:45.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.282+0000 7fbcc2575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcc41001c0 0x7fbcc41962f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:43058/0 (socket says 192.168.123.107:43058) 2026-03-09T20:51:45.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.282+0000 7fbcc2575640 1 -- 192.168.123.107:0/780690581 learned_addr learned my addr 192.168.123.107:0/780690581 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:45.283 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.282+0000 7fbcc2575640 1 -- 192.168.123.107:0/780690581 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcc40ff810 msgr2=0x7fbcc4195db0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T20:51:45.285 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:45 vm07.local ceph-mon[112105]: Upgrade: unsafe to stop osd(s) at this time (11 PGs are or would become offline) 2026-03-09T20:51:45.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.283+0000 7fbcc2575640 1 --2- 192.168.123.107:0/780690581 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcc40ff810 0x7fbcc4195db0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.283+0000 7fbcc2575640 1 -- 192.168.123.107:0/780690581 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbcac009660 con 0x7fbcc41001c0 2026-03-09T20:51:45.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.284+0000 7fbcc2575640 1 --2- 192.168.123.107:0/780690581 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7fbcc41001c0 0x7fbcc41962f0 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7fbcac02f730 tx=0x7fbcac002980 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:45.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.284+0000 7fbca3fff640 1 -- 192.168.123.107:0/780690581 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbcac03d070 con 0x7fbcc41001c0 2026-03-09T20:51:45.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.284+0000 7fbcc8a59640 1 -- 192.168.123.107:0/780690581 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbcc419b4a0 con 0x7fbcc41001c0 2026-03-09T20:51:45.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.284+0000 7fbcc8a59640 1 -- 192.168.123.107:0/780690581 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbcc419b990 con 0x7fbcc41001c0 2026-03-09T20:51:45.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.284+0000 7fbca3fff640 1 -- 192.168.123.107:0/780690581 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fbcac02fd50 con 0x7fbcc41001c0 2026-03-09T20:51:45.285 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.285+0000 7fbca3fff640 1 -- 192.168.123.107:0/780690581 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbcac041a30 con 0x7fbcc41001c0 2026-03-09T20:51:45.287 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.285+0000 7fbcc8a59640 1 -- 192.168.123.107:0/780690581 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbc88005350 con 0x7fbcc41001c0 2026-03-09T20:51:45.291 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.287+0000 7fbca3fff640 1 -- 192.168.123.107:0/780690581 <== mon.0 
v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fbcac038730 con 0x7fbcc41001c0 2026-03-09T20:51:45.291 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.288+0000 7fbca3fff640 1 --2- 192.168.123.107:0/780690581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbc94077890 0x7fbc94079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.291 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.288+0000 7fbca3fff640 1 -- 192.168.123.107:0/780690581 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(72..72 src has 1..72) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fbcac0be740 con 0x7fbcc41001c0 2026-03-09T20:51:45.291 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.290+0000 7fbcc2d76640 1 --2- 192.168.123.107:0/780690581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbc94077890 0x7fbc94079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:45.291 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.290+0000 7fbca3fff640 1 -- 192.168.123.107:0/780690581 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fbcac086e70 con 0x7fbcc41001c0 2026-03-09T20:51:45.291 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.291+0000 7fbcc2d76640 1 --2- 192.168.123.107:0/780690581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbc94077890 0x7fbc94079d50 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fbcb800a9f0 tx=0x7fbcb8005cf0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:45.403 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.402+0000 7fbcc8a59640 1 -- 
192.168.123.107:0/780690581 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fbc88002bf0 con 0x7fbc94077890 2026-03-09T20:51:45.404 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.403+0000 7fbca3fff640 1 -- 192.168.123.107:0/780690581 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7fbc88002bf0 con 0x7fbc94077890 2026-03-09T20:51:45.406 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.405+0000 7fbcc8a59640 1 -- 192.168.123.107:0/780690581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbc94077890 msgr2=0x7fbc94079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.406 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.405+0000 7fbcc8a59640 1 --2- 192.168.123.107:0/780690581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbc94077890 0x7fbc94079d50 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fbcb800a9f0 tx=0x7fbcb8005cf0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.406 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.405+0000 7fbcc8a59640 1 -- 192.168.123.107:0/780690581 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcc41001c0 msgr2=0x7fbcc41962f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.406 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.405+0000 7fbcc8a59640 1 --2- 192.168.123.107:0/780690581 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcc41001c0 0x7fbcc41962f0 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7fbcac02f730 tx=0x7fbcac002980 comp rx=0 tx=0).stop 2026-03-09T20:51:45.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.405+0000 7fbcc8a59640 1 -- 192.168.123.107:0/780690581 shutdown_connections 
2026-03-09T20:51:45.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.405+0000 7fbcc8a59640 1 --2- 192.168.123.107:0/780690581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbc94077890 0x7fbc94079d50 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.405+0000 7fbcc8a59640 1 --2- 192.168.123.107:0/780690581 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbcc41001c0 0x7fbcc41962f0 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.405+0000 7fbcc8a59640 1 --2- 192.168.123.107:0/780690581 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbcc40ff810 0x7fbcc4195db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.405+0000 7fbcc8a59640 1 -- 192.168.123.107:0/780690581 >> 192.168.123.107:0/780690581 conn(0x7fbcc40f9f80 msgr2=0x7fbcc40fbb50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:45.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.406+0000 7fbcc8a59640 1 -- 192.168.123.107:0/780690581 shutdown_connections 2026-03-09T20:51:45.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.406+0000 7fbcc8a59640 1 -- 192.168.123.107:0/780690581 wait complete. 
2026-03-09T20:51:45.415 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:51:45.463 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.462+0000 7f8225a7e640 1 -- 192.168.123.107:0/3657471431 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8220076df0 msgr2=0x7f8220077250 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.463 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.462+0000 7f8225a7e640 1 --2- 192.168.123.107:0/3657471431 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8220076df0 0x7f8220077250 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f820c0099b0 tx=0x7f820c02f220 comp rx=0 tx=0).stop 2026-03-09T20:51:45.464 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.463+0000 7f8225a7e640 1 -- 192.168.123.107:0/3657471431 shutdown_connections 2026-03-09T20:51:45.464 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.463+0000 7f8225a7e640 1 --2- 192.168.123.107:0/3657471431 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8220076df0 0x7f8220077250 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.464 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.463+0000 7f8225a7e640 1 --2- 192.168.123.107:0/3657471431 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8220075ba0 0x7f8220075fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.464 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.463+0000 7f8225a7e640 1 -- 192.168.123.107:0/3657471431 >> 192.168.123.107:0/3657471431 conn(0x7f82200fe250 msgr2=0x7f8220100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:45.464 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.463+0000 7f8225a7e640 1 -- 192.168.123.107:0/3657471431 shutdown_connections 2026-03-09T20:51:45.464 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.463+0000 7f8225a7e640 1 -- 192.168.123.107:0/3657471431 wait complete. 2026-03-09T20:51:45.464 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.464+0000 7f8225a7e640 1 Processor -- start 2026-03-09T20:51:45.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.464+0000 7f8225a7e640 1 -- start start 2026-03-09T20:51:45.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.464+0000 7f8225a7e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8220075ba0 0x7f822019e920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.464+0000 7f821effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8220075ba0 0x7f822019e920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:45.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.464+0000 7f821effd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8220075ba0 0x7f822019e920 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:43080/0 (socket says 192.168.123.107:43080) 2026-03-09T20:51:45.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.464+0000 7f8225a7e640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8220076df0 0x7f822019ee60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.464+0000 7f8225a7e640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f822019f430 con 0x7f8220075ba0 2026-03-09T20:51:45.465 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.464+0000 7f8225a7e640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f822019f5a0 con 0x7f8220076df0 2026-03-09T20:51:45.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.464+0000 7f821effd640 1 -- 192.168.123.107:0/1612002128 learned_addr learned my addr 192.168.123.107:0/1612002128 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:45.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.465+0000 7f821e7fc640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8220076df0 0x7f822019ee60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:45.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.465+0000 7f821e7fc640 1 -- 192.168.123.107:0/1612002128 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8220075ba0 msgr2=0x7f822019e920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.465+0000 7f821e7fc640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8220075ba0 0x7f822019e920 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.465+0000 7f821e7fc640 1 -- 192.168.123.107:0/1612002128 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f820c009660 con 0x7f8220076df0 2026-03-09T20:51:45.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.465+0000 7f821effd640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8220075ba0 0x7f822019e920 unknown :-1 s=CLOSED pgs=0 cs=0 
l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:51:45.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.466+0000 7f821e7fc640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8220076df0 0x7f822019ee60 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f820c02f730 tx=0x7f820c002980 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:45.467 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.466+0000 7f8224a7c640 1 -- 192.168.123.107:0/1612002128 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f820c03d070 con 0x7f8220076df0 2026-03-09T20:51:45.467 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.466+0000 7f8224a7c640 1 -- 192.168.123.107:0/1612002128 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f820c02fd50 con 0x7f8220076df0 2026-03-09T20:51:45.467 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.466+0000 7f8224a7c640 1 -- 192.168.123.107:0/1612002128 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f820c0419c0 con 0x7f8220076df0 2026-03-09T20:51:45.467 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.466+0000 7f8225a7e640 1 -- 192.168.123.107:0/1612002128 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f82201a3fe0 con 0x7f8220076df0 2026-03-09T20:51:45.467 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.467+0000 7f8225a7e640 1 -- 192.168.123.107:0/1612002128 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f82201a4340 con 0x7f8220076df0 2026-03-09T20:51:45.469 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.467+0000 7f8225a7e640 1 -- 192.168.123.107:0/1612002128 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f81e4005350 con 0x7f8220076df0 2026-03-09T20:51:45.469 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.468+0000 7f8224a7c640 1 -- 192.168.123.107:0/1612002128 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f820c038730 con 0x7f8220076df0 2026-03-09T20:51:45.469 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.468+0000 7f8224a7c640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f81f40776d0 0x7f81f4079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.469 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.468+0000 7f8224a7c640 1 -- 192.168.123.107:0/1612002128 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(72..72 src has 1..72) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f820c0be590 con 0x7f8220076df0 2026-03-09T20:51:45.470 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.469+0000 7f821effd640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f81f40776d0 0x7f81f4079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:45.470 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.469+0000 7f821effd640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f81f40776d0 0x7f81f4079b90 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f82140059c0 tx=0x7f821400a430 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:45.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.471+0000 7f8224a7c640 1 -- 192.168.123.107:0/1612002128 
<== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f820c086cf0 con 0x7f8220076df0 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T20:51:45.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:45 vm10.local ceph-mon[103526]: Upgrade: unsafe to stop osd(s) at this time (11 PGs are or would become offline) 2026-03-09T20:51:45.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.580+0000 7f8225a7e640 1 -- 192.168.123.107:0/1612002128 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f81e4002bf0 con 0x7f81f40776d0 2026-03-09T20:51:45.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.582+0000 7f8224a7c640 1 -- 192.168.123.107:0/1612002128 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7f81e4002bf0 con 0x7f81f40776d0 2026-03-09T20:51:45.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.585+0000 7f8225a7e640 1 -- 192.168.123.107:0/1612002128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f81f40776d0 msgr2=0x7f81f4079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.585+0000 7f8225a7e640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f81f40776d0 0x7f81f4079b90 secure :-1 s=READY pgs=67 cs=0 l=1 
rev1=1 crypto rx=0x7f82140059c0 tx=0x7f821400a430 comp rx=0 tx=0).stop 2026-03-09T20:51:45.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.586+0000 7f8225a7e640 1 -- 192.168.123.107:0/1612002128 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8220076df0 msgr2=0x7f822019ee60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.586+0000 7f8225a7e640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8220076df0 0x7f822019ee60 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f820c02f730 tx=0x7f820c002980 comp rx=0 tx=0).stop 2026-03-09T20:51:45.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.586+0000 7f8225a7e640 1 -- 192.168.123.107:0/1612002128 shutdown_connections 2026-03-09T20:51:45.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.586+0000 7f8225a7e640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f81f40776d0 0x7f81f4079b90 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.586+0000 7f8225a7e640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8220076df0 0x7f822019ee60 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.586+0000 7f8225a7e640 1 --2- 192.168.123.107:0/1612002128 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8220075ba0 0x7f822019e920 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.586+0000 7f8225a7e640 1 -- 192.168.123.107:0/1612002128 >> 
192.168.123.107:0/1612002128 conn(0x7f82200fe250 msgr2=0x7f82200ffda0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:45.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.586+0000 7f8225a7e640 1 -- 192.168.123.107:0/1612002128 shutdown_connections 2026-03-09T20:51:45.587 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.586+0000 7f8225a7e640 1 -- 192.168.123.107:0/1612002128 wait complete. 2026-03-09T20:51:45.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.647+0000 7fcee1f32640 1 -- 192.168.123.107:0/3155019352 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcedc102a60 msgr2=0x7fcedc102e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.647+0000 7fcee1f32640 1 --2- 192.168.123.107:0/3155019352 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcedc102a60 0x7fcedc102e60 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7fcecc0099b0 tx=0x7fcecc02f220 comp rx=0 tx=0).stop 2026-03-09T20:51:45.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.647+0000 7fcee1f32640 1 -- 192.168.123.107:0/3155019352 shutdown_connections 2026-03-09T20:51:45.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.647+0000 7fcee1f32640 1 --2- 192.168.123.107:0/3155019352 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcedc103c60 0x7fcedc1040e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.647+0000 7fcee1f32640 1 --2- 192.168.123.107:0/3155019352 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcedc102a60 0x7fcedc102e60 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.647+0000 7fcee1f32640 1 -- 
192.168.123.107:0/3155019352 >> 192.168.123.107:0/3155019352 conn(0x7fcedc0fe250 msgr2=0x7fcedc100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:45.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.648+0000 7fcee1f32640 1 -- 192.168.123.107:0/3155019352 shutdown_connections 2026-03-09T20:51:45.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.648+0000 7fcee1f32640 1 -- 192.168.123.107:0/3155019352 wait complete. 2026-03-09T20:51:45.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.648+0000 7fcee1f32640 1 Processor -- start 2026-03-09T20:51:45.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcee1f32640 1 -- start start 2026-03-09T20:51:45.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcee1f32640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcedc103c60 0x7fcedc0718a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcee1f32640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcedc071de0 0x7fcedc072260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcee1f32640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcedc073250 con 0x7fcedc071de0 2026-03-09T20:51:45.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcee1f32640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcedc079650 con 0x7fcedc103c60 2026-03-09T20:51:45.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcedaffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcedc071de0 0x7fcedc072260 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:45.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcedaffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcedc071de0 0x7fcedc072260 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:43090/0 (socket says 192.168.123.107:43090) 2026-03-09T20:51:45.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcedaffd640 1 -- 192.168.123.107:0/1477964761 learned_addr learned my addr 192.168.123.107:0/1477964761 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:45.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcedaffd640 1 -- 192.168.123.107:0/1477964761 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcedc103c60 msgr2=0x7fcedc0718a0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:51:45.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcedaffd640 1 --2- 192.168.123.107:0/1477964761 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcedc103c60 0x7fcedc0718a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcedaffd640 1 -- 192.168.123.107:0/1477964761 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcecc009660 con 0x7fcedc071de0 2026-03-09T20:51:45.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fcedaffd640 1 --2- 192.168.123.107:0/1477964761 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcedc071de0 0x7fcedc072260 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fcec800d900 tx=0x7fcec800ddd0 comp rx=0 
tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:45.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fced8ff9640 1 -- 192.168.123.107:0/1477964761 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcec8004490 con 0x7fcedc071de0 2026-03-09T20:51:45.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fced8ff9640 1 -- 192.168.123.107:0/1477964761 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fcec8004d60 con 0x7fcedc071de0 2026-03-09T20:51:45.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.649+0000 7fced8ff9640 1 -- 192.168.123.107:0/1477964761 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcec8005230 con 0x7fcedc071de0 2026-03-09T20:51:45.652 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.651+0000 7fcee1f32640 1 -- 192.168.123.107:0/1477964761 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcedc079850 con 0x7fcedc071de0 2026-03-09T20:51:45.652 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.651+0000 7fcee1f32640 1 -- 192.168.123.107:0/1477964761 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcedc1a4be0 con 0x7fcedc071de0 2026-03-09T20:51:45.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.651+0000 7fcee1f32640 1 -- 192.168.123.107:0/1477964761 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcea0005350 con 0x7fcedc071de0 2026-03-09T20:51:45.656 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.655+0000 7fced8ff9640 1 -- 192.168.123.107:0/1477964761 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fcec800b8d0 con 0x7fcedc071de0 
2026-03-09T20:51:45.656 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.655+0000 7fced8ff9640 1 --2- 192.168.123.107:0/1477964761 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fceb00779b0 0x7fceb0079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.657 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.656+0000 7fcedb7fe640 1 --2- 192.168.123.107:0/1477964761 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fceb00779b0 0x7fceb0079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:45.657 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.656+0000 7fced8ff9640 1 -- 192.168.123.107:0/1477964761 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(72..72 src has 1..72) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fcec809a100 con 0x7fcedc071de0 2026-03-09T20:51:45.657 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.656+0000 7fcedb7fe640 1 --2- 192.168.123.107:0/1477964761 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fceb00779b0 0x7fceb0079e70 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fcecc02f730 tx=0x7fcecc0023d0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:45.658 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.657+0000 7fced8ff9640 1 -- 192.168.123.107:0/1477964761 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fcec8062820 con 0x7fcedc071de0 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.766+0000 7fcee1f32640 1 -- 192.168.123.107:0/1477964761 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: 
{"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fcea0002bf0 con 0x7fceb00779b0 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.772+0000 7fced8ff9640 1 -- 192.168.123.107:0/1477964761 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7fcea0002bf0 con 0x7fceb00779b0 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (8m) 48s ago 8m 43.7M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (8m) 48s ago 8m 9.82M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (8m) 4s ago 8m 10.2M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (2m) 48s ago 8m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 406c9c54f34a 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (2m) 4s ago 8m 7856k - 19.2.3-678-ge911bdeb 654f31e6858e 30eaebf5d733 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (8m) 48s ago 8m 160M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (6m) 48s ago 6m 30.4M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (6m) 48s ago 6m 176M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (6m) 4s ago 6m 97.5M - 
18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (6m) 4s ago 6m 28.2M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (4m) 48s ago 9m 615M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (3m) 4s ago 8m 491M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (3m) 48s ago 9m 58.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (3m) 4s ago 8m 53.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4428cf7f0607 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (8m) 48s ago 8m 15.5M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (8m) 4s ago 8m 15.9M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (2m) 48s ago 7m 232M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1da9d2cdbdc3 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (2m) 48s ago 7m 171M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 95f518bf664f 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (50s) 48s ago 7m 11.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0d3aa63353bb 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (28s) 4s ago 7m 156M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8d2b453e9e2 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (6s) 4s ago 7m 15.0M 4096M 
19.2.3-678-ge911bdeb 654f31e6858e d0231a0cf2be 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (7m) 4s ago 7m 435M 4096M 18.2.7-1055-gab47f43c b6fe7eb6a9d0 e1bd83add343 2026-03-09T20:51:45.774 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (3m) 48s ago 8m 49.8M - 2.43.0 a07b618ecd1d 3f9c07cd3fe3 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.775+0000 7fcee1f32640 1 -- 192.168.123.107:0/1477964761 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fceb00779b0 msgr2=0x7fceb0079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.775+0000 7fcee1f32640 1 --2- 192.168.123.107:0/1477964761 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fceb00779b0 0x7fceb0079e70 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fcecc02f730 tx=0x7fcecc0023d0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.775+0000 7fcee1f32640 1 -- 192.168.123.107:0/1477964761 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcedc071de0 msgr2=0x7fcedc072260 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.775+0000 7fcee1f32640 1 --2- 192.168.123.107:0/1477964761 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcedc071de0 0x7fcedc072260 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fcec800d900 tx=0x7fcec800ddd0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.775+0000 7fcee1f32640 1 -- 192.168.123.107:0/1477964761 shutdown_connections 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.775+0000 7fcee1f32640 1 --2- 192.168.123.107:0/1477964761 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fceb00779b0 0x7fceb0079e70 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.775+0000 7fcee1f32640 1 --2- 192.168.123.107:0/1477964761 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcedc071de0 0x7fcedc072260 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.775+0000 7fcee1f32640 1 --2- 192.168.123.107:0/1477964761 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcedc103c60 0x7fcedc0718a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.775+0000 7fcee1f32640 1 -- 192.168.123.107:0/1477964761 >> 192.168.123.107:0/1477964761 conn(0x7fcedc0fe250 msgr2=0x7fcedc0ffbb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.775+0000 7fcee1f32640 1 -- 192.168.123.107:0/1477964761 shutdown_connections 2026-03-09T20:51:45.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.776+0000 7fcee1f32640 1 -- 192.168.123.107:0/1477964761 wait complete. 
2026-03-09T20:51:45.843 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.841+0000 7fdc849cf640 1 -- 192.168.123.107:0/680145401 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdc7c103c80 msgr2=0x7fdc7c104100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:45.843 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.841+0000 7fdc849cf640 1 --2- 192.168.123.107:0/680145401 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdc7c103c80 0x7fdc7c104100 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7fdc740099b0 tx=0x7fdc7402f220 comp rx=0 tx=0).stop 2026-03-09T20:51:45.848 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.847+0000 7fdc849cf640 1 -- 192.168.123.107:0/680145401 shutdown_connections 2026-03-09T20:51:45.848 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.847+0000 7fdc849cf640 1 --2- 192.168.123.107:0/680145401 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdc7c103c80 0x7fdc7c104100 unknown :-1 s=CLOSED pgs=104 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.848 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.847+0000 7fdc849cf640 1 --2- 192.168.123.107:0/680145401 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdc7c102a80 0x7fdc7c102e80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.848 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.847+0000 7fdc849cf640 1 -- 192.168.123.107:0/680145401 >> 192.168.123.107:0/680145401 conn(0x7fdc7c0fe250 msgr2=0x7fdc7c100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:45.849 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.848+0000 7fdc849cf640 1 -- 192.168.123.107:0/680145401 shutdown_connections 2026-03-09T20:51:45.849 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.848+0000 7fdc849cf640 1 -- 192.168.123.107:0/680145401 wait 
complete. 2026-03-09T20:51:45.849 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.848+0000 7fdc849cf640 1 Processor -- start 2026-03-09T20:51:45.850 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc849cf640 1 -- start start 2026-03-09T20:51:45.850 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc849cf640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdc7c102a80 0x7fdc7c195f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.850 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc849cf640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdc7c103c80 0x7fdc7c1964a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc849cf640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdc7c196a70 con 0x7fdc7c103c80 2026-03-09T20:51:45.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc849cf640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdc7c196be0 con 0x7fdc7c102a80 2026-03-09T20:51:45.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc81f43640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdc7c103c80 0x7fdc7c1964a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:45.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc81f43640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdc7c103c80 0x7fdc7c1964a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I 
am v2:192.168.123.107:43098/0 (socket says 192.168.123.107:43098) 2026-03-09T20:51:45.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc81f43640 1 -- 192.168.123.107:0/285096963 learned_addr learned my addr 192.168.123.107:0/285096963 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:45.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc81f43640 1 -- 192.168.123.107:0/285096963 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdc7c102a80 msgr2=0x7fdc7c195f60 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:51:45.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc81f43640 1 --2- 192.168.123.107:0/285096963 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdc7c102a80 0x7fdc7c195f60 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:45.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc81f43640 1 -- 192.168.123.107:0/285096963 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdc74009660 con 0x7fdc7c103c80 2026-03-09T20:51:45.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.849+0000 7fdc81f43640 1 --2- 192.168.123.107:0/285096963 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdc7c103c80 0x7fdc7c1964a0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7fdc74002410 tx=0x7fdc74002980 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:45.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.850+0000 7fdc677fe640 1 -- 192.168.123.107:0/285096963 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdc7403d070 con 0x7fdc7c103c80 2026-03-09T20:51:45.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.850+0000 7fdc677fe640 1 -- 
192.168.123.107:0/285096963 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fdc7402fd50 con 0x7fdc7c103c80 2026-03-09T20:51:45.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.850+0000 7fdc677fe640 1 -- 192.168.123.107:0/285096963 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdc74041b00 con 0x7fdc7c103c80 2026-03-09T20:51:45.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.851+0000 7fdc849cf640 1 -- 192.168.123.107:0/285096963 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdc7c19b650 con 0x7fdc7c103c80 2026-03-09T20:51:45.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.851+0000 7fdc849cf640 1 -- 192.168.123.107:0/285096963 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdc7c19bb10 con 0x7fdc7c103c80 2026-03-09T20:51:45.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.852+0000 7fdc677fe640 1 -- 192.168.123.107:0/285096963 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fdc74038730 con 0x7fdc7c103c80 2026-03-09T20:51:45.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.852+0000 7fdc677fe640 1 --2- 192.168.123.107:0/285096963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fdc500779b0 0x7fdc50079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:45.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.853+0000 7fdc677fe640 1 -- 192.168.123.107:0/285096963 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(72..72 src has 1..72) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fdc740beaf0 con 0x7fdc7c103c80 2026-03-09T20:51:45.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.853+0000 7fdc82744640 1 --2- 192.168.123.107:0/285096963 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fdc500779b0 0x7fdc50079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:45.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.853+0000 7fdc82744640 1 --2- 192.168.123.107:0/285096963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fdc500779b0 0x7fdc50079e70 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7fdc7c103ae0 tx=0x7fdc68005ca0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:45.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.854+0000 7fdc849cf640 1 -- 192.168.123.107:0/285096963 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdc4c005350 con 0x7fdc7c103c80 2026-03-09T20:51:45.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:45.856+0000 7fdc677fe640 1 -- 192.168.123.107:0/285096963 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fdc74087220 con 0x7fdc7c103c80 2026-03-09T20:51:46.005 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.004+0000 7fdc849cf640 1 -- 192.168.123.107:0/285096963 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fdc4c005e10 con 0x7fdc7c103c80 2026-03-09T20:51:46.005 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.005+0000 7fdc677fe640 1 -- 192.168.123.107:0/285096963 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+844 (secure 0 0 0) 0x7fdc74046360 con 0x7fdc7c103c80 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:51:46.006 
INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 1, 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 5, 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:51:46.006 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:51:46.008 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.007+0000 7fdc849cf640 1 -- 192.168.123.107:0/285096963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fdc500779b0 msgr2=0x7fdc50079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.007+0000 7fdc849cf640 1 --2- 192.168.123.107:0/285096963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fdc500779b0 0x7fdc50079e70 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7fdc7c103ae0 tx=0x7fdc68005ca0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.007+0000 7fdc849cf640 1 -- 192.168.123.107:0/285096963 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdc7c103c80 msgr2=0x7fdc7c1964a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.007+0000 7fdc849cf640 1 --2- 192.168.123.107:0/285096963 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdc7c103c80 0x7fdc7c1964a0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7fdc74002410 tx=0x7fdc74002980 comp rx=0 tx=0).stop 2026-03-09T20:51:46.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.007+0000 7fdc849cf640 1 -- 192.168.123.107:0/285096963 shutdown_connections 2026-03-09T20:51:46.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.007+0000 7fdc849cf640 1 --2- 192.168.123.107:0/285096963 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fdc500779b0 0x7fdc50079e70 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.008+0000 7fdc849cf640 1 --2- 192.168.123.107:0/285096963 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7fdc7c103c80 0x7fdc7c1964a0 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.008+0000 7fdc849cf640 1 --2- 192.168.123.107:0/285096963 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fdc7c102a80 0x7fdc7c195f60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.008+0000 7fdc849cf640 1 -- 192.168.123.107:0/285096963 >> 192.168.123.107:0/285096963 conn(0x7fdc7c0fe250 msgr2=0x7fdc7c0ffd50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:46.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.008+0000 7fdc849cf640 1 -- 192.168.123.107:0/285096963 shutdown_connections 2026-03-09T20:51:46.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.008+0000 7fdc849cf640 1 -- 192.168.123.107:0/285096963 wait complete. 
2026-03-09T20:51:46.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.067+0000 7f18e5095640 1 -- 192.168.123.107:0/3931435721 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18e0103c80 msgr2=0x7f18e0104100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.067+0000 7f18e5095640 1 --2- 192.168.123.107:0/3931435721 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18e0103c80 0x7f18e0104100 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f18d4009a00 tx=0x7f18d402f290 comp rx=0 tx=0).stop 2026-03-09T20:51:46.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.067+0000 7f18e5095640 1 -- 192.168.123.107:0/3931435721 shutdown_connections 2026-03-09T20:51:46.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.067+0000 7f18e5095640 1 --2- 192.168.123.107:0/3931435721 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18e0103c80 0x7f18e0104100 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.067+0000 7f18e5095640 1 --2- 192.168.123.107:0/3931435721 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18e0102a80 0x7f18e0102e80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.067+0000 7f18e5095640 1 -- 192.168.123.107:0/3931435721 >> 192.168.123.107:0/3931435721 conn(0x7f18e00fe250 msgr2=0x7f18e0100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:46.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.067+0000 7f18e5095640 1 -- 192.168.123.107:0/3931435721 shutdown_connections 2026-03-09T20:51:46.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.067+0000 7f18e5095640 1 -- 192.168.123.107:0/3931435721 
wait complete. 2026-03-09T20:51:46.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.068+0000 7f18e5095640 1 Processor -- start 2026-03-09T20:51:46.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.068+0000 7f18e5095640 1 -- start start 2026-03-09T20:51:46.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.068+0000 7f18e5095640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18e0102a80 0x7f18e019a4d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:46.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.068+0000 7f18e5095640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18e0103c80 0x7f18e019aa10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:46.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.068+0000 7f18e5095640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f18e019afe0 con 0x7f18e0102a80 2026-03-09T20:51:46.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.068+0000 7f18e5095640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f18e019b150 con 0x7f18e0103c80 2026-03-09T20:51:46.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.069+0000 7f18de575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18e0103c80 0x7f18e019aa10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:46.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.069+0000 7f18de575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18e0103c80 0x7f18e019aa10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 
says I am v2:192.168.123.107:34254/0 (socket says 192.168.123.107:34254) 2026-03-09T20:51:46.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.069+0000 7f18de575640 1 -- 192.168.123.107:0/3506511094 learned_addr learned my addr 192.168.123.107:0/3506511094 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:46.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.069+0000 7f18ded76640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18e0102a80 0x7f18e019a4d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:46.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.069+0000 7f18de575640 1 -- 192.168.123.107:0/3506511094 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18e0102a80 msgr2=0x7f18e019a4d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.069+0000 7f18de575640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18e0102a80 0x7f18e019a4d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.069+0000 7f18de575640 1 -- 192.168.123.107:0/3506511094 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f18d4009660 con 0x7f18e0103c80 2026-03-09T20:51:46.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.069+0000 7f18ded76640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18e0102a80 0x7f18e019a4d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T20:51:46.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.070+0000 7f18de575640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18e0103c80 0x7f18e019aa10 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f18d402f7a0 tx=0x7f18d40043d0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:46.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.070+0000 7f18bffff640 1 -- 192.168.123.107:0/3506511094 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f18d402fd00 con 0x7f18e0103c80 2026-03-09T20:51:46.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.070+0000 7f18e5095640 1 -- 192.168.123.107:0/3506511094 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f18e019fb90 con 0x7f18e0103c80 2026-03-09T20:51:46.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.070+0000 7f18e5095640 1 -- 192.168.123.107:0/3506511094 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f18e01a0100 con 0x7f18e0103c80 2026-03-09T20:51:46.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.070+0000 7f18bffff640 1 -- 192.168.123.107:0/3506511094 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f18d402fe60 con 0x7f18e0103c80 2026-03-09T20:51:46.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.070+0000 7f18bffff640 1 -- 192.168.123.107:0/3506511094 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f18d4041b40 con 0x7f18e0103c80 2026-03-09T20:51:46.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.072+0000 7f18bffff640 1 -- 192.168.123.107:0/3506511094 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f18d4041d20 con 
0x7f18e0103c80 2026-03-09T20:51:46.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.072+0000 7f18bffff640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f18b00778e0 0x7f18b0079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:46.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.073+0000 7f18ded76640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f18b00778e0 0x7f18b0079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:46.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.073+0000 7f18bffff640 1 -- 192.168.123.107:0/3506511094 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(72..72 src has 1..72) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f18d40be7e0 con 0x7f18e0103c80 2026-03-09T20:51:46.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.073+0000 7f18ded76640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f18b00778e0 0x7f18b0079da0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f18e0103ae0 tx=0x7f18c8009210 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:46.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.073+0000 7f18e5095640 1 -- 192.168.123.107:0/3506511094 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f18ac005350 con 0x7f18e0103c80 2026-03-09T20:51:46.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.076+0000 7f18bffff640 1 -- 192.168.123.107:0/3506511094 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) 
v1 ==== 72+0+195034 (secure 0 0 0) 0x7f18d4086fc0 con 0x7f18e0103c80 2026-03-09T20:51:46.127 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:46 vm07.local ceph-mon[112105]: pgmap v128: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 48/285 objects degraded (16.842%) 2026-03-09T20:51:46.202 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.200+0000 7f18e5095640 1 -- 192.168.123.107:0/3506511094 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f18ac005e10 con 0x7f18e0103c80 2026-03-09T20:51:46.202 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.201+0000 7f18bffff640 1 -- 192.168.123.107:0/3506511094 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1937 (secure 0 0 0) 0x7f18d403d070 con 0x7f18e0103c80 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:e11 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:epoch 11 2026-03-09T20:51:46.203 
INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:45:12.822947+0000 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 
2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:51:46.203 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 0 members: 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{0:14498} state up:standby-replay seq 3 join_fscid=1 addr [v2:192.168.123.110:6826/3212743251,v1:192.168.123.110:6827/3212743251] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{1:14490} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.107:6828/3289699342,v1:192.168.123.107:6829/3289699342] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:51:46.204 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:51:46.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 -- 192.168.123.107:0/3506511094 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f18b00778e0 msgr2=0x7f18b0079da0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f18b00778e0 0x7f18b0079da0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f18e0103ae0 tx=0x7f18c8009210 comp rx=0 tx=0).stop 2026-03-09T20:51:46.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 -- 192.168.123.107:0/3506511094 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18e0103c80 msgr2=0x7f18e019aa10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18e0103c80 0x7f18e019aa10 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f18d402f7a0 tx=0x7f18d40043d0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 -- 192.168.123.107:0/3506511094 shutdown_connections 2026-03-09T20:51:46.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f18b00778e0 0x7f18b0079da0 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 --2- 192.168.123.107:0/3506511094 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f18e0103c80 0x7f18e019aa10 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 --2- 
192.168.123.107:0/3506511094 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f18e0102a80 0x7f18e019a4d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 -- 192.168.123.107:0/3506511094 >> 192.168.123.107:0/3506511094 conn(0x7f18e00fe250 msgr2=0x7f18e00ffd30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:46.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 -- 192.168.123.107:0/3506511094 shutdown_connections 2026-03-09T20:51:46.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.205+0000 7f18e5095640 1 -- 192.168.123.107:0/3506511094 wait complete. 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.265+0000 7f83858f5640 1 -- 192.168.123.107:0/2833444484 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8380103aa0 msgr2=0x7f8380103f20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.265+0000 7f83858f5640 1 --2- 192.168.123.107:0/2833444484 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8380103aa0 0x7f8380103f20 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f8368009a00 tx=0x7f836802f280 comp rx=0 tx=0).stop 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.265+0000 7f83858f5640 1 -- 192.168.123.107:0/2833444484 shutdown_connections 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.265+0000 7f83858f5640 1 --2- 192.168.123.107:0/2833444484 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8380103aa0 0x7f8380103f20 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.265+0000 
7f83858f5640 1 --2- 192.168.123.107:0/2833444484 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f83801028a0 0x7f8380102ca0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.265+0000 7f83858f5640 1 -- 192.168.123.107:0/2833444484 >> 192.168.123.107:0/2833444484 conn(0x7f83800fe030 msgr2=0x7f8380100470 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.266+0000 7f83858f5640 1 -- 192.168.123.107:0/2833444484 shutdown_connections 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.266+0000 7f83858f5640 1 -- 192.168.123.107:0/2833444484 wait complete. 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.266+0000 7f83858f5640 1 Processor -- start 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.266+0000 7f83858f5640 1 -- start start 2026-03-09T20:51:46.267 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83858f5640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f83801028a0 0x7f838019a270 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:46.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83858f5640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8380103aa0 0x7f838019a7b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:46.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83858f5640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f838019ad80 con 0x7f83801028a0 2026-03-09T20:51:46.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83858f5640 1 -- --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f838019aef0 con 0x7f8380103aa0 2026-03-09T20:51:46.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83848f3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f83801028a0 0x7f838019a270 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:46.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83848f3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f83801028a0 0x7f838019a270 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:43150/0 (socket says 192.168.123.107:43150) 2026-03-09T20:51:46.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83848f3640 1 -- 192.168.123.107:0/3599365481 learned_addr learned my addr 192.168.123.107:0/3599365481 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:46.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83848f3640 1 -- 192.168.123.107:0/3599365481 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8380103aa0 msgr2=0x7f838019a7b0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:51:46.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83848f3640 1 --2- 192.168.123.107:0/3599365481 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8380103aa0 0x7f838019a7b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83848f3640 1 -- 192.168.123.107:0/3599365481 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8368009660 con 
0x7f83801028a0 2026-03-09T20:51:46.268 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.267+0000 7f83848f3640 1 --2- 192.168.123.107:0/3599365481 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f83801028a0 0x7f838019a270 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f837000e970 tx=0x7f837000ee40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:46.269 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.268+0000 7f8375ffb640 1 -- 192.168.123.107:0/3599365481 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f83700097e0 con 0x7f83801028a0 2026-03-09T20:51:46.269 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.268+0000 7f8375ffb640 1 -- 192.168.123.107:0/3599365481 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f8370004590 con 0x7f83801028a0 2026-03-09T20:51:46.269 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.268+0000 7f83858f5640 1 -- 192.168.123.107:0/3599365481 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f838019f990 con 0x7f83801028a0 2026-03-09T20:51:46.269 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.268+0000 7f8375ffb640 1 -- 192.168.123.107:0/3599365481 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8370010640 con 0x7f83801028a0 2026-03-09T20:51:46.269 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.268+0000 7f83858f5640 1 -- 192.168.123.107:0/3599365481 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f838019fee0 con 0x7f83801028a0 2026-03-09T20:51:46.270 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.269+0000 7f83858f5640 1 -- 192.168.123.107:0/3599365481 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f834c005350 con 0x7f83801028a0 2026-03-09T20:51:46.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.270+0000 7f8375ffb640 1 -- 192.168.123.107:0/3599365481 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f837000cd10 con 0x7f83801028a0 2026-03-09T20:51:46.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.270+0000 7f8375ffb640 1 --2- 192.168.123.107:0/3599365481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f835c0779b0 0x7f835c079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:46.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.270+0000 7f8375ffb640 1 -- 192.168.123.107:0/3599365481 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(72..72 src has 1..72) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f837001d030 con 0x7f83801028a0 2026-03-09T20:51:46.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.271+0000 7f8377fff640 1 --2- 192.168.123.107:0/3599365481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f835c0779b0 0x7f835c079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:46.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.271+0000 7f8377fff640 1 --2- 192.168.123.107:0/3599365481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f835c0779b0 0x7f835c079e70 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f836802f790 tx=0x7f83680023d0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:46.275 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.274+0000 7f8375ffb640 1 -- 192.168.123.107:0/3599365481 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8370062280 con 0x7f83801028a0 2026-03-09T20:51:46.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:46 vm10.local ceph-mon[103526]: pgmap v128: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 48/285 objects degraded (16.842%) 2026-03-09T20:51:46.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:46 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/285096963' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:46.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:46 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/285096963' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:51:46.388 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.387+0000 7f83858f5640 1 -- 192.168.123.107:0/3599365481 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f834c002bf0 con 0x7f835c0779b0 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.389+0000 7f8375ffb640 1 -- 192.168.123.107:0/3599365481 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7f834c002bf0 con 0x7f835c0779b0 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: 
"services_complete": [ 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: "mon", 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: "crash", 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: "mgr" 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "11/23 daemons upgraded", 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:51:46.390 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 -- 192.168.123.107:0/3599365481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f835c0779b0 msgr2=0x7f835c079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 --2- 192.168.123.107:0/3599365481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f835c0779b0 0x7f835c079e70 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f836802f790 tx=0x7f83680023d0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 -- 192.168.123.107:0/3599365481 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f83801028a0 msgr2=0x7f838019a270 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 --2- 192.168.123.107:0/3599365481 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f83801028a0 0x7f838019a270 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f837000e970 tx=0x7f837000ee40 comp rx=0 
tx=0).stop 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 -- 192.168.123.107:0/3599365481 shutdown_connections 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 --2- 192.168.123.107:0/3599365481 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f835c0779b0 0x7f835c079e70 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 --2- 192.168.123.107:0/3599365481 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8380103aa0 0x7f838019a7b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 --2- 192.168.123.107:0/3599365481 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f83801028a0 0x7f838019a270 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 -- 192.168.123.107:0/3599365481 >> 192.168.123.107:0/3599365481 conn(0x7f83800fe030 msgr2=0x7f83800ffb40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 -- 192.168.123.107:0/3599365481 shutdown_connections 2026-03-09T20:51:46.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.392+0000 7f83858f5640 1 -- 192.168.123.107:0/3599365481 wait complete. 
2026-03-09T20:51:46.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.453+0000 7fac3e81c640 1 -- 192.168.123.107:0/3324003849 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fac38076df0 msgr2=0x7fac38077250 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.453+0000 7fac3e81c640 1 --2- 192.168.123.107:0/3324003849 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fac38076df0 0x7fac38077250 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fac240099b0 tx=0x7fac2402f220 comp rx=0 tx=0).stop 2026-03-09T20:51:46.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.453+0000 7fac3e81c640 1 -- 192.168.123.107:0/3324003849 shutdown_connections 2026-03-09T20:51:46.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.453+0000 7fac3e81c640 1 --2- 192.168.123.107:0/3324003849 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fac38076df0 0x7fac38077250 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.453+0000 7fac3e81c640 1 --2- 192.168.123.107:0/3324003849 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fac38075ba0 0x7fac38075fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.453+0000 7fac3e81c640 1 -- 192.168.123.107:0/3324003849 >> 192.168.123.107:0/3324003849 conn(0x7fac380fe250 msgr2=0x7fac38100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:46.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.454+0000 7fac3e81c640 1 -- 192.168.123.107:0/3324003849 shutdown_connections 2026-03-09T20:51:46.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.454+0000 7fac3e81c640 1 -- 192.168.123.107:0/3324003849 
wait complete. 2026-03-09T20:51:46.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.454+0000 7fac3e81c640 1 Processor -- start 2026-03-09T20:51:46.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.454+0000 7fac3e81c640 1 -- start start 2026-03-09T20:51:46.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac3e81c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fac38075ba0 0x7fac3819e910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:46.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac3e81c640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fac38076df0 0x7fac3819ee50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:46.456 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac3e81c640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fac3819f420 con 0x7fac38075ba0 2026-03-09T20:51:46.456 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac3e81c640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fac3819f590 con 0x7fac38076df0 2026-03-09T20:51:46.456 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac37fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fac38075ba0 0x7fac3819e910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:46.456 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac37fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fac38075ba0 0x7fac3819e910 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:43158/0 (socket says 192.168.123.107:43158) 2026-03-09T20:51:46.456 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac37fff640 1 -- 192.168.123.107:0/25637349 learned_addr learned my addr 192.168.123.107:0/25637349 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:51:46.457 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac37fff640 1 -- 192.168.123.107:0/25637349 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fac38076df0 msgr2=0x7fac3819ee50 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T20:51:46.457 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac37fff640 1 --2- 192.168.123.107:0/25637349 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fac38076df0 0x7fac3819ee50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.457 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac37fff640 1 -- 192.168.123.107:0/25637349 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fac24009660 con 0x7fac38075ba0 2026-03-09T20:51:46.457 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.455+0000 7fac37fff640 1 --2- 192.168.123.107:0/25637349 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fac38075ba0 0x7fac3819e910 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fac1c00e9e0 tx=0x7fac1c00eeb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:46.457 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.456+0000 7fac357fa640 1 -- 192.168.123.107:0/25637349 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fac1c00cd90 con 0x7fac38075ba0 2026-03-09T20:51:46.457 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.456+0000 7fac357fa640 1 -- 
192.168.123.107:0/25637349 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fac1c004590 con 0x7fac38075ba0 2026-03-09T20:51:46.457 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.456+0000 7fac3e81c640 1 -- 192.168.123.107:0/25637349 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fac381a3fd0 con 0x7fac38075ba0 2026-03-09T20:51:46.458 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.457+0000 7fac357fa640 1 -- 192.168.123.107:0/25637349 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fac1c010640 con 0x7fac38075ba0 2026-03-09T20:51:46.458 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.457+0000 7fac3e81c640 1 -- 192.168.123.107:0/25637349 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fac381a4520 con 0x7fac38075ba0 2026-03-09T20:51:46.458 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.457+0000 7fac357fa640 1 -- 192.168.123.107:0/25637349 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fac1c0107e0 con 0x7fac38075ba0 2026-03-09T20:51:46.459 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.458+0000 7fac3e81c640 1 -- 192.168.123.107:0/25637349 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fac38075fa0 con 0x7fac38075ba0 2026-03-09T20:51:46.462 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.460+0000 7fac357fa640 1 --2- 192.168.123.107:0/25637349 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fac100776d0 0x7fac10079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:51:46.462 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.460+0000 7fac357fa640 1 -- 192.168.123.107:0/25637349 <== mon.0 
v2:192.168.123.107:3300/0 5 ==== osd_map(72..72 src has 1..72) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fac1c014070 con 0x7fac38075ba0 2026-03-09T20:51:46.463 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.460+0000 7fac377fe640 1 --2- 192.168.123.107:0/25637349 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fac100776d0 0x7fac10079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:51:46.463 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.460+0000 7fac377fe640 1 --2- 192.168.123.107:0/25637349 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fac100776d0 0x7fac10079b90 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fac24002c20 tx=0x7fac240023d0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:51:46.463 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.462+0000 7fac357fa640 1 -- 192.168.123.107:0/25637349 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fac1c09e050 con 0x7fac38075ba0 2026-03-09T20:51:46.589 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.588+0000 7fac3e81c640 1 -- 192.168.123.107:0/25637349 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fac3810fe20 con 0x7fac38075ba0 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.589+0000 7fac357fa640 1 -- 192.168.123.107:0/25637349 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+1101 (secure 0 0 0) 0x7fac1c062de0 con 0x7fac38075ba0 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_WARN 1 osds down; Degraded data redundancy: 48/285 
objects degraded (16.842%), 15 pgs degraded 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout:[WRN] OSD_DOWN: 1 osds down 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: osd.4 (root=default,host=vm10) is down 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 48/285 objects degraded (16.842%), 15 pgs degraded 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.4 is active+undersized+degraded, acting [1,0] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.5 is active+undersized+degraded, acting [3,0] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.6 is active+undersized+degraded, acting [1,3] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.7 is active+undersized+degraded, acting [3,2] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.9 is active+undersized+degraded, acting [1,0] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.a is active+undersized+degraded, acting [1,3] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.b is active+undersized+degraded, acting [3,5] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.f is active+undersized+degraded, acting [0,5] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.13 is active+undersized+degraded, acting [0,2] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.14 is active+undersized+degraded, acting [3,5] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.18 is active+undersized+degraded, acting [5,3] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.19 is active+undersized+degraded, acting [0,2] 2026-03-09T20:51:46.590 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.1a is active+undersized+degraded, acting [3,5] 2026-03-09T20:51:46.590 
INFO:teuthology.orchestra.run.vm07.stdout: pg 2.1c is active+undersized+degraded, acting [5,2] 2026-03-09T20:51:46.591 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.1f is active+undersized+degraded, acting [0,3] 2026-03-09T20:51:46.593 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.592+0000 7fac3e81c640 1 -- 192.168.123.107:0/25637349 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fac100776d0 msgr2=0x7fac10079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.593 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.592+0000 7fac3e81c640 1 --2- 192.168.123.107:0/25637349 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fac100776d0 0x7fac10079b90 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fac24002c20 tx=0x7fac240023d0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.593 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.592+0000 7fac3e81c640 1 -- 192.168.123.107:0/25637349 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fac38075ba0 msgr2=0x7fac3819e910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:51:46.594 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.592+0000 7fac3e81c640 1 --2- 192.168.123.107:0/25637349 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fac38075ba0 0x7fac3819e910 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fac1c00e9e0 tx=0x7fac1c00eeb0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.594 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.593+0000 7fac3e81c640 1 -- 192.168.123.107:0/25637349 shutdown_connections 2026-03-09T20:51:46.594 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.593+0000 7fac3e81c640 1 --2- 192.168.123.107:0/25637349 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fac100776d0 0x7fac10079b90 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-09T20:51:46.594 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.593+0000 7fac3e81c640 1 --2- 192.168.123.107:0/25637349 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fac38076df0 0x7fac3819ee50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.594 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.593+0000 7fac3e81c640 1 --2- 192.168.123.107:0/25637349 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fac38075ba0 0x7fac3819e910 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:51:46.594 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.593+0000 7fac3e81c640 1 -- 192.168.123.107:0/25637349 >> 192.168.123.107:0/25637349 conn(0x7fac380fe250 msgr2=0x7fac380ff9d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:51:46.594 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.593+0000 7fac3e81c640 1 -- 192.168.123.107:0/25637349 shutdown_connections 2026-03-09T20:51:46.594 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:51:46.593+0000 7fac3e81c640 1 -- 192.168.123.107:0/25637349 wait complete. 
2026-03-09T20:51:47.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:46 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[119365]: 2026-03-09T20:51:46.607+0000 7f20d9535740 -1 osd.4 70 log_to_monitors true 2026-03-09T20:51:47.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:47 vm07.local ceph-mon[112105]: from='client.34282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:47.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:47 vm07.local ceph-mon[112105]: from='client.44229 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:47.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:47 vm07.local ceph-mon[112105]: from='client.34290 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:47.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:47 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3506511094' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:51:47.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:47 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/25637349' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:51:47.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:47 vm07.local ceph-mon[112105]: from='osd.4 [v2:192.168.123.110:6808/2725170540,v1:192.168.123.110:6809/2725170540]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:51:47.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:47 vm07.local ceph-mon[112105]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:51:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:47 vm10.local ceph-mon[103526]: from='client.34282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:47 vm10.local ceph-mon[103526]: from='client.44229 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:47 vm10.local ceph-mon[103526]: from='client.34290 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:47 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3506511094' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:51:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:47 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/25637349' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:51:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:47 vm10.local ceph-mon[103526]: from='osd.4 [v2:192.168.123.110:6808/2725170540,v1:192.168.123.110:6809/2725170540]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:51:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:47 vm10.local ceph-mon[103526]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:51:47.787 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 20:51:47 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[119365]: 2026-03-09T20:51:47.378+0000 7f20d12cf640 -1 osd.4 70 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:51:48.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:48 vm07.local ceph-mon[112105]: from='client.34302 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:48.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:48 vm07.local ceph-mon[112105]: pgmap v129: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 48/285 objects degraded (16.842%) 2026-03-09T20:51:48.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:48 vm07.local ceph-mon[112105]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T20:51:48.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:48 vm07.local ceph-mon[112105]: osdmap e73: 6 total, 5 up, 6 in 2026-03-09T20:51:48.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:48 vm07.local ceph-mon[112105]: from='osd.4 
[v2:192.168.123.110:6808/2725170540,v1:192.168.123.110:6809/2725170540]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:51:48.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:48 vm07.local ceph-mon[112105]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:51:48.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:48 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 48/285 objects degraded (16.842%), 15 pgs degraded (PG_DEGRADED) 2026-03-09T20:51:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:48 vm10.local ceph-mon[103526]: from='client.34302 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:51:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:48 vm10.local ceph-mon[103526]: pgmap v129: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 211 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 48/285 objects degraded (16.842%) 2026-03-09T20:51:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:48 vm10.local ceph-mon[103526]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T20:51:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:48 vm10.local ceph-mon[103526]: osdmap e73: 6 total, 5 up, 6 in 2026-03-09T20:51:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:48 vm10.local ceph-mon[103526]: from='osd.4 [v2:192.168.123.110:6808/2725170540,v1:192.168.123.110:6809/2725170540]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:51:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:48 
vm10.local ceph-mon[103526]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-09T20:51:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:48 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 48/285 objects degraded (16.842%), 15 pgs degraded (PG_DEGRADED)
2026-03-09T20:51:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:49 vm10.local ceph-mon[103526]: Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T20:51:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:49 vm10.local ceph-mon[103526]: osd.4 [v2:192.168.123.110:6808/2725170540,v1:192.168.123.110:6809/2725170540] boot
2026-03-09T20:51:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:49 vm10.local ceph-mon[103526]: osdmap e74: 6 total, 6 up, 6 in
2026-03-09T20:51:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T20:51:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:49 vm10.local ceph-mon[103526]: pgmap v132: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 48/285 objects degraded (16.842%)
2026-03-09T20:51:49.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:49 vm07.local ceph-mon[112105]: Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-09T20:51:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:49 vm07.local ceph-mon[112105]: osd.4 [v2:192.168.123.110:6808/2725170540,v1:192.168.123.110:6809/2725170540] boot
2026-03-09T20:51:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:49 vm07.local ceph-mon[112105]: osdmap e74: 6 total, 6 up, 6 in
2026-03-09T20:51:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T20:51:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:49 vm07.local ceph-mon[112105]: pgmap v132: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 48/285 objects degraded (16.842%)
2026-03-09T20:51:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:50 vm10.local ceph-mon[103526]: osdmap e75: 6 total, 6 up, 6 in
2026-03-09T20:51:50.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:50 vm07.local ceph-mon[112105]: osdmap e75: 6 total, 6 up, 6 in
2026-03-09T20:51:51.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:51 vm10.local ceph-mon[103526]: pgmap v134: 65 pgs: 7 peering, 10 active+undersized, 6 active+undersized+degraded, 42 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 2.2 KiB/s rd, 3 op/s; 16/285 objects degraded (5.614%)
2026-03-09T20:51:51.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:51 vm07.local ceph-mon[112105]: pgmap v134: 65 pgs: 7 peering, 10 active+undersized, 6 active+undersized+degraded, 42 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 2.2 KiB/s rd, 3 op/s; 16/285 objects degraded (5.614%)
2026-03-09T20:51:52.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:52 vm07.local ceph-mon[112105]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 16/285 objects degraded (5.614%), 6 pgs degraded)
2026-03-09T20:51:52.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:52 vm07.local ceph-mon[112105]: Cluster is now healthy
2026-03-09T20:51:53.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:52 vm10.local ceph-mon[103526]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 16/285 objects degraded (5.614%), 6 pgs degraded)
2026-03-09T20:51:53.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:52 vm10.local ceph-mon[103526]: Cluster is now healthy
2026-03-09T20:51:53.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:53 vm07.local ceph-mon[112105]: pgmap v135: 65 pgs: 7 peering, 58 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.3 KiB/s rd, 3 op/s
2026-03-09T20:51:54.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:53 vm10.local ceph-mon[103526]: pgmap v135: 65 pgs: 7 peering, 58 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.3 KiB/s rd, 3 op/s
2026-03-09T20:51:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:55 vm10.local ceph-mon[103526]: pgmap v136: 65 pgs: 65 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.1 KiB/s rd, 2 op/s
2026-03-09T20:51:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:51:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:51:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:55 vm07.local ceph-mon[112105]: pgmap v136: 65 pgs: 65 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.1 KiB/s rd, 2 op/s
2026-03-09T20:51:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:51:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:51:57.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:57 vm10.local ceph-mon[103526]: pgmap v137: 65 pgs: 65 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 2.2 KiB/s rd, 3 op/s
2026-03-09T20:51:57.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:57 vm07.local ceph-mon[112105]: pgmap v137: 65 pgs: 65 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 2.2 KiB/s rd, 3 op/s
2026-03-09T20:51:59.766 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:59 vm10.local ceph-mon[103526]: pgmap v138: 65 pgs: 65 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.8 KiB/s rd, 3 op/s
2026-03-09T20:51:59.766 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T20:51:59.766 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:59 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T20:51:59.766 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:59 vm10.local ceph-mon[103526]: Upgrade: osd.5 is safe to restart
2026-03-09T20:51:59.766 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:51:59.766 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T20:51:59.766 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:51:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:51:59.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:59 vm07.local ceph-mon[112105]: pgmap v138: 65 pgs: 65 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.8 KiB/s rd, 3 op/s
2026-03-09T20:51:59.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T20:51:59.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:59 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-09T20:51:59.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:59 vm07.local ceph-mon[112105]: Upgrade: osd.5 is safe to restart
2026-03-09T20:51:59.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:51:59.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T20:51:59.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:51:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:52:00.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local systemd[1]: Stopping Ceph osd.5 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4...
2026-03-09T20:52:00.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[75905]: 2026-03-09T20:52:00.113+0000 7fcebbe8d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T20:52:00.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[75905]: 2026-03-09T20:52:00.113+0000 7fcebbe8d640 -1 osd.5 75 *** Got signal Terminated ***
2026-03-09T20:52:00.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[75905]: 2026-03-09T20:52:00.113+0000 7fcebbe8d640 -1 osd.5 75 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T20:52:00.805 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:00 vm10.local ceph-mon[103526]: Upgrade: Updating osd.5
2026-03-09T20:52:00.805 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:00 vm10.local ceph-mon[103526]: Deploying daemon osd.5 on vm10
2026-03-09T20:52:00.805 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:00 vm10.local ceph-mon[103526]: osd.5 marked itself down and dead
2026-03-09T20:52:00.805 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:00 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:00.805 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local podman[124672]: 2026-03-09 20:52:00.624078431 +0000 UTC m=+0.523311407 container died e1bd83add3431e17e1a6bb092f977b061094bf6f9427aacf11b7d0e396379198 (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-09T20:52:00.805 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local podman[124672]: 2026-03-09 20:52:00.643677014 +0000 UTC m=+0.542910001 container remove e1bd83add3431e17e1a6bb092f977b061094bf6f9427aacf11b7d0e396379198 (image=quay.ceph.io/ceph-ci/ceph@sha256:5c4977cb71fa562f9eeef885de8392360f108b57ddc9fb513207223fafb4cf40, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=ab47f43c099b2cbae6e21342fe673ce251da54d6, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-09T20:52:00.805 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local bash[124672]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5
2026-03-09T20:52:00.805 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local podman[124737]: 2026-03-09 20:52:00.781692364 +0000 UTC m=+0.018297941 container create 2835551cb2edffabc7e355e85df50bffb736658bda55e408d01746cb6be44a96 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, OSD_FLAVOR=default)
2026-03-09T20:52:00.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:00 vm07.local ceph-mon[112105]: Upgrade: Updating osd.5
2026-03-09T20:52:00.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:00 vm07.local ceph-mon[112105]: Deploying daemon osd.5 on vm10
2026-03-09T20:52:00.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:00 vm07.local ceph-mon[112105]: osd.5 marked itself down and dead
2026-03-09T20:52:00.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:00 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:01.069 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local podman[124737]: 2026-03-09 20:52:00.816258022 +0000 UTC m=+0.052863609 container init 2835551cb2edffabc7e355e85df50bffb736658bda55e408d01746cb6be44a96 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-09T20:52:01.069 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local podman[124737]: 2026-03-09 20:52:00.821879194 +0000 UTC m=+0.058484771 container start 2835551cb2edffabc7e355e85df50bffb736658bda55e408d01746cb6be44a96 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-deactivate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid)
2026-03-09T20:52:01.069 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local podman[124737]: 2026-03-09 20:52:00.822875208 +0000 UTC m=+0.059480785 container attach 2835551cb2edffabc7e355e85df50bffb736658bda55e408d01746cb6be44a96 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, ceph=True)
2026-03-09T20:52:01.069 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local podman[124737]: 2026-03-09 20:52:00.774526812 +0000 UTC m=+0.011132399 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T20:52:01.069 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:00 vm10.local podman[124756]: 2026-03-09 20:52:00.959236171 +0000 UTC m=+0.010177752 container died 2835551cb2edffabc7e355e85df50bffb736658bda55e408d01746cb6be44a96 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-deactivate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
2026-03-09T20:52:01.326 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local podman[124756]: 2026-03-09 20:52:01.09357023 +0000 UTC m=+0.144511811 container remove 2835551cb2edffabc7e355e85df50bffb736658bda55e408d01746cb6be44a96 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-09T20:52:01.327 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.5.service: Deactivated successfully.
2026-03-09T20:52:01.327 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local systemd[1]: Stopped Ceph osd.5 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4.
2026-03-09T20:52:01.327 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local systemd[1]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.5.service: Consumed 47.584s CPU time.
2026-03-09T20:52:01.586 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:01 vm10.local ceph-mon[103526]: pgmap v139: 65 pgs: 65 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:52:01.586 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:01 vm10.local ceph-mon[103526]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T20:52:01.586 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:01 vm10.local ceph-mon[103526]: Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED)
2026-03-09T20:52:01.586 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:01 vm10.local ceph-mon[103526]: osdmap e76: 6 total, 5 up, 6 in
2026-03-09T20:52:01.586 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local systemd[1]: Starting Ceph osd.5 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4...
2026-03-09T20:52:01.586 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local podman[124847]: 2026-03-09 20:52:01.452995334 +0000 UTC m=+0.017529913 container create 0d2168fe22a5722b98e8a4658bb7823f2deb2151eb5c1353dfce68ad822f386e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223)
2026-03-09T20:52:01.586 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local podman[124847]: 2026-03-09 20:52:01.494045128 +0000 UTC m=+0.058579697 container init 0d2168fe22a5722b98e8a4658bb7823f2deb2151eb5c1353dfce68ad822f386e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
2026-03-09T20:52:01.586 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local podman[124847]: 2026-03-09 20:52:01.497431316 +0000 UTC m=+0.061965895 container start 0d2168fe22a5722b98e8a4658bb7823f2deb2151eb5c1353dfce68ad822f386e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-09T20:52:01.586 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local podman[124847]: 2026-03-09 20:52:01.498339166 +0000 UTC m=+0.062873745 container attach 0d2168fe22a5722b98e8a4658bb7823f2deb2151eb5c1353dfce68ad822f386e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.build-date=20260223, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
2026-03-09T20:52:01.586 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local podman[124847]: 2026-03-09 20:52:01.445589261 +0000 UTC m=+0.010123850 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T20:52:01.586 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T20:52:01.586 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local bash[124847]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T20:52:01.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:01 vm07.local ceph-mon[112105]: pgmap v139: 65 pgs: 65 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:52:01.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:01 vm07.local ceph-mon[112105]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-09T20:52:01.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:01 vm07.local ceph-mon[112105]: Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED)
2026-03-09T20:52:01.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:01 vm07.local ceph-mon[112105]: osdmap e76: 6 total, 5 up, 6 in
2026-03-09T20:52:01.995 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T20:52:01.995 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:01 vm10.local bash[124847]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T20:52:02.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-09T20:52:02.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[124847]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-09T20:52:02.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[124847]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T20:52:02.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T20:52:02.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T20:52:02.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[124847]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-09T20:52:02.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-03-09T20:52:02.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[124847]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-03-09T20:52:02.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-66e22c7d-d80a-4230-a705-b165a2b5f801/osd-block-bb0b085e-9ae4-46b4-9158-53e2edb3b952 --path /var/lib/ceph/osd/ceph-5 --no-mon-config
2026-03-09T20:52:02.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[124847]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-66e22c7d-d80a-4230-a705-b165a2b5f801/osd-block-bb0b085e-9ae4-46b4-9158-53e2edb3b952 --path /var/lib/ceph/osd/ceph-5 --no-mon-config
2026-03-09T20:52:02.787 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: Running command: /usr/bin/ln -snf /dev/ceph-66e22c7d-d80a-4230-a705-b165a2b5f801/osd-block-bb0b085e-9ae4-46b4-9158-53e2edb3b952 /var/lib/ceph/osd/ceph-5/block
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[124847]: Running command: /usr/bin/ln -snf /dev/ceph-66e22c7d-d80a-4230-a705-b165a2b5f801/osd-block-bb0b085e-9ae4-46b4-9158-53e2edb3b952 /var/lib/ceph/osd/ceph-5/block
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[124847]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[124847]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[124847]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate[124859]: --> ceph-volume lvm activate successful for osd ID: 5
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[124847]: --> ceph-volume lvm activate successful for osd ID: 5
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local podman[124847]: 2026-03-09 20:52:02.528469559 +0000 UTC m=+1.093004138 container died 0d2168fe22a5722b98e8a4658bb7823f2deb2151eb5c1353dfce68ad822f386e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local podman[124847]: 2026-03-09 20:52:02.544188291 +0000 UTC m=+1.108722870 container remove 0d2168fe22a5722b98e8a4658bb7823f2deb2151eb5c1353dfce68ad822f386e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-activate, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local podman[125118]: 2026-03-09 20:52:02.644594025 +0000 UTC m=+0.017698127 container create 7489b8a43e7f5b9d1f86178b6701dbeb684ee31dde054546b00e8f6dc3552838 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3)
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local podman[125118]: 2026-03-09 20:52:02.672031982 +0000 UTC m=+0.045136084 container init 7489b8a43e7f5b9d1f86178b6701dbeb684ee31dde054546b00e8f6dc3552838 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223)
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local podman[125118]: 2026-03-09 20:52:02.674851198 +0000 UTC m=+0.047955301 container start 7489b8a43e7f5b9d1f86178b6701dbeb684ee31dde054546b00e8f6dc3552838 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local bash[125118]: 7489b8a43e7f5b9d1f86178b6701dbeb684ee31dde054546b00e8f6dc3552838
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local podman[125118]: 2026-03-09 20:52:02.635587297 +0000 UTC m=+0.008691399 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T20:52:02.788 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:02 vm10.local systemd[1]: Started Ceph osd.5 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4.
2026-03-09T20:52:02.788 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:02 vm10.local ceph-mon[103526]: osdmap e77: 6 total, 5 up, 6 in 2026-03-09T20:52:02.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:02 vm07.local ceph-mon[112105]: osdmap e77: 6 total, 5 up, 6 in 2026-03-09T20:52:03.265 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:03 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T20:52:03.262+0000 7fe99fc37740 -1 Falling back to public interface 2026-03-09T20:52:03.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:03 vm07.local ceph-mon[112105]: pgmap v142: 65 pgs: 7 active+undersized, 7 stale+active+clean, 3 active+undersized+degraded, 48 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.9 KiB/s rd, 3 op/s; 9/285 objects degraded (3.158%) 2026-03-09T20:52:03.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:03 vm07.local ceph-mon[112105]: Health check failed: Degraded data redundancy: 9/285 objects degraded (3.158%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T20:52:03.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:03 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:03.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:03 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:03.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:03 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:03 vm10.local ceph-mon[103526]: pgmap v142: 65 pgs: 7 active+undersized, 7 stale+active+clean, 3 active+undersized+degraded, 48 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.9 KiB/s rd, 3 op/s; 9/285 objects degraded (3.158%) 
2026-03-09T20:52:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:03 vm10.local ceph-mon[103526]: Health check failed: Degraded data redundancy: 9/285 objects degraded (3.158%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T20:52:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:03 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:03 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:03 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:05.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:04 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:04 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:04 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:05.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:04 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:05.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:04 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:05.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:04 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:05.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:04 
vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:05.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:04 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: pgmap v143: 65 pgs: 8 active+undersized, 6 stale+active+clean, 5 active+undersized+degraded, 46 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 639 B/s rd, 1 op/s; 12/285 objects degraded (4.211%) 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": 
"osd.1"}]': finished 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T20:52:06.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: pgmap v143: 65 pgs: 8 active+undersized, 6 stale+active+clean, 5 active+undersized+degraded, 46 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 639 B/s rd, 1 op/s; 12/285 objects degraded (4.211%) 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 
2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T20:52:06.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:06 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T20:52:07.171 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:06 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T20:52:06.820+0000 7fe99fc37740 -1 osd.5 75 log_to_monitors true 2026-03-09T20:52:07.536 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:07 vm07.local ceph-mon[112105]: Upgrade: Setting container_image for all osd 2026-03-09T20:52:07.536 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:07 vm07.local ceph-mon[112105]: Upgrade: Setting require_osd_release to 19 squid 2026-03-09T20:52:07.536 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:07 vm07.local ceph-mon[112105]: Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T20:52:07.536 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:07 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T20:52:07.536 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:07 vm07.local ceph-mon[112105]: osdmap e78: 6 total, 5 up, 6 in 2026-03-09T20:52:07.536 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:07 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:07.536 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:07 vm07.local ceph-mon[112105]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]: dispatch 2026-03-09T20:52:07.536 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:07 vm07.local ceph-mon[112105]: from='osd.5 [v2:192.168.123.110:6816/3404455726,v1:192.168.123.110:6817/3404455726]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T20:52:07.536 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:07 vm07.local ceph-mon[112105]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T20:52:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:07 vm10.local ceph-mon[103526]: Upgrade: Setting container_image for all osd 2026-03-09T20:52:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:07 vm10.local ceph-mon[103526]: Upgrade: Setting require_osd_release to 19 squid 2026-03-09T20:52:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:07 vm10.local ceph-mon[103526]: Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T20:52:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:07 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T20:52:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:07 vm10.local ceph-mon[103526]: osdmap e78: 6 total, 5 up, 6 in 2026-03-09T20:52:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:07 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:07 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 
cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]: dispatch 2026-03-09T20:52:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:07 vm10.local ceph-mon[103526]: from='osd.5 [v2:192.168.123.110:6816/3404455726,v1:192.168.123.110:6817/3404455726]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T20:52:07.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:07 vm10.local ceph-mon[103526]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T20:52:08.445 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: pgmap v144: 65 pgs: 14 active+undersized, 14 active+undersized+degraded, 37 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 45/285 objects degraded (15.789%) 2026-03-09T20:52:08.445 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: Upgrade: Disabling standby-replay for filesystem cephfs 2026-03-09T20:52:08.445 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: osdmap e79: 6 total, 5 up, 6 in 2026-03-09T20:52:08.445 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY) 2026-03-09T20:52:08.445 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]': finished 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local 
ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 45/285 objects degraded (15.789%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:08.446 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:08.446 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]: dispatch 2026-03-09T20:52:08.537 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T20:52:08.196+0000 7fe9971d0640 -1 osd.5 75 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: pgmap v144: 65 pgs: 14 active+undersized, 14 active+undersized+degraded, 37 active+clean; 211 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 45/285 objects degraded (15.789%) 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: Upgrade: Disabling standby-replay for filesystem cephfs 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: osdmap e79: 6 total, 5 up, 6 in 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY) 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": 
"allow_standby_replay", "val": "0"}]': finished 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 45/285 objects degraded (15.789%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: Upgrade: Scaling down filesystem cephfs 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: osdmap e80: 6 total, 5 up, 6 in 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='osd.5 [v2:192.168.123.110:6816/3404455726,v1:192.168.123.110:6817/3404455726]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 
5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: Health check cleared: MDS_INSUFFICIENT_STANDBY (was: insufficient standby MDS daemons available) 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]': finished 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.110:6826/2699915815,v1:192.168.123.110:6827/2699915815] up:boot 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.107:6828/3625324292,v1:192.168.123.107:6829/3625324292] up:boot 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: stopping daemon mds.cephfs.vm10.qpltwp 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:stopping} 2 
up:standby 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:09.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: Upgrade: Scaling down filesystem cephfs 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: osdmap e80: 6 total, 5 up, 6 in 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='osd.5 [v2:192.168.123.110:6816/3404455726,v1:192.168.123.110:6817/3404455726]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: Health check cleared: MDS_INSUFFICIENT_STANDBY (was: insufficient standby MDS daemons available) 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]': finished 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: mds.? [v2:192.168.123.110:6826/2699915815,v1:192.168.123.110:6827/2699915815] up:boot 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: mds.? 
[v2:192.168.123.107:6828/3625324292,v1:192.168.123.107:6829/3625324292] up:boot 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: stopping daemon mds.cephfs.vm10.qpltwp 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:active} 2 up:standby 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: fsmap cephfs:2 {0=cephfs.vm07.rovdbp=up:active,1=cephfs.vm10.qpltwp=up:stopping} 2 up:standby 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:09.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:09.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:09.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:09.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:09.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:10.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:10 vm10.local ceph-mon[103526]: pgmap v148: 65 pgs: 14 active+undersized, 14 active+undersized+degraded, 37 active+clean; 211 MiB data, 950 MiB used, 119 GiB / 120 GiB avail; 45/285 objects degraded (15.789%) 2026-03-09T20:52:10.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:10 vm10.local ceph-mon[103526]: Upgrade: Waiting for fs cephfs to scale down to reach 1 MDS 2026-03-09T20:52:10.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:10 vm10.local ceph-mon[103526]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T20:52:10.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:10 vm10.local ceph-mon[103526]: osd.5 [v2:192.168.123.110:6816/3404455726,v1:192.168.123.110:6817/3404455726] boot 
2026-03-09T20:52:10.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:10 vm10.local ceph-mon[103526]: osdmap e81: 6 total, 6 up, 6 in 2026-03-09T20:52:10.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:52:10.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:10.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:10.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:10 vm07.local ceph-mon[112105]: pgmap v148: 65 pgs: 14 active+undersized, 14 active+undersized+degraded, 37 active+clean; 211 MiB data, 950 MiB used, 119 GiB / 120 GiB avail; 45/285 objects degraded (15.789%) 2026-03-09T20:52:10.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:10 vm07.local ceph-mon[112105]: Upgrade: Waiting for fs cephfs to scale down to reach 1 MDS 2026-03-09T20:52:10.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:10 vm07.local ceph-mon[112105]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T20:52:10.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:10 vm07.local ceph-mon[112105]: osd.5 [v2:192.168.123.110:6816/3404455726,v1:192.168.123.110:6817/3404455726] boot 2026-03-09T20:52:10.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:10 vm07.local ceph-mon[112105]: osdmap e81: 6 total, 6 up, 6 in 2026-03-09T20:52:10.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 
2026-03-09T20:52:10.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:10.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:11.446 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:11 vm10.local ceph-mon[103526]: osdmap e82: 6 total, 6 up, 6 in 2026-03-09T20:52:11.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:11 vm07.local ceph-mon[112105]: osdmap e82: 6 total, 6 up, 6 in 2026-03-09T20:52:12.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:12 vm10.local ceph-mon[103526]: pgmap v151: 65 pgs: 1 active, 2 peering, 10 active+undersized, 7 active+undersized+degraded, 45 active+clean; 211 MiB data, 950 MiB used, 119 GiB / 120 GiB avail; 825 B/s rd, 2 op/s; 20/285 objects degraded (7.018%) 2026-03-09T20:52:12.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:12 vm07.local ceph-mon[112105]: pgmap v151: 65 pgs: 1 active, 2 peering, 10 active+undersized, 7 active+undersized+degraded, 45 active+clean; 211 MiB data, 950 MiB used, 119 GiB / 120 GiB avail; 825 B/s rd, 2 op/s; 20/285 objects degraded (7.018%) 2026-03-09T20:52:13.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:13 vm10.local ceph-mon[103526]: Health check update: Degraded data redundancy: 20/285 objects degraded (7.018%), 7 pgs degraded (PG_DEGRADED) 2026-03-09T20:52:13.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:13 vm07.local ceph-mon[112105]: Health check update: Degraded data redundancy: 20/285 objects degraded (7.018%), 7 pgs degraded (PG_DEGRADED) 2026-03-09T20:52:14.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:14 vm10.local ceph-mon[103526]: pgmap v152: 65 pgs: 1 active, 2 peering, 8 active+undersized, 5 
active+undersized+degraded, 49 active+clean; 211 MiB data, 950 MiB used, 119 GiB / 120 GiB avail; 575 B/s rd, 2 op/s; 15/285 objects degraded (5.263%) 2026-03-09T20:52:14.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:14 vm07.local ceph-mon[112105]: pgmap v152: 65 pgs: 1 active, 2 peering, 8 active+undersized, 5 active+undersized+degraded, 49 active+clean; 211 MiB data, 950 MiB used, 119 GiB / 120 GiB avail; 575 B/s rd, 2 op/s; 15/285 objects degraded (5.263%) 2026-03-09T20:52:15.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:15 vm10.local ceph-mon[103526]: pgmap v153: 65 pgs: 1 active, 64 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 0 B/s rd, 485 B/s wr, 0 op/s 2026-03-09T20:52:15.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:15 vm07.local ceph-mon[112105]: pgmap v153: 65 pgs: 1 active, 64 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 0 B/s rd, 485 B/s wr, 0 op/s 2026-03-09T20:52:16.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:16 vm10.local ceph-mon[103526]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 15/285 objects degraded (5.263%), 5 pgs degraded) 2026-03-09T20:52:16.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:16 vm10.local ceph-mon[103526]: Cluster is now healthy 2026-03-09T20:52:16.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:16 vm07.local ceph-mon[112105]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 15/285 objects degraded (5.263%), 5 pgs degraded) 2026-03-09T20:52:16.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:16 vm07.local ceph-mon[112105]: Cluster is now healthy 2026-03-09T20:52:16.658 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.656+0000 7f5d2e970640 1 -- 192.168.123.107:0/3668693149 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d28075ba0 msgr2=0x7f5d28075fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:16.658 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.656+0000 7f5d2e970640 1 --2- 192.168.123.107:0/3668693149 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d28075ba0 0x7f5d28075fa0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f5d18009a00 tx=0x7f5d1802f290 comp rx=0 tx=0).stop 2026-03-09T20:52:16.658 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.657+0000 7f5d2e970640 1 -- 192.168.123.107:0/3668693149 shutdown_connections 2026-03-09T20:52:16.658 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.657+0000 7f5d2e970640 1 --2- 192.168.123.107:0/3668693149 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d28076df0 0x7f5d28077250 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.658 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.657+0000 7f5d2e970640 1 --2- 192.168.123.107:0/3668693149 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d28075ba0 0x7f5d28075fa0 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.658 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.657+0000 7f5d2e970640 1 -- 192.168.123.107:0/3668693149 >> 192.168.123.107:0/3668693149 conn(0x7f5d280fe250 msgr2=0x7f5d28100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:16.658 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.657+0000 7f5d2e970640 1 -- 192.168.123.107:0/3668693149 shutdown_connections 2026-03-09T20:52:16.658 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.657+0000 7f5d2e970640 1 -- 192.168.123.107:0/3668693149 wait complete. 
2026-03-09T20:52:16.659 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.658+0000 7f5d2e970640 1 Processor -- start 2026-03-09T20:52:16.659 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.658+0000 7f5d2e970640 1 -- start start 2026-03-09T20:52:16.659 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.658+0000 7f5d2e970640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d28075ba0 0x7f5d2819e900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:16.659 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.658+0000 7f5d2e970640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d28076df0 0x7f5d2819ee40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:16.659 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.658+0000 7f5d2e970640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5d2819f410 con 0x7f5d28075ba0 2026-03-09T20:52:16.659 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.658+0000 7f5d2e970640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5d2819f580 con 0x7f5d28076df0 2026-03-09T20:52:16.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.659+0000 7f5d27fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d28075ba0 0x7f5d2819e900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:16.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.659+0000 7f5d27fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d28075ba0 0x7f5d2819e900 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:58576/0 (socket says 192.168.123.107:58576) 2026-03-09T20:52:16.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.659+0000 7f5d27fff640 1 -- 192.168.123.107:0/1803915181 learned_addr learned my addr 192.168.123.107:0/1803915181 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:16.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.659+0000 7f5d277fe640 1 --2- 192.168.123.107:0/1803915181 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d28076df0 0x7f5d2819ee40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:16.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.659+0000 7f5d27fff640 1 -- 192.168.123.107:0/1803915181 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d28076df0 msgr2=0x7f5d2819ee40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:16.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.659+0000 7f5d27fff640 1 --2- 192.168.123.107:0/1803915181 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d28076df0 0x7f5d2819ee40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.660 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.660+0000 7f5d27fff640 1 -- 192.168.123.107:0/1803915181 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5d18009660 con 0x7f5d28075ba0 2026-03-09T20:52:16.661 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.660+0000 7f5d27fff640 1 --2- 192.168.123.107:0/1803915181 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d28075ba0 0x7f5d2819e900 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f5d18005bb0 tx=0x7f5d18004380 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:52:16.661 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.660+0000 7f5d257fa640 1 -- 192.168.123.107:0/1803915181 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5d1802faf0 con 0x7f5d28075ba0 2026-03-09T20:52:16.661 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.660+0000 7f5d257fa640 1 -- 192.168.123.107:0/1803915181 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5d1802fc50 con 0x7f5d28075ba0 2026-03-09T20:52:16.661 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.660+0000 7f5d257fa640 1 -- 192.168.123.107:0/1803915181 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5d18041a00 con 0x7f5d28075ba0 2026-03-09T20:52:16.662 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.660+0000 7f5d2e970640 1 -- 192.168.123.107:0/1803915181 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5d281a3fc0 con 0x7f5d28075ba0 2026-03-09T20:52:16.662 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.660+0000 7f5d2e970640 1 -- 192.168.123.107:0/1803915181 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5d281a4460 con 0x7f5d28075ba0 2026-03-09T20:52:16.663 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.662+0000 7f5d257fa640 1 -- 192.168.123.107:0/1803915181 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f5d1803f070 con 0x7f5d28075ba0 2026-03-09T20:52:16.663 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.662+0000 7f5d2e970640 1 -- 192.168.123.107:0/1803915181 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5cec005350 con 0x7f5d28075ba0 2026-03-09T20:52:16.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.662+0000 
7f5d257fa640 1 --2- 192.168.123.107:0/1803915181 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cfc077890 0x7f5cfc079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:16.664 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.663+0000 7f5d257fa640 1 -- 192.168.123.107:0/1803915181 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(82..82 src has 1..82) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f5d180be330 con 0x7f5d28075ba0 2026-03-09T20:52:16.666 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.665+0000 7f5d277fe640 1 --2- 192.168.123.107:0/1803915181 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cfc077890 0x7f5cfc079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:16.667 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.666+0000 7f5d277fe640 1 --2- 192.168.123.107:0/1803915181 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cfc077890 0x7f5cfc079d50 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f5d2819fe20 tx=0x7f5d14009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:16.667 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.666+0000 7f5d257fa640 1 -- 192.168.123.107:0/1803915181 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f5d18087a90 con 0x7f5d28075ba0 2026-03-09T20:52:16.779 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.776+0000 7f5d2e970640 1 -- 192.168.123.107:0/1803915181 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f5cec002bf0 con 
0x7f5cfc077890 2026-03-09T20:52:16.779 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.777+0000 7f5d257fa640 1 -- 192.168.123.107:0/1803915181 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7f5cec002bf0 con 0x7f5cfc077890 2026-03-09T20:52:16.782 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.781+0000 7f5d2e970640 1 -- 192.168.123.107:0/1803915181 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cfc077890 msgr2=0x7f5cfc079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:16.782 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.781+0000 7f5d2e970640 1 --2- 192.168.123.107:0/1803915181 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cfc077890 0x7f5cfc079d50 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f5d2819fe20 tx=0x7f5d14009290 comp rx=0 tx=0).stop 2026-03-09T20:52:16.782 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.781+0000 7f5d2e970640 1 -- 192.168.123.107:0/1803915181 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d28075ba0 msgr2=0x7f5d2819e900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:16.782 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.781+0000 7f5d2e970640 1 --2- 192.168.123.107:0/1803915181 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d28075ba0 0x7f5d2819e900 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f5d18005bb0 tx=0x7f5d18004380 comp rx=0 tx=0).stop 2026-03-09T20:52:16.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.782+0000 7f5d2e970640 1 -- 192.168.123.107:0/1803915181 shutdown_connections 2026-03-09T20:52:16.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.782+0000 7f5d2e970640 1 --2- 192.168.123.107:0/1803915181 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cfc077890 0x7f5cfc079d50 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.782+0000 7f5d2e970640 1 --2- 192.168.123.107:0/1803915181 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d28076df0 0x7f5d2819ee40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.782+0000 7f5d2e970640 1 --2- 192.168.123.107:0/1803915181 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d28075ba0 0x7f5d2819e900 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.782+0000 7f5d2e970640 1 -- 192.168.123.107:0/1803915181 >> 192.168.123.107:0/1803915181 conn(0x7f5d280fe250 msgr2=0x7f5d280ffa30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:16.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.783+0000 7f5d2e970640 1 -- 192.168.123.107:0/1803915181 shutdown_connections 2026-03-09T20:52:16.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.783+0000 7f5d2e970640 1 -- 192.168.123.107:0/1803915181 wait complete. 
2026-03-09T20:52:16.793 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:52:16.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.843+0000 7fc86df2d640 1 -- 192.168.123.107:0/3536354172 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc8681007f0 msgr2=0x7fc868100bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:16.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.843+0000 7fc86df2d640 1 --2- 192.168.123.107:0/3536354172 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc8681007f0 0x7fc868100bf0 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fc8540099b0 tx=0x7fc85402f220 comp rx=0 tx=0).stop 2026-03-09T20:52:16.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.844+0000 7fc86df2d640 1 -- 192.168.123.107:0/3536354172 shutdown_connections 2026-03-09T20:52:16.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.844+0000 7fc86df2d640 1 --2- 192.168.123.107:0/3536354172 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc8681019f0 0x7fc868101e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.844+0000 7fc86df2d640 1 --2- 192.168.123.107:0/3536354172 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc8681007f0 0x7fc868100bf0 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.844+0000 7fc86df2d640 1 -- 192.168.123.107:0/3536354172 >> 192.168.123.107:0/3536354172 conn(0x7fc8680fbf80 msgr2=0x7fc8680fe3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:16.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.844+0000 7fc86df2d640 1 -- 192.168.123.107:0/3536354172 shutdown_connections 2026-03-09T20:52:16.845 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.844+0000 7fc86df2d640 1 -- 192.168.123.107:0/3536354172 wait complete. 2026-03-09T20:52:16.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.845+0000 7fc86df2d640 1 Processor -- start 2026-03-09T20:52:16.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.845+0000 7fc86df2d640 1 -- start start 2026-03-09T20:52:16.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.845+0000 7fc86df2d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc8681007f0 0x7fc868074ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:16.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.845+0000 7fc86df2d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc8681019f0 0x7fc868073560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:16.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.845+0000 7fc86df2d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc868075400 con 0x7fc8681007f0 2026-03-09T20:52:16.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.845+0000 7fc86df2d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc868073aa0 con 0x7fc8681019f0 2026-03-09T20:52:16.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.845+0000 7fc8677fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc8681007f0 0x7fc868074ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:16.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.845+0000 7fc8677fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc8681007f0 0x7fc868074ec0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:58594/0 (socket says 192.168.123.107:58594) 2026-03-09T20:52:16.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.845+0000 7fc8677fe640 1 -- 192.168.123.107:0/516625938 learned_addr learned my addr 192.168.123.107:0/516625938 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:16.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.846+0000 7fc8677fe640 1 -- 192.168.123.107:0/516625938 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc8681019f0 msgr2=0x7fc868073560 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:52:16.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.846+0000 7fc8677fe640 1 --2- 192.168.123.107:0/516625938 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc8681019f0 0x7fc868073560 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.848 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.846+0000 7fc8677fe640 1 -- 192.168.123.107:0/516625938 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc854009660 con 0x7fc8681007f0 2026-03-09T20:52:16.848 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.846+0000 7fc8677fe640 1 --2- 192.168.123.107:0/516625938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc8681007f0 0x7fc868074ec0 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fc854002910 tx=0x7fc854002940 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:16.848 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.847+0000 7fc864ff9640 1 -- 192.168.123.107:0/516625938 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc85403d070 con 0x7fc8681007f0 
2026-03-09T20:52:16.848 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.847+0000 7fc864ff9640 1 -- 192.168.123.107:0/516625938 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc85402fd50 con 0x7fc8681007f0 2026-03-09T20:52:16.849 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.848+0000 7fc864ff9640 1 -- 192.168.123.107:0/516625938 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc854041aa0 con 0x7fc8681007f0 2026-03-09T20:52:16.850 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.848+0000 7fc86df2d640 1 -- 192.168.123.107:0/516625938 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc868073d20 con 0x7fc8681007f0 2026-03-09T20:52:16.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.848+0000 7fc86df2d640 1 -- 192.168.123.107:0/516625938 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc8680741e0 con 0x7fc8681007f0 2026-03-09T20:52:16.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.849+0000 7fc86df2d640 1 -- 192.168.123.107:0/516625938 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc830005350 con 0x7fc8681007f0 2026-03-09T20:52:16.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.850+0000 7fc864ff9640 1 -- 192.168.123.107:0/516625938 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc854038730 con 0x7fc8681007f0 2026-03-09T20:52:16.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.850+0000 7fc864ff9640 1 --2- 192.168.123.107:0/516625938 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc83c077720 0x7fc83c079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:16.853 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.850+0000 7fc864ff9640 1 -- 192.168.123.107:0/516625938 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(82..82 src has 1..82) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fc8540beba0 con 0x7fc8681007f0 2026-03-09T20:52:16.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.852+0000 7fc866ffd640 1 --2- 192.168.123.107:0/516625938 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc83c077720 0x7fc83c079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:16.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.852+0000 7fc866ffd640 1 --2- 192.168.123.107:0/516625938 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc83c077720 0x7fc83c079be0 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7fc858005fd0 tx=0x7fc858005d00 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:16.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.852+0000 7fc864ff9640 1 -- 192.168.123.107:0/516625938 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc8540872d0 con 0x7fc8681007f0 2026-03-09T20:52:16.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.952+0000 7fc86df2d640 1 -- 192.168.123.107:0/516625938 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fc830002bf0 con 0x7fc83c077720 2026-03-09T20:52:16.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.955+0000 7fc864ff9640 1 -- 192.168.123.107:0/516625938 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 
0x7fc830002bf0 con 0x7fc83c077720 2026-03-09T20:52:16.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.958+0000 7fc86df2d640 1 -- 192.168.123.107:0/516625938 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc83c077720 msgr2=0x7fc83c079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:16.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.958+0000 7fc86df2d640 1 --2- 192.168.123.107:0/516625938 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc83c077720 0x7fc83c079be0 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7fc858005fd0 tx=0x7fc858005d00 comp rx=0 tx=0).stop 2026-03-09T20:52:16.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.958+0000 7fc86df2d640 1 -- 192.168.123.107:0/516625938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc8681007f0 msgr2=0x7fc868074ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:16.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.958+0000 7fc86df2d640 1 --2- 192.168.123.107:0/516625938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc8681007f0 0x7fc868074ec0 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fc854002910 tx=0x7fc854002940 comp rx=0 tx=0).stop 2026-03-09T20:52:16.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.958+0000 7fc86df2d640 1 -- 192.168.123.107:0/516625938 shutdown_connections 2026-03-09T20:52:16.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.958+0000 7fc86df2d640 1 --2- 192.168.123.107:0/516625938 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc83c077720 0x7fc83c079be0 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.958+0000 7fc86df2d640 1 --2- 192.168.123.107:0/516625938 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc8681019f0 0x7fc868073560 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.958+0000 7fc86df2d640 1 --2- 192.168.123.107:0/516625938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc8681007f0 0x7fc868074ec0 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:16.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.959+0000 7fc86df2d640 1 -- 192.168.123.107:0/516625938 >> 192.168.123.107:0/516625938 conn(0x7fc8680fbf80 msgr2=0x7fc8680fdaf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:16.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.959+0000 7fc86df2d640 1 -- 192.168.123.107:0/516625938 shutdown_connections 2026-03-09T20:52:16.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:16.959+0000 7fc86df2d640 1 -- 192.168.123.107:0/516625938 wait complete. 
2026-03-09T20:52:17.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.014+0000 7fc38ad23640 1 -- 192.168.123.107:0/1203678417 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3841028b0 msgr2=0x7fc384102cb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.014+0000 7fc38ad23640 1 --2- 192.168.123.107:0/1203678417 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3841028b0 0x7fc384102cb0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7fc36c0099b0 tx=0x7fc36c02f220 comp rx=0 tx=0).stop 2026-03-09T20:52:17.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.014+0000 7fc38ad23640 1 -- 192.168.123.107:0/1203678417 shutdown_connections 2026-03-09T20:52:17.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.014+0000 7fc38ad23640 1 --2- 192.168.123.107:0/1203678417 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc384103ab0 0x7fc384103f30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.014+0000 7fc38ad23640 1 --2- 192.168.123.107:0/1203678417 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc3841028b0 0x7fc384102cb0 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.014+0000 7fc38ad23640 1 -- 192.168.123.107:0/1203678417 >> 192.168.123.107:0/1203678417 conn(0x7fc3840fe060 msgr2=0x7fc384100480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.015+0000 7fc38ad23640 1 -- 192.168.123.107:0/1203678417 shutdown_connections 2026-03-09T20:52:17.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.015+0000 7fc38ad23640 1 -- 192.168.123.107:0/1203678417 
wait complete. 2026-03-09T20:52:17.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.015+0000 7fc38ad23640 1 Processor -- start 2026-03-09T20:52:17.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.015+0000 7fc38ad23640 1 -- start start 2026-03-09T20:52:17.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.015+0000 7fc38ad23640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3841028b0 0x7fc38419a280 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc38ad23640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc384103ab0 0x7fc38419a7c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc38ad23640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc38419ad00 con 0x7fc384103ab0 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc38ad23640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc38419ae70 con 0x7fc3841028b0 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc389520640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc384103ab0 0x7fc38419a7c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc389d21640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3841028b0 0x7fc38419a280 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc389d21640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3841028b0 0x7fc38419a280 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:32792/0 (socket says 192.168.123.107:32792) 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc389d21640 1 -- 192.168.123.107:0/1160584650 learned_addr learned my addr 192.168.123.107:0/1160584650 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc389d21640 1 -- 192.168.123.107:0/1160584650 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc384103ab0 msgr2=0x7fc38419a7c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc389d21640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc384103ab0 0x7fc38419a7c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc389d21640 1 -- 192.168.123.107:0/1160584650 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc36c009660 con 0x7fc3841028b0 2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.016+0000 7fc389520640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc384103ab0 0x7fc38419a7c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T20:52:17.017 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.017+0000 7fc389d21640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3841028b0 0x7fc38419a280 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fc36c02f730 tx=0x7fc36c002980 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:17.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.017+0000 7fc37affd640 1 -- 192.168.123.107:0/1160584650 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc36c03d070 con 0x7fc3841028b0 2026-03-09T20:52:17.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.017+0000 7fc38ad23640 1 -- 192.168.123.107:0/1160584650 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc38419f8f0 con 0x7fc3841028b0 2026-03-09T20:52:17.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.017+0000 7fc38ad23640 1 -- 192.168.123.107:0/1160584650 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc38419fd90 con 0x7fc3841028b0 2026-03-09T20:52:17.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.018+0000 7fc37affd640 1 -- 192.168.123.107:0/1160584650 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc36c02fd50 con 0x7fc3841028b0 2026-03-09T20:52:17.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.018+0000 7fc37affd640 1 -- 192.168.123.107:0/1160584650 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc36c0419f0 con 0x7fc3841028b0 2026-03-09T20:52:17.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.019+0000 7fc37affd640 1 -- 192.168.123.107:0/1160584650 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc36c038730 con 
0x7fc3841028b0 2026-03-09T20:52:17.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.019+0000 7fc38ad23640 1 -- 192.168.123.107:0/1160584650 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc38410b4e0 con 0x7fc3841028b0 2026-03-09T20:52:17.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.019+0000 7fc37affd640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc35c077890 0x7fc35c079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.019+0000 7fc37affd640 1 -- 192.168.123.107:0/1160584650 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(82..82 src has 1..82) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fc36c0bea60 con 0x7fc3841028b0 2026-03-09T20:52:17.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.020+0000 7fc389520640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc35c077890 0x7fc35c079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.020+0000 7fc389520640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc35c077890 0x7fc35c079d50 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fc38419b7a0 tx=0x7fc37400a400 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:17.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.022+0000 7fc37affd640 1 -- 192.168.123.107:0/1160584650 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) 
v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc36c087240 con 0x7fc3841028b0 2026-03-09T20:52:17.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.119+0000 7fc38ad23640 1 -- 192.168.123.107:0/1160584650 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fc384107f50 con 0x7fc35c077890 2026-03-09T20:52:17.125 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.124+0000 7fc37affd640 1 -- 192.168.123.107:0/1160584650 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7fc384107f50 con 0x7fc35c077890 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (8m) 80s ago 9m 43.7M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (9m) 80s ago 9m 9.82M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (8m) 13s ago 8m 10.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (3m) 80s ago 9m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 406c9c54f34a 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (3m) 13s ago 8m 7856k - 19.2.3-678-ge911bdeb 654f31e6858e 30eaebf5d733 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (8m) 80s ago 9m 160M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (7m) 80s ago 7m 30.4M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 
2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (7m) 80s ago 7m 176M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 3dd0b4a28f35 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (7m) 13s ago 7m 97.8M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ed740ceed51a 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (7m) 13s ago 7m 28.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (4m) 80s ago 9m 615M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (4m) 13s ago 8m 491M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (4m) 80s ago 9m 58.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (3m) 13s ago 8m 50.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4428cf7f0607 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (9m) 80s ago 9m 15.5M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (8m) 13s ago 8m 15.5M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (3m) 80s ago 8m 232M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1da9d2cdbdc3 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (2m) 80s ago 8m 171M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 95f518bf664f 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (81s) 80s ago 8m 11.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 
0d3aa63353bb 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (59s) 13s ago 7m 157M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8d2b453e9e2 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (37s) 13s ago 7m 118M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d0231a0cf2be 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (14s) 13s ago 7m 13.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7489b8a43e7f 2026-03-09T20:52:17.126 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (4m) 80s ago 9m 49.8M - 2.43.0 a07b618ecd1d 3f9c07cd3fe3 2026-03-09T20:52:17.128 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.127+0000 7fc38ad23640 1 -- 192.168.123.107:0/1160584650 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc35c077890 msgr2=0x7fc35c079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.128 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.127+0000 7fc38ad23640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc35c077890 0x7fc35c079d50 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fc38419b7a0 tx=0x7fc37400a400 comp rx=0 tx=0).stop 2026-03-09T20:52:17.128 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.128+0000 7fc38ad23640 1 -- 192.168.123.107:0/1160584650 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3841028b0 msgr2=0x7fc38419a280 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.128 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.128+0000 7fc38ad23640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3841028b0 0x7fc38419a280 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fc36c02f730 tx=0x7fc36c002980 comp rx=0 tx=0).stop 2026-03-09T20:52:17.129 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.128+0000 7fc38ad23640 1 -- 192.168.123.107:0/1160584650 shutdown_connections 2026-03-09T20:52:17.129 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.128+0000 7fc38ad23640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc35c077890 0x7fc35c079d50 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.129 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.128+0000 7fc38ad23640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc384103ab0 0x7fc38419a7c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.129 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.128+0000 7fc38ad23640 1 --2- 192.168.123.107:0/1160584650 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc3841028b0 0x7fc38419a280 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.129 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.128+0000 7fc38ad23640 1 -- 192.168.123.107:0/1160584650 >> 192.168.123.107:0/1160584650 conn(0x7fc3840fe060 msgr2=0x7fc3840ff900 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.129 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.128+0000 7fc38ad23640 1 -- 192.168.123.107:0/1160584650 shutdown_connections 2026-03-09T20:52:17.129 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.128+0000 7fc38ad23640 1 -- 192.168.123.107:0/1160584650 wait complete. 
2026-03-09T20:52:17.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.184+0000 7f5d03fff640 1 -- 192.168.123.107:0/1617274787 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d04103ab0 msgr2=0x7f5d04103f30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.184+0000 7f5d03fff640 1 --2- 192.168.123.107:0/1617274787 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d04103ab0 0x7f5d04103f30 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7f5cf8009a00 tx=0x7f5cf802f280 comp rx=0 tx=0).stop 2026-03-09T20:52:17.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.184+0000 7f5d03fff640 1 -- 192.168.123.107:0/1617274787 shutdown_connections 2026-03-09T20:52:17.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.184+0000 7f5d03fff640 1 --2- 192.168.123.107:0/1617274787 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d04103ab0 0x7f5d04103f30 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.184+0000 7f5d03fff640 1 --2- 192.168.123.107:0/1617274787 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d041028b0 0x7f5d04102cb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.184+0000 7f5d03fff640 1 -- 192.168.123.107:0/1617274787 >> 192.168.123.107:0/1617274787 conn(0x7f5d040fe060 msgr2=0x7f5d04100480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.184+0000 7f5d03fff640 1 -- 192.168.123.107:0/1617274787 shutdown_connections 2026-03-09T20:52:17.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.184+0000 7f5d03fff640 1 -- 192.168.123.107:0/1617274787 
wait complete. 2026-03-09T20:52:17.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.185+0000 7f5d03fff640 1 Processor -- start 2026-03-09T20:52:17.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.185+0000 7f5d03fff640 1 -- start start 2026-03-09T20:52:17.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.185+0000 7f5d03fff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d041028b0 0x7f5d0419a2e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.185+0000 7f5d03fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d04103ab0 0x7f5d0419a820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.185+0000 7f5d03fff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5d0419adf0 con 0x7f5d04103ab0 2026-03-09T20:52:17.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.185+0000 7f5d03fff640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5d0419af60 con 0x7f5d041028b0 2026-03-09T20:52:17.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.185+0000 7f5d027fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d04103ab0 0x7f5d0419a820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.187 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.185+0000 7f5d027fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d04103ab0 0x7f5d0419a820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:58628/0 (socket says 192.168.123.107:58628) 2026-03-09T20:52:17.187 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.185+0000 7f5d027fc640 1 -- 192.168.123.107:0/4290303157 learned_addr learned my addr 192.168.123.107:0/4290303157 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:17.187 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5d02ffd640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d041028b0 0x7f5d0419a2e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.187 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5d027fc640 1 -- 192.168.123.107:0/4290303157 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d041028b0 msgr2=0x7f5d0419a2e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.187 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5d027fc640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d041028b0 0x7f5d0419a2e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.187 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5d027fc640 1 -- 192.168.123.107:0/4290303157 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5cf8009660 con 0x7f5d04103ab0 2026-03-09T20:52:17.187 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5d02ffd640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d041028b0 0x7f5d0419a2e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T20:52:17.187 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5d027fc640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d04103ab0 0x7f5d0419a820 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f5cf8004410 tx=0x7f5cf8004440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:17.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5ce3fff640 1 -- 192.168.123.107:0/4290303157 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5cf8031b80 con 0x7f5d04103ab0 2026-03-09T20:52:17.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5ce3fff640 1 -- 192.168.123.107:0/4290303157 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5cf8031ce0 con 0x7f5d04103ab0 2026-03-09T20:52:17.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5ce3fff640 1 -- 192.168.123.107:0/4290303157 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5cf8031280 con 0x7f5d04103ab0 2026-03-09T20:52:17.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5d03fff640 1 -- 192.168.123.107:0/4290303157 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5d0419f9a0 con 0x7f5d04103ab0 2026-03-09T20:52:17.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.186+0000 7f5d03fff640 1 -- 192.168.123.107:0/4290303157 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5d0419fe90 con 0x7f5d04103ab0 2026-03-09T20:52:17.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.187+0000 7f5d03fff640 1 -- 192.168.123.107:0/4290303157 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f5cc8005350 con 0x7f5d04103ab0 2026-03-09T20:52:17.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.191+0000 7f5ce3fff640 1 -- 192.168.123.107:0/4290303157 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f5cf803f070 con 0x7f5d04103ab0 2026-03-09T20:52:17.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.192+0000 7f5ce3fff640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cd80779b0 0x7f5cd8079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.192+0000 7f5ce3fff640 1 -- 192.168.123.107:0/4290303157 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(82..82 src has 1..82) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f5cf80beb40 con 0x7f5d04103ab0 2026-03-09T20:52:17.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.192+0000 7f5ce3fff640 1 -- 192.168.123.107:0/4290303157 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f5cf80eea90 con 0x7f5d04103ab0 2026-03-09T20:52:17.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.192+0000 7f5d02ffd640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cd80779b0 0x7f5cd8079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.193+0000 7f5d02ffd640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cd80779b0 0x7f5cd8079e70 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f5d04103910 
tx=0x7f5cec005f70 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:17.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.329+0000 7f5d03fff640 1 -- 192.168.123.107:0/4290303157 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f5cc80058d0 con 0x7f5d04103ab0 2026-03-09T20:52:17.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.329+0000 7f5ce3fff640 1 -- 192.168.123.107:0/4290303157 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+739 (secure 0 0 0) 0x7f5cf80872f0 con 0x7f5d04103ab0 2026-03-09T20:52:17.330 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:52:17.330 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:52:17.330 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:52:17.330 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:52:17.330 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: }, 
2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 4, 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 10 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:52:17.331 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 -- 192.168.123.107:0/4290303157 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cd80779b0 msgr2=0x7f5cd8079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cd80779b0 0x7f5cd8079e70 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f5d04103910 tx=0x7f5cec005f70 comp rx=0 tx=0).stop 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 -- 192.168.123.107:0/4290303157 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d04103ab0 msgr2=0x7f5d0419a820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d04103ab0 0x7f5d0419a820 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f5cf8004410 tx=0x7f5cf8004440 comp rx=0 tx=0).stop 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 -- 
192.168.123.107:0/4290303157 shutdown_connections 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5cd80779b0 0x7f5cd8079e70 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d04103ab0 0x7f5d0419a820 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 --2- 192.168.123.107:0/4290303157 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5d041028b0 0x7f5d0419a2e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 -- 192.168.123.107:0/4290303157 >> 192.168.123.107:0/4290303157 conn(0x7f5d040fe060 msgr2=0x7f5d040ffbb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 -- 192.168.123.107:0/4290303157 shutdown_connections 2026-03-09T20:52:17.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.332+0000 7f5d03fff640 1 -- 192.168.123.107:0/4290303157 wait complete. 
2026-03-09T20:52:17.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.390+0000 7f4eece79640 1 -- 192.168.123.107:0/1165680938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4ee8075ba0 msgr2=0x7f4ee8075fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.390+0000 7f4eece79640 1 --2- 192.168.123.107:0/1165680938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4ee8075ba0 0x7f4ee8075fa0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f4ed00099b0 tx=0x7f4ed002f220 comp rx=0 tx=0).stop 2026-03-09T20:52:17.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.391+0000 7f4eece79640 1 -- 192.168.123.107:0/1165680938 shutdown_connections 2026-03-09T20:52:17.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.391+0000 7f4eece79640 1 --2- 192.168.123.107:0/1165680938 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4ee8076df0 0x7f4ee8077250 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.391 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.391+0000 7f4eece79640 1 --2- 192.168.123.107:0/1165680938 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4ee8075ba0 0x7f4ee8075fa0 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.392 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.391+0000 7f4eece79640 1 -- 192.168.123.107:0/1165680938 >> 192.168.123.107:0/1165680938 conn(0x7f4ee80fe250 msgr2=0x7f4ee8100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.392 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.391+0000 7f4eece79640 1 -- 192.168.123.107:0/1165680938 shutdown_connections 2026-03-09T20:52:17.392 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.391+0000 7f4eece79640 1 -- 192.168.123.107:0/1165680938 
wait complete. 2026-03-09T20:52:17.392 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.391+0000 7f4eece79640 1 Processor -- start 2026-03-09T20:52:17.392 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.391+0000 7f4eece79640 1 -- start start 2026-03-09T20:52:17.392 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4eece79640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4ee8075ba0 0x7f4ee819e950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4eece79640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4ee8076df0 0x7f4ee819ee90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4eece79640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4ee819f460 con 0x7f4ee8075ba0 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4ee6575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4ee8075ba0 0x7f4ee819e950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4ee6575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4ee8075ba0 0x7f4ee819e950 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:58650/0 (socket says 192.168.123.107:58650) 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4ee6575640 1 -- 192.168.123.107:0/859033670 learned_addr learned 
my addr 192.168.123.107:0/859033670 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4ee819f5d0 con 0x7f4ee8076df0 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4ee5d74640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4ee8076df0 0x7f4ee819ee90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4ee6575640 1 -- 192.168.123.107:0/859033670 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4ee8076df0 msgr2=0x7f4ee819ee90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4ee6575640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4ee8076df0 0x7f4ee819ee90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4ee6575640 1 -- 192.168.123.107:0/859033670 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4ed0009660 con 0x7f4ee8075ba0 2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.392+0000 7f4ee5d74640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4ee8076df0 0x7f4ee819ee90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T20:52:17.393 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.393+0000 7f4ee6575640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4ee8075ba0 0x7f4ee819e950 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f4ed0002410 tx=0x7f4ed00028f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:17.394 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.393+0000 7f4ecf7fe640 1 -- 192.168.123.107:0/859033670 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4ed003d070 con 0x7f4ee8075ba0 2026-03-09T20:52:17.394 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.393+0000 7f4ecf7fe640 1 -- 192.168.123.107:0/859033670 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f4ed002fc90 con 0x7f4ee8075ba0 2026-03-09T20:52:17.394 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.393+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4ee81a3fc0 con 0x7f4ee8075ba0 2026-03-09T20:52:17.394 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.393+0000 7f4ecf7fe640 1 -- 192.168.123.107:0/859033670 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4ed0041870 con 0x7f4ee8075ba0 2026-03-09T20:52:17.394 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.393+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4ee81a4430 con 0x7f4ee8075ba0 2026-03-09T20:52:17.395 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.394+0000 7f4ecf7fe640 1 -- 192.168.123.107:0/859033670 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f4ed0049050 con 
0x7f4ee8075ba0 2026-03-09T20:52:17.396 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.395+0000 7f4ecf7fe640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4eb80779b0 0x7f4eb8079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.396 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.395+0000 7f4ee5d74640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4eb80779b0 0x7f4eb8079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.396 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.395+0000 7f4ecf7fe640 1 -- 192.168.123.107:0/859033670 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(82..82 src has 1..82) v4 ==== 6394+0+0 (secure 0 0 0) 0x7f4ed00bf3f0 con 0x7f4ee8075ba0 2026-03-09T20:52:17.396 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.395+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4eb4005350 con 0x7f4ee8075ba0 2026-03-09T20:52:17.398 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.397+0000 7f4ee5d74640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4eb80779b0 0x7f4eb8079e70 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f4ee819fe20 tx=0x7f4edc005f50 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:17.399 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.398+0000 7f4ecf7fe640 1 -- 192.168.123.107:0/859033670 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 
==== 72+0+195034 (secure 0 0 0) 0x7f4ed0087ba0 con 0x7f4ee8075ba0 2026-03-09T20:52:17.516 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.515+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f4eb4005e10 con 0x7f4ee8075ba0 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.516+0000 7f4ecf7fe640 1 -- 192.168.123.107:0/859033670 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 14 v14) v1 ==== 76+0+1934 (secure 0 0 0) 0x7f4ed00872f0 con 0x7f4ee8075ba0 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:e14 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:btime 2026-03-09T20:52:08:807762+0000 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:epoch 14 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:flags 12 joinable allow_snaps allow_multimds_snaps 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:modified 
2026-03-09T20:52:08.807759+0000 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:52:17.517 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 0 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 1 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:up {0=14476,1=24291} 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:52:17.518 
INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 14476 members: 14476 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{0:14476} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.107:6826/2216764941,v1:192.168.123.107:6827/2216764941] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{1:24291} state up:stopping seq 3 join_fscid=1 addr [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:Standby daemons: 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{-1:34316} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.107:6828/3625324292,v1:192.168.123.107:6829/3625324292] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{-1:44247} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.110:6826/2699915815,v1:192.168.123.110:6827/2699915815] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:52:17.518 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 14 2026-03-09T20:52:17.520 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.519+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4eb80779b0 msgr2=0x7f4eb8079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.520 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.519+0000 7f4eece79640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4eb80779b0 0x7f4eb8079e70 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f4ee819fe20 tx=0x7f4edc005f50 comp rx=0 tx=0).stop 2026-03-09T20:52:17.520 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.519+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4ee8075ba0 msgr2=0x7f4ee819e950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.520 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.519+0000 7f4eece79640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4ee8075ba0 0x7f4ee819e950 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f4ed0002410 tx=0x7f4ed00028f0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.520 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.520+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 shutdown_connections 2026-03-09T20:52:17.520 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.520+0000 7f4eece79640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4eb80779b0 0x7f4eb8079e70 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.520+0000 7f4eece79640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4ee8076df0 0x7f4ee819ee90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.520+0000 7f4eece79640 1 --2- 192.168.123.107:0/859033670 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7f4ee8075ba0 0x7f4ee819e950 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.520+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 >> 192.168.123.107:0/859033670 conn(0x7f4ee80fe250 msgr2=0x7f4ee80ffda0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.520+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 shutdown_connections 2026-03-09T20:52:17.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.520+0000 7f4eece79640 1 -- 192.168.123.107:0/859033670 wait complete. 2026-03-09T20:52:17.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.576+0000 7fa0e0350640 1 -- 192.168.123.107:0/1949392229 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0d8102a80 msgr2=0x7fa0d8102e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.577 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.576+0000 7fa0e0350640 1 --2- 192.168.123.107:0/1949392229 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0d8102a80 0x7fa0d8102e80 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7fa0c80099b0 tx=0x7fa0c802f220 comp rx=0 tx=0).stop 2026-03-09T20:52:17.577 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:17 vm07.local ceph-mon[112105]: pgmap v154: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 0 B/s rd, 383 B/s wr, 0 op/s 2026-03-09T20:52:17.577 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:17 vm07.local ceph-mon[112105]: from='client.34320 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:17.577 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:17 vm07.local ceph-mon[112105]: from='client.34324 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": 
["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:17.577 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:17 vm07.local ceph-mon[112105]: from='client.44255 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:17.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.578+0000 7fa0e0350640 1 -- 192.168.123.107:0/1949392229 shutdown_connections 2026-03-09T20:52:17.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.578+0000 7fa0e0350640 1 --2- 192.168.123.107:0/1949392229 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0d8103c80 0x7fa0d8104100 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.578+0000 7fa0e0350640 1 --2- 192.168.123.107:0/1949392229 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0d8102a80 0x7fa0d8102e80 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.578+0000 7fa0e0350640 1 -- 192.168.123.107:0/1949392229 >> 192.168.123.107:0/1949392229 conn(0x7fa0d80fe250 msgr2=0x7fa0d8100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.580+0000 7fa0e0350640 1 -- 192.168.123.107:0/1949392229 shutdown_connections 2026-03-09T20:52:17.581 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.580+0000 7fa0e0350640 1 -- 192.168.123.107:0/1949392229 wait complete. 
2026-03-09T20:52:17.582 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.581+0000 7fa0e0350640 1 Processor -- start 2026-03-09T20:52:17.582 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.581+0000 7fa0e0350640 1 -- start start 2026-03-09T20:52:17.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.582+0000 7fa0e0350640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0d8102a80 0x7fa0d819e9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.582+0000 7fa0de0c5640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0d8102a80 0x7fa0d819e9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.582+0000 7fa0de0c5640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0d8102a80 0x7fa0d819e9b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:58668/0 (socket says 192.168.123.107:58668) 2026-03-09T20:52:17.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.582+0000 7fa0e0350640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0d8103c80 0x7fa0d819eef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.583 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.583+0000 7fa0de0c5640 1 -- 192.168.123.107:0/3649786941 learned_addr learned my addr 192.168.123.107:0/3649786941 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:17.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.583+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa0d819f4c0 con 0x7fa0d8102a80 2026-03-09T20:52:17.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.583+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa0d819f630 con 0x7fa0d8103c80 2026-03-09T20:52:17.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.583+0000 7fa0dd8c4640 1 --2- 192.168.123.107:0/3649786941 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0d8103c80 0x7fa0d819eef0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.583+0000 7fa0de0c5640 1 -- 192.168.123.107:0/3649786941 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0d8103c80 msgr2=0x7fa0d819eef0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.583+0000 7fa0de0c5640 1 --2- 192.168.123.107:0/3649786941 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0d8103c80 0x7fa0d819eef0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.584 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.583+0000 7fa0de0c5640 1 -- 192.168.123.107:0/3649786941 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa0c8009660 con 0x7fa0d8102a80 2026-03-09T20:52:17.585 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.584+0000 7fa0de0c5640 1 --2- 192.168.123.107:0/3649786941 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0d8102a80 0x7fa0d819e9b0 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7fa0c8002a50 tx=0x7fa0c8031cd0 comp rx=0 
tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:17.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.585+0000 7fa0cf7fe640 1 -- 192.168.123.107:0/3649786941 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa0c803d070 con 0x7fa0d8102a80 2026-03-09T20:52:17.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.585+0000 7fa0cf7fe640 1 -- 192.168.123.107:0/3649786941 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fa0c8031d70 con 0x7fa0d8102a80 2026-03-09T20:52:17.586 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.585+0000 7fa0cf7fe640 1 -- 192.168.123.107:0/3649786941 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa0c8031280 con 0x7fa0d8102a80 2026-03-09T20:52:17.588 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.586+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa0d81a4070 con 0x7fa0d8102a80 2026-03-09T20:52:17.588 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.586+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa0d81a45b0 con 0x7fa0d8102a80 2026-03-09T20:52:17.588 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.587+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa0a0005350 con 0x7fa0d8102a80 2026-03-09T20:52:17.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.589+0000 7fa0cf7fe640 1 -- 192.168.123.107:0/3649786941 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fa0c8038730 con 0x7fa0d8102a80 
2026-03-09T20:52:17.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.589+0000 7fa0cf7fe640 1 --2- 192.168.123.107:0/3649786941 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa0b8077720 0x7fa0b8079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.590 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.590+0000 7fa0dd8c4640 1 --2- 192.168.123.107:0/3649786941 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa0b8077720 0x7fa0b8079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.591 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.590+0000 7fa0dd8c4640 1 --2- 192.168.123.107:0/3649786941 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa0b8077720 0x7fa0b8079be0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fa0d819fed0 tx=0x7fa0c0009210 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:17.591 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.590+0000 7fa0cf7fe640 1 -- 192.168.123.107:0/3649786941 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(82..82 src has 1..82) v4 ==== 6394+0+0 (secure 0 0 0) 0x7fa0c80be7d0 con 0x7fa0d8102a80 2026-03-09T20:52:17.594 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.593+0000 7fa0cf7fe640 1 -- 192.168.123.107:0/3649786941 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa0c8086f80 con 0x7fa0d8102a80 2026-03-09T20:52:17.698 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.697+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: 
{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fa0a0002bf0 con 0x7fa0b8077720 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.698+0000 7fa0cf7fe640 1 -- 192.168.123.107:0/3649786941 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7fa0a0002bf0 con 0x7fa0b8077720 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "osd", 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "mon", 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "crash", 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "mgr" 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "12/23 daemons upgraded", 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:52:17.699 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:52:17.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.700+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa0b8077720 msgr2=0x7fa0b8079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T20:52:17.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.700+0000 7fa0e0350640 1 --2- 192.168.123.107:0/3649786941 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa0b8077720 0x7fa0b8079be0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fa0d819fed0 tx=0x7fa0c0009210 comp rx=0 tx=0).stop 2026-03-09T20:52:17.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.701+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0d8102a80 msgr2=0x7fa0d819e9b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.701+0000 7fa0e0350640 1 --2- 192.168.123.107:0/3649786941 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0d8102a80 0x7fa0d819e9b0 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7fa0c8002a50 tx=0x7fa0c8031cd0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.701+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 shutdown_connections 2026-03-09T20:52:17.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.701+0000 7fa0e0350640 1 --2- 192.168.123.107:0/3649786941 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa0b8077720 0x7fa0b8079be0 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.701+0000 7fa0e0350640 1 --2- 192.168.123.107:0/3649786941 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa0d8103c80 0x7fa0d819eef0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.701+0000 7fa0e0350640 1 --2- 192.168.123.107:0/3649786941 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa0d8102a80 0x7fa0d819e9b0 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.701+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 >> 192.168.123.107:0/3649786941 conn(0x7fa0d80fe250 msgr2=0x7fa0d80ffd50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.701+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 shutdown_connections 2026-03-09T20:52:17.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.701+0000 7fa0e0350640 1 -- 192.168.123.107:0/3649786941 wait complete. 2026-03-09T20:52:17.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.755+0000 7ff09c4f4640 1 -- 192.168.123.107:0/664631001 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff094076df0 msgr2=0x7ff094077250 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.755+0000 7ff09c4f4640 1 --2- 192.168.123.107:0/664631001 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff094076df0 0x7ff094077250 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7ff0840099b0 tx=0x7ff08402f220 comp rx=0 tx=0).stop 2026-03-09T20:52:17.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.755+0000 7ff09c4f4640 1 -- 192.168.123.107:0/664631001 shutdown_connections 2026-03-09T20:52:17.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.755+0000 7ff09c4f4640 1 --2- 192.168.123.107:0/664631001 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff094076df0 0x7ff094077250 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.755+0000 7ff09c4f4640 1 --2- 
192.168.123.107:0/664631001 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff094075ba0 0x7ff094075fa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.755+0000 7ff09c4f4640 1 -- 192.168.123.107:0/664631001 >> 192.168.123.107:0/664631001 conn(0x7ff0940fe250 msgr2=0x7ff094100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.755+0000 7ff09c4f4640 1 -- 192.168.123.107:0/664631001 shutdown_connections 2026-03-09T20:52:17.756 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.755+0000 7ff09c4f4640 1 -- 192.168.123.107:0/664631001 wait complete. 2026-03-09T20:52:17.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.756+0000 7ff09c4f4640 1 Processor -- start 2026-03-09T20:52:17.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.756+0000 7ff09c4f4640 1 -- start start 2026-03-09T20:52:17.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.756+0000 7ff09c4f4640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff094075ba0 0x7ff09419e900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.756+0000 7ff09a269640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff094075ba0 0x7ff09419e900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.756+0000 7ff09a269640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff094075ba0 0x7ff09419e900 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 
says I am v2:192.168.123.107:39234/0 (socket says 192.168.123.107:39234) 2026-03-09T20:52:17.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.756+0000 7ff09c4f4640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff094076df0 0x7ff09419ee40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.756+0000 7ff09c4f4640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff09419f410 con 0x7ff094075ba0 2026-03-09T20:52:17.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.756+0000 7ff09c4f4640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff09419f580 con 0x7ff094076df0 2026-03-09T20:52:17.758 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.756+0000 7ff09a269640 1 -- 192.168.123.107:0/3974916112 learned_addr learned my addr 192.168.123.107:0/3974916112 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:17.758 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.757+0000 7ff099a68640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff094076df0 0x7ff09419ee40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.758 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.757+0000 7ff099a68640 1 -- 192.168.123.107:0/3974916112 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff094075ba0 msgr2=0x7ff09419e900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.758 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.757+0000 7ff099a68640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff094075ba0 0x7ff09419e900 unknown :-1 
s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.758 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.757+0000 7ff099a68640 1 -- 192.168.123.107:0/3974916112 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff088009590 con 0x7ff094076df0 2026-03-09T20:52:17.758 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.757+0000 7ff09a269640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff094075ba0 0x7ff09419e900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:52:17.758 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.757+0000 7ff099a68640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff094076df0 0x7ff09419ee40 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7ff084005bb0 tx=0x7ff084002f60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:17.758 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.757+0000 7ff0837fe640 1 -- 192.168.123.107:0/3974916112 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff08403d070 con 0x7ff094076df0 2026-03-09T20:52:17.758 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.757+0000 7ff09c4f4640 1 -- 192.168.123.107:0/3974916112 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff084009660 con 0x7ff094076df0 2026-03-09T20:52:17.759 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.758+0000 7ff0837fe640 1 -- 192.168.123.107:0/3974916112 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7ff084038730 con 0x7ff094076df0 2026-03-09T20:52:17.759 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.758+0000 7ff0837fe640 1 -- 192.168.123.107:0/3974916112 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff084041790 con 0x7ff094076df0 2026-03-09T20:52:17.759 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.758+0000 7ff09c4f4640 1 -- 192.168.123.107:0/3974916112 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff0941a42f0 con 0x7ff094076df0 2026-03-09T20:52:17.759 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.758+0000 7ff09c4f4640 1 -- 192.168.123.107:0/3974916112 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff064005350 con 0x7ff094076df0 2026-03-09T20:52:17.761 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.759+0000 7ff0837fe640 1 -- 192.168.123.107:0/3974916112 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ff0840388a0 con 0x7ff094076df0 2026-03-09T20:52:17.761 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.759+0000 7ff0837fe640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff0680778e0 0x7ff068079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:17.761 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.760+0000 7ff0837fe640 1 -- 192.168.123.107:0/3974916112 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(82..82 src has 1..82) v4 ==== 6394+0+0 (secure 0 0 0) 0x7ff0840be330 con 0x7ff094076df0 2026-03-09T20:52:17.761 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.760+0000 7ff09a269640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff0680778e0 0x7ff068079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:17.761 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.760+0000 7ff09a269640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff0680778e0 0x7ff068079da0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7ff088004750 tx=0x7ff088009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:17.763 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.762+0000 7ff0837fe640 1 -- 192.168.123.107:0/3974916112 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff084086b10 con 0x7ff094076df0 2026-03-09T20:52:17.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:17 vm10.local ceph-mon[103526]: pgmap v154: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 0 B/s rd, 383 B/s wr, 0 op/s 2026-03-09T20:52:17.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:17 vm10.local ceph-mon[103526]: from='client.34320 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:17.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:17 vm10.local ceph-mon[103526]: from='client.34324 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:17.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:17 vm10.local ceph-mon[103526]: from='client.44255 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:17.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.889+0000 7ff09c4f4640 1 -- 192.168.123.107:0/3974916112 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "health", "detail": 
"detail"} v 0) v1 -- 0x7ff0640051c0 con 0x7ff094076df0 2026-03-09T20:52:17.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.890+0000 7ff0837fe640 1 -- 192.168.123.107:0/3974916112 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7ff084086260 con 0x7ff094076df0 2026-03-09T20:52:17.891 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:52:17.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.892+0000 7ff09c4f4640 1 -- 192.168.123.107:0/3974916112 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff0680778e0 msgr2=0x7ff068079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.892+0000 7ff09c4f4640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff0680778e0 0x7ff068079da0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7ff088004750 tx=0x7ff088009290 comp rx=0 tx=0).stop 2026-03-09T20:52:17.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.893+0000 7ff09c4f4640 1 -- 192.168.123.107:0/3974916112 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff094076df0 msgr2=0x7ff09419ee40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.893 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.893+0000 7ff09c4f4640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff094076df0 0x7ff09419ee40 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7ff084005bb0 tx=0x7ff084002f60 comp rx=0 tx=0).stop 2026-03-09T20:52:17.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.893+0000 7ff09c4f4640 1 -- 192.168.123.107:0/3974916112 shutdown_connections 2026-03-09T20:52:17.894 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.893+0000 7ff09c4f4640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff0680778e0 0x7ff068079da0 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.893+0000 7ff09c4f4640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff094076df0 0x7ff09419ee40 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.893+0000 7ff09c4f4640 1 --2- 192.168.123.107:0/3974916112 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff094075ba0 0x7ff09419e900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:17.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.893+0000 7ff09c4f4640 1 -- 192.168.123.107:0/3974916112 >> 192.168.123.107:0/3974916112 conn(0x7ff0940fe250 msgr2=0x7ff0940ffa30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.893+0000 7ff09c4f4640 1 -- 192.168.123.107:0/3974916112 shutdown_connections 2026-03-09T20:52:17.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:17.893+0000 7ff09c4f4640 1 -- 192.168.123.107:0/3974916112 wait complete. 2026-03-09T20:52:18.549 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:18 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/4290303157' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:18.549 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:18 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/859033670' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:52:18.549 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:18 vm07.local ceph-mon[112105]: from='client.34340 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:18.549 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:18 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3974916112' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:52:18.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:18 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/4290303157' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:18.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:18 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/859033670' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:52:18.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:18 vm10.local ceph-mon[103526]: from='client.34340 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:18.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:18 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/3974916112' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:52:19.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:19 vm07.local ceph-mon[112105]: pgmap v155: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 0 B/s rd, 330 B/s wr, 0 op/s 2026-03-09T20:52:19.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:19.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:19.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:19.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:19.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:19.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:19.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:19.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 
09 20:52:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:19.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:19.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:19 vm07.local ceph-mon[112105]: Upgrade: Waiting for fs cephfs to scale down to reach 1 MDS
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: pgmap v155: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 0 B/s rd, 330 B/s wr, 0 op/s
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:19.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:19 vm10.local ceph-mon[103526]: Upgrade: Waiting for fs cephfs to scale down to reach 1 MDS
2026-03-09T20:52:21.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:21 vm07.local ceph-mon[112105]: pgmap v156: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 398 B/s wr, 0 op/s
2026-03-09T20:52:22.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:21 vm10.local ceph-mon[103526]: pgmap v156: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 398 B/s wr, 0 op/s
2026-03-09T20:52:23.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:23 vm07.local ceph-mon[112105]: pgmap v157: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 341 B/s wr, 0 op/s
2026-03-09T20:52:24.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:23 vm10.local ceph-mon[103526]: pgmap v157: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 341 B/s wr, 0 op/s
2026-03-09T20:52:25.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:25 vm10.local ceph-mon[103526]: pgmap v158: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 341 B/s wr, 0 op/s
2026-03-09T20:52:25.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:25 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:25.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:25 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:52:25.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:25 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:25.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:25 vm07.local ceph-mon[112105]: pgmap v158: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 341 B/s wr, 0 op/s
2026-03-09T20:52:25.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:25 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:25.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:25 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:52:25.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:25 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:27.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:27 vm10.local ceph-mon[103526]: pgmap v159: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 85 B/s wr, 0 op/s
2026-03-09T20:52:27.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:27 vm07.local ceph-mon[112105]: pgmap v159: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 85 B/s wr, 0 op/s
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: pgmap v160: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 341 B/s wr, 0 op/s
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:29 vm07.local ceph-mon[112105]: Upgrade: Waiting for fs cephfs to scale down to reach 1 MDS
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: pgmap v160: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 341 B/s wr, 0 op/s
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:29 vm10.local ceph-mon[103526]: Upgrade: Waiting for fs cephfs to scale down to reach 1 MDS
2026-03-09T20:52:31.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:31 vm10.local ceph-mon[103526]: pgmap v161: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 341 B/s wr, 0 op/s
2026-03-09T20:52:31.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:31 vm07.local ceph-mon[112105]: pgmap v161: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 341 B/s wr, 0 op/s
2026-03-09T20:52:32.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:32 vm07.local ceph-mon[112105]: daemon mds.cephfs.vm10.qpltwp finished stopping rank 1 in filesystem cephfs (now has 1 ranks)
2026-03-09T20:52:33.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:32 vm10.local ceph-mon[103526]: daemon mds.cephfs.vm10.qpltwp finished stopping rank 1 in filesystem cephfs (now has 1 ranks)
2026-03-09T20:52:33.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:33 vm07.local ceph-mon[112105]: pgmap v162: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T20:52:33.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:33 vm07.local ceph-mon[112105]: mds.1 [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] down:stopped
2026-03-09T20:52:33.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:33 vm07.local ceph-mon[112105]: fsmap cephfs:1 {0=cephfs.vm07.rovdbp=up:active} 2 up:standby
2026-03-09T20:52:34.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:33 vm10.local ceph-mon[103526]: pgmap v162: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T20:52:34.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:33 vm10.local ceph-mon[103526]: mds.1 [v2:192.168.123.110:6824/61492274,v1:192.168.123.110:6825/61492274] down:stopped
2026-03-09T20:52:34.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:33 vm10.local ceph-mon[103526]: fsmap cephfs:1 {0=cephfs.vm07.rovdbp=up:active} 2 up:standby
2026-03-09T20:52:34.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:34 vm07.local ceph-mon[112105]: mds.? [v2:192.168.123.110:6824/1063035280,v1:192.168.123.110:6825/1063035280] up:boot
2026-03-09T20:52:34.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:34 vm07.local ceph-mon[112105]: fsmap cephfs:1 {0=cephfs.vm07.rovdbp=up:active} 3 up:standby
2026-03-09T20:52:34.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:34 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch
2026-03-09T20:52:35.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:34 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.110:6824/1063035280,v1:192.168.123.110:6825/1063035280] up:boot
2026-03-09T20:52:35.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:34 vm10.local ceph-mon[103526]: fsmap cephfs:1 {0=cephfs.vm07.rovdbp=up:active} 3 up:standby
2026-03-09T20:52:35.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:34 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch
2026-03-09T20:52:35.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:35 vm07.local ceph-mon[112105]: pgmap v163: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T20:52:36.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:35 vm10.local ceph-mon[103526]: pgmap v163: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T20:52:37.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:37 vm07.local ceph-mon[112105]: pgmap v164: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T20:52:38.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:37 vm10.local ceph-mon[103526]: pgmap v164: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T20:52:39.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: pgmap v165: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm07.rovdbp"]}]: dispatch
2026-03-09T20:52:39.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: pgmap v165: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm07.rovdbp"]}]: dispatch
2026-03-09T20:52:40.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:52:40.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07[112101]: 2026-03-09T20:52:40.210+0000 7ff513bf6640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: Upgrade: It appears safe to stop mds.cephfs.vm07.rovdbp
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: Upgrade: Updating mds.cephfs.vm07.rovdbp
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.rovdbp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: Deploying daemon mds.cephfs.vm07.rovdbp on vm07
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: osdmap e83: 6 total, 6 up, 6 in
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: Standby daemon mds.cephfs.vm07.potfau assigned to filesystem cephfs as rank 0
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: fsmap cephfs:0/1 3 up:standby, 1 failed
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:replay} 2 up:standby
2026-03-09T20:52:41.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: Upgrade: It appears safe to stop mds.cephfs.vm07.rovdbp
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: Upgrade: Updating mds.cephfs.vm07.rovdbp
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.rovdbp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: Deploying daemon mds.cephfs.vm07.rovdbp on vm07
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: osdmap e83: 6 total, 6 up, 6 in
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: Standby daemon mds.cephfs.vm07.potfau assigned to filesystem cephfs as rank 0
2026-03-09T20:52:41.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
2026-03-09T20:52:41.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: fsmap cephfs:0/1 3 up:standby, 1 failed
2026-03-09T20:52:41.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:replay} 2 up:standby
2026-03-09T20:52:41.135 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch
2026-03-09T20:52:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:42 vm10.local ceph-mon[103526]: pgmap v167: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 0 op/s
2026-03-09T20:52:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:42 vm07.local ceph-mon[112105]: pgmap v167: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 0 op/s
2026-03-09T20:52:43.854 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:43 vm07.local ceph-mon[112105]: pgmap v168: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 3.4 MiB/s rd, 1 op/s
2026-03-09T20:52:44.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:43 vm10.local ceph-mon[103526]: pgmap v168: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 3.4 MiB/s rd, 1 op/s
2026-03-09T20:52:45.635 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:45 vm07.local ceph-mon[112105]: pgmap v169: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 12 MiB/s rd, 4 op/s
2026-03-09T20:52:45.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:45 vm10.local ceph-mon[103526]: pgmap v169: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 12 MiB/s rd, 4 op/s
2026-03-09T20:52:46.514 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:46 vm07.local ceph-mon[112105]: mds.? [v2:192.168.123.107:6828/3625324292,v1:192.168.123.107:6829/3625324292] up:reconnect
2026-03-09T20:52:46.514 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:46 vm07.local ceph-mon[112105]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:reconnect} 2 up:standby
2026-03-09T20:52:46.514 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:46 vm07.local ceph-mon[112105]: reconnect by client.14518 192.168.144.1:0/2371384686 after 0
2026-03-09T20:52:46.514 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:46 vm07.local ceph-mon[112105]: reconnect by client.24331 192.168.123.110:0/2539319683 after 0.001
2026-03-09T20:52:46.514 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:46.514 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:46.514 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:46 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:46 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.107:6828/3625324292,v1:192.168.123.107:6829/3625324292] up:reconnect
2026-03-09T20:52:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:46 vm10.local ceph-mon[103526]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:reconnect} 2 up:standby
2026-03-09T20:52:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:46 vm10.local ceph-mon[103526]: reconnect by client.14518 192.168.144.1:0/2371384686 after 0
2026-03-09T20:52:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:46 vm10.local ceph-mon[103526]: reconnect by client.24331 192.168.123.110:0/2539319683 after 0.001
2026-03-09T20:52:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:46 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:52:47.655 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:47 vm07.local ceph-mon[112105]: mds.? [v2:192.168.123.107:6828/3625324292,v1:192.168.123.107:6829/3625324292] up:rejoin
2026-03-09T20:52:47.655 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:47 vm07.local ceph-mon[112105]: mds.? [v2:192.168.123.107:6826/2346066069,v1:192.168.123.107:6827/2346066069] up:boot
2026-03-09T20:52:47.655 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:47 vm07.local ceph-mon[112105]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:rejoin} 3 up:standby
2026-03-09T20:52:47.655 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:47 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch
2026-03-09T20:52:47.655 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:47 vm07.local ceph-mon[112105]: daemon mds.cephfs.vm07.potfau is now active in filesystem cephfs as rank 0
2026-03-09T20:52:47.655 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:47 vm07.local ceph-mon[112105]: pgmap v170: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 6 op/s
2026-03-09T20:52:47.655 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:47 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:47.655 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:47 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:47 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.107:6828/3625324292,v1:192.168.123.107:6829/3625324292] up:rejoin
2026-03-09T20:52:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:47 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.107:6826/2346066069,v1:192.168.123.107:6827/2346066069] up:boot
2026-03-09T20:52:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:47 vm10.local ceph-mon[103526]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:rejoin} 3 up:standby
2026-03-09T20:52:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:47 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.rovdbp"}]: dispatch
2026-03-09T20:52:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:47 vm10.local ceph-mon[103526]: daemon mds.cephfs.vm07.potfau is now active in filesystem cephfs as rank 0
2026-03-09T20:52:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:47 vm10.local ceph-mon[103526]: pgmap v170: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 6 op/s
2026-03-09T20:52:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:47 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:47.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:47 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:52:47.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.955+0000 7fe94355e640 1 -- 192.168.123.107:0/3003146143 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe934098a10 msgr2=0x7fe934098e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:52:47.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.955+0000 7fe94355e640 1 --2- 192.168.123.107:0/3003146143 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe934098a10 0x7fe934098e90 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7fe9300099b0 tx=0x7fe93002f240 comp rx=0 tx=0).stop
2026-03-09T20:52:47.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.957+0000 7fe94355e640 1 -- 192.168.123.107:0/3003146143 shutdown_connections
2026-03-09T20:52:47.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.957+0000 7fe94355e640 1 --2- 192.168.123.107:0/3003146143 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe934098a10 0x7fe934098e90 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:52:47.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.957+0000 7fe94355e640 1 --2- 192.168.123.107:0/3003146143 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe934097810 0x7fe934097c10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:52:47.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.957+0000 7fe94355e640 1 -- 192.168.123.107:0/3003146143 >> 192.168.123.107:0/3003146143 conn(0x7fe934092fc0 msgr2=0x7fe9340953e0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:52:47.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.957+0000 7fe94355e640 1 -- 192.168.123.107:0/3003146143 shutdown_connections
2026-03-09T20:52:47.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.957+0000 7fe94355e640 1 -- 192.168.123.107:0/3003146143 wait complete.
2026-03-09T20:52:47.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.958+0000 7fe94355e640 1 Processor -- start
2026-03-09T20:52:47.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.958+0000 7fe94355e640 1 -- start start
2026-03-09T20:52:47.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.958+0000 7fe94355e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe934097810 0x7fe93400c210 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:52:47.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.958+0000 7fe94355e640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe934098a10 0x7fe93400c750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:52:47.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.958+0000 7fe94355e640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe934008c70 con 0x7fe934097810
2026-03-09T20:52:47.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.958+0000 7fe94355e640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe934008de0 con 0x7fe934098a10
2026-03-09T20:52:47.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.959+0000 7fe94255c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe934097810 0x7fe93400c210 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:52:47.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.959+0000 7fe941d5b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe934098a10 0x7fe93400c750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:52:47.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.959+0000 7fe94255c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe934097810 0x7fe93400c210 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:46386/0 (socket says 192.168.123.107:46386)
2026-03-09T20:52:47.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.959+0000 7fe94255c640 1 -- 192.168.123.107:0/3679649592 learned_addr learned my addr 192.168.123.107:0/3679649592 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:52:47.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.959+0000 7fe94255c640 1 -- 192.168.123.107:0/3679649592 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe934098a10 msgr2=0x7fe93400c750 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:52:47.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.959+0000 7fe94255c640 1 --2- 192.168.123.107:0/3679649592 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe934098a10 0x7fe93400c750 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:52:47.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.959+0000 7fe94255c640 1 -- 192.168.123.107:0/3679649592 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe930009660 con 0x7fe934097810
2026-03-09T20:52:47.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.960+0000 7fe94255c640 1 --2- 192.168.123.107:0/3679649592 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe934097810 0x7fe93400c210 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7fe92c00e960 tx=0x7fe92c00ee30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:52:47.962
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.961+0000 7fe9237fe640 1 -- 192.168.123.107:0/3679649592 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe92c009800 con 0x7fe934097810 2026-03-09T20:52:47.962 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.961+0000 7fe9237fe640 1 -- 192.168.123.107:0/3679649592 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fe92c004590 con 0x7fe934097810 2026-03-09T20:52:47.962 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.961+0000 7fe9237fe640 1 -- 192.168.123.107:0/3679649592 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe92c010640 con 0x7fe934097810 2026-03-09T20:52:47.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.966+0000 7fe94355e640 1 -- 192.168.123.107:0/3679649592 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe9340090c0 con 0x7fe934097810 2026-03-09T20:52:47.967 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.966+0000 7fe94355e640 1 -- 192.168.123.107:0/3679649592 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe934009590 con 0x7fe934097810 2026-03-09T20:52:47.969 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.968+0000 7fe9237fe640 1 -- 192.168.123.107:0/3679649592 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fe92c0026e0 con 0x7fe934097810 2026-03-09T20:52:47.969 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.966+0000 7fe94355e640 1 -- 192.168.123.107:0/3679649592 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe900005350 con 0x7fe934097810 2026-03-09T20:52:47.972 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.968+0000 7fe9237fe640 1 --2- 
192.168.123.107:0/3679649592 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe914077720 0x7fe914079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:47.972 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.969+0000 7fe941d5b640 1 --2- 192.168.123.107:0/3679649592 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe914077720 0x7fe914079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:47.972 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.969+0000 7fe9237fe640 1 -- 192.168.123.107:0/3679649592 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6480+0+0 (secure 0 0 0) 0x7fe92c01d030 con 0x7fe934097810 2026-03-09T20:52:47.972 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.969+0000 7fe941d5b640 1 --2- 192.168.123.107:0/3679649592 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe914077720 0x7fe914079be0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7fe93400a190 tx=0x7fe93003a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:47.973 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:47.972+0000 7fe9237fe640 1 -- 192.168.123.107:0/3679649592 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe92c062100 con 0x7fe934097810 2026-03-09T20:52:48.088 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.087+0000 7fe94355e640 1 -- 192.168.123.107:0/3679649592 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fe900002bf0 con 0x7fe914077720 
2026-03-09T20:52:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.088+0000 7fe9237fe640 1 -- 192.168.123.107:0/3679649592 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7fe900002bf0 con 0x7fe914077720 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.090+0000 7fe94355e640 1 -- 192.168.123.107:0/3679649592 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe914077720 msgr2=0x7fe914079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.090+0000 7fe94355e640 1 --2- 192.168.123.107:0/3679649592 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe914077720 0x7fe914079be0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7fe93400a190 tx=0x7fe93003a040 comp rx=0 tx=0).stop 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.091+0000 7fe94355e640 1 -- 192.168.123.107:0/3679649592 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe934097810 msgr2=0x7fe93400c210 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.091+0000 7fe94355e640 1 --2- 192.168.123.107:0/3679649592 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe934097810 0x7fe93400c210 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7fe92c00e960 tx=0x7fe92c00ee30 comp rx=0 tx=0).stop 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.091+0000 7fe94355e640 1 -- 192.168.123.107:0/3679649592 shutdown_connections 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.091+0000 7fe94355e640 1 --2- 192.168.123.107:0/3679649592 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] 
conn(0x7fe914077720 0x7fe914079be0 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.091+0000 7fe94355e640 1 --2- 192.168.123.107:0/3679649592 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe934098a10 0x7fe93400c750 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.091+0000 7fe94355e640 1 --2- 192.168.123.107:0/3679649592 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe934097810 0x7fe93400c210 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.091+0000 7fe94355e640 1 -- 192.168.123.107:0/3679649592 >> 192.168.123.107:0/3679649592 conn(0x7fe934092fc0 msgr2=0x7fe934094b40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.091+0000 7fe94355e640 1 -- 192.168.123.107:0/3679649592 shutdown_connections 2026-03-09T20:52:48.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.091+0000 7fe94355e640 1 -- 192.168.123.107:0/3679649592 wait complete. 
2026-03-09T20:52:48.101 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:52:48.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.168+0000 7f27bc926640 1 -- 192.168.123.107:0/2643247783 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f27b806bec0 msgr2=0x7f27b806c2c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.168+0000 7f27bc926640 1 --2- 192.168.123.107:0/2643247783 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f27b806bec0 0x7f27b806c2c0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f27a40098e0 tx=0x7f27a402f1b0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.170 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.169+0000 7f27bc926640 1 -- 192.168.123.107:0/2643247783 shutdown_connections 2026-03-09T20:52:48.170 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.169+0000 7f27bc926640 1 --2- 192.168.123.107:0/2643247783 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f27b806d110 0x7f27b806d570 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.170 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.169+0000 7f27bc926640 1 --2- 192.168.123.107:0/2643247783 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f27b806bec0 0x7f27b806c2c0 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.170 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.169+0000 7f27bc926640 1 -- 192.168.123.107:0/2643247783 >> 192.168.123.107:0/2643247783 conn(0x7f27b806a6e0 msgr2=0x7f27b806aaf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:48.170 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.169+0000 7f27bc926640 1 -- 192.168.123.107:0/2643247783 shutdown_connections 2026-03-09T20:52:48.170 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.169+0000 7f27bc926640 1 -- 192.168.123.107:0/2643247783 wait complete. 2026-03-09T20:52:48.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.170+0000 7f27bc926640 1 Processor -- start 2026-03-09T20:52:48.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.170+0000 7f27bc926640 1 -- start start 2026-03-09T20:52:48.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.170+0000 7f27bc926640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f27b806bec0 0x7f27b81a7120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.170+0000 7f27bc926640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f27b806d110 0x7f27b81a7660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.170+0000 7f27bc926640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f27b81a7c30 con 0x7f27b806d110 2026-03-09T20:52:48.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.170+0000 7f27bc926640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f27b81a7da0 con 0x7f27b806bec0 2026-03-09T20:52:48.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.170+0000 7f27b6ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f27b806d110 0x7f27b81a7660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.170+0000 7f27b6ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f27b806d110 0x7f27b81a7660 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:46404/0 (socket says 192.168.123.107:46404) 2026-03-09T20:52:48.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.170+0000 7f27b6ffd640 1 -- 192.168.123.107:0/2965551318 learned_addr learned my addr 192.168.123.107:0/2965551318 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:48.171 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.171+0000 7f27b77fe640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f27b806bec0 0x7f27b81a7120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.171+0000 7f27b6ffd640 1 -- 192.168.123.107:0/2965551318 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f27b806bec0 msgr2=0x7f27b81a7120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.171+0000 7f27b6ffd640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f27b806bec0 0x7f27b81a7120 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.171+0000 7f27b6ffd640 1 -- 192.168.123.107:0/2965551318 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f27a8009660 con 0x7f27b806d110 2026-03-09T20:52:48.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.171+0000 7f27b6ffd640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f27b806d110 0x7f27b81a7660 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto 
rx=0x7f27a8002790 tx=0x7f27a8002c60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:48.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.171+0000 7f27b4ff9640 1 -- 192.168.123.107:0/2965551318 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f27a800ecf0 con 0x7f27b806d110 2026-03-09T20:52:48.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.171+0000 7f27b4ff9640 1 -- 192.168.123.107:0/2965551318 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f27a800ee50 con 0x7f27b806d110 2026-03-09T20:52:48.173 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.172+0000 7f27b4ff9640 1 -- 192.168.123.107:0/2965551318 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f27a800f790 con 0x7f27b806d110 2026-03-09T20:52:48.173 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.172+0000 7f27bc926640 1 -- 192.168.123.107:0/2965551318 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f27a4009590 con 0x7f27b806d110 2026-03-09T20:52:48.174 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.172+0000 7f27bc926640 1 -- 192.168.123.107:0/2965551318 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f27b81acb80 con 0x7f27b806d110 2026-03-09T20:52:48.174 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.173+0000 7f27b4ff9640 1 -- 192.168.123.107:0/2965551318 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f27a8016070 con 0x7f27b806d110 2026-03-09T20:52:48.175 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.174+0000 7f27b4ff9640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f278c077720 0x7f278c079be0 unknown :-1 
s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.175 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.174+0000 7f27b77fe640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f278c077720 0x7f278c079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.175 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.174+0000 7f27b77fe640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f278c077720 0x7f278c079be0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f27a4002aa0 tx=0x7f27a403a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:48.175 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.175+0000 7f27b4ff9640 1 -- 192.168.123.107:0/2965551318 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6480+0+0 (secure 0 0 0) 0x7f27a809a010 con 0x7f27b806d110 2026-03-09T20:52:48.176 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.175+0000 7f27bc926640 1 -- 192.168.123.107:0/2965551318 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f277c005350 con 0x7f27b806d110 2026-03-09T20:52:48.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.178+0000 7f27b4ff9640 1 -- 192.168.123.107:0/2965551318 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f27a8062790 con 0x7f27b806d110 2026-03-09T20:52:48.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.287+0000 7f27bc926640 1 -- 192.168.123.107:0/2965551318 --> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f277c002bf0 con 0x7f278c077720 2026-03-09T20:52:48.292 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.291+0000 7f27b4ff9640 1 -- 192.168.123.107:0/2965551318 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7f277c002bf0 con 0x7f278c077720 2026-03-09T20:52:48.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 -- 192.168.123.107:0/2965551318 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f278c077720 msgr2=0x7f278c079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f278c077720 0x7f278c079be0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f27a4002aa0 tx=0x7f27a403a040 comp rx=0 tx=0).stop 2026-03-09T20:52:48.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 -- 192.168.123.107:0/2965551318 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f27b806d110 msgr2=0x7f27b81a7660 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f27b806d110 0x7f27b81a7660 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f27a8002790 tx=0x7f27a8002c60 comp rx=0 tx=0).stop 2026-03-09T20:52:48.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 -- 192.168.123.107:0/2965551318 shutdown_connections 2026-03-09T20:52:48.295 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f278c077720 0x7f278c079be0 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f27b806d110 0x7f27b81a7660 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 --2- 192.168.123.107:0/2965551318 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f27b806bec0 0x7f27b81a7120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 -- 192.168.123.107:0/2965551318 >> 192.168.123.107:0/2965551318 conn(0x7f27b806a6e0 msgr2=0x7f27b810a580 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:48.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 -- 192.168.123.107:0/2965551318 shutdown_connections 2026-03-09T20:52:48.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.294+0000 7f27927fc640 1 -- 192.168.123.107:0/2965551318 wait complete. 
2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.351+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4178066515 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f84f8072420 msgr2=0x7f84f8077190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.351+0000 7f84fcfa5640 1 --2- 192.168.123.107:0/4178066515 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f84f8072420 0x7f84f8077190 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f84f0008030 tx=0x7f84f0030dc0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.351+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4178066515 shutdown_connections 2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.351+0000 7f84fcfa5640 1 --2- 192.168.123.107:0/4178066515 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f84f8072420 0x7f84f8077190 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.351+0000 7f84fcfa5640 1 --2- 192.168.123.107:0/4178066515 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f84f8071a50 0x7f84f8071e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.351+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4178066515 >> 192.168.123.107:0/4178066515 conn(0x7f84f806d4f0 msgr2=0x7f84f806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.351+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4178066515 shutdown_connections 2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.351+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4178066515 
wait complete. 2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.351+0000 7f84fcfa5640 1 Processor -- start 2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.351+0000 7f84fcfa5640 1 -- start start 2026-03-09T20:52:48.352 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84fcfa5640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f84f8071a50 0x7f84f8084180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84fcfa5640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f84f80827d0 0x7f84f8082c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84f6ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f84f80827d0 0x7f84f8082c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84f6ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f84f80827d0 0x7f84f8082c50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:46422/0 (socket says 192.168.123.107:46422) 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84f6ffd640 1 -- 192.168.123.107:0/4047523057 learned_addr learned my addr 192.168.123.107:0/4047523057 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f84f8083190 con 0x7f84f80827d0 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f84f8083300 con 0x7f84f8071a50 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84f77fe640 1 --2- 192.168.123.107:0/4047523057 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f84f8071a50 0x7f84f8084180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84f6ffd640 1 -- 192.168.123.107:0/4047523057 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f84f8071a50 msgr2=0x7f84f8084180 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84f6ffd640 1 --2- 192.168.123.107:0/4047523057 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f84f8071a50 0x7f84f8084180 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84f6ffd640 1 -- 192.168.123.107:0/4047523057 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f84f0007ce0 con 0x7f84f80827d0 2026-03-09T20:52:48.353 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.352+0000 7f84f6ffd640 1 --2- 192.168.123.107:0/4047523057 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f84f80827d0 0x7f84f8082c50 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f84f00042c0 tx=0x7f84f0004360 comp rx=0 
tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:48.354 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.353+0000 7f84f4ff9640 1 -- 192.168.123.107:0/4047523057 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f84f0002e80 con 0x7f84f80827d0 2026-03-09T20:52:48.354 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.353+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f84f8083580 con 0x7f84f80827d0 2026-03-09T20:52:48.354 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.353+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f84f81be5d0 con 0x7f84f80827d0 2026-03-09T20:52:48.356 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.354+0000 7f84f4ff9640 1 -- 192.168.123.107:0/4047523057 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f84f00048f0 con 0x7f84f80827d0 2026-03-09T20:52:48.356 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.354+0000 7f84f4ff9640 1 -- 192.168.123.107:0/4047523057 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f84f0044800 con 0x7f84f80827d0 2026-03-09T20:52:48.356 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.354+0000 7f84f4ff9640 1 -- 192.168.123.107:0/4047523057 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f84f00449e0 con 0x7f84f80827d0 2026-03-09T20:52:48.356 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.355+0000 7f84f4ff9640 1 --2- 192.168.123.107:0/4047523057 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f84e4077a80 0x7f84e4079f40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp 
rx=0 tx=0).connect 2026-03-09T20:52:48.356 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.355+0000 7f84f77fe640 1 --2- 192.168.123.107:0/4047523057 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f84e4077a80 0x7f84e4079f40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.356 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.355+0000 7f84f77fe640 1 --2- 192.168.123.107:0/4047523057 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f84e4077a80 0x7f84e4079f40 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f84e8006fd0 tx=0x7f84e8008040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:48.356 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.355+0000 7f84f4ff9640 1 -- 192.168.123.107:0/4047523057 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6480+0+0 (secure 0 0 0) 0x7f84f00c0bf0 con 0x7f84f80827d0 2026-03-09T20:52:48.360 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.356+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f84c0005350 con 0x7f84f80827d0 2026-03-09T20:52:48.360 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.359+0000 7f84f4ff9640 1 -- 192.168.123.107:0/4047523057 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f84f00892e0 con 0x7f84f80827d0 2026-03-09T20:52:48.484 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.483+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch ps", 
"target": ["mon-mgr", ""]}) v1 -- 0x7f84c4000d30 con 0x7f84e4077a80 2026-03-09T20:52:48.490 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.489+0000 7f84f4ff9640 1 -- 192.168.123.107:0/4047523057 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3600 (secure 0 0 0) 0x7f84c4000d30 con 0x7f84e4077a80 2026-03-09T20:52:48.490 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (9m) 1s ago 9m 43.8M - 0.25.0 c8568f914cd2 aa3206f6f5cb 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (9m) 1s ago 9m 10.2M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 06140d824fae 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (9m) 44s ago 9m 10.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 ecddc8340426 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (3m) 1s ago 9m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 406c9c54f34a 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (3m) 44s ago 9m 7856k - 19.2.3-678-ge911bdeb 654f31e6858e 30eaebf5d733 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (9m) 1s ago 9m 161M - 9.4.7 954c08fa6188 74cf2e7ee6ad 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (7m) 1s ago 7m 157M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 2492b6874dc8 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (4s) 1s ago 7m 18.9M - 19.2.3-678-ge911bdeb 654f31e6858e 1763d9a7f9bb 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (7m) 44s ago 7m 97.8M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 
ed740ceed51a 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (7m) 44s ago 7m 28.5M - 18.2.7-1055-gab47f43c b6fe7eb6a9d0 c5fdba181aaf 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (5m) 1s ago 10m 620M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (4m) 44s ago 9m 491M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (4m) 1s ago 10m 61.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (4m) 44s ago 9m 50.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4428cf7f0607 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (9m) 1s ago 9m 16.6M - 1.5.0 0da6a335fe13 d6fac1f8a1d0 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (9m) 44s ago 9m 15.5M - 1.5.0 0da6a335fe13 9716a97e7ed1 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (3m) 1s ago 8m 234M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1da9d2cdbdc3 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (3m) 1s ago 8m 176M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 95f518bf664f 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (113s) 1s ago 8m 122M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0d3aa63353bb 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (90s) 44s ago 8m 157M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8d2b453e9e2 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (69s) 44s ago 8m 118M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d0231a0cf2be 
2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (45s) 44s ago 8m 13.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7489b8a43e7f 2026-03-09T20:52:48.491 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (4m) 1s ago 9m 56.1M - 2.43.0 a07b618ecd1d 3f9c07cd3fe3 2026-03-09T20:52:48.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.492+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f84e4077a80 msgr2=0x7f84e4079f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.492+0000 7f84fcfa5640 1 --2- 192.168.123.107:0/4047523057 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f84e4077a80 0x7f84e4079f40 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f84e8006fd0 tx=0x7f84e8008040 comp rx=0 tx=0).stop 2026-03-09T20:52:48.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.492+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f84f80827d0 msgr2=0x7f84f8082c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.492+0000 7f84fcfa5640 1 --2- 192.168.123.107:0/4047523057 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f84f80827d0 0x7f84f8082c50 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f84f00042c0 tx=0x7f84f0004360 comp rx=0 tx=0).stop 2026-03-09T20:52:48.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.493+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 shutdown_connections 2026-03-09T20:52:48.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.493+0000 7f84fcfa5640 1 --2- 192.168.123.107:0/4047523057 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f84e4077a80 0x7f84e4079f40 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.493+0000 7f84fcfa5640 1 --2- 192.168.123.107:0/4047523057 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f84f80827d0 0x7f84f8082c50 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.493+0000 7f84fcfa5640 1 --2- 192.168.123.107:0/4047523057 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f84f8071a50 0x7f84f8084180 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.493+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 >> 192.168.123.107:0/4047523057 conn(0x7f84f806d4f0 msgr2=0x7f84f8073130 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:48.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.493+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 shutdown_connections 2026-03-09T20:52:48.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.493+0000 7f84fcfa5640 1 -- 192.168.123.107:0/4047523057 wait complete. 
2026-03-09T20:52:48.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.556+0000 7f497a1a3640 1 -- 192.168.123.107:0/2407216995 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4974072420 msgr2=0x7f4974077190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.556+0000 7f497a1a3640 1 --2- 192.168.123.107:0/2407216995 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4974072420 0x7f4974077190 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f496c00b600 tx=0x7f496c030670 comp rx=0 tx=0).stop 2026-03-09T20:52:48.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 -- 192.168.123.107:0/2407216995 shutdown_connections 2026-03-09T20:52:48.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 --2- 192.168.123.107:0/2407216995 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4974072420 0x7f4974077190 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 --2- 192.168.123.107:0/2407216995 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4974071a50 0x7f4974071e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.558 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 -- 192.168.123.107:0/2407216995 >> 192.168.123.107:0/2407216995 conn(0x7f497406d4f0 msgr2=0x7f497406f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:48.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:48 vm07.local ceph-mon[112105]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-09T20:52:48.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:48 vm07.local ceph-mon[112105]: Cluster is now 
healthy 2026-03-09T20:52:48.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:48 vm07.local ceph-mon[112105]: mds.? [v2:192.168.123.107:6828/3625324292,v1:192.168.123.107:6829/3625324292] up:active 2026-03-09T20:52:48.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:48 vm07.local ceph-mon[112105]: fsmap cephfs:1 {0=cephfs.vm07.potfau=up:active} 3 up:standby 2026-03-09T20:52:48.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:48.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:48.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:48 vm07.local ceph-mon[112105]: from='client.34354 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 -- 192.168.123.107:0/2407216995 shutdown_connections 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 -- 192.168.123.107:0/2407216995 wait complete. 
2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 Processor -- start 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 -- start start 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4974071a50 0x7f4974131a30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4974131f70 0x7f49741323f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f49741333e0 con 0x7f4974131f70 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.557+0000 7f497a1a3640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4974133550 con 0x7f4974071a50 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.558+0000 7f49791a1640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4974071a50 0x7f4974131a30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.558+0000 7f49791a1640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4974071a50 0x7f4974131a30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:58228/0 (socket says 192.168.123.107:58228) 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.558+0000 7f49791a1640 1 -- 192.168.123.107:0/1265134261 learned_addr learned my addr 192.168.123.107:0/1265134261 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.558+0000 7f49789a0640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4974131f70 0x7f49741323f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.558+0000 7f49791a1640 1 -- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4974131f70 msgr2=0x7f49741323f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.558+0000 7f49791a1640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4974131f70 0x7f49741323f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.558+0000 7f49791a1640 1 -- 192.168.123.107:0/1265134261 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f496c009d00 con 0x7f4974071a50 2026-03-09T20:52:48.561 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.558+0000 7f49789a0640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4974131f70 0x7f49741323f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T20:52:48.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.565+0000 7f49791a1640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4974071a50 0x7f4974131a30 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f497000b700 tx=0x7f497000bbd0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:48.566 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.566+0000 7f496a7fc640 1 -- 192.168.123.107:0/1265134261 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f49700042e0 con 0x7f4974071a50 2026-03-09T20:52:48.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.566+0000 7f497a1a3640 1 -- 192.168.123.107:0/1265134261 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f497407fb30 con 0x7f4974071a50 2026-03-09T20:52:48.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.566+0000 7f497a1a3640 1 -- 192.168.123.107:0/1265134261 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4974080030 con 0x7f4974071a50 2026-03-09T20:52:48.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.566+0000 7f496a7fc640 1 -- 192.168.123.107:0/1265134261 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f4970009450 con 0x7f4974071a50 2026-03-09T20:52:48.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.566+0000 7f496a7fc640 1 -- 192.168.123.107:0/1265134261 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f497000cae0 con 0x7f4974071a50 2026-03-09T20:52:48.569 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.567+0000 7f497a1a3640 1 -- 192.168.123.107:0/1265134261 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f497407a430 con 0x7f4974071a50 2026-03-09T20:52:48.569 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.568+0000 7f496a7fc640 1 -- 192.168.123.107:0/1265134261 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f4970037b90 con 0x7f4974071a50 2026-03-09T20:52:48.572 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.571+0000 7f496a7fc640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f495c0779b0 0x7f495c079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.572 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.571+0000 7f496a7fc640 1 -- 192.168.123.107:0/1265134261 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6480+0+0 (secure 0 0 0) 0x7f4970099060 con 0x7f4974071a50 2026-03-09T20:52:48.572 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.571+0000 7f496a7fc640 1 -- 192.168.123.107:0/1265134261 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f4970099440 con 0x7f4974071a50 2026-03-09T20:52:48.573 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.572+0000 7f49789a0640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f495c0779b0 0x7f495c079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.580 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.579+0000 7f49789a0640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f495c0779b0 0x7f495c079e70 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f496c030b80 
tx=0x7f496c03b040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:48.711 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.710+0000 7f497a1a3640 1 -- 192.168.123.107:0/1265134261 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f49740807b0 con 0x7f4974071a50 2026-03-09T20:52:48.714 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.713+0000 7f496a7fc640 1 -- 192.168.123.107:0/1265134261 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+845 (secure 0 0 0) 0x7f4970061860 con 0x7f4974071a50 2026-03-09T20:52:48.714 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:52:48.714 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 3, 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: 
"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 18.2.7-1055-gab47f43c (ab47f43c099b2cbae6e21342fe673ce251da54d6) reef (stable)": 3, 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 11 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:52:48.715 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:52:48.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.716+0000 7f497a1a3640 1 -- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f495c0779b0 msgr2=0x7f495c079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.716+0000 7f497a1a3640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f495c0779b0 0x7f495c079e70 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f496c030b80 tx=0x7f496c03b040 comp rx=0 tx=0).stop 2026-03-09T20:52:48.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.716+0000 7f497a1a3640 1 -- 192.168.123.107:0/1265134261 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4974071a50 msgr2=0x7f4974131a30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.716+0000 7f497a1a3640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4974071a50 0x7f4974131a30 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f497000b700 
tx=0x7f497000bbd0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.716+0000 7f497a1a3640 1 -- 192.168.123.107:0/1265134261 shutdown_connections 2026-03-09T20:52:48.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.716+0000 7f497a1a3640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f495c0779b0 0x7f495c079e70 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.716+0000 7f497a1a3640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4974131f70 0x7f49741323f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.716+0000 7f497a1a3640 1 --2- 192.168.123.107:0/1265134261 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4974071a50 0x7f4974131a30 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.717+0000 7f497a1a3640 1 -- 192.168.123.107:0/1265134261 >> 192.168.123.107:0/1265134261 conn(0x7f497406d4f0 msgr2=0x7f4974075140 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:48.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.717+0000 7f497a1a3640 1 -- 192.168.123.107:0/1265134261 shutdown_connections 2026-03-09T20:52:48.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.717+0000 7f497a1a3640 1 -- 192.168.123.107:0/1265134261 wait complete. 
2026-03-09T20:52:48.782 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.781+0000 7fc6cbfff640 1 -- 192.168.123.107:0/839910150 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc6cc0722e0 msgr2=0x7fc6cc110d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.782 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.781+0000 7fc6cbfff640 1 --2- 192.168.123.107:0/839910150 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc6cc0722e0 0x7fc6cc110d20 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7fc6b40099b0 tx=0x7fc6b402f220 comp rx=0 tx=0).stop 2026-03-09T20:52:48.782 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.781+0000 7fc6cbfff640 1 -- 192.168.123.107:0/839910150 shutdown_connections 2026-03-09T20:52:48.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.781+0000 7fc6cbfff640 1 --2- 192.168.123.107:0/839910150 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc6cc0722e0 0x7fc6cc110d20 unknown :-1 s=CLOSED pgs=131 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.781+0000 7fc6cbfff640 1 --2- 192.168.123.107:0/839910150 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc6cc0719a0 0x7fc6cc071da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.781+0000 7fc6cbfff640 1 -- 192.168.123.107:0/839910150 >> 192.168.123.107:0/839910150 conn(0x7fc6cc06d4f0 msgr2=0x7fc6cc06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:48.786 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.783+0000 7fc6cbfff640 1 -- 192.168.123.107:0/839910150 shutdown_connections 2026-03-09T20:52:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:48 vm10.local ceph-mon[103526]: Health check cleared: FS_DEGRADED (was: 
1 filesystem is degraded) 2026-03-09T20:52:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:48 vm10.local ceph-mon[103526]: Cluster is now healthy 2026-03-09T20:52:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:48 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.107:6828/3625324292,v1:192.168.123.107:6829/3625324292] up:active 2026-03-09T20:52:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:48 vm10.local ceph-mon[103526]: fsmap cephfs:1 {0=cephfs.vm07.potfau=up:active} 3 up:standby 2026-03-09T20:52:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:48 vm10.local ceph-mon[103526]: from='client.34354 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.783+0000 7fc6cbfff640 1 -- 192.168.123.107:0/839910150 wait complete. 
2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.783+0000 7fc6cbfff640 1 Processor -- start 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.783+0000 7fc6cbfff640 1 -- start start 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.783+0000 7fc6cbfff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc6cc0719a0 0x7fc6cc19e770 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.783+0000 7fc6cbfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc6cc0722e0 0x7fc6cc19ecb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.783+0000 7fc6cbfff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc6cc19f280 con 0x7fc6cc0722e0 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.783+0000 7fc6cbfff640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc6cc19f3f0 con 0x7fc6cc0719a0 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.784+0000 7fc6c25ff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc6cc0722e0 0x7fc6cc19ecb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.784+0000 7fc6caffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc6cc0719a0 0x7fc6cc19e770 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.784+0000 7fc6c25ff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc6cc0722e0 0x7fc6cc19ecb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:46474/0 (socket says 192.168.123.107:46474) 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.784+0000 7fc6c25ff640 1 -- 192.168.123.107:0/1466214869 learned_addr learned my addr 192.168.123.107:0/1466214869 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.784+0000 7fc6c25ff640 1 -- 192.168.123.107:0/1466214869 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc6cc0719a0 msgr2=0x7fc6cc19e770 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.784+0000 7fc6c25ff640 1 --2- 192.168.123.107:0/1466214869 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc6cc0719a0 0x7fc6cc19e770 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.784+0000 7fc6c25ff640 1 -- 192.168.123.107:0/1466214869 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc6b4009660 con 0x7fc6cc0722e0 2026-03-09T20:52:48.787 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.784+0000 7fc6c25ff640 1 --2- 192.168.123.107:0/1466214869 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc6cc0722e0 0x7fc6cc19ecb0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7fc6b4002410 tx=0x7fc6b4002980 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:48.787 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.785+0000 7fc6c8ff9640 1 -- 192.168.123.107:0/1466214869 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc6b403d070 con 0x7fc6cc0722e0 2026-03-09T20:52:48.788 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.785+0000 7fc6cbfff640 1 -- 192.168.123.107:0/1466214869 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc6cc077500 con 0x7fc6cc0722e0 2026-03-09T20:52:48.788 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.785+0000 7fc6cbfff640 1 -- 192.168.123.107:0/1466214869 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc6cc0779f0 con 0x7fc6cc0722e0 2026-03-09T20:52:48.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.787+0000 7fc6cbfff640 1 -- 192.168.123.107:0/1466214869 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc6cc10d1b0 con 0x7fc6cc0722e0 2026-03-09T20:52:48.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.791+0000 7fc6c8ff9640 1 -- 192.168.123.107:0/1466214869 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc6b4038730 con 0x7fc6cc0722e0 2026-03-09T20:52:48.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.791+0000 7fc6c8ff9640 1 -- 192.168.123.107:0/1466214869 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc6b40417e0 con 0x7fc6cc0722e0 2026-03-09T20:52:48.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.794+0000 7fc6c8ff9640 1 -- 192.168.123.107:0/1466214869 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc6b4041980 con 0x7fc6cc0722e0 2026-03-09T20:52:48.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.794+0000 7fc6c8ff9640 1 --2- 
192.168.123.107:0/1466214869 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc6a0077a80 0x7fc6a0079f40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:48.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.795+0000 7fc6c8ff9640 1 -- 192.168.123.107:0/1466214869 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6480+0+0 (secure 0 0 0) 0x7fc6b40bf470 con 0x7fc6cc0722e0 2026-03-09T20:52:48.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.795+0000 7fc6c8ff9640 1 -- 192.168.123.107:0/1466214869 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc6b40c19b0 con 0x7fc6cc0722e0 2026-03-09T20:52:48.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.795+0000 7fc6caffd640 1 --2- 192.168.123.107:0/1466214869 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc6a0077a80 0x7fc6a0079f40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:48.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.795+0000 7fc6caffd640 1 --2- 192.168.123.107:0/1466214869 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc6a0077a80 0x7fc6a0079f40 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fc6bc005fd0 tx=0x7fc6bc004380 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:48.931 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.929+0000 7fc6cbfff640 1 -- 192.168.123.107:0/1466214869 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fc6cc078210 con 0x7fc6cc0722e0 2026-03-09T20:52:48.932 INFO:teuthology.orchestra.run.vm07.stdout:e21 
2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:btime 2026-03-09T20:52:47:379174+0000 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:epoch 21 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:flags 12 joinable allow_snaps allow_multimds_snaps 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:52:47.379172+0000 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:52:48.933 
INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 83 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 1 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:in 0 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:up {0=34316} 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:stopped 1 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 34316 members: 34316 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{0:34316} state up:active seq 13 join_fscid=1 addr [v2:192.168.123.107:6828/3625324292,v1:192.168.123.107:6829/3625324292] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:Standby daemons: 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout: 
2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{-1:44247} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.110:6826/2699915815,v1:192.168.123.110:6827/2699915815] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{-1:44269} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.110:6824/1063035280,v1:192.168.123.110:6825/1063035280] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{-1:44273} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.107:6826/2346066069,v1:192.168.123.107:6827/2346066069] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.931+0000 7fc6c8ff9640 1 -- 192.168.123.107:0/1466214869 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 21 v21) v1 ==== 76+0+1932 (secure 0 0 0) 0x7fc6b4087ca0 con 0x7fc6cc0722e0 2026-03-09T20:52:48.933 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 21 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 -- 192.168.123.107:0/1466214869 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc6a0077a80 msgr2=0x7fc6a0079f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 --2- 192.168.123.107:0/1466214869 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc6a0077a80 0x7fc6a0079f40 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fc6bc005fd0 tx=0x7fc6bc004380 comp rx=0 tx=0).stop 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 -- 192.168.123.107:0/1466214869 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc6cc0722e0 msgr2=0x7fc6cc19ecb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 --2- 192.168.123.107:0/1466214869 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc6cc0722e0 0x7fc6cc19ecb0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7fc6b4002410 tx=0x7fc6b4002980 comp rx=0 tx=0).stop 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 -- 192.168.123.107:0/1466214869 shutdown_connections 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 --2- 192.168.123.107:0/1466214869 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc6a0077a80 0x7fc6a0079f40 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 --2- 192.168.123.107:0/1466214869 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc6cc0722e0 0x7fc6cc19ecb0 unknown :-1 s=CLOSED pgs=132 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 --2- 192.168.123.107:0/1466214869 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc6cc0719a0 0x7fc6cc19e770 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 -- 192.168.123.107:0/1466214869 >> 192.168.123.107:0/1466214869 conn(0x7fc6cc06d4f0 msgr2=0x7fc6cc10eeb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 -- 
192.168.123.107:0/1466214869 shutdown_connections 2026-03-09T20:52:48.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:48.941+0000 7fc6c1dfe640 1 -- 192.168.123.107:0/1466214869 wait complete. 2026-03-09T20:52:49.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.016+0000 7f378226b640 1 -- 192.168.123.107:0/1068660864 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f377c0725f0 msgr2=0x7f377c077360 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:49.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.016+0000 7f378226b640 1 --2- 192.168.123.107:0/1068660864 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f377c0725f0 0x7f377c077360 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7f376c009a00 tx=0x7f376c02f280 comp rx=0 tx=0).stop 2026-03-09T20:52:49.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.016+0000 7f378226b640 1 -- 192.168.123.107:0/1068660864 shutdown_connections 2026-03-09T20:52:49.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.016+0000 7f378226b640 1 --2- 192.168.123.107:0/1068660864 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f377c0725f0 0x7f377c077360 unknown :-1 s=CLOSED pgs=133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:49.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.016+0000 7f378226b640 1 --2- 192.168.123.107:0/1068660864 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f377c071c20 0x7f377c072020 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:49.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.016+0000 7f378226b640 1 -- 192.168.123.107:0/1068660864 >> 192.168.123.107:0/1068660864 conn(0x7f377c06d660 msgr2=0x7f377c06faa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:49.018 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.017+0000 
7f378226b640 1 -- 192.168.123.107:0/1068660864 shutdown_connections 2026-03-09T20:52:49.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.017+0000 7f378226b640 1 -- 192.168.123.107:0/1068660864 wait complete. 2026-03-09T20:52:49.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.017+0000 7f378226b640 1 Processor -- start 2026-03-09T20:52:49.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.017+0000 7f378226b640 1 -- start start 2026-03-09T20:52:49.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.017+0000 7f378226b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f377c071c20 0x7f377c082980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:49.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.017+0000 7f378226b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f377c0725f0 0x7f377c082ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:49.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.017+0000 7f378226b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f377c084370 con 0x7f377c071c20 2026-03-09T20:52:49.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.017+0000 7f378226b640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f377c0844e0 con 0x7f377c0725f0 2026-03-09T20:52:49.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.018+0000 7f377b7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f377c071c20 0x7f377c082980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:49.020 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.018+0000 7f377b7fe640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f377c071c20 0x7f377c082980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:46490/0 (socket says 192.168.123.107:46490) 2026-03-09T20:52:49.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.018+0000 7f377b7fe640 1 -- 192.168.123.107:0/2248798622 learned_addr learned my addr 192.168.123.107:0/2248798622 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:49.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.018+0000 7f377b7fe640 1 -- 192.168.123.107:0/2248798622 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f377c0725f0 msgr2=0x7f377c082ec0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:49.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.018+0000 7f377b7fe640 1 --2- 192.168.123.107:0/2248798622 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f377c0725f0 0x7f377c082ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:49.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.018+0000 7f377b7fe640 1 -- 192.168.123.107:0/2248798622 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f376c009660 con 0x7f377c071c20 2026-03-09T20:52:49.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.019+0000 7f377b7fe640 1 --2- 192.168.123.107:0/2248798622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f377c071c20 0x7f377c082980 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f377400ba50 tx=0x7f377400bf20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:49.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.019+0000 7f3778ff9640 1 -- 192.168.123.107:0/2248798622 <== mon.0 
v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3774002c70 con 0x7f377c071c20 2026-03-09T20:52:49.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.019+0000 7f3778ff9640 1 -- 192.168.123.107:0/2248798622 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f3774002dd0 con 0x7f377c071c20 2026-03-09T20:52:49.021 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.019+0000 7f3778ff9640 1 -- 192.168.123.107:0/2248798622 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3774004900 con 0x7f377c071c20 2026-03-09T20:52:49.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.019+0000 7f378226b640 1 -- 192.168.123.107:0/2248798622 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f377c083460 con 0x7f377c071c20 2026-03-09T20:52:49.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.019+0000 7f378226b640 1 -- 192.168.123.107:0/2248798622 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f377c1b59f0 con 0x7f377c071c20 2026-03-09T20:52:49.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.020+0000 7f378226b640 1 -- 192.168.123.107:0/2248798622 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f377c072020 con 0x7f377c071c20 2026-03-09T20:52:49.026 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.025+0000 7f3778ff9640 1 -- 192.168.123.107:0/2248798622 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f377400c780 con 0x7f377c071c20 2026-03-09T20:52:49.026 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.025+0000 7f3778ff9640 1 --2- 192.168.123.107:0/2248798622 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3768077700 
0x7f3768079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:49.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.025+0000 7f3778ff9640 1 -- 192.168.123.107:0/2248798622 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6480+0+0 (secure 0 0 0) 0x7f377401a030 con 0x7f377c071c20 2026-03-09T20:52:49.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.025+0000 7f3778ff9640 1 -- 192.168.123.107:0/2248798622 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f37740995c0 con 0x7f377c071c20 2026-03-09T20:52:49.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.026+0000 7f377affd640 1 --2- 192.168.123.107:0/2248798622 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3768077700 0x7f3768079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:49.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.026+0000 7f377affd640 1 --2- 192.168.123.107:0/2248798622 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3768077700 0x7f3768079bc0 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f376c02f790 tx=0x7f376c005b00 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:49.133 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.130+0000 7f378226b640 1 -- 192.168.123.107:0/2248798622 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f377c076cb0 con 0x7f3768077700 2026-03-09T20:52:49.136 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.131+0000 7f3778ff9640 1 -- 192.168.123.107:0/2248798622 
<== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7f377c076cb0 con 0x7f3768077700 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: "osd", 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: "mon", 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: "crash", 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: "mgr" 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "13/23 daemons upgraded", 2026-03-09T20:52:49.138 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading mds daemons", 2026-03-09T20:52:49.139 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:52:49.139 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:52:49.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.139+0000 7f378226b640 1 -- 192.168.123.107:0/2248798622 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3768077700 msgr2=0x7f3768079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:49.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.139+0000 7f378226b640 1 --2- 192.168.123.107:0/2248798622 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3768077700 
0x7f3768079bc0 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f376c02f790 tx=0x7f376c005b00 comp rx=0 tx=0).stop 2026-03-09T20:52:49.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.139+0000 7f378226b640 1 -- 192.168.123.107:0/2248798622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f377c071c20 msgr2=0x7f377c082980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:49.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.139+0000 7f378226b640 1 --2- 192.168.123.107:0/2248798622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f377c071c20 0x7f377c082980 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f377400ba50 tx=0x7f377400bf20 comp rx=0 tx=0).stop 2026-03-09T20:52:49.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.139+0000 7f378226b640 1 -- 192.168.123.107:0/2248798622 shutdown_connections 2026-03-09T20:52:49.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.139+0000 7f378226b640 1 --2- 192.168.123.107:0/2248798622 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3768077700 0x7f3768079bc0 secure :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f376c02f790 tx=0x7f376c005b00 comp rx=0 tx=0).stop 2026-03-09T20:52:49.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.139+0000 7f378226b640 1 --2- 192.168.123.107:0/2248798622 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f377c0725f0 0x7f377c082ec0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:49.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.139+0000 7f378226b640 1 --2- 192.168.123.107:0/2248798622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f377c071c20 0x7f377c082980 unknown :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:49.141 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.139+0000 7f378226b640 1 -- 192.168.123.107:0/2248798622 >> 192.168.123.107:0/2248798622 conn(0x7f377c06d660 msgr2=0x7f377c075590 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:49.141 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.140+0000 7f378226b640 1 -- 192.168.123.107:0/2248798622 shutdown_connections 2026-03-09T20:52:49.142 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.141+0000 7f378226b640 1 -- 192.168.123.107:0/2248798622 wait complete. 2026-03-09T20:52:49.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.215+0000 7f4c736ab640 1 -- 192.168.123.107:0/230364918 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c6c071a70 msgr2=0x7f4c6c071e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:49.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.215+0000 7f4c736ab640 1 --2- 192.168.123.107:0/230364918 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c6c071a70 0x7f4c6c071e70 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f4c68009a00 tx=0x7f4c6802f270 comp rx=0 tx=0).stop 2026-03-09T20:52:49.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.216+0000 7f4c736ab640 1 -- 192.168.123.107:0/230364918 shutdown_connections 2026-03-09T20:52:49.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.216+0000 7f4c736ab640 1 --2- 192.168.123.107:0/230364918 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4c6c072440 0x7f4c6c0771b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:49.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.216+0000 7f4c736ab640 1 --2- 192.168.123.107:0/230364918 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c6c071a70 0x7f4c6c071e70 unknown :-1 s=CLOSED pgs=135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:52:49.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.216+0000 7f4c736ab640 1 -- 192.168.123.107:0/230364918 >> 192.168.123.107:0/230364918 conn(0x7f4c6c06d4f0 msgr2=0x7f4c6c06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:49.218 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.216+0000 7f4c736ab640 1 -- 192.168.123.107:0/230364918 shutdown_connections 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.216+0000 7f4c736ab640 1 -- 192.168.123.107:0/230364918 wait complete. 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.216+0000 7f4c736ab640 1 Processor -- start 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c736ab640 1 -- start start 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c736ab640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c6c072440 0x7f4c6c084090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c736ab640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4c6c082730 0x7f4c6c082bb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c736ab640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4c6c0845d0 con 0x7f4c6c072440 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c736ab640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4c6c0830f0 con 0x7f4c6c082730 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c71420640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c6c072440 0x7f4c6c084090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c71420640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c6c072440 0x7f4c6c084090 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:46498/0 (socket says 192.168.123.107:46498) 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c71420640 1 -- 192.168.123.107:0/713430767 learned_addr learned my addr 192.168.123.107:0/713430767 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:52:49.219 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c70c1f640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4c6c082730 0x7f4c6c082bb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:49.220 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c71420640 1 -- 192.168.123.107:0/713430767 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4c6c082730 msgr2=0x7f4c6c082bb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:49.220 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c71420640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4c6c082730 0x7f4c6c082bb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:49.220 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c71420640 1 -- 
192.168.123.107:0/713430767 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4c68009660 con 0x7f4c6c072440 2026-03-09T20:52:49.220 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.217+0000 7f4c71420640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c6c072440 0x7f4c6c084090 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f4c68002fd0 tx=0x7f4c68004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:49.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.219+0000 7f4c627fc640 1 -- 192.168.123.107:0/713430767 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4c68004480 con 0x7f4c6c072440 2026-03-09T20:52:49.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.219+0000 7f4c736ab640 1 -- 192.168.123.107:0/713430767 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4c6c083370 con 0x7f4c6c072440 2026-03-09T20:52:49.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.219+0000 7f4c736ab640 1 -- 192.168.123.107:0/713430767 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4c6c1b5bc0 con 0x7f4c6c072440 2026-03-09T20:52:49.221 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.220+0000 7f4c627fc640 1 -- 192.168.123.107:0/713430767 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f4c680045e0 con 0x7f4c6c072440 2026-03-09T20:52:49.224 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.221+0000 7f4c736ab640 1 -- 192.168.123.107:0/713430767 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4c6c07a8b0 con 0x7f4c6c072440 2026-03-09T20:52:49.228 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.224+0000 7f4c627fc640 1 -- 192.168.123.107:0/713430767 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4c68031070 con 0x7f4c6c072440 2026-03-09T20:52:49.228 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.224+0000 7f4c627fc640 1 -- 192.168.123.107:0/713430767 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f4c68031250 con 0x7f4c6c072440 2026-03-09T20:52:49.229 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.224+0000 7f4c627fc640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4c54077a80 0x7f4c54079f40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:52:49.229 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.225+0000 7f4c627fc640 1 -- 192.168.123.107:0/713430767 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6480+0+0 (secure 0 0 0) 0x7f4c680bf7a0 con 0x7f4c6c072440 2026-03-09T20:52:49.229 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.227+0000 7f4c70c1f640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4c54077a80 0x7f4c54079f40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:52:49.229 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.228+0000 7f4c70c1f640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4c54077a80 0x7f4c54079f40 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f4c6c083e10 tx=0x7f4c64002a60 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:52:49.229 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.228+0000 7f4c627fc640 1 -- 192.168.123.107:0/713430767 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f4c68087e90 con 0x7f4c6c072440 2026-03-09T20:52:49.403 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.402+0000 7f4c736ab640 1 -- 192.168.123.107:0/713430767 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f4c6c07aac0 con 0x7f4c6c072440 2026-03-09T20:52:49.404 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.403+0000 7f4c627fc640 1 -- 192.168.123.107:0/713430767 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f4c680875e0 con 0x7f4c6c072440 2026-03-09T20:52:49.404 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:52:49.406 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.405+0000 7f4c736ab640 1 -- 192.168.123.107:0/713430767 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4c54077a80 msgr2=0x7f4c54079f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:49.406 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.405+0000 7f4c736ab640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4c54077a80 0x7f4c54079f40 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f4c6c083e10 tx=0x7f4c64002a60 comp rx=0 tx=0).stop 2026-03-09T20:52:49.406 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.405+0000 7f4c736ab640 1 -- 192.168.123.107:0/713430767 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c6c072440 msgr2=0x7f4c6c084090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:49.406 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.405+0000 7f4c736ab640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c6c072440 0x7f4c6c084090 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f4c68002fd0 tx=0x7f4c68004290 comp rx=0 tx=0).stop 2026-03-09T20:52:49.406 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.406+0000 7f4c736ab640 1 -- 192.168.123.107:0/713430767 shutdown_connections 2026-03-09T20:52:49.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.406+0000 7f4c736ab640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4c54077a80 0x7f4c54079f40 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:49.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.406+0000 7f4c736ab640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4c6c082730 0x7f4c6c082bb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:49.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.406+0000 7f4c736ab640 1 --2- 192.168.123.107:0/713430767 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c6c072440 0x7f4c6c084090 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:52:49.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.406+0000 7f4c736ab640 1 -- 192.168.123.107:0/713430767 >> 192.168.123.107:0/713430767 conn(0x7f4c6c06d4f0 msgr2=0x7f4c6c075380 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:49.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.406+0000 7f4c736ab640 1 -- 192.168.123.107:0/713430767 shutdown_connections 2026-03-09T20:52:49.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:52:49.406+0000 7f4c736ab640 1 -- 
192.168.123.107:0/713430767 wait complete. 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='client.34358 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: Detected new or changed devices on vm07 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='client.34362 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: pgmap v171: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 102 B/s wr, 6 op/s 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 
vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm07.potfau"]}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: Upgrade: It appears safe to stop mds.cephfs.vm07.potfau 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/1265134261' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/1466214869' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: Upgrade: Updating mds.cephfs.vm07.potfau 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.potfau", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: Deploying daemon mds.cephfs.vm07.potfau on vm07 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='client.34374 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:49.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:49 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/713430767' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='client.34358 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: Detected new or changed devices on vm07 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='client.34362 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: pgmap v171: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 102 B/s wr, 6 op/s 2026-03-09T20:52:49.884 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm07.potfau"]}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: Upgrade: It appears safe to stop mds.cephfs.vm07.potfau 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/1265134261' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/1466214869' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: Upgrade: Updating mds.cephfs.vm07.potfau 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm07.potfau", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:52:49.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:49.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: Deploying daemon mds.cephfs.vm07.potfau on vm07 2026-03-09T20:52:49.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='client.34374 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:52:49.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/713430767' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:52:49.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:49 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07[112101]: 2026-03-09T20:52:49.558+0000 7ff513bf6640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T20:52:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:50 vm10.local ceph-mon[103526]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-09T20:52:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:50 vm10.local ceph-mon[103526]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T20:52:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:50 vm10.local ceph-mon[103526]: osdmap e84: 6 total, 6 up, 6 in 2026-03-09T20:52:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:50 vm10.local ceph-mon[103526]: Standby daemon mds.cephfs.vm10.hzyuyq assigned to filesystem cephfs as rank 0 2026-03-09T20:52:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:50 vm10.local ceph-mon[103526]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T20:52:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:50 vm10.local ceph-mon[103526]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-09T20:52:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:50 vm10.local ceph-mon[103526]: fsmap cephfs:1/1 {0=cephfs.vm10.hzyuyq=up:replay} 2 up:standby 2026-03-09T20:52:50.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:50 vm07.local ceph-mon[112105]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-09T20:52:50.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:50 vm07.local ceph-mon[112105]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T20:52:50.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:50 
vm07.local ceph-mon[112105]: osdmap e84: 6 total, 6 up, 6 in 2026-03-09T20:52:50.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:50 vm07.local ceph-mon[112105]: Standby daemon mds.cephfs.vm10.hzyuyq assigned to filesystem cephfs as rank 0 2026-03-09T20:52:50.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:50 vm07.local ceph-mon[112105]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T20:52:50.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:50 vm07.local ceph-mon[112105]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-09T20:52:50.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:50 vm07.local ceph-mon[112105]: fsmap cephfs:1/1 {0=cephfs.vm10.hzyuyq=up:replay} 2 up:standby 2026-03-09T20:52:51.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:51 vm10.local ceph-mon[103526]: pgmap v173: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 23 MiB/s rd, 511 B/s wr, 10 op/s 2026-03-09T20:52:51.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:51 vm07.local ceph-mon[112105]: pgmap v173: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 23 MiB/s rd, 511 B/s wr, 10 op/s 2026-03-09T20:52:53.987 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:53 vm07.local ceph-mon[112105]: pgmap v174: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 24 MiB/s rd, 5.6 KiB/s wr, 13 op/s 2026-03-09T20:52:53.988 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:53.988 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:53.988 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:54.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:53 vm10.local ceph-mon[103526]: pgmap v174: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 24 MiB/s rd, 5.6 KiB/s wr, 13 op/s 2026-03-09T20:52:54.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:54.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:54.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:54 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.107:6828/561473714,v1:192.168.123.107:6829/561473714] up:boot 2026-03-09T20:52:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:54 vm10.local ceph-mon[103526]: mds.? 
[v2:192.168.123.110:6826/2699915815,v1:192.168.123.110:6827/2699915815] up:reconnect 2026-03-09T20:52:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:54 vm10.local ceph-mon[103526]: fsmap cephfs:1/1 {0=cephfs.vm10.hzyuyq=up:reconnect} 3 up:standby 2026-03-09T20:52:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:52:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:54 vm10.local ceph-mon[103526]: reconnect by client.24331 192.168.123.110:0/2539319683 after 0 2026-03-09T20:52:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:54 vm10.local ceph-mon[103526]: reconnect by client.14518 192.168.144.1:0/2371384686 after 0.00900001 2026-03-09T20:52:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:55.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:54 vm07.local ceph-mon[112105]: mds.? [v2:192.168.123.107:6828/561473714,v1:192.168.123.107:6829/561473714] up:boot 2026-03-09T20:52:55.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:54 vm07.local ceph-mon[112105]: mds.? 
[v2:192.168.123.110:6826/2699915815,v1:192.168.123.110:6827/2699915815] up:reconnect 2026-03-09T20:52:55.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:54 vm07.local ceph-mon[112105]: fsmap cephfs:1/1 {0=cephfs.vm10.hzyuyq=up:reconnect} 3 up:standby 2026-03-09T20:52:55.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:52:55.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:54 vm07.local ceph-mon[112105]: reconnect by client.24331 192.168.123.110:0/2539319683 after 0 2026-03-09T20:52:55.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:54 vm07.local ceph-mon[112105]: reconnect by client.14518 192.168.144.1:0/2371384686 after 0.00900001 2026-03-09T20:52:55.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:55.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:55.933 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:55 vm07.local ceph-mon[112105]: pgmap v175: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 24 MiB/s rd, 5.6 KiB/s wr, 13 op/s 2026-03-09T20:52:55.933 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:55 vm07.local ceph-mon[112105]: mds.? 
[v2:192.168.123.110:6826/2699915815,v1:192.168.123.110:6827/2699915815] up:rejoin 2026-03-09T20:52:55.933 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:55 vm07.local ceph-mon[112105]: fsmap cephfs:1/1 {0=cephfs.vm10.hzyuyq=up:rejoin} 3 up:standby 2026-03-09T20:52:55.933 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:55.933 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:55.933 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:55 vm07.local ceph-mon[112105]: daemon mds.cephfs.vm10.hzyuyq is now active in filesystem cephfs as rank 0 2026-03-09T20:52:55.933 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:55.933 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:55 vm10.local ceph-mon[103526]: pgmap v175: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 24 MiB/s rd, 5.6 KiB/s wr, 13 op/s 2026-03-09T20:52:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:55 vm10.local ceph-mon[103526]: mds.? 
[v2:192.168.123.110:6826/2699915815,v1:192.168.123.110:6827/2699915815] up:rejoin 2026-03-09T20:52:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:55 vm10.local ceph-mon[103526]: fsmap cephfs:1/1 {0=cephfs.vm10.hzyuyq=up:rejoin} 3 up:standby 2026-03-09T20:52:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:55 vm10.local ceph-mon[103526]: daemon mds.cephfs.vm10.hzyuyq is now active in filesystem cephfs as rank 0 2026-03-09T20:52:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: Cluster is now healthy 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: mds.? 
[v2:192.168.123.110:6826/2699915815,v1:192.168.123.110:6827/2699915815] up:active 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: fsmap cephfs:1 {0=cephfs.vm10.hzyuyq=up:active} 3 up:standby 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:56.884 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:56.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:56 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm10.qpltwp"]}]: dispatch 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: Cluster is now healthy 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: mds.? 
[v2:192.168.123.110:6826/2699915815,v1:192.168.123.110:6827/2699915815] up:active 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: fsmap cephfs:1 {0=cephfs.vm10.hzyuyq=up:active} 3 up:standby 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:56.915 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:52:56.915 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:56 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm10.qpltwp"]}]: dispatch 2026-03-09T20:52:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:57 vm10.local ceph-mon[103526]: pgmap v176: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 5.7 KiB/s wr, 13 op/s 2026-03-09T20:52:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:57 vm10.local ceph-mon[103526]: Upgrade: It appears safe to stop mds.cephfs.vm10.qpltwp 2026-03-09T20:52:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:57 vm10.local ceph-mon[103526]: Upgrade: Updating mds.cephfs.vm10.qpltwp 2026-03-09T20:52:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:57 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:57 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.qpltwp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:52:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:57 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T20:52:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:57 vm10.local ceph-mon[103526]: Deploying daemon mds.cephfs.vm10.qpltwp on vm10 2026-03-09T20:52:58.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:57 vm07.local ceph-mon[112105]: pgmap v176: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 5.7 KiB/s wr, 13 op/s 2026-03-09T20:52:58.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:57 vm07.local ceph-mon[112105]: Upgrade: It appears safe to stop mds.cephfs.vm10.qpltwp 2026-03-09T20:52:58.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:57 vm07.local ceph-mon[112105]: Upgrade: Updating mds.cephfs.vm10.qpltwp 2026-03-09T20:52:58.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:57 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:58.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:57 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.qpltwp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:52:58.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:57 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:58.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:57 vm07.local ceph-mon[112105]: Deploying daemon mds.cephfs.vm10.qpltwp on vm10 2026-03-09T20:52:59.030 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:58 vm07.local ceph-mon[112105]: osdmap e85: 6 total, 6 up, 6 in 2026-03-09T20:52:59.030 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:58 vm07.local ceph-mon[112105]: fsmap cephfs:1 {0=cephfs.vm10.hzyuyq=up:active} 2 up:standby 2026-03-09T20:52:59.030 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:59.030 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:59.030 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:58 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:59.266 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:58 vm10.local ceph-mon[103526]: osdmap e85: 6 total, 6 up, 6 in 2026-03-09T20:52:59.266 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:58 vm10.local ceph-mon[103526]: fsmap cephfs:1 {0=cephfs.vm10.hzyuyq=up:active} 2 up:standby 2026-03-09T20:52:59.266 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:59.266 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:52:59.266 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:58 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:59 vm10.local ceph-mon[103526]: pgmap v178: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 23 MiB/s rd, 6.1 KiB/s wr, 15 op/s 2026-03-09T20:53:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:59 
vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:59 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.110:6824/4027718916,v1:192.168.123.110:6825/4027718916] up:boot 2026-03-09T20:53:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:59 vm10.local ceph-mon[103526]: fsmap cephfs:1 {0=cephfs.vm10.hzyuyq=up:active} 3 up:standby 2026-03-09T20:53:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:52:59 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:53:00.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:59 vm07.local ceph-mon[112105]: pgmap v178: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 23 MiB/s rd, 6.1 KiB/s wr, 15 op/s 2026-03-09T20:53:00.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:00.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:00.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:59 vm07.local ceph-mon[112105]: mds.? 
[v2:192.168.123.110:6824/4027718916,v1:192.168.123.110:6825/4027718916] up:boot 2026-03-09T20:53:00.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:59 vm07.local ceph-mon[112105]: fsmap cephfs:1 {0=cephfs.vm10.hzyuyq=up:active} 3 up:standby 2026-03-09T20:53:00.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:52:59 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:53:01.308 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:01 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:01.308 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:01 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:01.308 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:01 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:01.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:01 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:01.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:01 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:01.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:01 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: pgmap v179: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 16 MiB/s rd, 5.3 KiB/s wr, 11 op/s 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: Detected new or changed devices on vm10 
2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:02.288 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm10.hzyuyq"]}]: dispatch 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: Upgrade: It appears safe to stop mds.cephfs.vm10.hzyuyq 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.hzyuyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: osdmap e86: 6 total, 6 up, 6 in 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: Standby daemon mds.cephfs.vm07.potfau assigned to filesystem cephfs as rank 0 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: Health check cleared: MDS_ALL_DOWN (was: 1 
filesystem is offline) 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-09T20:53:02.288 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:02 vm10.local ceph-mon[103526]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:replay} 2 up:standby 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm07[112101]: 2026-03-09T20:53:02.002+0000 7ff513bf6640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: pgmap v179: 65 pgs: 65 active+clean; 211 MiB data, 954 MiB used, 119 GiB / 120 GiB avail; 16 MiB/s rd, 5.3 KiB/s wr, 11 op/s 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: Detected new or changed devices on vm10 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm10.hzyuyq"]}]: dispatch 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: Upgrade: It appears safe to stop mds.cephfs.vm10.hzyuyq 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm10.hzyuyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T20:53:02.384 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: osdmap e86: 6 total, 6 up, 6 in 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: Standby daemon mds.cephfs.vm07.potfau assigned to filesystem cephfs as rank 0 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-09T20:53:02.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:02 vm07.local ceph-mon[112105]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:replay} 2 up:standby 2026-03-09T20:53:03.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:03 vm07.local ceph-mon[112105]: Upgrade: Updating mds.cephfs.vm10.hzyuyq 2026-03-09T20:53:03.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:03 vm07.local ceph-mon[112105]: Deploying daemon mds.cephfs.vm10.hzyuyq on vm10 2026-03-09T20:53:03.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:03 vm10.local ceph-mon[103526]: Upgrade: Updating mds.cephfs.vm10.hzyuyq 2026-03-09T20:53:03.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:03 vm10.local ceph-mon[103526]: Deploying daemon 
mds.cephfs.vm10.hzyuyq on vm10 2026-03-09T20:53:04.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:04 vm07.local ceph-mon[112105]: pgmap v181: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 8.5 MiB/s rd, 6.2 KiB/s wr, 10 op/s 2026-03-09T20:53:04.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:04 vm10.local ceph-mon[103526]: pgmap v181: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 8.5 MiB/s rd, 6.2 KiB/s wr, 10 op/s 2026-03-09T20:53:06.882 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:06 vm10.local ceph-mon[103526]: pgmap v182: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 8.5 MiB/s rd, 6.1 KiB/s wr, 10 op/s 2026-03-09T20:53:06.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:06 vm07.local ceph-mon[112105]: pgmap v182: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 8.5 MiB/s rd, 6.1 KiB/s wr, 10 op/s 2026-03-09T20:53:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:07 vm07.local ceph-mon[112105]: pgmap v183: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 5.6 KiB/s wr, 12 op/s 2026-03-09T20:53:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:07 vm07.local ceph-mon[112105]: mds.? 
[v2:192.168.123.107:6828/561473714,v1:192.168.123.107:6829/561473714] up:reconnect 2026-03-09T20:53:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:07 vm07.local ceph-mon[112105]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:reconnect} 2 up:standby 2026-03-09T20:53:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:07 vm07.local ceph-mon[112105]: reconnect by client.14518 192.168.144.1:0/2371384686 after 0.001 2026-03-09T20:53:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:07 vm07.local ceph-mon[112105]: reconnect by client.24331 192.168.123.110:0/2539319683 after 0.001 2026-03-09T20:53:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:07 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:07 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:07 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:07 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:07.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:07 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:07 vm10.local ceph-mon[103526]: pgmap v183: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 5.6 KiB/s wr, 12 op/s 2026-03-09T20:53:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:07 vm10.local ceph-mon[103526]: mds.? 
[v2:192.168.123.107:6828/561473714,v1:192.168.123.107:6829/561473714] up:reconnect 2026-03-09T20:53:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:07 vm10.local ceph-mon[103526]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:reconnect} 2 up:standby 2026-03-09T20:53:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:07 vm10.local ceph-mon[103526]: reconnect by client.14518 192.168.144.1:0/2371384686 after 0.001 2026-03-09T20:53:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:07 vm10.local ceph-mon[103526]: reconnect by client.24331 192.168.123.110:0/2539319683 after 0.001 2026-03-09T20:53:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:07 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:07 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:07 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:07 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:07 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:08.828 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:08 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.110:6826/1370091423,v1:192.168.123.110:6827/1370091423] up:boot 2026-03-09T20:53:08.828 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:08 vm10.local ceph-mon[103526]: mds.? 
[v2:192.168.123.107:6828/561473714,v1:192.168.123.107:6829/561473714] up:rejoin 2026-03-09T20:53:08.828 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:08 vm10.local ceph-mon[103526]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:rejoin} 3 up:standby 2026-03-09T20:53:08.828 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:53:08.828 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:08 vm10.local ceph-mon[103526]: daemon mds.cephfs.vm07.potfau is now active in filesystem cephfs as rank 0 2026-03-09T20:53:08.828 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:08.828 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:08 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:08 vm07.local ceph-mon[112105]: mds.? [v2:192.168.123.110:6826/1370091423,v1:192.168.123.110:6827/1370091423] up:boot 2026-03-09T20:53:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:08 vm07.local ceph-mon[112105]: mds.? 
[v2:192.168.123.107:6828/561473714,v1:192.168.123.107:6829/561473714] up:rejoin 2026-03-09T20:53:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:08 vm07.local ceph-mon[112105]: fsmap cephfs:1/1 {0=cephfs.vm07.potfau=up:rejoin} 3 up:standby 2026-03-09T20:53:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:53:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:08 vm07.local ceph-mon[112105]: daemon mds.cephfs.vm07.potfau is now active in filesystem cephfs as rank 0 2026-03-09T20:53:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:08.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:08 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: pgmap v184: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.8 KiB/s wr, 11 op/s 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: Cluster is now healthy 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: mds.? 
[v2:192.168.123.107:6828/561473714,v1:192.168.123.107:6829/561473714] up:active 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: fsmap cephfs:1 {0=cephfs.vm07.potfau=up:active} 3 up:standby 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:09.884 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: Upgrade: Setting container_image for all mds 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm07.potfau"}]': finished 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm07.rovdbp"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 
cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm07.rovdbp"}]': finished 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm10.hzyuyq"}]': finished 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm10.qpltwp"}]': finished 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: Upgrade: Scaling up filesystem cephfs max_mds to 2 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "2"}]: dispatch 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:09.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: pgmap v184: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.8 KiB/s wr, 11 op/s 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: Cluster is now healthy 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: mds.? [v2:192.168.123.107:6828/561473714,v1:192.168.123.107:6829/561473714] up:active 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: fsmap cephfs:1 {0=cephfs.vm07.potfau=up:active} 3 up:standby 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: Upgrade: Setting container_image for all mds 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"mds.cephfs.vm07.potfau"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm07.potfau"}]': finished 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm07.rovdbp"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm07.rovdbp"}]': finished 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm10.hzyuyq"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm10.hzyuyq"}]': finished 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm10.qpltwp"}]: dispatch 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": 
"mds.cephfs.vm10.qpltwp"}]': finished 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: Upgrade: Scaling up filesystem cephfs max_mds to 2 2026-03-09T20:53:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "2"}]: dispatch 2026-03-09T20:53:10.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:10.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 vm07.local ceph-mon[112105]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX) 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "2"}]': finished 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 vm07.local ceph-mon[112105]: daemon mds.cephfs.vm07.rovdbp assigned to filesystem cephfs as rank 1 (now has 2 ranks) 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 vm07.local ceph-mon[112105]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds) 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 vm07.local ceph-mon[112105]: Cluster is now healthy 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 
vm07.local ceph-mon[112105]: fsmap cephfs:1 {0=cephfs.vm07.potfau=up:active} 3 up:standby 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 vm07.local ceph-mon[112105]: fsmap cephfs:2 {0=cephfs.vm07.potfau=up:active,1=cephfs.vm07.rovdbp=up:starting} 2 up:standby 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 vm07.local ceph-mon[112105]: Upgrade: Enabling allow_standby_replay on filesystem cephfs 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]: dispatch 2026-03-09T20:53:10.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:10 vm07.local ceph-mon[112105]: daemon mds.cephfs.vm07.rovdbp is now active in filesystem cephfs as rank 1 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX) 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "2"}]': finished 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: daemon mds.cephfs.vm07.rovdbp assigned to filesystem cephfs as rank 1 (now has 2 ranks) 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than 
max_mds) 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: Cluster is now healthy 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: fsmap cephfs:1 {0=cephfs.vm07.potfau=up:active} 3 up:standby 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: fsmap cephfs:2 {0=cephfs.vm07.potfau=up:active,1=cephfs.vm07.rovdbp=up:starting} 2 up:standby 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: Upgrade: Enabling allow_standby_replay on filesystem cephfs 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]: dispatch 2026-03-09T20:53:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:10 vm10.local ceph-mon[103526]: daemon mds.cephfs.vm07.rovdbp is now active in filesystem cephfs as rank 1 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: pgmap v185: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 4.9 KiB/s wr, 11 op/s 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]': finished 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: mds.? 
[v2:192.168.123.107:6826/2346066069,v1:192.168.123.107:6827/2346066069] up:active 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: fsmap cephfs:2 {0=cephfs.vm07.potfau=up:active,1=cephfs.vm07.rovdbp=up:active} 2 up:standby 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: fsmap cephfs:2 {0=cephfs.vm07.potfau=up:active,1=cephfs.vm07.rovdbp=up:active} 2 up:standby-replay 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: Upgrade: Setting container_image for all rgw 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: Upgrade: Setting container_image for all rbd-mirror 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:12.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-09T20:53:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:53:12.885 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:12 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: pgmap v185: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 4.9 KiB/s wr, 11 op/s 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]': finished 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: mds.? 
[v2:192.168.123.107:6826/2346066069,v1:192.168.123.107:6827/2346066069] up:active 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: fsmap cephfs:2 {0=cephfs.vm07.potfau=up:active,1=cephfs.vm07.rovdbp=up:active} 2 up:standby 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: fsmap cephfs:2 {0=cephfs.vm07.potfau=up:active,1=cephfs.vm07.rovdbp=up:active} 2 up:standby-replay 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: Upgrade: Setting container_image for all rgw 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: Upgrade: Setting container_image for all rbd-mirror 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:53:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:12 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:14.011 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:13 vm10.local ceph-mon[103526]: Upgrade: Updating ceph-exporter.vm07 (1/2) 2026-03-09T20:53:14.011 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:13 vm10.local ceph-mon[103526]: Deploying daemon ceph-exporter.vm07 on vm07 2026-03-09T20:53:14.012 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:13 vm10.local ceph-mon[103526]: pgmap v186: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 4.8 KiB/s wr, 13 op/s 2026-03-09T20:53:14.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:13 vm07.local ceph-mon[112105]: Upgrade: Updating ceph-exporter.vm07 (1/2) 2026-03-09T20:53:14.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:13 vm07.local ceph-mon[112105]: Deploying daemon ceph-exporter.vm07 on vm07 2026-03-09T20:53:14.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:13 vm07.local ceph-mon[112105]: pgmap v186: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 4.8 KiB/s wr, 13 op/s 2026-03-09T20:53:14.959 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:14 vm10.local 
ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:14.960 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:14 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:14.960 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:14 vm10.local ceph-mon[103526]: Upgrade: Updating ceph-exporter.vm10 (2/2) 2026-03-09T20:53:14.960 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:14 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:14.960 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:14 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:53:14.960 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:14 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:14.960 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:14 vm10.local ceph-mon[103526]: Deploying daemon ceph-exporter.vm10 on vm10 2026-03-09T20:53:15.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:14 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:15.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:14 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:15.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:14 vm07.local ceph-mon[112105]: Upgrade: Updating ceph-exporter.vm10 (2/2) 2026-03-09T20:53:15.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:14 vm07.local ceph-mon[112105]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:15.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:14 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm10", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T20:53:15.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:14 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:15.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:14 vm07.local ceph-mon[112105]: Deploying daemon ceph-exporter.vm10 on vm10 2026-03-09T20:53:16.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:16 vm07.local ceph-mon[112105]: pgmap v187: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 24 MiB/s rd, 3.8 KiB/s wr, 16 op/s 2026-03-09T20:53:16.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:16 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:16.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:16 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:16.560 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:16 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:16.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:16 vm10.local ceph-mon[103526]: pgmap v187: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 24 MiB/s rd, 3.8 KiB/s wr, 16 op/s 2026-03-09T20:53:16.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:16 vm10.local 
ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:16.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:16 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:16.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:16 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:17.707 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:17 vm10.local ceph-mon[103526]: pgmap v188: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 28 MiB/s rd, 3.9 KiB/s wr, 20 op/s 2026-03-09T20:53:17.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:17 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:17.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:17 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:17.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:17 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:17.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:17 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:17.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:17 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:17.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:17 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:17.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:17 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' 2026-03-09T20:53:17.708 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:17 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:17 vm07.local ceph-mon[112105]: pgmap v188: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 28 MiB/s rd, 3.9 KiB/s wr, 20 op/s 2026-03-09T20:53:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:17 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:17 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:17 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:17 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:17 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:17 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:17 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:17 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local 
ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm07"}]: 
dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm07"}]': finished 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm10"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm10"}]': finished 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:19 vm07.local ceph-mon[112105]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.469+0000 7fe2a6256640 1 -- 192.168.123.107:0/3364256981 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2a0072390 msgr2=0x7fe2a010c590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.469+0000 7fe2a6256640 1 --2- 192.168.123.107:0/3364256981 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2a0072390 0x7fe2a010c590 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7fe288009a00 tx=0x7fe28802f290 comp rx=0 tx=0).stop 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.469+0000 7fe2a6256640 1 -- 192.168.123.107:0/3364256981 shutdown_connections 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.469+0000 7fe2a6256640 1 --2- 192.168.123.107:0/3364256981 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2a0072390 0x7fe2a010c590 unknown :-1 s=CLOSED pgs=140 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.469+0000 7fe2a6256640 1 --2- 192.168.123.107:0/3364256981 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2a00719c0 0x7fe2a0071dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.469+0000 7fe2a6256640 1 -- 192.168.123.107:0/3364256981 >> 192.168.123.107:0/3364256981 conn(0x7fe2a006d4f0 msgr2=0x7fe2a006f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.469+0000 7fe2a6256640 1 -- 192.168.123.107:0/3364256981 shutdown_connections 2026-03-09T20:53:19.472 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.469+0000 7fe2a6256640 1 -- 192.168.123.107:0/3364256981 wait complete. 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.470+0000 7fe2a6256640 1 Processor -- start 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.470+0000 7fe2a6256640 1 -- start start 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.471+0000 7fe2a6256640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2a00719c0 0x7fe2a01159b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.471+0000 7fe2a6256640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2a0072390 0x7fe2a0115ef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.471+0000 7fe2a6256640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe2a01173f0 con 0x7fe2a0072390 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.471+0000 7fe2a6256640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe2a0117560 con 0x7fe2a00719c0 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.471+0000 7fe29f7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2a00719c0 0x7fe2a01159b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.471+0000 7fe29f7fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2a00719c0 0x7fe2a01159b0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60014/0 (socket says 192.168.123.107:60014) 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.471+0000 7fe29f7fe640 1 -- 192.168.123.107:0/3883893438 learned_addr learned my addr 192.168.123.107:0/3883893438 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:19.472 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.471+0000 7fe29effd640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2a0072390 0x7fe2a0115ef0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:19.473 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.472+0000 7fe29f7fe640 1 -- 192.168.123.107:0/3883893438 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2a0072390 msgr2=0x7fe2a0115ef0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:19.473 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.472+0000 7fe29f7fe640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2a0072390 0x7fe2a0115ef0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.473 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.472+0000 7fe29f7fe640 1 -- 192.168.123.107:0/3883893438 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe288009660 con 0x7fe2a00719c0 2026-03-09T20:53:19.473 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.472+0000 7fe29f7fe640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2a00719c0 0x7fe2a01159b0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto 
rx=0x7fe29800b4f0 tx=0x7fe29800b9c0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:19.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.475+0000 7fe29cff9640 1 -- 192.168.123.107:0/3883893438 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe298004280 con 0x7fe2a00719c0 2026-03-09T20:53:19.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.475+0000 7fe29cff9640 1 -- 192.168.123.107:0/3883893438 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fe2980043e0 con 0x7fe2a00719c0 2026-03-09T20:53:19.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.475+0000 7fe29cff9640 1 -- 192.168.123.107:0/3883893438 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe298010b50 con 0x7fe2a00719c0 2026-03-09T20:53:19.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.475+0000 7fe2a6256640 1 -- 192.168.123.107:0/3883893438 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe2a0116550 con 0x7fe2a00719c0 2026-03-09T20:53:19.476 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.475+0000 7fe2a6256640 1 -- 192.168.123.107:0/3883893438 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe2a01b5970 con 0x7fe2a00719c0 2026-03-09T20:53:19.477 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.476+0000 7fe2a6256640 1 -- 192.168.123.107:0/3883893438 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe260005350 con 0x7fe2a00719c0 2026-03-09T20:53:19.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.478+0000 7fe29cff9640 1 -- 192.168.123.107:0/3883893438 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 
0x7fe298002780 con 0x7fe2a00719c0 2026-03-09T20:53:19.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.479+0000 7fe29cff9640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe2680778e0 0x7fe268079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:19.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.479+0000 7fe29cff9640 1 -- 192.168.123.107:0/3883893438 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fe298098fe0 con 0x7fe2a00719c0 2026-03-09T20:53:19.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.481+0000 7fe29effd640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe2680778e0 0x7fe268079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:19.482 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.481+0000 7fe29effd640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe2680778e0 0x7fe268079da0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7fe288038660 tx=0x7fe288038470 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:19.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.482+0000 7fe29cff9640 1 -- 192.168.123.107:0/3883893438 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe2980616d0 con 0x7fe2a00719c0 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.537 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm07"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm07"}]': finished 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm10"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm10"}]': finished 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:19.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:19 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:19.596 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.595+0000 7fe2a6256640 1 -- 192.168.123.107:0/3883893438 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fe260002bf0 con 0x7fe2680778e0 2026-03-09T20:53:19.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.597+0000 7fe29cff9640 1 -- 192.168.123.107:0/3883893438 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+466 (secure 0 0 0) 0x7fe260002bf0 con 0x7fe2680778e0 2026-03-09T20:53:19.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.599+0000 7fe2967fc640 1 -- 192.168.123.107:0/3883893438 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe2680778e0 msgr2=0x7fe268079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:19.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.599+0000 7fe2967fc640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe2680778e0 0x7fe268079da0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7fe288038660 tx=0x7fe288038470 comp rx=0 tx=0).stop 2026-03-09T20:53:19.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.600+0000 7fe2967fc640 1 -- 192.168.123.107:0/3883893438 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2a00719c0 msgr2=0x7fe2a01159b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:19.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.600+0000 7fe2967fc640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2a00719c0 0x7fe2a01159b0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7fe29800b4f0 tx=0x7fe29800b9c0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.601 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.600+0000 7fe2967fc640 1 -- 192.168.123.107:0/3883893438 shutdown_connections 2026-03-09T20:53:19.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.600+0000 7fe2967fc640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe2680778e0 0x7fe268079da0 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.600+0000 7fe2967fc640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe2a0072390 0x7fe2a0115ef0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.600+0000 7fe2967fc640 1 --2- 192.168.123.107:0/3883893438 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe2a00719c0 0x7fe2a01159b0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.600+0000 7fe2967fc640 1 -- 192.168.123.107:0/3883893438 >> 192.168.123.107:0/3883893438 conn(0x7fe2a006d4f0 msgr2=0x7fe2a010a810 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:19.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.600+0000 7fe2967fc640 1 -- 192.168.123.107:0/3883893438 shutdown_connections 2026-03-09T20:53:19.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.600+0000 7fe2967fc640 1 -- 192.168.123.107:0/3883893438 wait complete. 
2026-03-09T20:53:19.611 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:53:19.668 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.667+0000 7fde6bfff640 1 -- 192.168.123.107:0/4292394857 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde6c0719a0 msgr2=0x7fde6c071da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:19.668 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.667+0000 7fde6bfff640 1 --2- 192.168.123.107:0/4292394857 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde6c0719a0 0x7fde6c071da0 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7fde5c0099b0 tx=0x7fde5c02f240 comp rx=0 tx=0).stop 2026-03-09T20:53:19.669 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.668+0000 7fde6bfff640 1 -- 192.168.123.107:0/4292394857 shutdown_connections 2026-03-09T20:53:19.669 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.668+0000 7fde6bfff640 1 --2- 192.168.123.107:0/4292394857 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fde6c072370 0x7fde6c10c590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.669 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.668+0000 7fde6bfff640 1 --2- 192.168.123.107:0/4292394857 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde6c0719a0 0x7fde6c071da0 unknown :-1 s=CLOSED pgs=141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.669 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.668+0000 7fde6bfff640 1 -- 192.168.123.107:0/4292394857 >> 192.168.123.107:0/4292394857 conn(0x7fde6c06d4f0 msgr2=0x7fde6c06f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:19.669 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.668+0000 7fde6bfff640 1 -- 192.168.123.107:0/4292394857 shutdown_connections 2026-03-09T20:53:19.669 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.668+0000 7fde6bfff640 1 -- 192.168.123.107:0/4292394857 wait complete. 2026-03-09T20:53:19.669 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.668+0000 7fde6bfff640 1 Processor -- start 2026-03-09T20:53:19.669 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6bfff640 1 -- start start 2026-03-09T20:53:19.669 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6bfff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fde6c0719a0 0x7fde6c1a71b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:19.670 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6bfff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde6c072370 0x7fde6c1a76f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:19.670 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6bfff640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fde6c1a7cc0 con 0x7fde6c072370 2026-03-09T20:53:19.670 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6bfff640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fde6c1a7e30 con 0x7fde6c0719a0 2026-03-09T20:53:19.670 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6a7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde6c072370 0x7fde6c1a76f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:19.670 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6affd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fde6c0719a0 0x7fde6c1a71b0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:19.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6affd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fde6c0719a0 0x7fde6c1a71b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60028/0 (socket says 192.168.123.107:60028) 2026-03-09T20:53:19.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6affd640 1 -- 192.168.123.107:0/4009236016 learned_addr learned my addr 192.168.123.107:0/4009236016 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:19.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6affd640 1 -- 192.168.123.107:0/4009236016 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde6c072370 msgr2=0x7fde6c1a76f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:19.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6affd640 1 --2- 192.168.123.107:0/4009236016 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde6c072370 0x7fde6c1a76f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6affd640 1 -- 192.168.123.107:0/4009236016 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fde5c009660 con 0x7fde6c0719a0 2026-03-09T20:53:19.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.669+0000 7fde6affd640 1 --2- 192.168.123.107:0/4009236016 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fde6c0719a0 0x7fde6c1a71b0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7fde5c004290 
tx=0x7fde5c0042c0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:19.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.671+0000 7fde4bfff640 1 -- 192.168.123.107:0/4009236016 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fde5c03d070 con 0x7fde6c0719a0 2026-03-09T20:53:19.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.671+0000 7fde4bfff640 1 -- 192.168.123.107:0/4009236016 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fde5c004420 con 0x7fde6c0719a0 2026-03-09T20:53:19.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.672+0000 7fde4bfff640 1 -- 192.168.123.107:0/4009236016 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fde5c0417b0 con 0x7fde6c0719a0 2026-03-09T20:53:19.676 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.675+0000 7fde6bfff640 1 -- 192.168.123.107:0/4009236016 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fde6c1ac870 con 0x7fde6c0719a0 2026-03-09T20:53:19.676 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.675+0000 7fde6bfff640 1 -- 192.168.123.107:0/4009236016 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fde6c1acd10 con 0x7fde6c0719a0 2026-03-09T20:53:19.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.677+0000 7fde6bfff640 1 -- 192.168.123.107:0/4009236016 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fde30005350 con 0x7fde6c0719a0 2026-03-09T20:53:19.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.684+0000 7fde4bfff640 1 -- 192.168.123.107:0/4009236016 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fde5c02fcb0 con 
0x7fde6c0719a0 2026-03-09T20:53:19.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.685+0000 7fde4bfff640 1 --2- 192.168.123.107:0/4009236016 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fde400776d0 0x7fde40079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:19.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.685+0000 7fde4bfff640 1 -- 192.168.123.107:0/4009236016 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fde5c0be8e0 con 0x7fde6c0719a0 2026-03-09T20:53:19.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.685+0000 7fde4bfff640 1 -- 192.168.123.107:0/4009236016 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fde5c0eea90 con 0x7fde6c0719a0 2026-03-09T20:53:19.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.685+0000 7fde6a7fc640 1 --2- 192.168.123.107:0/4009236016 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fde400776d0 0x7fde40079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:19.688 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.687+0000 7fde6a7fc640 1 --2- 192.168.123.107:0/4009236016 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fde400776d0 0x7fde40079b90 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7fde6c1a86d0 tx=0x7fde6400b040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:19.810 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.808+0000 7fde6bfff640 1 -- 192.168.123.107:0/4009236016 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- 
mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fde30002bf0 con 0x7fde400776d0 2026-03-09T20:53:19.811 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.810+0000 7fde4bfff640 1 -- 192.168.123.107:0/4009236016 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+466 (secure 0 0 0) 0x7fde30002bf0 con 0x7fde400776d0 2026-03-09T20:53:19.813 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.812+0000 7fde6bfff640 1 -- 192.168.123.107:0/4009236016 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fde400776d0 msgr2=0x7fde40079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:19.813 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.812+0000 7fde6bfff640 1 --2- 192.168.123.107:0/4009236016 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fde400776d0 0x7fde40079b90 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7fde6c1a86d0 tx=0x7fde6400b040 comp rx=0 tx=0).stop 2026-03-09T20:53:19.813 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.812+0000 7fde6bfff640 1 -- 192.168.123.107:0/4009236016 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fde6c0719a0 msgr2=0x7fde6c1a71b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:19.813 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.812+0000 7fde6bfff640 1 --2- 192.168.123.107:0/4009236016 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fde6c0719a0 0x7fde6c1a71b0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7fde5c004290 tx=0x7fde5c0042c0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.813+0000 7fde6bfff640 1 -- 192.168.123.107:0/4009236016 shutdown_connections 2026-03-09T20:53:19.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.813+0000 7fde6bfff640 
1 --2- 192.168.123.107:0/4009236016 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fde400776d0 0x7fde40079b90 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.813+0000 7fde6bfff640 1 --2- 192.168.123.107:0/4009236016 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fde6c072370 0x7fde6c1a76f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.813+0000 7fde6bfff640 1 --2- 192.168.123.107:0/4009236016 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fde6c0719a0 0x7fde6c1a71b0 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.813+0000 7fde6bfff640 1 -- 192.168.123.107:0/4009236016 >> 192.168.123.107:0/4009236016 conn(0x7fde6c06d4f0 msgr2=0x7fde6c10a830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:19.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.813+0000 7fde6bfff640 1 -- 192.168.123.107:0/4009236016 shutdown_connections 2026-03-09T20:53:19.814 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.813+0000 7fde6bfff640 1 -- 192.168.123.107:0/4009236016 wait complete. 
2026-03-09T20:53:19.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.878+0000 7f913e043640 1 -- 192.168.123.107:0/512299237 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9138072440 msgr2=0x7f91380771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:19.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.878+0000 7f913e043640 1 --2- 192.168.123.107:0/512299237 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9138072440 0x7f91380771b0 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f913000caa0 tx=0x7f91300305a0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.878+0000 7f913e043640 1 -- 192.168.123.107:0/512299237 shutdown_connections 2026-03-09T20:53:19.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.878+0000 7f913e043640 1 --2- 192.168.123.107:0/512299237 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9138072440 0x7f91380771b0 unknown :-1 s=CLOSED pgs=142 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.878+0000 7f913e043640 1 --2- 192.168.123.107:0/512299237 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9138071a70 0x7f9138071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.879 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.878+0000 7f913e043640 1 -- 192.168.123.107:0/512299237 >> 192.168.123.107:0/512299237 conn(0x7f913806d4f0 msgr2=0x7f913806f930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.878+0000 7f913e043640 1 -- 192.168.123.107:0/512299237 shutdown_connections 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.878+0000 7f913e043640 1 -- 192.168.123.107:0/512299237 wait 
complete. 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f913e043640 1 Processor -- start 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f913e043640 1 -- start start 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f913e043640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9138071a70 0x7f9138083ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f913e043640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9138082510 0x7f9138082990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f913e043640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9138084400 con 0x7f9138082510 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f913e043640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9138082f00 con 0x7f9138071a70 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f9136ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9138082510 0x7f9138082990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f9136ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9138082510 0x7f9138082990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I 
am v2:192.168.123.107:33664/0 (socket says 192.168.123.107:33664) 2026-03-09T20:53:19.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f9136ffd640 1 -- 192.168.123.107:0/2862626722 learned_addr learned my addr 192.168.123.107:0/2862626722 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:19.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f9136ffd640 1 -- 192.168.123.107:0/2862626722 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9138071a70 msgr2=0x7f9138083ec0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:19.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f9136ffd640 1 --2- 192.168.123.107:0/2862626722 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9138071a70 0x7f9138083ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:19.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.879+0000 7f9136ffd640 1 -- 192.168.123.107:0/2862626722 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9130009d00 con 0x7f9138082510 2026-03-09T20:53:19.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.880+0000 7f9136ffd640 1 --2- 192.168.123.107:0/2862626722 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9138082510 0x7f9138082990 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f91300077a0 tx=0x7f9130009510 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:19.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.880+0000 7f9134ff9640 1 -- 192.168.123.107:0/2862626722 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f913003d070 con 0x7f9138082510 2026-03-09T20:53:19.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.880+0000 7f913e043640 1 
-- 192.168.123.107:0/2862626722 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9138083180 con 0x7f9138082510 2026-03-09T20:53:19.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.880+0000 7f913e043640 1 -- 192.168.123.107:0/2862626722 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f91381b5bc0 con 0x7f9138082510 2026-03-09T20:53:19.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.881+0000 7f9134ff9640 1 -- 192.168.123.107:0/2862626722 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9130033040 con 0x7f9138082510 2026-03-09T20:53:19.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.881+0000 7f9134ff9640 1 -- 192.168.123.107:0/2862626722 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9130004700 con 0x7f9138082510 2026-03-09T20:53:19.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.881+0000 7f913e043640 1 -- 192.168.123.107:0/2862626722 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f913807a890 con 0x7f9138082510 2026-03-09T20:53:19.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.884+0000 7f9134ff9640 1 -- 192.168.123.107:0/2862626722 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f913000a8f0 con 0x7f9138082510 2026-03-09T20:53:19.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.885+0000 7f9134ff9640 1 --2- 192.168.123.107:0/2862626722 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f911c077a50 0x7f911c079f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:19.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.885+0000 7f9134ff9640 1 -- 
192.168.123.107:0/2862626722 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f913000ae40 con 0x7f9138082510 2026-03-09T20:53:19.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.885+0000 7f91377fe640 1 --2- 192.168.123.107:0/2862626722 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f911c077a50 0x7f911c079f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:19.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.886+0000 7f91377fe640 1 --2- 192.168.123.107:0/2862626722 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f911c077a50 0x7f911c079f10 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f912800ab90 tx=0x7f9128009250 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:19.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.886+0000 7f9134ff9640 1 -- 192.168.123.107:0/2862626722 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9130087270 con 0x7f9138082510 2026-03-09T20:53:20.000 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:19.998+0000 7f913e043640 1 -- 192.168.123.107:0/2862626722 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f9138083310 con 0x7f911c077a50 2026-03-09T20:53:20.005 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.004+0000 7f9134ff9640 1 -- 192.168.123.107:0/2862626722 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f9138083310 con 0x7f911c077a50 2026-03-09T20:53:20.005 
INFO:teuthology.orchestra.run.vm07.stdout:NAME                    HOST  PORTS              STATUS          REFRESHED  AGE  MEM USE  MEM LIM  VERSION               IMAGE ID      CONTAINER ID
2026-03-09T20:53:20.005 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07       vm07  *:9093,9094        running (9m)    3s ago     10m  43.9M    -        0.25.0                c8568f914cd2  aa3206f6f5cb
2026-03-09T20:53:20.005 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07      vm07                     running (6s)    3s ago     10m  10.0M    -        19.2.3-678-ge911bdeb  654f31e6858e  5cd0ce63c830
2026-03-09T20:53:20.005 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10      vm10                     running (4s)    3s ago     9m   9815k    -        19.2.3-678-ge911bdeb  654f31e6858e  c382fe12976c
2026-03-09T20:53:20.005 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07              vm07                     running (4m)    3s ago     10m  7834k    -        19.2.3-678-ge911bdeb  654f31e6858e  406c9c54f34a
2026-03-09T20:53:20.005 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10              vm10                     running (4m)    3s ago     9m   7856k    -        19.2.3-678-ge911bdeb  654f31e6858e  30eaebf5d733
2026-03-09T20:53:20.005 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07            vm07  *:3000             running (9m)    3s ago     10m  161M     -        9.4.7                 954c08fa6188  74cf2e7ee6ad
2026-03-09T20:53:20.005 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau  vm07                     running (26s)   3s ago     8m   96.0M    -        19.2.3-678-ge911bdeb  654f31e6858e  744ed5bff39a
2026-03-09T20:53:20.005 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp  vm07                     running (36s)   3s ago     8m   24.7M    -        19.2.3-678-ge911bdeb  654f31e6858e  1763d9a7f9bb
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq  vm10                     running (13s)   3s ago     8m   21.1M    -        19.2.3-678-ge911bdeb  654f31e6858e  fd0024278e01
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp  vm10                     running (21s)   3s ago     8m   163M     -        19.2.3-678-ge911bdeb  654f31e6858e  fef68e5128e4
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch         vm07  *:8443,9283,8765   running (5m)    3s ago     10m  622M     -        19.2.3-678-ge911bdeb  654f31e6858e  bc6ab9c540eb
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe         vm10  *:8443,9283,8765   running (5m)    3s ago     9m   492M     -        19.2.3-678-ge911bdeb  654f31e6858e  f7ad162e95ff
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07                vm07                     running (5m)    3s ago     10m  64.3M    2048M    19.2.3-678-ge911bdeb  654f31e6858e  bce9d510f94f
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10                vm10                     running (4m)    3s ago     9m   53.7M    2048M    19.2.3-678-ge911bdeb  654f31e6858e  4428cf7f0607
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07      vm07  *:9100             running (10m)   3s ago     10m  16.2M    -        1.5.0                 0da6a335fe13  d6fac1f8a1d0
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10      vm10  *:9100             running (9m)    3s ago     9m   16.0M    -        1.5.0                 0da6a335fe13  9716a97e7ed1
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:osd.0                   vm07                     running (4m)    3s ago     9m   235M     4096M    19.2.3-678-ge911bdeb  654f31e6858e  1da9d2cdbdc3
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:osd.1                   vm07                     running (3m)    3s ago     9m   177M     4096M    19.2.3-678-ge911bdeb  654f31e6858e  95f518bf664f
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:osd.2                   vm07                     running (2m)    3s ago     9m   124M     4096M    19.2.3-678-ge911bdeb  654f31e6858e  0d3aa63353bb
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:osd.3                   vm10                     running (2m)    3s ago     9m   169M     4096M    19.2.3-678-ge911bdeb  654f31e6858e  c8d2b453e9e2
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:osd.4                   vm10                     running (100s)  3s ago     8m   127M     4096M    19.2.3-678-ge911bdeb  654f31e6858e  d0231a0cf2be
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:osd.5                   vm10                     running (77s)   3s ago     8m   120M     4096M    19.2.3-678-ge911bdeb  654f31e6858e  7489b8a43e7f
2026-03-09T20:53:20.006 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07         vm07  *:9095             running (5m)    3s ago     10m  56.3M    -        2.43.0                a07b618ecd1d  3f9c07cd3fe3
2026-03-09T20:53:20.008
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.007+0000 7f913e043640 1 -- 192.168.123.107:0/2862626722 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f911c077a50 msgr2=0x7f911c079f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.007+0000 7f913e043640 1 --2- 192.168.123.107:0/2862626722 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f911c077a50 0x7f911c079f10 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f912800ab90 tx=0x7f9128009250 comp rx=0 tx=0).stop 2026-03-09T20:53:20.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.007+0000 7f913e043640 1 -- 192.168.123.107:0/2862626722 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9138082510 msgr2=0x7f9138082990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.007+0000 7f913e043640 1 --2- 192.168.123.107:0/2862626722 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9138082510 0x7f9138082990 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f91300077a0 tx=0x7f9130009510 comp rx=0 tx=0).stop 2026-03-09T20:53:20.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.008+0000 7f913e043640 1 -- 192.168.123.107:0/2862626722 shutdown_connections 2026-03-09T20:53:20.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.008+0000 7f913e043640 1 --2- 192.168.123.107:0/2862626722 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f911c077a50 0x7f911c079f10 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.008+0000 7f913e043640 1 --2- 192.168.123.107:0/2862626722 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7f9138082510 0x7f9138082990 unknown :-1 s=CLOSED pgs=143 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.008+0000 7f913e043640 1 --2- 192.168.123.107:0/2862626722 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9138071a70 0x7f9138083ec0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.008+0000 7f913e043640 1 -- 192.168.123.107:0/2862626722 >> 192.168.123.107:0/2862626722 conn(0x7f913806d4f0 msgr2=0x7f913806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:20.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.008+0000 7f913e043640 1 -- 192.168.123.107:0/2862626722 shutdown_connections 2026-03-09T20:53:20.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.008+0000 7f913e043640 1 -- 192.168.123.107:0/2862626722 wait complete. 
2026-03-09T20:53:20.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.070+0000 7f1949b8d640 1 -- 192.168.123.107:0/977696283 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19440feaf0 msgr2=0x7f19440feef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.070+0000 7f1949b8d640 1 --2- 192.168.123.107:0/977696283 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19440feaf0 0x7f19440feef0 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f19340099b0 tx=0x7f193402f220 comp rx=0 tx=0).stop 2026-03-09T20:53:20.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.073+0000 7f1949b8d640 1 -- 192.168.123.107:0/977696283 shutdown_connections 2026-03-09T20:53:20.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.073+0000 7f1949b8d640 1 --2- 192.168.123.107:0/977696283 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19440ff7d0 0x7f19440ffc50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.073+0000 7f1949b8d640 1 --2- 192.168.123.107:0/977696283 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19440feaf0 0x7f19440feef0 unknown :-1 s=CLOSED pgs=144 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.073+0000 7f1949b8d640 1 -- 192.168.123.107:0/977696283 >> 192.168.123.107:0/977696283 conn(0x7f19440fa5b0 msgr2=0x7f19440fc9f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:20.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.075+0000 7f1949b8d640 1 -- 192.168.123.107:0/977696283 shutdown_connections 2026-03-09T20:53:20.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.075+0000 7f1949b8d640 1 -- 192.168.123.107:0/977696283 wait 
complete. 2026-03-09T20:53:20.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.077+0000 7f1949b8d640 1 Processor -- start 2026-03-09T20:53:20.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.077+0000 7f1949b8d640 1 -- start start 2026-03-09T20:53:20.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.077+0000 7f1949b8d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19440feaf0 0x7f1944195f80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.077+0000 7f1949b8d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19440ff7d0 0x7f19441964c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.077+0000 7f1949b8d640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1944196a90 con 0x7f19440ff7d0 2026-03-09T20:53:20.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.077+0000 7f1949b8d640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1944196c00 con 0x7f19440feaf0 2026-03-09T20:53:20.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.077+0000 7f1942ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19440ff7d0 0x7f19441964c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.078+0000 7f1942ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19440ff7d0 0x7f19441964c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I 
am v2:192.168.123.107:33692/0 (socket says 192.168.123.107:33692) 2026-03-09T20:53:20.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.078+0000 7f1942ffd640 1 -- 192.168.123.107:0/2367025636 learned_addr learned my addr 192.168.123.107:0/2367025636 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:20.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.078+0000 7f19437fe640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19440feaf0 0x7f1944195f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.079+0000 7f19437fe640 1 -- 192.168.123.107:0/2367025636 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19440ff7d0 msgr2=0x7f19441964c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.079+0000 7f19437fe640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f19440ff7d0 0x7f19441964c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.079+0000 7f19437fe640 1 -- 192.168.123.107:0/2367025636 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1934009660 con 0x7f19440feaf0 2026-03-09T20:53:20.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.079+0000 7f19437fe640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19440feaf0 0x7f1944195f80 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f1934002af0 tx=0x7f19340026e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:53:20.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.081+0000 7f1940ff9640 1 -- 192.168.123.107:0/2367025636 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f193403d070 con 0x7f19440feaf0 2026-03-09T20:53:20.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.081+0000 7f1949b8d640 1 -- 192.168.123.107:0/2367025636 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f194419b670 con 0x7f19440feaf0 2026-03-09T20:53:20.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.081+0000 7f1949b8d640 1 -- 192.168.123.107:0/2367025636 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f194419bb60 con 0x7f19440feaf0 2026-03-09T20:53:20.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.082+0000 7f1940ff9640 1 -- 192.168.123.107:0/2367025636 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f1934002890 con 0x7f19440feaf0 2026-03-09T20:53:20.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.083+0000 7f1949b8d640 1 -- 192.168.123.107:0/2367025636 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f194419be40 con 0x7f19440feaf0 2026-03-09T20:53:20.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.084+0000 7f1940ff9640 1 -- 192.168.123.107:0/2367025636 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f19340416b0 con 0x7f19440feaf0 2026-03-09T20:53:20.085 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.084+0000 7f1940ff9640 1 -- 192.168.123.107:0/2367025636 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f193404b430 con 0x7f19440feaf0 2026-03-09T20:53:20.085 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.084+0000 
7f1940ff9640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f192c0776d0 0x7f192c079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.086 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.084+0000 7f1940ff9640 1 -- 192.168.123.107:0/2367025636 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f19340387f0 con 0x7f19440feaf0 2026-03-09T20:53:20.086 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.085+0000 7f1942ffd640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f192c0776d0 0x7f192c079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.086 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.085+0000 7f1942ffd640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f192c0776d0 0x7f192c079b90 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f19441974a0 tx=0x7f1930007450 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:20.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.089+0000 7f1940ff9640 1 -- 192.168.123.107:0/2367025636 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f1934087d00 con 0x7f19440feaf0 2026-03-09T20:53:20.237 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.236+0000 7f1949b8d640 1 -- 192.168.123.107:0/2367025636 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f1944105ca0 con 0x7f19440feaf0 2026-03-09T20:53:20.237 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.237+0000 7f1940ff9640 1 -- 192.168.123.107:0/2367025636 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f1934087450 con 0x7f19440feaf0
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:    "mon": {
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:    },
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:    "mgr": {
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:    },
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:    "osd": {
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:    },
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:    "mds": {
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:    },
2026-03-09T20:53:20.238 INFO:teuthology.orchestra.run.vm07.stdout:    "overall": {
2026-03-09T20:53:20.239 INFO:teuthology.orchestra.run.vm07.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 14
2026-03-09T20:53:20.239 INFO:teuthology.orchestra.run.vm07.stdout:    }
2026-03-09T20:53:20.239 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T20:53:20.241
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 -- 192.168.123.107:0/2367025636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f192c0776d0 msgr2=0x7f192c079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.241 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f192c0776d0 0x7f192c079b90 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f19441974a0 tx=0x7f1930007450 comp rx=0 tx=0).stop 2026-03-09T20:53:20.241 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 -- 192.168.123.107:0/2367025636 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19440feaf0 msgr2=0x7f1944195f80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.241 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19440feaf0 0x7f1944195f80 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f1934002af0 tx=0x7f19340026e0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.241 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 -- 192.168.123.107:0/2367025636 shutdown_connections 2026-03-09T20:53:20.241 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f192c0776d0 0x7f192c079b90 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.241 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 
conn(0x7f19440ff7d0 0x7f19441964c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.241 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 --2- 192.168.123.107:0/2367025636 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f19440feaf0 0x7f1944195f80 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.241 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 -- 192.168.123.107:0/2367025636 >> 192.168.123.107:0/2367025636 conn(0x7f19440fa5b0 msgr2=0x7f19440fc100 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:20.241 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 -- 192.168.123.107:0/2367025636 shutdown_connections 2026-03-09T20:53:20.242 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.240+0000 7f1949b8d640 1 -- 192.168.123.107:0/2367025636 wait complete. 
2026-03-09T20:53:20.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.313+0000 7fc2fbc42640 1 -- 192.168.123.107:0/1281972044 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2f4103c60 msgr2=0x7fc2f41040e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.313+0000 7fc2fbc42640 1 --2- 192.168.123.107:0/1281972044 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2f4103c60 0x7fc2f41040e0 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7fc2e4009a00 tx=0x7fc2e402f270 comp rx=0 tx=0).stop 2026-03-09T20:53:20.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.314+0000 7fc2fbc42640 1 -- 192.168.123.107:0/1281972044 shutdown_connections 2026-03-09T20:53:20.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.314+0000 7fc2fbc42640 1 --2- 192.168.123.107:0/1281972044 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2f4103c60 0x7fc2f41040e0 unknown :-1 s=CLOSED pgs=145 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.314+0000 7fc2fbc42640 1 --2- 192.168.123.107:0/1281972044 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2f4102a60 0x7fc2f4102e60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.314+0000 7fc2fbc42640 1 -- 192.168.123.107:0/1281972044 >> 192.168.123.107:0/1281972044 conn(0x7fc2f40fe250 msgr2=0x7fc2f4100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:20.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.314+0000 7fc2fbc42640 1 -- 192.168.123.107:0/1281972044 shutdown_connections 2026-03-09T20:53:20.315 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.314+0000 7fc2fbc42640 1 -- 192.168.123.107:0/1281972044 
wait complete. 2026-03-09T20:53:20.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.315+0000 7fc2fbc42640 1 Processor -- start 2026-03-09T20:53:20.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.315+0000 7fc2fbc42640 1 -- start start 2026-03-09T20:53:20.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.315+0000 7fc2fbc42640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2f4102a60 0x7fc2f419a430 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.315+0000 7fc2f99b7640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2f4102a60 0x7fc2f419a430 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.315+0000 7fc2f99b7640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2f4102a60 0x7fc2f419a430 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:33712/0 (socket says 192.168.123.107:33712) 2026-03-09T20:53:20.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.315+0000 7fc2fbc42640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2f4103c60 0x7fc2f419a970 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.315+0000 7fc2fbc42640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc2f419af40 con 0x7fc2f4102a60 2026-03-09T20:53:20.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.315+0000 7fc2fbc42640 1 -- --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc2f419b0b0 con 0x7fc2f4103c60 2026-03-09T20:53:20.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.316+0000 7fc2f91b6640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2f4103c60 0x7fc2f419a970 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.316+0000 7fc2f91b6640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2f4103c60 0x7fc2f419a970 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60108/0 (socket says 192.168.123.107:60108) 2026-03-09T20:53:20.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.316+0000 7fc2f91b6640 1 -- 192.168.123.107:0/85870937 learned_addr learned my addr 192.168.123.107:0/85870937 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:20.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.316+0000 7fc2f99b7640 1 -- 192.168.123.107:0/85870937 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2f4103c60 msgr2=0x7fc2f419a970 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.316+0000 7fc2f99b7640 1 --2- 192.168.123.107:0/85870937 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2f4103c60 0x7fc2f419a970 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.316+0000 7fc2f99b7640 1 -- 192.168.123.107:0/85870937 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc2dc009590 con 
0x7fc2f4102a60 2026-03-09T20:53:20.317 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.316+0000 7fc2f99b7640 1 --2- 192.168.123.107:0/85870937 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2f4102a60 0x7fc2f419a430 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7fc2dc009f80 tx=0x7fc2dc002b80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:20.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.317+0000 7fc2eaffd640 1 -- 192.168.123.107:0/85870937 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc2dc00eae0 con 0x7fc2f4102a60 2026-03-09T20:53:20.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.317+0000 7fc2eaffd640 1 -- 192.168.123.107:0/85870937 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc2dc00ec40 con 0x7fc2f4102a60 2026-03-09T20:53:20.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.317+0000 7fc2eaffd640 1 -- 192.168.123.107:0/85870937 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc2dc00f590 con 0x7fc2f4102a60 2026-03-09T20:53:20.318 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.317+0000 7fc2fbc42640 1 -- 192.168.123.107:0/85870937 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc2e4009660 con 0x7fc2f4102a60 2026-03-09T20:53:20.320 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.318+0000 7fc2fbc42640 1 -- 192.168.123.107:0/85870937 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc2f419fe90 con 0x7fc2f4102a60 2026-03-09T20:53:20.320 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.319+0000 7fc2eaffd640 1 -- 192.168.123.107:0/85870937 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc2dc016070 con 
0x7fc2f4102a60 2026-03-09T20:53:20.320 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.319+0000 7fc2fbc42640 1 -- 192.168.123.107:0/85870937 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc2bc005350 con 0x7fc2f4102a60 2026-03-09T20:53:20.325 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:20 vm07.local ceph-mon[112105]: pgmap v189: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.6 KiB/s wr, 19 op/s 2026-03-09T20:53:20.325 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:20 vm07.local ceph-mon[112105]: Upgrade: Setting container_image for all ceph-exporter 2026-03-09T20:53:20.325 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:20 vm07.local ceph-mon[112105]: Upgrade: Setting container_image for all iscsi 2026-03-09T20:53:20.325 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:20 vm07.local ceph-mon[112105]: Upgrade: Setting container_image for all nfs 2026-03-09T20:53:20.325 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:20 vm07.local ceph-mon[112105]: Upgrade: Setting container_image for all nvmeof 2026-03-09T20:53:20.325 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:20 vm07.local ceph-mon[112105]: Upgrade: Updating node-exporter.vm07 (1/2) 2026-03-09T20:53:20.325 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:20 vm07.local ceph-mon[112105]: Deploying daemon node-exporter.vm07 on vm07 2026-03-09T20:53:20.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.321+0000 7fc2eaffd640 1 --2- 192.168.123.107:0/85870937 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc2cc077680 0x7fc2cc079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.321+0000 7fc2eaffd640 1 -- 192.168.123.107:0/85870937 <== mon.0 
v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fc2dc09a3e0 con 0x7fc2f4102a60 2026-03-09T20:53:20.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.323+0000 7fc2f91b6640 1 --2- 192.168.123.107:0/85870937 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc2cc077680 0x7fc2cc079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.323+0000 7fc2f91b6640 1 --2- 192.168.123.107:0/85870937 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc2cc077680 0x7fc2cc079b40 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fc2f419b950 tx=0x7fc2e4002c20 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:20.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.325+0000 7fc2eaffd640 1 -- 192.168.123.107:0/85870937 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc2dc062a10 con 0x7fc2f4102a60 2026-03-09T20:53:20.372 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:20 vm10.local ceph-mon[103526]: pgmap v189: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.6 KiB/s wr, 19 op/s 2026-03-09T20:53:20.372 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:20 vm10.local ceph-mon[103526]: Upgrade: Setting container_image for all ceph-exporter 2026-03-09T20:53:20.372 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:20 vm10.local ceph-mon[103526]: Upgrade: Setting container_image for all iscsi 2026-03-09T20:53:20.372 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:20 vm10.local ceph-mon[103526]: Upgrade: Setting container_image for all nfs 
2026-03-09T20:53:20.372 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:20 vm10.local ceph-mon[103526]: Upgrade: Setting container_image for all nvmeof 2026-03-09T20:53:20.372 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:20 vm10.local ceph-mon[103526]: Upgrade: Updating node-exporter.vm07 (1/2) 2026-03-09T20:53:20.372 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:20 vm10.local ceph-mon[103526]: Deploying daemon node-exporter.vm07 on vm07 2026-03-09T20:53:20.450 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.449+0000 7fc2fbc42640 1 -- 192.168.123.107:0/85870937 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fc2bc005e10 con 0x7fc2f4102a60 2026-03-09T20:53:20.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.450+0000 7fc2eaffd640 1 -- 192.168.123.107:0/85870937 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 37 v37) v1 ==== 76+0+2003 (secure 0 0 0) 0x7fc2dc062160 con 0x7fc2f4102a60 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:e37 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:btime 2026-03-09T20:53:11:032505+0000 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:fs_name 
cephfs 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:epoch 37 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:53:11.032483+0000 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:53:20.452 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 86 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:up {0=34382,1=44273} 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:53:20.453 
INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 34382 members: 44273,34382 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{0:34382} state up:active seq 7 join_fscid=1 addr [v2:192.168.123.107:6828/561473714,v1:192.168.123.107:6829/561473714] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{0:44295} state up:standby-replay seq 1 join_fscid=1 addr [v2:192.168.123.110:6824/4027718916,v1:192.168.123.110:6825/4027718916] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{1:44273} state up:active seq 8 join_fscid=1 addr [v2:192.168.123.107:6826/2346066069,v1:192.168.123.107:6827/2346066069] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{1:44299} state up:standby-replay seq 1 join_fscid=1 addr [v2:192.168.123.110:6826/1370091423,v1:192.168.123.110:6827/1370091423] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 37 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.452+0000 7fc2fbc42640 1 -- 
192.168.123.107:0/85870937 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc2cc077680 msgr2=0x7fc2cc079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.453 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.453+0000 7fc2fbc42640 1 --2- 192.168.123.107:0/85870937 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc2cc077680 0x7fc2cc079b40 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fc2f419b950 tx=0x7fc2e4002c20 comp rx=0 tx=0).stop 2026-03-09T20:53:20.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.453+0000 7fc2fbc42640 1 -- 192.168.123.107:0/85870937 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2f4102a60 msgr2=0x7fc2f419a430 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.453+0000 7fc2fbc42640 1 --2- 192.168.123.107:0/85870937 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2f4102a60 0x7fc2f419a430 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7fc2dc009f80 tx=0x7fc2dc002b80 comp rx=0 tx=0).stop 2026-03-09T20:53:20.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.453+0000 7fc2fbc42640 1 -- 192.168.123.107:0/85870937 shutdown_connections 2026-03-09T20:53:20.454 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.453+0000 7fc2fbc42640 1 --2- 192.168.123.107:0/85870937 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc2cc077680 0x7fc2cc079b40 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.454+0000 7fc2fbc42640 1 --2- 192.168.123.107:0/85870937 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc2f4103c60 0x7fc2f419a970 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T20:53:20.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.454+0000 7fc2fbc42640 1 --2- 192.168.123.107:0/85870937 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc2f4102a60 0x7fc2f419a430 unknown :-1 s=CLOSED pgs=146 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.454+0000 7fc2fbc42640 1 -- 192.168.123.107:0/85870937 >> 192.168.123.107:0/85870937 conn(0x7fc2f40fe250 msgr2=0x7fc2f40ffd50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:20.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.454+0000 7fc2fbc42640 1 -- 192.168.123.107:0/85870937 shutdown_connections 2026-03-09T20:53:20.455 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.454+0000 7fc2fbc42640 1 -- 192.168.123.107:0/85870937 wait complete. 2026-03-09T20:53:20.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.520+0000 7febfda39640 1 -- 192.168.123.107:0/389326531 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febf8103c60 msgr2=0x7febf81040e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.520+0000 7febfda39640 1 --2- 192.168.123.107:0/389326531 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febf8103c60 0x7febf81040e0 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7febe00099b0 tx=0x7febe002f220 comp rx=0 tx=0).stop 2026-03-09T20:53:20.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.520+0000 7febfda39640 1 -- 192.168.123.107:0/389326531 shutdown_connections 2026-03-09T20:53:20.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.520+0000 7febfda39640 1 --2- 192.168.123.107:0/389326531 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febf8103c60 0x7febf81040e0 unknown :-1 s=CLOSED pgs=147 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-09T20:53:20.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.520+0000 7febfda39640 1 --2- 192.168.123.107:0/389326531 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febf8102a60 0x7febf8102e60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.520+0000 7febfda39640 1 -- 192.168.123.107:0/389326531 >> 192.168.123.107:0/389326531 conn(0x7febf80fe250 msgr2=0x7febf8100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:20.521 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.521+0000 7febfda39640 1 -- 192.168.123.107:0/389326531 shutdown_connections 2026-03-09T20:53:20.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.521+0000 7febfda39640 1 -- 192.168.123.107:0/389326531 wait complete. 2026-03-09T20:53:20.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.521+0000 7febfda39640 1 Processor -- start 2026-03-09T20:53:20.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.521+0000 7febfda39640 1 -- start start 2026-03-09T20:53:20.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.522+0000 7febfda39640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febf8102a60 0x7febf819e950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.522+0000 7febf6ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febf8102a60 0x7febf819e950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.522+0000 7febf6ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febf8102a60 0x7febf819e950 
unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:33728/0 (socket says 192.168.123.107:33728) 2026-03-09T20:53:20.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.522+0000 7febf6ffd640 1 -- 192.168.123.107:0/1802524822 learned_addr learned my addr 192.168.123.107:0/1802524822 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:20.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.522+0000 7febfda39640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febf8103c60 0x7febf819ee90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.522+0000 7febfda39640 1 -- 192.168.123.107:0/1802524822 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7febf819f460 con 0x7febf8102a60 2026-03-09T20:53:20.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.522+0000 7febfda39640 1 -- 192.168.123.107:0/1802524822 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7febf819f5d0 con 0x7febf8103c60 2026-03-09T20:53:20.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.522+0000 7febf67fc640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febf8103c60 0x7febf819ee90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.523 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.523+0000 7febf67fc640 1 -- 192.168.123.107:0/1802524822 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febf8102a60 msgr2=0x7febf819e950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.524 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.523+0000 7febf67fc640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febf8102a60 0x7febf819e950 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.523+0000 7febf67fc640 1 -- 192.168.123.107:0/1802524822 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7febe0009660 con 0x7febf8103c60 2026-03-09T20:53:20.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.523+0000 7febf6ffd640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febf8102a60 0x7febf819e950 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T20:53:20.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.523+0000 7febf67fc640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febf8103c60 0x7febf819ee90 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7febe0002c20 tx=0x7febe00028f0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:20.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.523+0000 7febfca37640 1 -- 192.168.123.107:0/1802524822 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7febe003d070 con 0x7febf8103c60 2026-03-09T20:53:20.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.523+0000 7febfda39640 1 -- 192.168.123.107:0/1802524822 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7febf81a4010 con 0x7febf8103c60 2026-03-09T20:53:20.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.523+0000 7febfca37640 1 -- 
192.168.123.107:0/1802524822 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7febe002fc90 con 0x7febf8103c60 2026-03-09T20:53:20.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.523+0000 7febfca37640 1 -- 192.168.123.107:0/1802524822 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7febe00418f0 con 0x7febf8103c60 2026-03-09T20:53:20.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.523+0000 7febfda39640 1 -- 192.168.123.107:0/1802524822 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7febf81a4500 con 0x7febf8103c60 2026-03-09T20:53:20.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.524+0000 7febd67fc640 1 -- 192.168.123.107:0/1802524822 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7febc0005350 con 0x7febf8103c60 2026-03-09T20:53:20.526 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.525+0000 7febfca37640 1 -- 192.168.123.107:0/1802524822 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7febe0038c80 con 0x7febf8103c60 2026-03-09T20:53:20.527 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.526+0000 7febfca37640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7febcc0778e0 0x7febcc079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.527 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.526+0000 7febfca37640 1 -- 192.168.123.107:0/1802524822 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7febe00beaa0 con 0x7febf8103c60 2026-03-09T20:53:20.527 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.526+0000 7febf6ffd640 1 --2- 
192.168.123.107:0/1802524822 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7febcc0778e0 0x7febcc079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.527 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.526+0000 7febf6ffd640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7febcc0778e0 0x7febcc079da0 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7febec002a10 tx=0x7febec005f70 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:20.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.528+0000 7febfca37640 1 -- 192.168.123.107:0/1802524822 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7febe0087150 con 0x7febf8103c60 2026-03-09T20:53:20.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.632+0000 7febd67fc640 1 -- 192.168.123.107:0/1802524822 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7febc0002bf0 con 0x7febcc0778e0 2026-03-09T20:53:20.634 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.633+0000 7febfca37640 1 -- 192.168.123.107:0/1802524822 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+466 (secure 0 0 0) 0x7febc0002bf0 con 0x7febcc0778e0 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 
2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "osd", 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "mon", 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "mgr", 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "ceph-exporter", 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "mds", 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "crash" 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "18/23 daemons upgraded", 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading node-exporter daemons", 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:53:20.636 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:53:20.639 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.638+0000 7febd67fc640 1 -- 192.168.123.107:0/1802524822 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7febcc0778e0 msgr2=0x7febcc079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.639 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.638+0000 7febd67fc640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7febcc0778e0 0x7febcc079da0 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7febec002a10 tx=0x7febec005f70 comp rx=0 tx=0).stop 2026-03-09T20:53:20.639 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.638+0000 7febd67fc640 1 -- 192.168.123.107:0/1802524822 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 
conn(0x7febf8103c60 msgr2=0x7febf819ee90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.638+0000 7febd67fc640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febf8103c60 0x7febf819ee90 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7febe0002c20 tx=0x7febe00028f0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.638+0000 7febd67fc640 1 -- 192.168.123.107:0/1802524822 shutdown_connections 2026-03-09T20:53:20.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.638+0000 7febd67fc640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7febcc0778e0 0x7febcc079da0 secure :-1 s=CLOSED pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7febec002a10 tx=0x7febec005f70 comp rx=0 tx=0).stop 2026-03-09T20:53:20.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.638+0000 7febd67fc640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7febf8103c60 0x7febf819ee90 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.638+0000 7febd67fc640 1 --2- 192.168.123.107:0/1802524822 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7febf8102a60 0x7febf819e950 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.638+0000 7febd67fc640 1 -- 192.168.123.107:0/1802524822 >> 192.168.123.107:0/1802524822 conn(0x7febf80fe250 msgr2=0x7febf8104e80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:20.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.639+0000 7febd67fc640 1 -- 192.168.123.107:0/1802524822 
shutdown_connections 2026-03-09T20:53:20.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.639+0000 7febd67fc640 1 -- 192.168.123.107:0/1802524822 wait complete. 2026-03-09T20:53:20.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.698+0000 7efe525bb640 1 -- 192.168.123.107:0/923046197 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe4c075ba0 msgr2=0x7efe4c075fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.698+0000 7efe525bb640 1 --2- 192.168.123.107:0/923046197 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe4c075ba0 0x7efe4c075fa0 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7efe3c0099b0 tx=0x7efe3c02f240 comp rx=0 tx=0).stop 2026-03-09T20:53:20.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.698+0000 7efe525bb640 1 -- 192.168.123.107:0/923046197 shutdown_connections 2026-03-09T20:53:20.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.698+0000 7efe525bb640 1 --2- 192.168.123.107:0/923046197 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efe4c076df0 0x7efe4c077250 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.698+0000 7efe525bb640 1 --2- 192.168.123.107:0/923046197 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe4c075ba0 0x7efe4c075fa0 unknown :-1 s=CLOSED pgs=148 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.698+0000 7efe525bb640 1 -- 192.168.123.107:0/923046197 >> 192.168.123.107:0/923046197 conn(0x7efe4c0fe250 msgr2=0x7efe4c100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:20.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.698+0000 7efe525bb640 1 -- 
192.168.123.107:0/923046197 shutdown_connections 2026-03-09T20:53:20.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.699+0000 7efe525bb640 1 -- 192.168.123.107:0/923046197 wait complete. 2026-03-09T20:53:20.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.699+0000 7efe525bb640 1 Processor -- start 2026-03-09T20:53:20.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.699+0000 7efe525bb640 1 -- start start 2026-03-09T20:53:20.700 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.699+0000 7efe525bb640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efe4c075ba0 0x7efe4c19e9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe525bb640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe4c076df0 0x7efe4c19ef00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe525bb640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efe4c19f4d0 con 0x7efe4c076df0 2026-03-09T20:53:20.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe525bb640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efe4c19f640 con 0x7efe4c075ba0 2026-03-09T20:53:20.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe4bfff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efe4c075ba0 0x7efe4c19e9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe4bfff640 1 --2- >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efe4c075ba0 0x7efe4c19e9c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:60148/0 (socket says 192.168.123.107:60148) 2026-03-09T20:53:20.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe4bfff640 1 -- 192.168.123.107:0/3379869695 learned_addr learned my addr 192.168.123.107:0/3379869695 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:20.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe4b7fe640 1 --2- 192.168.123.107:0/3379869695 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe4c076df0 0x7efe4c19ef00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe4b7fe640 1 -- 192.168.123.107:0/3379869695 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efe4c075ba0 msgr2=0x7efe4c19e9c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe4b7fe640 1 --2- 192.168.123.107:0/3379869695 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efe4c075ba0 0x7efe4c19e9c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.701 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe4b7fe640 1 -- 192.168.123.107:0/3379869695 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7efe3c009660 con 0x7efe4c076df0 2026-03-09T20:53:20.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.700+0000 7efe4b7fe640 1 --2- 192.168.123.107:0/3379869695 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe4c076df0 0x7efe4c19ef00 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7efe3800b500 tx=0x7efe3800b9d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:20.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.701+0000 7efe497fa640 1 -- 192.168.123.107:0/3379869695 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efe38004300 con 0x7efe4c076df0 2026-03-09T20:53:20.702 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.701+0000 7efe525bb640 1 -- 192.168.123.107:0/3379869695 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7efe4c1a40e0 con 0x7efe4c076df0 2026-03-09T20:53:20.703 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.701+0000 7efe525bb640 1 -- 192.168.123.107:0/3379869695 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7efe4c1a46b0 con 0x7efe4c076df0 2026-03-09T20:53:20.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.701+0000 7efe497fa640 1 -- 192.168.123.107:0/3379869695 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7efe38004460 con 0x7efe4c076df0 2026-03-09T20:53:20.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.701+0000 7efe497fa640 1 -- 192.168.123.107:0/3379869695 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efe38010b90 con 0x7efe4c076df0 2026-03-09T20:53:20.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.703+0000 7efe525bb640 1 -- 192.168.123.107:0/3379869695 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7efe10005350 con 0x7efe4c076df0 2026-03-09T20:53:20.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.703+0000 
7efe497fa640 1 -- 192.168.123.107:0/3379869695 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7efe38002780 con 0x7efe4c076df0 2026-03-09T20:53:20.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.704+0000 7efe497fa640 1 --2- 192.168.123.107:0/3379869695 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7efe2c0778e0 0x7efe2c079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:20.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.704+0000 7efe497fa640 1 -- 192.168.123.107:0/3379869695 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7efe38098f10 con 0x7efe4c076df0 2026-03-09T20:53:20.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.704+0000 7efe4bfff640 1 --2- 192.168.123.107:0/3379869695 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7efe2c0778e0 0x7efe2c079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:20.707 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.704+0000 7efe4bfff640 1 --2- 192.168.123.107:0/3379869695 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7efe2c0778e0 0x7efe2c079da0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7efe3c002410 tx=0x7efe3c03a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:20.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.707+0000 7efe497fa640 1 -- 192.168.123.107:0/3379869695 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7efe38061540 con 0x7efe4c076df0 2026-03-09T20:53:20.854 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.853+0000 7efe525bb640 1 -- 192.168.123.107:0/3379869695 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7efe100051c0 con 0x7efe4c076df0 2026-03-09T20:53:20.855 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.854+0000 7efe497fa640 1 -- 192.168.123.107:0/3379869695 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7efe38060c90 con 0x7efe4c076df0 2026-03-09T20:53:20.855 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T20:53:20.857 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.856+0000 7efe525bb640 1 -- 192.168.123.107:0/3379869695 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7efe2c0778e0 msgr2=0x7efe2c079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.857 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.856+0000 7efe525bb640 1 --2- 192.168.123.107:0/3379869695 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7efe2c0778e0 0x7efe2c079da0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7efe3c002410 tx=0x7efe3c03a040 comp rx=0 tx=0).stop 2026-03-09T20:53:20.857 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.856+0000 7efe525bb640 1 -- 192.168.123.107:0/3379869695 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe4c076df0 msgr2=0x7efe4c19ef00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:20.857 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.856+0000 7efe525bb640 1 --2- 192.168.123.107:0/3379869695 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe4c076df0 0x7efe4c19ef00 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7efe3800b500 tx=0x7efe3800b9d0 comp rx=0 tx=0).stop 
2026-03-09T20:53:20.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.857+0000 7efe525bb640 1 -- 192.168.123.107:0/3379869695 shutdown_connections 2026-03-09T20:53:20.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.857+0000 7efe525bb640 1 --2- 192.168.123.107:0/3379869695 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7efe2c0778e0 0x7efe2c079da0 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.857+0000 7efe525bb640 1 --2- 192.168.123.107:0/3379869695 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe4c076df0 0x7efe4c19ef00 unknown :-1 s=CLOSED pgs=149 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.857+0000 7efe525bb640 1 --2- 192.168.123.107:0/3379869695 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7efe4c075ba0 0x7efe4c19e9c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:20.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.857+0000 7efe525bb640 1 -- 192.168.123.107:0/3379869695 >> 192.168.123.107:0/3379869695 conn(0x7efe4c0fe250 msgr2=0x7efe4c0ffd50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:20.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.857+0000 7efe525bb640 1 -- 192.168.123.107:0/3379869695 shutdown_connections 2026-03-09T20:53:20.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:20.857+0000 7efe525bb640 1 -- 192.168.123.107:0/3379869695 wait complete. 
2026-03-09T20:53:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:21 vm07.local ceph-mon[112105]: from='client.44305 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:21 vm07.local ceph-mon[112105]: from='client.44309 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:21 vm07.local ceph-mon[112105]: from='client.34404 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:21 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/2367025636' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:21 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/85870937' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:53:21.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:21 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/3379869695' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:53:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:21 vm10.local ceph-mon[103526]: from='client.44305 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:21 vm10.local ceph-mon[103526]: from='client.44309 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:21 vm10.local ceph-mon[103526]: from='client.34404 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:21 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/2367025636' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:21 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/85870937' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:53:21.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:21 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/3379869695' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:53:22.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:22 vm07.local ceph-mon[112105]: pgmap v190: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 18 MiB/s rd, 4.6 KiB/s wr, 19 op/s 2026-03-09T20:53:22.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:22 vm07.local ceph-mon[112105]: from='client.44321 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:22.926 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:22 vm10.local ceph-mon[103526]: pgmap v190: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 18 MiB/s rd, 4.6 KiB/s wr, 19 op/s 2026-03-09T20:53:22.926 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:22 vm10.local ceph-mon[103526]: from='client.44321 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:23.909 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:23 vm10.local ceph-mon[103526]: pgmap v191: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 4.5 KiB/s wr, 16 op/s 2026-03-09T20:53:23.909 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:23 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:23.909 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:23 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:23.909 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:23 vm10.local ceph-mon[103526]: Upgrade: Updating node-exporter.vm10 (2/2) 2026-03-09T20:53:23.909 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:23 vm10.local ceph-mon[103526]: Deploying daemon node-exporter.vm10 on vm10 2026-03-09T20:53:24.134 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:23 vm07.local ceph-mon[112105]: pgmap v191: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 4.5 KiB/s wr, 16 op/s 2026-03-09T20:53:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:23 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:23 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:23 vm07.local ceph-mon[112105]: Upgrade: Updating node-exporter.vm10 (2/2) 2026-03-09T20:53:24.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:23 vm07.local ceph-mon[112105]: Deploying daemon node-exporter.vm10 on vm10 2026-03-09T20:53:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:24 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:25.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:25.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:25 vm10.local ceph-mon[103526]: pgmap v192: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 16 MiB/s rd, 4.4 KiB/s wr, 16 op/s 2026-03-09T20:53:26.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:25 vm07.local ceph-mon[112105]: pgmap v192: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 16 MiB/s rd, 4.4 KiB/s wr, 16 op/s 2026-03-09T20:53:27.183 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:27 vm07.local ceph-mon[112105]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:27.183 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:27 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:27.183 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:27 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:27.183 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:27 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:27.183 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:27 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:27.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:27 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:27.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:27 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:27.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:27 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:27.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:27 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:27.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:27 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.314 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:28 vm10.local ceph-mon[103526]: pgmap v193: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB 
avail; 9.2 MiB/s rd, 767 B/s wr, 9 op/s 2026-03-09T20:53:28.314 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:28 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.314 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:28 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.314 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:28 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.314 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:28 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.314 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:28 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.314 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:28 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.314 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:28 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.314 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:28 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:28 vm07.local ceph-mon[112105]: pgmap v193: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 9.2 MiB/s rd, 767 B/s wr, 9 op/s 2026-03-09T20:53:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:28 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:28 vm07.local 
ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:28 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:28 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:28 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:28 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:28 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:28.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:28 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: pgmap v194: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 687 KiB/s rd, 682 B/s wr, 4 op/s 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.787 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.787 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:29 vm10.local ceph-mon[103526]: Upgrade: Updating prometheus.vm07 2026-03-09T20:53:29.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: pgmap v194: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 687 KiB/s rd, 682 B/s wr, 4 op/s 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:29.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:29 vm07.local ceph-mon[112105]: Upgrade: Updating prometheus.vm07 2026-03-09T20:53:30.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:30 vm10.local ceph-mon[103526]: Deploying daemon prometheus.vm07 on vm07 2026-03-09T20:53:30.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:30 vm07.local ceph-mon[112105]: Deploying daemon prometheus.vm07 on vm07 2026-03-09T20:53:31.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:31 vm10.local ceph-mon[103526]: pgmap v195: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 4.7 KiB/s rd, 4 op/s 2026-03-09T20:53:31.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:31 vm07.local ceph-mon[112105]: pgmap v195: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 4.7 KiB/s rd, 4 op/s 2026-03-09T20:53:33.856 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:33 vm07.local ceph-mon[112105]: pgmap v196: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 
2026-03-09T20:53:34.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:33 vm10.local ceph-mon[103526]: pgmap v196: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:53:35.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:35 vm07.local ceph-mon[112105]: pgmap v197: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:53:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:35 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:35 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:35.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:35 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:36.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:35 vm10.local ceph-mon[103526]: pgmap v197: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:53:36.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:35 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:36.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:35 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:36.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:35 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:36 vm10.local ceph-mon[103526]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:36 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:36 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:37.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:36 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:37.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:36 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:37.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:36 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:37.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:36 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:37.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:36 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: pgmap v198: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local 
ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local 
ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:38 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: pgmap v198: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:38.538 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:38 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:39.377 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:39 vm07.local ceph-mon[112105]: Upgrade: Updating alertmanager.vm07 2026-03-09T20:53:39.378 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:39 vm07.local ceph-mon[112105]: Deploying daemon alertmanager.vm07 on vm07 2026-03-09T20:53:39.378 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:39.378 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:39 vm07.local 
ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:39.378 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:39.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:39 vm10.local ceph-mon[103526]: Upgrade: Updating alertmanager.vm07 2026-03-09T20:53:39.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:39 vm10.local ceph-mon[103526]: Deploying daemon alertmanager.vm07 on vm07 2026-03-09T20:53:39.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:39.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:39.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:40.433 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:40 vm07.local ceph-mon[112105]: pgmap v199: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:53:40.433 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:40.433 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:40.433 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:40 vm07.local ceph-mon[112105]: 
from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:40.433 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:40.433 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:40.438 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:40 vm10.local ceph-mon[103526]: pgmap v199: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:53:40.438 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:40.438 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:40.438 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:40.438 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:40.438 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: pgmap v200: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:42 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: pgmap v200: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 
192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:42.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:42 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:43.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:43 vm07.local ceph-mon[112105]: Upgrade: Updating grafana.vm07 2026-03-09T20:53:43.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:43 vm07.local ceph-mon[112105]: Deploying daemon grafana.vm07 on vm07 2026-03-09T20:53:43.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 
20:53:43 vm10.local ceph-mon[103526]: Upgrade: Updating grafana.vm07 2026-03-09T20:53:43.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:43 vm10.local ceph-mon[103526]: Deploying daemon grafana.vm07 on vm07 2026-03-09T20:53:44.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:44 vm07.local ceph-mon[112105]: pgmap v201: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:53:44.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:44 vm10.local ceph-mon[103526]: pgmap v201: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:53:46.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:46 vm07.local ceph-mon[112105]: pgmap v202: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:53:46.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:46 vm10.local ceph-mon[103526]: pgmap v202: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:53:48.290 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:48 vm07.local ceph-mon[112105]: pgmap v203: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:53:48.290 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:48.290 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:48.290 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:48 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:48.537 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:48 vm10.local ceph-mon[103526]: pgmap v203: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:53:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:48 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:50.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:50 vm10.local ceph-mon[103526]: pgmap v204: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:53:50.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:50 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:50.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:50 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:50.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:50 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:50.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:50 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:50.553 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:50 vm07.local ceph-mon[112105]: pgmap v204: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:53:50.553 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:50 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:50.553 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:50 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:50.553 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:50 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:50.553 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:50 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:50.917 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.916+0000 7f876afe3640 1 -- 192.168.123.107:0/1343639897 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8764103ca0 msgr2=0x7f8764104120 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:50.917 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.916+0000 7f876afe3640 1 --2- 192.168.123.107:0/1343639897 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8764103ca0 0x7f8764104120 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7f874c0099b0 tx=0x7f874c02f220 comp rx=0 tx=0).stop 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.916+0000 7f876afe3640 1 -- 192.168.123.107:0/1343639897 shutdown_connections 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.916+0000 7f876afe3640 1 --2- 192.168.123.107:0/1343639897 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8764103ca0 0x7f8764104120 unknown :-1 s=CLOSED pgs=150 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.916+0000 7f876afe3640 1 --2- 192.168.123.107:0/1343639897 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8764102aa0 0x7f8764102ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.916+0000 7f876afe3640 1 -- 192.168.123.107:0/1343639897 >> 192.168.123.107:0/1343639897 conn(0x7f87640fe250 msgr2=0x7f8764100670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.917+0000 7f876afe3640 1 -- 192.168.123.107:0/1343639897 shutdown_connections 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.917+0000 7f876afe3640 1 -- 192.168.123.107:0/1343639897 wait complete. 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.917+0000 7f876afe3640 1 Processor -- start 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.917+0000 7f876afe3640 1 -- start start 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.917+0000 7f876afe3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8764102aa0 0x7f876419a470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.917+0000 7f876afe3640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8764103ca0 0x7f876419a9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.917+0000 7f876afe3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f876419af80 con 0x7f8764102aa0 2026-03-09T20:53:50.918 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.917+0000 7f876afe3640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f876419b0f0 con 0x7f8764103ca0 2026-03-09T20:53:50.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.918+0000 7f875bfff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8764103ca0 0x7f876419a9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:50.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.918+0000 7f875bfff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8764103ca0 0x7f876419a9b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:55584/0 (socket says 192.168.123.107:55584) 2026-03-09T20:53:50.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.918+0000 7f875bfff640 1 -- 192.168.123.107:0/42885890 learned_addr learned my addr 192.168.123.107:0/42885890 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:50.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.918+0000 7f8768d58640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8764102aa0 0x7f876419a470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:50.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.918+0000 7f875bfff640 1 -- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8764102aa0 msgr2=0x7f876419a470 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:50.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.918+0000 7f875bfff640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8764102aa0 0x7f876419a470 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:50.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.918+0000 7f875bfff640 1 -- 192.168.123.107:0/42885890 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f874c009660 con 0x7f8764103ca0 2026-03-09T20:53:50.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.918+0000 7f8768d58640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8764102aa0 0x7f876419a470 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:53:50.919 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.918+0000 7f875bfff640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8764103ca0 0x7f876419a9b0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f874c002a50 tx=0x7f874c031cd0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:50.920 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.919+0000 7f8759ffb640 1 -- 192.168.123.107:0/42885890 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f874c03d070 con 0x7f8764103ca0 2026-03-09T20:53:50.920 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.919+0000 7f876afe3640 1 -- 192.168.123.107:0/42885890 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f876419fb30 con 0x7f8764103ca0 2026-03-09T20:53:50.920 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.919+0000 7f876afe3640 1 -- 192.168.123.107:0/42885890 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f87641a0020 con 0x7f8764103ca0 2026-03-09T20:53:50.921 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.919+0000 7f8759ffb640 1 -- 
192.168.123.107:0/42885890 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f874c031070 con 0x7f8764103ca0 2026-03-09T20:53:50.921 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.919+0000 7f8759ffb640 1 -- 192.168.123.107:0/42885890 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f874c038640 con 0x7f8764103ca0 2026-03-09T20:53:50.921 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.920+0000 7f8759ffb640 1 -- 192.168.123.107:0/42885890 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f874c02fa80 con 0x7f8764103ca0 2026-03-09T20:53:50.921 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.921+0000 7f876afe3640 1 -- 192.168.123.107:0/42885890 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f876410b6d0 con 0x7f8764103ca0 2026-03-09T20:53:50.922 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.921+0000 7f8759ffb640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f87400778e0 0x7f8740079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:50.922 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.921+0000 7f8768d58640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f87400778e0 0x7f8740079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:50.922 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.921+0000 7f8759ffb640 1 -- 192.168.123.107:0/42885890 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f874c0be7b0 con 0x7f8764103ca0 
2026-03-09T20:53:50.922 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.921+0000 7f8768d58640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f87400778e0 0x7f8740079da0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f8764103b00 tx=0x7f875400a3b0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:50.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:50.924+0000 7f8759ffb640 1 -- 192.168.123.107:0/42885890 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f874c086e90 con 0x7f8764103ca0 2026-03-09T20:53:51.024 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.023+0000 7f876afe3640 1 -- 192.168.123.107:0/42885890 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f8764108090 con 0x7f87400778e0 2026-03-09T20:53:51.025 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.024+0000 7f8759ffb640 1 -- 192.168.123.107:0/42885890 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+175 (secure 0 0 0) 0x7f8764108090 con 0x7f87400778e0 2026-03-09T20:53:51.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.026+0000 7f876afe3640 1 -- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f87400778e0 msgr2=0x7f8740079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:51.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.026+0000 7f876afe3640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f87400778e0 0x7f8740079da0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto 
rx=0x7f8764103b00 tx=0x7f875400a3b0 comp rx=0 tx=0).stop 2026-03-09T20:53:51.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.026+0000 7f876afe3640 1 -- 192.168.123.107:0/42885890 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8764103ca0 msgr2=0x7f876419a9b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:51.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.026+0000 7f876afe3640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8764103ca0 0x7f876419a9b0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f874c002a50 tx=0x7f874c031cd0 comp rx=0 tx=0).stop 2026-03-09T20:53:51.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.026+0000 7f876afe3640 1 -- 192.168.123.107:0/42885890 shutdown_connections 2026-03-09T20:53:51.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.026+0000 7f876afe3640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f87400778e0 0x7f8740079da0 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:51.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.026+0000 7f876afe3640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8764103ca0 0x7f876419a9b0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:51.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.026+0000 7f876afe3640 1 --2- 192.168.123.107:0/42885890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8764102aa0 0x7f876419a470 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:51.027 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.026+0000 7f876afe3640 1 -- 192.168.123.107:0/42885890 >> 192.168.123.107:0/42885890 
conn(0x7f87640fe250 msgr2=0x7f87640ffa90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:51.028 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.027+0000 7f876afe3640 1 -- 192.168.123.107:0/42885890 shutdown_connections 2026-03-09T20:53:51.028 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.027+0000 7f876afe3640 1 -- 192.168.123.107:0/42885890 wait complete. 2026-03-09T20:53:51.081 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T20:53:51.263 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:53:51.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.491+0000 7f315a0bb640 1 -- 192.168.123.107:0/1476440890 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f31541089d0 msgr2=0x7f3154108db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:51.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.491+0000 7f315a0bb640 1 --2- 192.168.123.107:0/1476440890 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f31541089d0 0x7f3154108db0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f31380099b0 tx=0x7f313802f220 comp rx=0 tx=0).stop 2026-03-09T20:53:51.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.492+0000 7f315a0bb640 1 -- 192.168.123.107:0/1476440890 shutdown_connections 2026-03-09T20:53:51.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.492+0000 7f315a0bb640 1 --2- 192.168.123.107:0/1476440890 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f31541029d0 0x7f3154102e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:53:51.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.492+0000 7f315a0bb640 1 --2- 192.168.123.107:0/1476440890 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f31541089d0 0x7f3154108db0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:51.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.492+0000 7f315a0bb640 1 -- 192.168.123.107:0/1476440890 >> 192.168.123.107:0/1476440890 conn(0x7f31540fe710 msgr2=0x7f3154100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:51.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.492+0000 7f315a0bb640 1 -- 192.168.123.107:0/1476440890 shutdown_connections 2026-03-09T20:53:51.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.493+0000 7f315a0bb640 1 -- 192.168.123.107:0/1476440890 wait complete. 2026-03-09T20:53:51.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.493+0000 7f315a0bb640 1 Processor -- start 2026-03-09T20:53:51.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.493+0000 7f315a0bb640 1 -- start start 2026-03-09T20:53:51.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.493+0000 7f315a0bb640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f31541029d0 0x7f31541a0550 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:51.495 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.493+0000 7f315a0bb640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f31541089d0 0x7f31541a0a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:51.495 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.493+0000 7f315a0bb640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f31541a10b0 con 0x7f31541029d0 2026-03-09T20:53:51.495 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.493+0000 7f315a0bb640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f315419a640 con 0x7f31541089d0 2026-03-09T20:53:51.495 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.494+0000 7f31537fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f31541029d0 0x7f31541a0550 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:51.495 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.494+0000 7f31537fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f31541029d0 0x7f31541a0550 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45468/0 (socket says 192.168.123.107:45468) 2026-03-09T20:53:51.495 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.494+0000 7f31537fe640 1 -- 192.168.123.107:0/1839834464 learned_addr learned my addr 192.168.123.107:0/1839834464 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:51.496 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.494+0000 7f31537fe640 1 -- 192.168.123.107:0/1839834464 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f31541089d0 msgr2=0x7f31541a0a90 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:53:51.496 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.494+0000 7f31537fe640 1 --2- 192.168.123.107:0/1839834464 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f31541089d0 0x7f31541a0a90 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:51.496 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.494+0000 7f31537fe640 1 -- 192.168.123.107:0/1839834464 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3140009590 con 0x7f31541029d0 2026-03-09T20:53:51.496 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.494+0000 7f31537fe640 1 --2- 192.168.123.107:0/1839834464 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f31541029d0 0x7f31541a0550 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7f3138009980 tx=0x7f3138004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:51.496 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.495+0000 7f3150ff9640 1 -- 192.168.123.107:0/1839834464 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f313802fc90 con 0x7f31541029d0 2026-03-09T20:53:51.496 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.495+0000 7f3150ff9640 1 -- 192.168.123.107:0/1839834464 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f313802fdf0 con 0x7f31541029d0 2026-03-09T20:53:51.497 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.496+0000 7f3150ff9640 1 -- 192.168.123.107:0/1839834464 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f313804b440 con 0x7f31541029d0 2026-03-09T20:53:51.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.496+0000 7f315a0bb640 1 -- 192.168.123.107:0/1839834464 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3138009660 con 0x7f31541029d0 2026-03-09T20:53:51.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.496+0000 7f315a0bb640 1 -- 192.168.123.107:0/1839834464 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f315419ab40 con 0x7f31541029d0 2026-03-09T20:53:51.498 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.497+0000 7f3150ff9640 1 -- 
192.168.123.107:0/1839834464 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3138041a50 con 0x7f31541029d0 2026-03-09T20:53:51.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.497+0000 7f315a0bb640 1 -- 192.168.123.107:0/1839834464 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3118005350 con 0x7f31541029d0 2026-03-09T20:53:51.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.498+0000 7f3150ff9640 1 --2- 192.168.123.107:0/1839834464 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f312c0776d0 0x7f312c079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:51.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.498+0000 7f3150ff9640 1 -- 192.168.123.107:0/1839834464 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f3138050030 con 0x7f31541029d0 2026-03-09T20:53:51.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.500+0000 7f3152ffd640 1 --2- 192.168.123.107:0/1839834464 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f312c0776d0 0x7f312c079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:51.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.501+0000 7f3150ff9640 1 -- 192.168.123.107:0/1839834464 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f313808f430 con 0x7f31541029d0 2026-03-09T20:53:51.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.501+0000 7f3152ffd640 1 --2- 192.168.123.107:0/1839834464 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] 
conn(0x7f312c0776d0 0x7f312c079b90 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f315419bab0 tx=0x7f3140009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:51.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.602+0000 7f315a0bb640 1 -- 192.168.123.107:0/1839834464 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f3118002bf0 con 0x7f312c0776d0 2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.607+0000 7f3150ff9640 1 -- 192.168.123.107:0/1839834464 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f3118002bf0 con 0x7f312c0776d0 2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (13s) 2s ago 10m 17.3M - 0.25.0 c8568f914cd2 afb315b2ed75 2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (38s) 2s ago 11m 10.5M - 19.2.3-678-ge911bdeb 654f31e6858e 5cd0ce63c830 2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (36s) 24s ago 10m 10.5M - 19.2.3-678-ge911bdeb 654f31e6858e c382fe12976c 2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (5m) 2s ago 11m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 406c9c54f34a 2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (5m) 24s ago 10m 7856k - 19.2.3-678-ge911bdeb 654f31e6858e 30eaebf5d733 2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (3s) 2s ago 10m 39.8M - 10.4.0 c8b91775d855 e629e59fd7fd 2026-03-09T20:53:51.608 
INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (58s) 2s ago 8m 95.7M - 19.2.3-678-ge911bdeb 654f31e6858e 744ed5bff39a
2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (67s) 2s ago 8m 25.4M - 19.2.3-678-ge911bdeb 654f31e6858e 1763d9a7f9bb
2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (45s) 24s ago 8m 21.9M - 19.2.3-678-ge911bdeb 654f31e6858e fd0024278e01
2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (52s) 24s ago 8m 98.0M - 19.2.3-678-ge911bdeb 654f31e6858e fef68e5128e4
2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (6m) 2s ago 11m 633M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb
2026-03-09T20:53:51.608 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (5m) 24s ago 10m 492M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (5m) 2s ago 11m 67.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (5m) 24s ago 10m 55.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4428cf7f0607
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (29s) 2s ago 10m 6693k - 1.7.0 72c9c2088986 95a81084dd9b
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (25s) 24s ago 10m 4676k - 1.7.0 72c9c2088986 a2d3fff54c2f
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (4m) 2s ago 10m 235M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1da9d2cdbdc3
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (4m) 2s ago 9m 178M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 95f518bf664f
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (2m) 2s ago 9m 124M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0d3aa63353bb
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (2m) 24s ago 9m 169M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8d2b453e9e2
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (2m) 24s ago 9m 126M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d0231a0cf2be
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (108s) 24s ago 9m 119M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7489b8a43e7f
2026-03-09T20:53:51.609 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (16s) 2s ago 10m 42.5M - 2.51.0 1d3b7f56885b 68e468960cdd
2026-03-09T20:53:51.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 -- 192.168.123.107:0/1839834464 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f312c0776d0 msgr2=0x7f312c079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:53:51.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 --2- 192.168.123.107:0/1839834464 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f312c0776d0 0x7f312c079b90 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f315419bab0 tx=0x7f3140009290 comp rx=0 tx=0).stop
2026-03-09T20:53:51.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 -- 192.168.123.107:0/1839834464 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f31541029d0 msgr2=0x7f31541a0550 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:53:51.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 --2- 192.168.123.107:0/1839834464 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f31541029d0 0x7f31541a0550 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7f3138009980 tx=0x7f3138004290 comp rx=0 tx=0).stop
2026-03-09T20:53:51.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 -- 192.168.123.107:0/1839834464 shutdown_connections
2026-03-09T20:53:51.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 --2- 192.168.123.107:0/1839834464 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f312c0776d0 0x7f312c079b90 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:53:51.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 --2- 192.168.123.107:0/1839834464 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f31541089d0 0x7f31541a0a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:53:51.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 --2- 192.168.123.107:0/1839834464 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f31541029d0 0x7f31541a0550 unknown :-1 s=CLOSED pgs=151 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:53:51.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 -- 192.168.123.107:0/1839834464 >> 192.168.123.107:0/1839834464 conn(0x7f31540fe710 msgr2=0x7f315410b880 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:53:51.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 -- 192.168.123.107:0/1839834464 shutdown_connections
2026-03-09T20:53:51.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:51.609+0000 7f315a0bb640 1 -- 192.168.123.107:0/1839834464 wait complete.
2026-03-09T20:53:51.660 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-09T20:53:51.801 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: pgmap v205: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.849 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: Upgrade: Finalizing container_image settings
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-09T20:53:51.850 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: Upgrade: Complete!
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:53:51.851 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:51 vm07.local ceph-mon[112105]: from='client.44325 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T20:53:52.029 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.027+0000 7f05c14b3640 1 -- 192.168.123.107:0/3366680296 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f05bc075720 msgr2=0x7f05bc075b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:53:52.029 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.027+0000 7f05c14b3640 1 --2- 192.168.123.107:0/3366680296 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f05bc075720 0x7f05bc075b00 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7f05a8009a00 tx=0x7f05a802f290 comp rx=0 tx=0).stop
2026-03-09T20:53:52.029 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.028+0000 7f05c14b3640 1 -- 192.168.123.107:0/3366680296 shutdown_connections
2026-03-09T20:53:52.029 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.028+0000 7f05c14b3640 1 --2- 192.168.123.107:0/3366680296 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f05bc076040 0x7f05bc111330 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:53:52.029 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.028+0000 7f05c14b3640 1 --2- 192.168.123.107:0/3366680296 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f05bc075720 0x7f05bc075b00 unknown :-1 s=CLOSED pgs=152 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:53:52.029 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.028+0000 7f05c14b3640 1 -- 192.168.123.107:0/3366680296 >> 192.168.123.107:0/3366680296 conn(0x7f05bc0fe710 msgr2=0x7f05bc100b30 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:53:52.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.028+0000 7f05c14b3640 1 -- 192.168.123.107:0/3366680296 shutdown_connections
2026-03-09T20:53:52.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.028+0000 7f05c14b3640 1 -- 192.168.123.107:0/3366680296 wait complete.
2026-03-09T20:53:52.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.029+0000 7f05c14b3640 1 Processor -- start
2026-03-09T20:53:52.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.029+0000 7f05c14b3640 1 -- start start
2026-03-09T20:53:52.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.029+0000 7f05c14b3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f05bc075720 0x7f05bc19ee50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:53:52.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.029+0000 7f05c14b3640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f05bc076040 0x7f05bc19f390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T20:53:52.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.029+0000 7f05c14b3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f05bc19fa20 con 0x7f05bc075720
2026-03-09T20:53:52.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.029+0000 7f05c14b3640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f05bc1a3790 con 0x7f05bc076040
2026-03-09T20:53:52.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.029+0000 7f05baffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f05bc075720 0x7f05bc19ee50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:53:52.030 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05baffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f05bc075720 0x7f05bc19ee50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45490/0 (socket says 192.168.123.107:45490)
2026-03-09T20:53:52.031 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05baffd640 1 -- 192.168.123.107:0/2691050216 learned_addr learned my addr 192.168.123.107:0/2691050216 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-09T20:53:52.031 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05ba7fc640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f05bc076040 0x7f05bc19f390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T20:53:52.031 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05baffd640 1 -- 192.168.123.107:0/2691050216 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f05bc076040 msgr2=0x7f05bc19f390 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:53:52.031 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05baffd640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f05bc076040 0x7f05bc19f390 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:53:52.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05baffd640 1 -- 192.168.123.107:0/2691050216 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f05a8009660 con 0x7f05bc075720
2026-03-09T20:53:52.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05baffd640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f05bc075720 0x7f05bc19ee50 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7f05a802f7a0 tx=0x7f05a8031db0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T20:53:52.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05b3fff640 1 -- 192.168.123.107:0/2691050216 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f05a803d070 con 0x7f05bc075720
2026-03-09T20:53:52.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05b3fff640 1 -- 192.168.123.107:0/2691050216 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f05a802fd00 con 0x7f05bc075720
2026-03-09T20:53:52.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05b3fff640 1 -- 192.168.123.107:0/2691050216 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f05a8031110 con 0x7f05bc075720
2026-03-09T20:53:52.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05c14b3640 1 -- 192.168.123.107:0/2691050216 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f05bc1a3a10 con 0x7f05bc075720
2026-03-09T20:53:52.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05c14b3640 1 -- 192.168.123.107:0/2691050216 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f05bc1a3f00 con 0x7f05bc075720
2026-03-09T20:53:52.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.030+0000 7f05ba7fc640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f05bc076040 0x7f05bc19f390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-09T20:53:52.035 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.032+0000 7f05c14b3640 1 -- 192.168.123.107:0/2691050216 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0588005350 con 0x7f05bc075720
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: pgmap v205: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: Upgrade: Finalizing container_image settings
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591'
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "client.nfs"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 
vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: Upgrade: Complete! 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 
2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:52.038 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:51 vm10.local ceph-mon[103526]: from='client.44325 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:52.038 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.034+0000 7f05b3fff640 1 -- 192.168.123.107:0/2691050216 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f05a8038730 con 0x7f05bc075720 2026-03-09T20:53:52.039 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.038+0000 7f05b3fff640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f058c0778e0 0x7f058c079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:52.039 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.038+0000 7f05ba7fc640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f058c0778e0 0x7f058c079da0 
unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:52.039 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.038+0000 7f05b3fff640 1 -- 192.168.123.107:0/2691050216 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f05a80bea40 con 0x7f05bc075720 2026-03-09T20:53:52.039 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.038+0000 7f05b3fff640 1 -- 192.168.123.107:0/2691050216 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f05a80c22a0 con 0x7f05bc075720 2026-03-09T20:53:52.039 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.038+0000 7f05ba7fc640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f058c0778e0 0x7f058c079da0 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f05ac009fd0 tx=0x7f05ac009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:52.148 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.146+0000 7f05c14b3640 1 -- 192.168.123.107:0/2691050216 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f0588002bf0 con 0x7f058c0778e0 2026-03-09T20:53:52.149 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.148+0000 7f05b3fff640 1 -- 192.168.123.107:0/2691050216 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+175 (secure 0 0 0) 0x7f0588002bf0 con 0x7f058c0778e0 2026-03-09T20:53:52.149 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:53:52.149 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": null, 2026-03-09T20:53:52.149 
INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": false, 2026-03-09T20:53:52.149 INFO:teuthology.orchestra.run.vm07.stdout: "which": "", 2026-03-09T20:53:52.149 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [], 2026-03-09T20:53:52.149 INFO:teuthology.orchestra.run.vm07.stdout: "progress": null, 2026-03-09T20:53:52.149 INFO:teuthology.orchestra.run.vm07.stdout: "message": "", 2026-03-09T20:53:52.149 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T20:53:52.149 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:53:52.151 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.150+0000 7f05c14b3640 1 -- 192.168.123.107:0/2691050216 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f058c0778e0 msgr2=0x7f058c079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:52.151 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.150+0000 7f05c14b3640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f058c0778e0 0x7f058c079da0 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f05ac009fd0 tx=0x7f05ac009290 comp rx=0 tx=0).stop 2026-03-09T20:53:52.151 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.150+0000 7f05c14b3640 1 -- 192.168.123.107:0/2691050216 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f05bc075720 msgr2=0x7f05bc19ee50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:52.151 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.150+0000 7f05c14b3640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f05bc075720 0x7f05bc19ee50 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7f05a802f7a0 tx=0x7f05a8031db0 comp rx=0 tx=0).stop 2026-03-09T20:53:52.153 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.152+0000 7f05c14b3640 1 -- 
192.168.123.107:0/2691050216 shutdown_connections 2026-03-09T20:53:52.153 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.152+0000 7f05c14b3640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f058c0778e0 0x7f058c079da0 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:52.153 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.152+0000 7f05c14b3640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f05bc076040 0x7f05bc19f390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:52.153 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.152+0000 7f05c14b3640 1 --2- 192.168.123.107:0/2691050216 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f05bc075720 0x7f05bc19ee50 unknown :-1 s=CLOSED pgs=153 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:52.153 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.152+0000 7f05c14b3640 1 -- 192.168.123.107:0/2691050216 >> 192.168.123.107:0/2691050216 conn(0x7f05bc0fe710 msgr2=0x7f05bc0ffea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:52.154 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.153+0000 7f05c14b3640 1 -- 192.168.123.107:0/2691050216 shutdown_connections 2026-03-09T20:53:52.154 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:52.153+0000 7f05c14b3640 1 -- 192.168.123.107:0/2691050216 wait complete. 
2026-03-09T20:53:52.749 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-09T20:53:52.888 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:53:52.975 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:52 vm07.local ceph-mon[112105]: from='client.34428 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:52.975 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:52 vm07.local ceph-mon[112105]: from='client.34432 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:53.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:52 vm10.local ceph-mon[103526]: from='client.34428 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:53.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:52 vm10.local ceph-mon[103526]: from='client.34432 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:53:53.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.117+0000 7fb227700640 1 -- 192.168.123.107:0/2730814463 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb2201029d0 msgr2=0x7fb220102e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:53.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.117+0000 7fb227700640 1 --2- 192.168.123.107:0/2730814463 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb2201029d0 0x7fb220102e30 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7fb2100099b0 
tx=0x7fb21002f220 comp rx=0 tx=0).stop 2026-03-09T20:53:53.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.118+0000 7fb227700640 1 -- 192.168.123.107:0/2730814463 shutdown_connections 2026-03-09T20:53:53.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.118+0000 7fb227700640 1 --2- 192.168.123.107:0/2730814463 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb2201029d0 0x7fb220102e30 unknown :-1 s=CLOSED pgs=154 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.118+0000 7fb227700640 1 --2- 192.168.123.107:0/2730814463 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb2201089d0 0x7fb220108db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.118+0000 7fb227700640 1 -- 192.168.123.107:0/2730814463 >> 192.168.123.107:0/2730814463 conn(0x7fb2200fe710 msgr2=0x7fb220100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:53.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.118+0000 7fb227700640 1 -- 192.168.123.107:0/2730814463 shutdown_connections 2026-03-09T20:53:53.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.118+0000 7fb227700640 1 -- 192.168.123.107:0/2730814463 wait complete. 
2026-03-09T20:53:53.119 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb227700640 1 Processor -- start 2026-03-09T20:53:53.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb227700640 1 -- start start 2026-03-09T20:53:53.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb227700640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb2201029d0 0x7fb22019ee20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:53.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb227700640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb2201089d0 0x7fb22019f360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:53.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb227700640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb22019f9f0 con 0x7fb2201029d0 2026-03-09T20:53:53.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb227700640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb2201a3710 con 0x7fb2201089d0 2026-03-09T20:53:53.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb225475640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb2201029d0 0x7fb22019ee20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:53.121 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb225475640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb2201029d0 0x7fb22019ee20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:45512/0 (socket says 192.168.123.107:45512) 2026-03-09T20:53:53.122 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb225475640 1 -- 192.168.123.107:0/3562162123 learned_addr learned my addr 192.168.123.107:0/3562162123 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:53.122 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb225475640 1 -- 192.168.123.107:0/3562162123 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb2201089d0 msgr2=0x7fb22019f360 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:53:53.123 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb225475640 1 --2- 192.168.123.107:0/3562162123 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb2201089d0 0x7fb22019f360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.123 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.119+0000 7fb225475640 1 -- 192.168.123.107:0/3562162123 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb210009660 con 0x7fb2201029d0 2026-03-09T20:53:53.123 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.120+0000 7fb225475640 1 --2- 192.168.123.107:0/3562162123 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb2201029d0 0x7fb22019ee20 secure :-1 s=READY pgs=155 cs=0 l=1 rev1=1 crypto rx=0x7fb21400e990 tx=0x7fb21400ee60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:53.123 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.120+0000 7fb20e7fc640 1 -- 192.168.123.107:0/3562162123 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb21400cd30 con 0x7fb2201029d0 2026-03-09T20:53:53.123 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.120+0000 7fb20e7fc640 1 -- 
192.168.123.107:0/3562162123 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fb21400ce90 con 0x7fb2201029d0 2026-03-09T20:53:53.123 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.120+0000 7fb20e7fc640 1 -- 192.168.123.107:0/3562162123 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb214010640 con 0x7fb2201029d0 2026-03-09T20:53:53.126 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.120+0000 7fb227700640 1 -- 192.168.123.107:0/3562162123 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb2201a39f0 con 0x7fb2201029d0 2026-03-09T20:53:53.126 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.120+0000 7fb227700640 1 -- 192.168.123.107:0/3562162123 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb2201a3f40 con 0x7fb2201029d0 2026-03-09T20:53:53.126 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.121+0000 7fb20e7fc640 1 -- 192.168.123.107:0/3562162123 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fb2140107a0 con 0x7fb2201029d0 2026-03-09T20:53:53.126 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.121+0000 7fb20e7fc640 1 --2- 192.168.123.107:0/3562162123 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb1fc0779b0 0x7fb1fc079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:53.127 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.122+0000 7fb20e7fc640 1 -- 192.168.123.107:0/3562162123 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fb214014070 con 0x7fb2201029d0 2026-03-09T20:53:53.127 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.122+0000 7fb227700640 1 -- 192.168.123.107:0/3562162123 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb220104110 con 0x7fb2201029d0 2026-03-09T20:53:53.127 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.125+0000 7fb224c74640 1 --2- 192.168.123.107:0/3562162123 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb1fc0779b0 0x7fb1fc079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:53.127 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.125+0000 7fb224c74640 1 --2- 192.168.123.107:0/3562162123 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb1fc0779b0 0x7fb1fc079e70 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fb210005d20 tx=0x7fb210005c50 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:53.127 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.125+0000 7fb20e7fc640 1 -- 192.168.123.107:0/3562162123 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fb214063a40 con 0x7fb2201029d0 2026-03-09T20:53:53.255 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.253+0000 7fb227700640 1 -- 192.168.123.107:0/3562162123 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fb2201a0130 con 0x7fb2201029d0 2026-03-09T20:53:53.255 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.254+0000 7fb20e7fc640 1 -- 192.168.123.107:0/3562162123 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7fb214063190 con 0x7fb2201029d0 2026-03-09T20:53:53.256 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 
2026-03-09T20:53:53.257 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.256+0000 7fb227700640 1 -- 192.168.123.107:0/3562162123 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb1fc0779b0 msgr2=0x7fb1fc079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:53.257 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.256+0000 7fb227700640 1 --2- 192.168.123.107:0/3562162123 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb1fc0779b0 0x7fb1fc079e70 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fb210005d20 tx=0x7fb210005c50 comp rx=0 tx=0).stop 2026-03-09T20:53:53.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.257+0000 7fb227700640 1 -- 192.168.123.107:0/3562162123 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb2201029d0 msgr2=0x7fb22019ee20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:53.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.257+0000 7fb227700640 1 --2- 192.168.123.107:0/3562162123 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb2201029d0 0x7fb22019ee20 secure :-1 s=READY pgs=155 cs=0 l=1 rev1=1 crypto rx=0x7fb21400e990 tx=0x7fb21400ee60 comp rx=0 tx=0).stop 2026-03-09T20:53:53.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.257+0000 7fb227700640 1 -- 192.168.123.107:0/3562162123 shutdown_connections 2026-03-09T20:53:53.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.257+0000 7fb227700640 1 --2- 192.168.123.107:0/3562162123 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb1fc0779b0 0x7fb1fc079e70 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.257+0000 7fb227700640 1 --2- 192.168.123.107:0/3562162123 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb2201089d0 0x7fb22019f360 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.257+0000 7fb227700640 1 --2- 192.168.123.107:0/3562162123 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb2201029d0 0x7fb22019ee20 unknown :-1 s=CLOSED pgs=155 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.258 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.258+0000 7fb227700640 1 -- 192.168.123.107:0/3562162123 >> 192.168.123.107:0/3562162123 conn(0x7fb2200fe710 msgr2=0x7fb220077830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:53.259 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.258+0000 7fb227700640 1 -- 192.168.123.107:0/3562162123 shutdown_connections 2026-03-09T20:53:53.259 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.258+0000 7fb227700640 1 -- 192.168.123.107:0/3562162123 wait complete. 
2026-03-09T20:53:53.318 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions' 2026-03-09T20:53:53.464 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:53:53.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.703+0000 7fc7442f1640 1 -- 192.168.123.107:0/879003082 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc73c1089d0 msgr2=0x7fc73c108db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:53.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.703+0000 7fc7442f1640 1 --2- 192.168.123.107:0/879003082 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc73c1089d0 0x7fc73c108db0 secure :-1 s=READY pgs=156 cs=0 l=1 rev1=1 crypto rx=0x7fc7300099b0 tx=0x7fc73002f220 comp rx=0 tx=0).stop 2026-03-09T20:53:53.704 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.704+0000 7fc7442f1640 1 -- 192.168.123.107:0/879003082 shutdown_connections 2026-03-09T20:53:53.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.704+0000 7fc7442f1640 1 --2- 192.168.123.107:0/879003082 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc73c1029d0 0x7fc73c102e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.704+0000 7fc7442f1640 1 --2- 192.168.123.107:0/879003082 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc73c1089d0 0x7fc73c108db0 unknown :-1 s=CLOSED pgs=156 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.705 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.704+0000 7fc7442f1640 1 -- 192.168.123.107:0/879003082 >> 192.168.123.107:0/879003082 conn(0x7fc73c0fe710 msgr2=0x7fc73c100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:53.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.704+0000 7fc7442f1640 1 -- 192.168.123.107:0/879003082 shutdown_connections 2026-03-09T20:53:53.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.704+0000 7fc7442f1640 1 -- 192.168.123.107:0/879003082 wait complete. 2026-03-09T20:53:53.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.704+0000 7fc7442f1640 1 Processor -- start 2026-03-09T20:53:53.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.704+0000 7fc7442f1640 1 -- start start 2026-03-09T20:53:53.705 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc7442f1640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc73c1029d0 0x7fc73c1a0600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc7442f1640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc73c1089d0 0x7fc73c1a0b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc7442f1640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc73c1a10d0 con 0x7fc73c1089d0 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc7442f1640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc73c19a6f0 con 0x7fc73c1029d0 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc741865640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc73c1089d0 0x7fc73c1a0b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc741865640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc73c1089d0 0x7fc73c1a0b40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45542/0 (socket says 192.168.123.107:45542) 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc741865640 1 -- 192.168.123.107:0/969385456 learned_addr learned my addr 192.168.123.107:0/969385456 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc742066640 1 --2- 192.168.123.107:0/969385456 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc73c1029d0 0x7fc73c1a0600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc741865640 1 -- 192.168.123.107:0/969385456 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc73c1029d0 msgr2=0x7fc73c1a0600 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc741865640 1 --2- 192.168.123.107:0/969385456 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc73c1029d0 0x7fc73c1a0600 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc741865640 1 -- 
192.168.123.107:0/969385456 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc730009660 con 0x7fc73c1089d0 2026-03-09T20:53:53.706 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.705+0000 7fc741865640 1 --2- 192.168.123.107:0/969385456 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc73c1089d0 0x7fc73c1a0b40 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7fc72c00cc60 tx=0x7fc72c007590 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:53.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.706+0000 7fc72b7fe640 1 -- 192.168.123.107:0/969385456 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc72c007d80 con 0x7fc73c1089d0 2026-03-09T20:53:53.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.706+0000 7fc72b7fe640 1 -- 192.168.123.107:0/969385456 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc72c00ce80 con 0x7fc73c1089d0 2026-03-09T20:53:53.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.706+0000 7fc72b7fe640 1 -- 192.168.123.107:0/969385456 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc72c00f660 con 0x7fc73c1089d0 2026-03-09T20:53:53.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.706+0000 7fc7442f1640 1 -- 192.168.123.107:0/969385456 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc73c19a9d0 con 0x7fc73c1089d0 2026-03-09T20:53:53.708 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.706+0000 7fc7442f1640 1 -- 192.168.123.107:0/969385456 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc73c19af20 con 0x7fc73c1089d0 2026-03-09T20:53:53.711 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.707+0000 7fc7442f1640 1 -- 192.168.123.107:0/969385456 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc704005350 con 0x7fc73c1089d0 2026-03-09T20:53:53.712 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.711+0000 7fc72b7fe640 1 -- 192.168.123.107:0/969385456 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc72c0040a0 con 0x7fc73c1089d0 2026-03-09T20:53:53.712 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.711+0000 7fc72b7fe640 1 --2- 192.168.123.107:0/969385456 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc71c0779b0 0x7fc71c079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:53:53.712 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.711+0000 7fc72b7fe640 1 -- 192.168.123.107:0/969385456 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fc72c01d030 con 0x7fc73c1089d0 2026-03-09T20:53:53.712 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.711+0000 7fc72b7fe640 1 -- 192.168.123.107:0/969385456 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc72c09a5b0 con 0x7fc73c1089d0 2026-03-09T20:53:53.712 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.711+0000 7fc742066640 1 --2- 192.168.123.107:0/969385456 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc71c0779b0 0x7fc71c079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:53:53.713 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.712+0000 7fc742066640 1 --2- 
192.168.123.107:0/969385456 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc71c0779b0 0x7fc71c079e70 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fc730002410 tx=0x7fc73003a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:53:53.846 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:53 vm07.local ceph-mon[112105]: pgmap v206: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:53:53.846 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:53 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3562162123' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:53:53.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.844+0000 7fc7442f1640 1 -- 192.168.123.107:0/969385456 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fc7040058d0 con 0x7fc73c1089d0 2026-03-09T20:53:53.848 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.848+0000 7fc72b7fe640 1 -- 192.168.123.107:0/969385456 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7fc72c0629a0 con 0x7fc73c1089d0 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:53:53.849 
INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 14 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:53:53.849 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:53:53.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.850+0000 7fc7442f1640 1 -- 192.168.123.107:0/969385456 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc71c0779b0 msgr2=0x7fc71c079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:53.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.850+0000 7fc7442f1640 1 --2- 192.168.123.107:0/969385456 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc71c0779b0 0x7fc71c079e70 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fc730002410 tx=0x7fc73003a040 comp rx=0 tx=0).stop 2026-03-09T20:53:53.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.850+0000 7fc7442f1640 1 -- 192.168.123.107:0/969385456 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc73c1089d0 msgr2=0x7fc73c1a0b40 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:53.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.850+0000 7fc7442f1640 1 --2- 192.168.123.107:0/969385456 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc73c1089d0 0x7fc73c1a0b40 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7fc72c00cc60 tx=0x7fc72c007590 comp rx=0 tx=0).stop 2026-03-09T20:53:53.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.850+0000 7fc7442f1640 1 -- 192.168.123.107:0/969385456 shutdown_connections 2026-03-09T20:53:53.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.850+0000 7fc7442f1640 1 --2- 192.168.123.107:0/969385456 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc71c0779b0 0x7fc71c079e70 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.850+0000 7fc7442f1640 1 --2- 192.168.123.107:0/969385456 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc73c1089d0 0x7fc73c1a0b40 unknown :-1 s=CLOSED pgs=157 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.850+0000 7fc7442f1640 1 --2- 192.168.123.107:0/969385456 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc73c1029d0 0x7fc73c1a0600 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:53:53.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.850+0000 7fc7442f1640 1 -- 192.168.123.107:0/969385456 >> 192.168.123.107:0/969385456 conn(0x7fc73c0fe710 msgr2=0x7fc73c10b860 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:53.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.851+0000 7fc7442f1640 1 -- 192.168.123.107:0/969385456 shutdown_connections 2026-03-09T20:53:53.852 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:53:53.851+0000 7fc7442f1640 1 -- 192.168.123.107:0/969385456 wait complete. 2026-03-09T20:53:53.911 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'echo "wait for servicemap items w/ changing names to refresh"' 2026-03-09T20:53:54.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:53 vm10.local ceph-mon[103526]: pgmap v206: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:53:54.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:53 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3562162123' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T20:53:54.057 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:53:54.261 INFO:teuthology.orchestra.run.vm07.stdout:wait for servicemap items w/ changing names to refresh 2026-03-09T20:53:54.297 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 60' 2026-03-09T20:53:54.442 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:53:54.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:54 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/969385456' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:54.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:54 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/969385456' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:53:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:55 vm10.local ceph-mon[103526]: pgmap v207: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:53:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:56.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:55 vm07.local ceph-mon[112105]: pgmap v207: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:53:56.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:53:58.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:57 vm10.local ceph-mon[103526]: pgmap v208: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:53:58.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:57 vm07.local ceph-mon[112105]: pgmap v208: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB 
used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:53:59 vm10.local ceph-mon[103526]: pgmap v209: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:00.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:53:59 vm07.local ceph-mon[112105]: pgmap v209: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:01.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:01 vm07.local ceph-mon[112105]: pgmap v210: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:02.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:01 vm10.local ceph-mon[103526]: pgmap v210: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:03 vm10.local ceph-mon[103526]: pgmap v211: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:04.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:03 vm07.local ceph-mon[112105]: pgmap v211: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:06.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:05 vm10.local ceph-mon[103526]: pgmap v212: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:06.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:05 vm07.local ceph-mon[112105]: pgmap v212: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:07 vm10.local ceph-mon[103526]: pgmap v213: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB 
used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:08.100 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:07 vm07.local ceph-mon[112105]: pgmap v213: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:10.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:09 vm07.local ceph-mon[112105]: pgmap v214: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:10.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:10.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:09 vm10.local ceph-mon[103526]: pgmap v214: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:10.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:11.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:11 vm07.local ceph-mon[112105]: pgmap v215: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:12.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:11 vm10.local ceph-mon[103526]: pgmap v215: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:14.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:13 vm07.local ceph-mon[112105]: pgmap v216: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:14.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:13 vm10.local ceph-mon[103526]: pgmap v216: 
65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:16.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:15 vm07.local ceph-mon[112105]: pgmap v217: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:16.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:15 vm10.local ceph-mon[103526]: pgmap v217: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:18.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:17 vm10.local ceph-mon[103526]: pgmap v218: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:18.359 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:17 vm07.local ceph-mon[112105]: pgmap v218: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:20.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:19 vm10.local ceph-mon[103526]: pgmap v219: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:20.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:19 vm07.local ceph-mon[112105]: pgmap v219: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:22.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:22 vm10.local ceph-mon[103526]: pgmap v220: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:22.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:22 vm07.local ceph-mon[112105]: pgmap v220: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:24.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:24 vm10.local ceph-mon[103526]: pgmap v221: 
65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:24.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:24 vm07.local ceph-mon[112105]: pgmap v221: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:25.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:25 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:25.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:25 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:26.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:26 vm10.local ceph-mon[103526]: pgmap v222: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:26.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:26 vm07.local ceph-mon[112105]: pgmap v222: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:28.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:28 vm07.local ceph-mon[112105]: pgmap v223: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:28.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:28 vm10.local ceph-mon[103526]: pgmap v223: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:30.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:30 vm07.local ceph-mon[112105]: pgmap v224: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:30.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 
20:54:30 vm10.local ceph-mon[103526]: pgmap v224: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:32.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:32 vm07.local ceph-mon[112105]: pgmap v225: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:32 vm10.local ceph-mon[103526]: pgmap v225: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:34.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:34 vm07.local ceph-mon[112105]: pgmap v226: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:34.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:34 vm10.local ceph-mon[103526]: pgmap v226: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:36.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:36 vm07.local ceph-mon[112105]: pgmap v227: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:36.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:36 vm10.local ceph-mon[103526]: pgmap v227: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:38 vm07.local ceph-mon[112105]: pgmap v228: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:38 vm10.local ceph-mon[103526]: pgmap v228: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:39.611 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:54:39 vm07.local ceph-mon[112105]: pgmap v229: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:39.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:39 vm10.local ceph-mon[103526]: pgmap v229: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:40.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:40.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:41.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:41 vm10.local ceph-mon[103526]: pgmap v230: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:41.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:41 vm07.local ceph-mon[112105]: pgmap v230: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:43.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:43 vm07.local ceph-mon[112105]: pgmap v231: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:44.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:43 vm10.local ceph-mon[103526]: pgmap v231: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:45.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:45 vm07.local ceph-mon[112105]: pgmap v232: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:46.037 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:45 vm10.local ceph-mon[103526]: pgmap v232: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:47.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:47 vm07.local ceph-mon[112105]: pgmap v233: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:48.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:47 vm10.local ceph-mon[103526]: pgmap v233: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:49.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:49 vm07.local ceph-mon[112105]: pgmap v234: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:50.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:49 vm10.local ceph-mon[103526]: pgmap v234: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:51.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:51 vm07.local ceph-mon[112105]: pgmap v235: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:51.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:54:51.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:54:51.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T20:54:51.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:54:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:51 vm10.local ceph-mon[103526]: pgmap v235: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:54:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:54:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:54:52.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:54:53.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:53 vm07.local ceph-mon[112105]: pgmap v236: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:54.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:53 vm10.local ceph-mon[103526]: pgmap v236: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:54:54.668 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 
--fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T20:54:54.848 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:54:54.871 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:55.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.075+0000 7ff32d89b640 1 -- 192.168.123.107:0/3476838295 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff3281089d0 msgr2=0x7ff328108db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:55.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.075+0000 7ff32d89b640 1 --2- 192.168.123.107:0/3476838295 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff3281089d0 0x7ff328108db0 secure :-1 s=READY pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7ff3140099b0 tx=0x7ff31402f220 comp rx=0 tx=0).stop 2026-03-09T20:54:55.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.075+0000 7ff32d89b640 1 -- 192.168.123.107:0/3476838295 shutdown_connections 2026-03-09T20:54:55.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.075+0000 7ff32d89b640 1 --2- 192.168.123.107:0/3476838295 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff3281029d0 0x7ff328102e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.075+0000 7ff32d89b640 1 --2- 
192.168.123.107:0/3476838295 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff3281089d0 0x7ff328108db0 unknown :-1 s=CLOSED pgs=158 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.076 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.075+0000 7ff32d89b640 1 -- 192.168.123.107:0/3476838295 >> 192.168.123.107:0/3476838295 conn(0x7ff3280fe710 msgr2=0x7ff328100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:55.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff32d89b640 1 -- 192.168.123.107:0/3476838295 shutdown_connections 2026-03-09T20:54:55.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff32d89b640 1 -- 192.168.123.107:0/3476838295 wait complete. 2026-03-09T20:54:55.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff32d89b640 1 Processor -- start 2026-03-09T20:54:55.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff32d89b640 1 -- start start 2026-03-09T20:54:55.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff32d89b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff3281029d0 0x7ff3280785f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:55.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff32d89b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff3281089d0 0x7ff328078b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff32d89b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff328079210 con 0x7ff3281029d0 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff32d89b640 1 -- --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff3281a37b0 con 0x7ff3281089d0 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff326ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff3281029d0 0x7ff3280785f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff326ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff3281029d0 0x7ff3280785f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:44282/0 (socket says 192.168.123.107:44282) 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.076+0000 7ff326ffd640 1 -- 192.168.123.107:0/3440900902 learned_addr learned my addr 192.168.123.107:0/3440900902 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.077+0000 7ff3267fc640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff3281089d0 0x7ff328078b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.077+0000 7ff326ffd640 1 -- 192.168.123.107:0/3440900902 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff3281089d0 msgr2=0x7ff328078b30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.077+0000 7ff326ffd640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 
conn(0x7ff3281089d0 0x7ff328078b30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.077+0000 7ff326ffd640 1 -- 192.168.123.107:0/3440900902 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff314009660 con 0x7ff3281029d0 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.077+0000 7ff326ffd640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff3281029d0 0x7ff3280785f0 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7ff314002410 tx=0x7ff314031cd0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:55.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.077+0000 7ff32c899640 1 -- 192.168.123.107:0/3440900902 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff31403d070 con 0x7ff3281029d0 2026-03-09T20:54:55.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.077+0000 7ff32d89b640 1 -- 192.168.123.107:0/3440900902 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff3281a3a30 con 0x7ff3281029d0 2026-03-09T20:54:55.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.077+0000 7ff32d89b640 1 -- 192.168.123.107:0/3440900902 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff3281a3f20 con 0x7ff3281029d0 2026-03-09T20:54:55.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.077+0000 7ff32c899640 1 -- 192.168.123.107:0/3440900902 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7ff3140043d0 con 0x7ff3281029d0 2026-03-09T20:54:55.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.077+0000 7ff32c899640 1 -- 
192.168.123.107:0/3440900902 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff314038470 con 0x7ff3281029d0 2026-03-09T20:54:55.079 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.078+0000 7ff32c899640 1 -- 192.168.123.107:0/3440900902 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ff314031200 con 0x7ff3281029d0 2026-03-09T20:54:55.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.079+0000 7ff32c899640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff2ec0779b0 0x7ff2ec079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:55.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.079+0000 7ff32c899640 1 -- 192.168.123.107:0/3440900902 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7ff3140be580 con 0x7ff3281029d0 2026-03-09T20:54:55.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.079+0000 7ff3267fc640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff2ec0779b0 0x7ff2ec079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:55.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.079+0000 7ff32d89b640 1 -- 192.168.123.107:0/3440900902 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff2f0005350 con 0x7ff3281029d0 2026-03-09T20:54:55.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.079+0000 7ff3267fc640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff2ec0779b0 0x7ff2ec079e70 secure :-1 
s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7ff32810a980 tx=0x7ff31c009210 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:55.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.082+0000 7ff32c899640 1 -- 192.168.123.107:0/3440900902 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff314086ce0 con 0x7ff3281029d0 2026-03-09T20:54:55.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.177+0000 7ff32d89b640 1 -- 192.168.123.107:0/3440900902 --> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7ff2f0002bf0 con 0x7ff2ec0779b0 2026-03-09T20:54:55.183 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.182+0000 7ff32c899640 1 -- 192.168.123.107:0/3440900902 <== mgr.34100 v2:192.168.123.107:6800/2064434004 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7ff2f0002bf0 con 0x7ff2ec0779b0 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.vm07 vm07 *:9093,9094 running (76s) 65s ago 11m 17.3M - 0.25.0 c8568f914cd2 afb315b2ed75 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm07 vm07 running (101s) 65s ago 12m 10.5M - 19.2.3-678-ge911bdeb 654f31e6858e 5cd0ce63c830 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:ceph-exporter.vm10 vm10 running (99s) 88s ago 11m 10.5M - 19.2.3-678-ge911bdeb 654f31e6858e c382fe12976c 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:crash.vm07 vm07 running (6m) 65s ago 12m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 406c9c54f34a 2026-03-09T20:54:55.184 
INFO:teuthology.orchestra.run.vm07.stdout:crash.vm10 vm10 running (6m) 88s ago 11m 7856k - 19.2.3-678-ge911bdeb 654f31e6858e 30eaebf5d733 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:grafana.vm07 vm07 *:3000 running (67s) 65s ago 11m 39.8M - 10.4.0 c8b91775d855 e629e59fd7fd 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.potfau vm07 running (2m) 65s ago 9m 95.7M - 19.2.3-678-ge911bdeb 654f31e6858e 744ed5bff39a 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm07.rovdbp vm07 running (2m) 65s ago 9m 25.4M - 19.2.3-678-ge911bdeb 654f31e6858e 1763d9a7f9bb 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.hzyuyq vm10 running (109s) 88s ago 9m 21.9M - 19.2.3-678-ge911bdeb 654f31e6858e fd0024278e01 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:mds.cephfs.vm10.qpltwp vm10 running (116s) 88s ago 9m 98.0M - 19.2.3-678-ge911bdeb 654f31e6858e fef68e5128e4 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm07.xjrvch vm07 *:8443,9283,8765 running (7m) 65s ago 12m 633M - 19.2.3-678-ge911bdeb 654f31e6858e bc6ab9c540eb 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:mgr.vm10.byqahe vm10 *:8443,9283,8765 running (6m) 88s ago 11m 492M - 19.2.3-678-ge911bdeb 654f31e6858e f7ad162e95ff 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm07 vm07 running (6m) 65s ago 12m 67.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bce9d510f94f 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:mon.vm10 vm10 running (6m) 88s ago 11m 55.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4428cf7f0607 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm07 vm07 *:9100 running (92s) 65s ago 12m 6693k - 1.7.0 72c9c2088986 95a81084dd9b 2026-03-09T20:54:55.184 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.vm10 vm10 *:9100 running (89s) 88s ago 11m 4676k - 1.7.0 72c9c2088986 
a2d3fff54c2f 2026-03-09T20:54:55.185 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (5m) 65s ago 11m 235M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1da9d2cdbdc3 2026-03-09T20:54:55.185 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (5m) 65s ago 10m 178M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 95f518bf664f 2026-03-09T20:54:55.185 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (4m) 65s ago 10m 124M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0d3aa63353bb 2026-03-09T20:54:55.185 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm10 running (3m) 88s ago 10m 169M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8d2b453e9e2 2026-03-09T20:54:55.185 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm10 running (3m) 88s ago 10m 126M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d0231a0cf2be 2026-03-09T20:54:55.185 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm10 running (2m) 88s ago 10m 119M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7489b8a43e7f 2026-03-09T20:54:55.185 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.vm07 vm07 *:9095 running (80s) 65s ago 11m 42.5M - 2.51.0 1d3b7f56885b 68e468960cdd 2026-03-09T20:54:55.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.184+0000 7ff32d89b640 1 -- 192.168.123.107:0/3440900902 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff2ec0779b0 msgr2=0x7ff2ec079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:55.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.184+0000 7ff32d89b640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff2ec0779b0 0x7ff2ec079e70 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7ff32810a980 tx=0x7ff31c009210 comp rx=0 tx=0).stop 2026-03-09T20:54:55.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.185+0000 7ff32d89b640 1 -- 192.168.123.107:0/3440900902 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff3281029d0 msgr2=0x7ff3280785f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:55.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.185+0000 7ff32d89b640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff3281029d0 0x7ff3280785f0 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7ff314002410 tx=0x7ff314031cd0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.185+0000 7ff32d89b640 1 -- 192.168.123.107:0/3440900902 shutdown_connections 2026-03-09T20:54:55.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.185+0000 7ff32d89b640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ff2ec0779b0 0x7ff2ec079e70 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.185+0000 7ff32d89b640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ff3281089d0 0x7ff328078b30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.185+0000 7ff32d89b640 1 --2- 192.168.123.107:0/3440900902 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff3281029d0 0x7ff3280785f0 unknown :-1 s=CLOSED pgs=159 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.185+0000 7ff32d89b640 1 -- 192.168.123.107:0/3440900902 >> 192.168.123.107:0/3440900902 conn(0x7ff3280fe710 msgr2=0x7ff32810c9e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:55.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.185+0000 7ff32d89b640 1 -- 
192.168.123.107:0/3440900902 shutdown_connections 2026-03-09T20:54:55.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.185+0000 7ff32d89b640 1 -- 192.168.123.107:0/3440900902 wait complete. 2026-03-09T20:54:55.227 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions' 2026-03-09T20:54:55.369 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:54:55.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.597+0000 7f0c1ae12640 1 -- 192.168.123.107:0/668861678 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c14102800 msgr2=0x7f0c14102c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:55.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.597+0000 7f0c1ae12640 1 --2- 192.168.123.107:0/668861678 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c14102800 0x7f0c14102c60 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7f0bfc0099b0 tx=0x7f0bfc02f220 comp rx=0 tx=0).stop 2026-03-09T20:54:55.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.597+0000 7f0c1ae12640 1 -- 192.168.123.107:0/668861678 shutdown_connections 2026-03-09T20:54:55.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.597+0000 7f0c1ae12640 1 --2- 192.168.123.107:0/668861678 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c14102800 0x7f0c14102c60 unknown :-1 s=CLOSED pgs=160 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.597+0000 7f0c1ae12640 1 --2- 192.168.123.107:0/668861678 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 
conn(0x7f0c14108800 0x7f0c14108be0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.597+0000 7f0c1ae12640 1 -- 192.168.123.107:0/668861678 >> 192.168.123.107:0/668861678 conn(0x7f0c140fe540 msgr2=0x7f0c14100960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:55.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.598+0000 7f0c1ae12640 1 -- 192.168.123.107:0/668861678 shutdown_connections 2026-03-09T20:54:55.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.598+0000 7f0c1ae12640 1 -- 192.168.123.107:0/668861678 wait complete. 2026-03-09T20:54:55.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.598+0000 7f0c1ae12640 1 Processor -- start 2026-03-09T20:54:55.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.598+0000 7f0c1ae12640 1 -- start start 2026-03-09T20:54:55.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.599+0000 7f0c1ae12640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c14102800 0x7f0c141a0570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:55.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.599+0000 7f0c1ae12640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c14108800 0x7f0c141a0ab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:55.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.599+0000 7f0c1ae12640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0c141a10d0 con 0x7f0c14108800 2026-03-09T20:54:55.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.599+0000 7f0c1ae12640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0c1419a660 con 0x7f0c14102800 
2026-03-09T20:54:55.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.599+0000 7f0c1960f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c14108800 0x7f0c141a0ab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:55.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.599+0000 7f0c1960f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c14108800 0x7f0c141a0ab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:44302/0 (socket says 192.168.123.107:44302) 2026-03-09T20:54:55.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.599+0000 7f0c1960f640 1 -- 192.168.123.107:0/2572449024 learned_addr learned my addr 192.168.123.107:0/2572449024 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:54:55.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.599+0000 7f0c19e10640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c14102800 0x7f0c141a0570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:55.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.600+0000 7f0c1960f640 1 -- 192.168.123.107:0/2572449024 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c14102800 msgr2=0x7f0c141a0570 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:55.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.600+0000 7f0c1960f640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c14102800 0x7f0c141a0570 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T20:54:55.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.600+0000 7f0c1960f640 1 -- 192.168.123.107:0/2572449024 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0bfc009660 con 0x7f0c14108800 2026-03-09T20:54:55.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.600+0000 7f0c19e10640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c14102800 0x7f0c141a0570 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:54:55.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.600+0000 7f0c1960f640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c14108800 0x7f0c141a0ab0 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7f0bfc002410 tx=0x7f0bfc004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:55.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.600+0000 7f0c0affd640 1 -- 192.168.123.107:0/2572449024 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0bfc03d070 con 0x7f0c14108800 2026-03-09T20:54:55.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.600+0000 7f0c0affd640 1 -- 192.168.123.107:0/2572449024 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0bfc0043b0 con 0x7f0c14108800 2026-03-09T20:54:55.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.600+0000 7f0c0affd640 1 -- 192.168.123.107:0/2572449024 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0bfc041880 con 0x7f0c14108800 2026-03-09T20:54:55.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.600+0000 7f0c1ae12640 1 -- 192.168.123.107:0/2572449024 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0c1419a8e0 con 0x7f0c14108800 2026-03-09T20:54:55.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.601+0000 7f0c1ae12640 1 -- 192.168.123.107:0/2572449024 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0c1419add0 con 0x7f0c14108800 2026-03-09T20:54:55.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.602+0000 7f0c0affd640 1 -- 192.168.123.107:0/2572449024 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f0bfc02fc90 con 0x7f0c14108800 2026-03-09T20:54:55.604 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.602+0000 7f0c0affd640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f0bec0778e0 0x7f0bec079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:55.604 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.602+0000 7f0c0affd640 1 -- 192.168.123.107:0/2572449024 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f0bfc0be680 con 0x7f0c14108800 2026-03-09T20:54:55.604 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.603+0000 7f0c19e10640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f0bec0778e0 0x7f0bec079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:55.604 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.603+0000 7f0c19e10640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f0bec0778e0 0x7f0bec079da0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f0c04006fd0 
tx=0x7f0c04008040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:55.607 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.603+0000 7f0c08ff9640 1 -- 192.168.123.107:0/2572449024 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0bdc005350 con 0x7f0c14108800 2026-03-09T20:54:55.608 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.607+0000 7f0c0affd640 1 -- 192.168.123.107:0/2572449024 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0bfc086cb0 con 0x7f0c14108800 2026-03-09T20:54:55.743 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.741+0000 7f0c08ff9640 1 -- 192.168.123.107:0/2572449024 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f0bdc005e10 con 0x7f0c14108800 2026-03-09T20:54:55.745 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:55 vm07.local ceph-mon[112105]: pgmap v237: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:55.745 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:55 vm07.local ceph-mon[112105]: from='client.34444 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.745+0000 7f0c0affd640 1 -- 192.168.123.107:0/2572449024 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f0bfc086400 con 0x7f0c14108800 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb 
(e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: "mds": { 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 14 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T20:54:55.746 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T20:54:55.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 7f0c08ff9640 1 -- 192.168.123.107:0/2572449024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f0bec0778e0 msgr2=0x7f0bec079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:55.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 7f0c08ff9640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f0bec0778e0 
0x7f0bec079da0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f0c04006fd0 tx=0x7f0c04008040 comp rx=0 tx=0).stop 2026-03-09T20:54:55.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 7f0c08ff9640 1 -- 192.168.123.107:0/2572449024 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c14108800 msgr2=0x7f0c141a0ab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:55.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 7f0c08ff9640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c14108800 0x7f0c141a0ab0 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7f0bfc002410 tx=0x7f0bfc004290 comp rx=0 tx=0).stop 2026-03-09T20:54:55.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 7f0c08ff9640 1 -- 192.168.123.107:0/2572449024 shutdown_connections 2026-03-09T20:54:55.748 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 7f0c08ff9640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f0bec0778e0 0x7f0bec079da0 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 7f0c08ff9640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0c14108800 0x7f0c141a0ab0 unknown :-1 s=CLOSED pgs=161 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 7f0c08ff9640 1 --2- 192.168.123.107:0/2572449024 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0c14102800 0x7f0c141a0570 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:55.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 
7f0c08ff9640 1 -- 192.168.123.107:0/2572449024 >> 192.168.123.107:0/2572449024 conn(0x7f0c140fe540 msgr2=0x7f0c1410c7c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:55.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 7f0c08ff9640 1 -- 192.168.123.107:0/2572449024 shutdown_connections 2026-03-09T20:54:55.749 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:55.747+0000 7f0c08ff9640 1 -- 192.168.123.107:0/2572449024 wait complete. 2026-03-09T20:54:55.811 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | length == 1'"'"'' 2026-03-09T20:54:55.951 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:54:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:55 vm10.local ceph-mon[103526]: pgmap v237: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:54:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:55 vm10.local ceph-mon[103526]: from='client.34444 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:54:56.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.180+0000 7f42fcf34640 1 -- 192.168.123.107:0/1151013023 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f42f8108800 msgr2=0x7f42f8108be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.180+0000 7f42fcf34640 1 --2- 192.168.123.107:0/1151013023 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f42f8108800 0x7f42f8108be0 secure :-1 s=READY pgs=162 
cs=0 l=1 rev1=1 crypto rx=0x7f42e40099b0 tx=0x7f42e402f240 comp rx=0 tx=0).stop 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.181+0000 7f42fcf34640 1 -- 192.168.123.107:0/1151013023 shutdown_connections 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.181+0000 7f42fcf34640 1 --2- 192.168.123.107:0/1151013023 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f42f8102800 0x7f42f8102c60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.181+0000 7f42fcf34640 1 --2- 192.168.123.107:0/1151013023 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f42f8108800 0x7f42f8108be0 unknown :-1 s=CLOSED pgs=162 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.181+0000 7f42fcf34640 1 -- 192.168.123.107:0/1151013023 >> 192.168.123.107:0/1151013023 conn(0x7f42f80fe540 msgr2=0x7f42f8100960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.184+0000 7f42fcf34640 1 -- 192.168.123.107:0/1151013023 shutdown_connections 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.184+0000 7f42fcf34640 1 -- 192.168.123.107:0/1151013023 wait complete. 
2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.184+0000 7f42fcf34640 1 Processor -- start 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.184+0000 7f42fcf34640 1 -- start start 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.184+0000 7f42fcf34640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f42f8102800 0x7f42f8114390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.184+0000 7f42fcf34640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f42f8108800 0x7f42f81148d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.184+0000 7f42fcf34640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f42f80783f0 con 0x7f42f8102800 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.184+0000 7f42fcf34640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f42f8078560 con 0x7f42f8108800 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.185+0000 7f42f6ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f42f8108800 0x7f42f81148d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.185+0000 7f42f6ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f42f8108800 0x7f42f81148d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:44366/0 (socket says 192.168.123.107:44366) 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.185+0000 7f42f6ffd640 1 -- 192.168.123.107:0/1156507924 learned_addr learned my addr 192.168.123.107:0/1156507924 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.185+0000 7f42f6ffd640 1 -- 192.168.123.107:0/1156507924 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f42f8102800 msgr2=0x7f42f8114390 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.185+0000 7f42f6ffd640 1 --2- 192.168.123.107:0/1156507924 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f42f8102800 0x7f42f8114390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.185+0000 7f42f6ffd640 1 -- 192.168.123.107:0/1156507924 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f42e4009660 con 0x7f42f8108800 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.185+0000 7f42f6ffd640 1 --2- 192.168.123.107:0/1156507924 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f42f8108800 0x7f42f81148d0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f42e800b4f0 tx=0x7f42e800b9c0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.186+0000 7f42f4ff9640 1 -- 192.168.123.107:0/1156507924 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f42e8004280 con 0x7f42f8108800 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.186+0000 7f42fcf34640 1 -- 
192.168.123.107:0/1156507924 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f42f8078840 con 0x7f42f8108800 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.186+0000 7f42fcf34640 1 -- 192.168.123.107:0/1156507924 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f42f8078d90 con 0x7f42f8108800 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.187+0000 7f42f4ff9640 1 -- 192.168.123.107:0/1156507924 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f42e80043e0 con 0x7f42f8108800 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.187+0000 7f42f4ff9640 1 -- 192.168.123.107:0/1156507924 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f42e8010b50 con 0x7f42f8108800 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.187+0000 7f42f4ff9640 1 -- 192.168.123.107:0/1156507924 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f42e80347d0 con 0x7f42f8108800 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.188+0000 7f42f4ff9640 1 --2- 192.168.123.107:0/1156507924 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f42dc077b20 0x7f42dc079fe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:56.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.188+0000 7f42f4ff9640 1 -- 192.168.123.107:0/1156507924 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f42e8099c50 con 0x7f42f8108800 2026-03-09T20:54:56.192 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.191+0000 7f42fcf34640 1 -- 192.168.123.107:0/1156507924 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f42f8103f40 con 0x7f42f8108800 2026-03-09T20:54:56.192 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.191+0000 7f42f77fe640 1 --2- 192.168.123.107:0/1156507924 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f42dc077b20 0x7f42dc079fe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:56.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.192+0000 7f42f77fe640 1 --2- 192.168.123.107:0/1156507924 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f42dc077b20 0x7f42dc079fe0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f42e402f750 tx=0x7f42e4005c50 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:56.195 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.194+0000 7f42f4ff9640 1 -- 192.168.123.107:0/1156507924 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f42e8061c30 con 0x7f42f8108800 2026-03-09T20:54:56.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.320+0000 7f42fcf34640 1 -- 192.168.123.107:0/1156507924 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f42f80798d0 con 0x7f42f8108800 2026-03-09T20:54:56.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.321+0000 7f42f4ff9640 1 -- 192.168.123.107:0/1156507924 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f42e8061a50 con 0x7f42f8108800 2026-03-09T20:54:56.324 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.324+0000 7f42fcf34640 1 -- 
192.168.123.107:0/1156507924 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f42dc077b20 msgr2=0x7f42dc079fe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:56.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.324+0000 7f42fcf34640 1 --2- 192.168.123.107:0/1156507924 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f42dc077b20 0x7f42dc079fe0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f42e402f750 tx=0x7f42e4005c50 comp rx=0 tx=0).stop 2026-03-09T20:54:56.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.324+0000 7f42fcf34640 1 -- 192.168.123.107:0/1156507924 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f42f8108800 msgr2=0x7f42f81148d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:56.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.324+0000 7f42fcf34640 1 --2- 192.168.123.107:0/1156507924 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f42f8108800 0x7f42f81148d0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f42e800b4f0 tx=0x7f42e800b9c0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.329+0000 7f42fcf34640 1 -- 192.168.123.107:0/1156507924 shutdown_connections 2026-03-09T20:54:56.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.329+0000 7f42fcf34640 1 --2- 192.168.123.107:0/1156507924 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f42dc077b20 0x7f42dc079fe0 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.329+0000 7f42fcf34640 1 --2- 192.168.123.107:0/1156507924 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f42f8108800 0x7f42f81148d0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-09T20:54:56.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.329+0000 7f42fcf34640 1 --2- 192.168.123.107:0/1156507924 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f42f8102800 0x7f42f8114390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.329+0000 7f42fcf34640 1 -- 192.168.123.107:0/1156507924 >> 192.168.123.107:0/1156507924 conn(0x7f42f80fe540 msgr2=0x7f42f80fe920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:56.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.330+0000 7f42fcf34640 1 -- 192.168.123.107:0/1156507924 shutdown_connections 2026-03-09T20:54:56.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.330+0000 7f42fcf34640 1 -- 192.168.123.107:0/1156507924 wait complete. 2026-03-09T20:54:56.343 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T20:54:56.406 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | keys'"'"' | grep $sha1' 2026-03-09T20:54:56.561 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:54:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:56 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/2572449024' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:54:56.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:56 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/1156507924' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:54:56.843 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:56 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/2572449024' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:54:56.843 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:56 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/1156507924' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:54:56.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.841+0000 7f3b89fa3640 1 -- 192.168.123.107:0/3087231001 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3b84075720 msgr2=0x7f3b84075b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:56.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.841+0000 7f3b89fa3640 1 --2- 192.168.123.107:0/3087231001 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3b84075720 0x7f3b84075b00 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7f3b700099b0 tx=0x7f3b7002f220 comp rx=0 tx=0).stop 2026-03-09T20:54:56.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.843+0000 7f3b89fa3640 1 -- 192.168.123.107:0/3087231001 shutdown_connections 2026-03-09T20:54:56.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.843+0000 7f3b89fa3640 1 --2- 192.168.123.107:0/3087231001 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3b84076040 0x7f3b84111330 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.843+0000 7f3b89fa3640 1 --2- 192.168.123.107:0/3087231001 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3b84075720 0x7f3b84075b00 secure :-1 s=CLOSED pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7f3b700099b0 tx=0x7f3b7002f220 comp rx=0 tx=0).stop 
2026-03-09T20:54:56.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.843+0000 7f3b89fa3640 1 -- 192.168.123.107:0/3087231001 >> 192.168.123.107:0/3087231001 conn(0x7f3b840fe710 msgr2=0x7f3b84100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:56.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.843+0000 7f3b89fa3640 1 -- 192.168.123.107:0/3087231001 shutdown_connections 2026-03-09T20:54:56.844 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.844+0000 7f3b89fa3640 1 -- 192.168.123.107:0/3087231001 wait complete. 2026-03-09T20:54:56.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.844+0000 7f3b89fa3640 1 Processor -- start 2026-03-09T20:54:56.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.844+0000 7f3b89fa3640 1 -- start start 2026-03-09T20:54:56.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.844+0000 7f3b89fa3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3b84076040 0x7f3b8419f050 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:56.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.844+0000 7f3b89fa3640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3b8419f590 0x7f3b841a3990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:56.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.844+0000 7f3b89fa3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3b8419fb90 con 0x7f3b84076040 2026-03-09T20:54:56.845 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.844+0000 7f3b89fa3640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3b8419fd00 con 0x7f3b8419f590 2026-03-09T20:54:56.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.845+0000 7f3b82ffd640 1 --2- >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3b8419f590 0x7f3b841a3990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:56.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.845+0000 7f3b82ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3b8419f590 0x7f3b841a3990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:44388/0 (socket says 192.168.123.107:44388) 2026-03-09T20:54:56.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.845+0000 7f3b82ffd640 1 -- 192.168.123.107:0/1910060874 learned_addr learned my addr 192.168.123.107:0/1910060874 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:54:56.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.845+0000 7f3b82ffd640 1 -- 192.168.123.107:0/1910060874 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3b84076040 msgr2=0x7f3b8419f050 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:54:56.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.845+0000 7f3b837fe640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3b84076040 0x7f3b8419f050 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:56.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.845+0000 7f3b82ffd640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3b84076040 0x7f3b8419f050 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.845+0000 7f3b82ffd640 1 -- 
192.168.123.107:0/1910060874 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3b70009660 con 0x7f3b8419f590 2026-03-09T20:54:56.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.845+0000 7f3b837fe640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3b84076040 0x7f3b8419f050 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:54:56.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.845+0000 7f3b82ffd640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3b8419f590 0x7f3b841a3990 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f3b7400d8d0 tx=0x7f3b7400dda0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:56.846 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.846+0000 7f3b80ff9640 1 -- 192.168.123.107:0/1910060874 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3b74004490 con 0x7f3b8419f590 2026-03-09T20:54:56.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.846+0000 7f3b89fa3640 1 -- 192.168.123.107:0/1910060874 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3b841a3f90 con 0x7f3b8419f590 2026-03-09T20:54:56.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.846+0000 7f3b89fa3640 1 -- 192.168.123.107:0/1910060874 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3b841a44e0 con 0x7f3b8419f590 2026-03-09T20:54:56.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.846+0000 7f3b80ff9640 1 -- 192.168.123.107:0/1910060874 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f3b7400bd00 con 0x7f3b8419f590 
2026-03-09T20:54:56.847 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.846+0000 7f3b80ff9640 1 -- 192.168.123.107:0/1910060874 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3b74010460 con 0x7f3b8419f590 2026-03-09T20:54:56.848 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.847+0000 7f3b80ff9640 1 -- 192.168.123.107:0/1910060874 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3b740027e0 con 0x7f3b8419f590 2026-03-09T20:54:56.849 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.847+0000 7f3b89fa3640 1 -- 192.168.123.107:0/1910060874 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3b84101d00 con 0x7f3b8419f590 2026-03-09T20:54:56.849 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.847+0000 7f3b80ff9640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3b58077890 0x7f3b58079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:56.849 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.847+0000 7f3b80ff9640 1 -- 192.168.123.107:0/1910060874 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f3b74099df0 con 0x7f3b8419f590 2026-03-09T20:54:56.849 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.848+0000 7f3b837fe640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3b58077890 0x7f3b58079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:56.849 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.848+0000 7f3b837fe640 1 --2- 192.168.123.107:0/1910060874 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3b58077890 0x7f3b58079d50 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f3b70002410 tx=0x7f3b70002da0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:56.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.850+0000 7f3b80ff9640 1 -- 192.168.123.107:0/1910060874 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3b740623a0 con 0x7f3b8419f590 2026-03-09T20:54:56.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.979+0000 7f3b89fa3640 1 -- 192.168.123.107:0/1910060874 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f3b841a47c0 con 0x7f3b8419f590 2026-03-09T20:54:56.980 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.979+0000 7f3b80ff9640 1 -- 192.168.123.107:0/1910060874 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f3b74061af0 con 0x7f3b8419f590 2026-03-09T20:54:56.983 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.982+0000 7f3b89fa3640 1 -- 192.168.123.107:0/1910060874 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3b58077890 msgr2=0x7f3b58079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:56.983 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.982+0000 7f3b89fa3640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3b58077890 0x7f3b58079d50 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f3b70002410 tx=0x7f3b70002da0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.983 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.982+0000 7f3b89fa3640 1 -- 192.168.123.107:0/1910060874 
>> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3b8419f590 msgr2=0x7f3b841a3990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:56.983 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.982+0000 7f3b89fa3640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3b8419f590 0x7f3b841a3990 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f3b7400d8d0 tx=0x7f3b7400dda0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.983 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.982+0000 7f3b89fa3640 1 -- 192.168.123.107:0/1910060874 shutdown_connections 2026-03-09T20:54:56.983 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.982+0000 7f3b89fa3640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f3b58077890 0x7f3b58079d50 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.983 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.982+0000 7f3b89fa3640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f3b8419f590 0x7f3b841a3990 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.983 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.983+0000 7f3b89fa3640 1 --2- 192.168.123.107:0/1910060874 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f3b84076040 0x7f3b8419f050 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:56.984 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.983+0000 7f3b89fa3640 1 -- 192.168.123.107:0/1910060874 >> 192.168.123.107:0/1910060874 conn(0x7f3b840fe710 msgr2=0x7f3b840ff590 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:56.984 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.983+0000 7f3b89fa3640 1 -- 
192.168.123.107:0/1910060874 shutdown_connections 2026-03-09T20:54:56.984 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:56.983+0000 7f3b89fa3640 1 -- 192.168.123.107:0/1910060874 wait complete. 2026-03-09T20:54:56.991 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)" 2026-03-09T20:54:57.051 DEBUG:teuthology.parallel:result is None 2026-03-09T20:54:57.052 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T20:54:57.054 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm07.local 2026-03-09T20:54:57.054 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- bash -c 'ceph fs dump' 2026-03-09T20:54:57.207 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:54:57.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.435+0000 7f33fef96640 1 -- 192.168.123.107:0/1186946691 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33f8106800 msgr2=0x7f33f8106be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:57.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.435+0000 7f33fef96640 1 --2- 192.168.123.107:0/1186946691 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33f8106800 0x7f33f8106be0 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7f33e40099b0 tx=0x7f33e402f220 comp rx=0 tx=0).stop 2026-03-09T20:54:57.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.436+0000 7f33fef96640 1 -- 192.168.123.107:0/1186946691 shutdown_connections 2026-03-09T20:54:57.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.436+0000 7f33fef96640 1 --2- 192.168.123.107:0/1186946691 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f33f80feb60 0x7f33f80fefe0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:57.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.436+0000 7f33fef96640 1 --2- 192.168.123.107:0/1186946691 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33f8106800 0x7f33f8106be0 unknown :-1 s=CLOSED pgs=164 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:57.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.436+0000 7f33fef96640 1 -- 192.168.123.107:0/1186946691 >> 192.168.123.107:0/1186946691 conn(0x7f33f80faa50 msgr2=0x7f33f80fce70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:57.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.436+0000 7f33fef96640 1 -- 192.168.123.107:0/1186946691 shutdown_connections 2026-03-09T20:54:57.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.436+0000 7f33fef96640 1 -- 192.168.123.107:0/1186946691 wait complete. 
2026-03-09T20:54:57.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.436+0000 7f33fef96640 1 Processor -- start 2026-03-09T20:54:57.437 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.437+0000 7f33fef96640 1 -- start start 2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.437+0000 7f33fef96640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f33f80feb60 0x7f33f81a0600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.437+0000 7f33fef96640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33f8106800 0x7f33f81a0b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.437+0000 7f33fef96640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f33f81a11d0 con 0x7f33f8106800 2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.437+0000 7f33fef96640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f33f819a710 con 0x7f33f80feb60 2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.437+0000 7f33fcd0b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f33f80feb60 0x7f33f81a0600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.437+0000 7f33f7fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33f8106800 0x7f33f81a0b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.437+0000 7f33fcd0b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f33f80feb60 0x7f33f81a0600 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:44416/0 (socket says 192.168.123.107:44416) 2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.437+0000 7f33fcd0b640 1 -- 192.168.123.107:0/130662225 learned_addr learned my addr 192.168.123.107:0/130662225 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.438+0000 7f33fcd0b640 1 -- 192.168.123.107:0/130662225 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33f8106800 msgr2=0x7f33f81a0b40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.438+0000 7f33fcd0b640 1 --2- 192.168.123.107:0/130662225 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33f8106800 0x7f33f81a0b40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:57.438 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.438+0000 7f33fcd0b640 1 -- 192.168.123.107:0/130662225 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f33e4009660 con 0x7f33f80feb60 2026-03-09T20:54:57.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.438+0000 7f33fcd0b640 1 --2- 192.168.123.107:0/130662225 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f33f80feb60 0x7f33f81a0600 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f33e4002410 tx=0x7f33e4004290 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:57.439 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.438+0000 7f33f5ffb640 1 -- 192.168.123.107:0/130662225 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f33e403d070 con 0x7f33f80feb60 2026-03-09T20:54:57.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.438+0000 7f33fef96640 1 -- 192.168.123.107:0/130662225 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f33f819a990 con 0x7f33f80feb60 2026-03-09T20:54:57.439 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.438+0000 7f33fef96640 1 -- 192.168.123.107:0/130662225 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f33f819ae80 con 0x7f33f80feb60 2026-03-09T20:54:57.440 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.439+0000 7f33f5ffb640 1 -- 192.168.123.107:0/130662225 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f33e40043b0 con 0x7f33f80feb60 2026-03-09T20:54:57.440 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.439+0000 7f33f5ffb640 1 -- 192.168.123.107:0/130662225 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f33e4041720 con 0x7f33f80feb60 2026-03-09T20:54:57.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.440+0000 7f33cb7fe640 1 -- 192.168.123.107:0/130662225 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f33f8101f40 con 0x7f33f80feb60 2026-03-09T20:54:57.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.441+0000 7f33f5ffb640 1 -- 192.168.123.107:0/130662225 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f33e4038730 con 0x7f33f80feb60 2026-03-09T20:54:57.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.441+0000 7f33f5ffb640 1 --2- 
192.168.123.107:0/130662225 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f33cc077890 0x7f33cc079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:57.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.441+0000 7f33f5ffb640 1 -- 192.168.123.107:0/130662225 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f33e40bec00 con 0x7f33f80feb60 2026-03-09T20:54:57.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.442+0000 7f33f7fff640 1 --2- 192.168.123.107:0/130662225 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f33cc077890 0x7f33cc079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:57.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.442+0000 7f33f7fff640 1 --2- 192.168.123.107:0/130662225 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f33cc077890 0x7f33cc079d50 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f33f819be90 tx=0x7f33e800a5c0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:57.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.443+0000 7f33f5ffb640 1 -- 192.168.123.107:0/130662225 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f33e40872b0 con 0x7f33f80feb60 2026-03-09T20:54:57.563 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.562+0000 7f33cb7fe640 1 -- 192.168.123.107:0/130662225 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f33f80fefe0 con 0x7f33f80feb60 2026-03-09T20:54:57.564 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.563+0000 7f33f5ffb640 1 -- 192.168.123.107:0/130662225 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 37 v37) v1 ==== 76+0+2003 (secure 0 0 0) 0x7f33e4086a00 con 0x7f33f80feb60 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:e37 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:btime 2026-03-09T20:53:11:032505+0000 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:legacy client fscid: 1 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:Filesystem 'cephfs' (1) 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:fs_name cephfs 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:epoch 37 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:created 2026-03-09T20:44:59.885491+0000 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:modified 2026-03-09T20:53:11.032483+0000 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:tableserver 0 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:root 0 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:session_timeout 60 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:session_autoclose 300 
2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:max_file_size 1099511627776 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:max_xattr_size 65536 2026-03-09T20:54:57.565 INFO:teuthology.orchestra.run.vm07.stdout:required_client_features {} 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:last_failure 0 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:last_failure_osd_epoch 86 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:max_mds 2 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:in 0,1 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:up {0=34382,1=44273} 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:failed 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:damaged 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:stopped 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:data_pools [3] 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:metadata_pool 2 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:inline_data disabled 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:balancer 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:bal_rank_mask -1 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:standby_count_wanted 1 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:qdb_cluster leader: 34382 members: 44273,34382 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.potfau{0:34382} 
state up:active seq 7 join_fscid=1 addr [v2:192.168.123.107:6828/561473714,v1:192.168.123.107:6829/561473714] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.qpltwp{0:44295} state up:standby-replay seq 1 join_fscid=1 addr [v2:192.168.123.110:6824/4027718916,v1:192.168.123.110:6825/4027718916] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm07.rovdbp{1:44273} state up:active seq 8 join_fscid=1 addr [v2:192.168.123.107:6826/2346066069,v1:192.168.123.107:6827/2346066069] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout:[mds.cephfs.vm10.hzyuyq{1:44299} state up:standby-replay seq 1 join_fscid=1 addr [v2:192.168.123.110:6826/1370091423,v1:192.168.123.110:6827/1370091423] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:54:57.566 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 37 2026-03-09T20:54:57.567 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.566+0000 7f33cb7fe640 1 -- 192.168.123.107:0/130662225 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f33cc077890 msgr2=0x7f33cc079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:57.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.566+0000 7f33cb7fe640 1 --2- 192.168.123.107:0/130662225 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f33cc077890 0x7f33cc079d50 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f33f819be90 tx=0x7f33e800a5c0 comp rx=0 tx=0).stop 2026-03-09T20:54:57.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.566+0000 7f33cb7fe640 1 -- 192.168.123.107:0/130662225 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f33f80feb60 
msgr2=0x7f33f81a0600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:57.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.566+0000 7f33cb7fe640 1 --2- 192.168.123.107:0/130662225 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f33f80feb60 0x7f33f81a0600 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f33e4002410 tx=0x7f33e4004290 comp rx=0 tx=0).stop 2026-03-09T20:54:57.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.567+0000 7f33cb7fe640 1 -- 192.168.123.107:0/130662225 shutdown_connections 2026-03-09T20:54:57.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.567+0000 7f33cb7fe640 1 --2- 192.168.123.107:0/130662225 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f33cc077890 0x7f33cc079d50 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:57.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.567+0000 7f33cb7fe640 1 --2- 192.168.123.107:0/130662225 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33f8106800 0x7f33f81a0b40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:57.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.567+0000 7f33cb7fe640 1 --2- 192.168.123.107:0/130662225 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f33f80feb60 0x7f33f81a0600 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:57.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.567+0000 7f33cb7fe640 1 -- 192.168.123.107:0/130662225 >> 192.168.123.107:0/130662225 conn(0x7f33f80faa50 msgr2=0x7f33f8102340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:57.568 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.567+0000 7f33cb7fe640 1 -- 192.168.123.107:0/130662225 shutdown_connections 2026-03-09T20:54:57.569 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:57.568+0000 7f33cb7fe640 1 -- 192.168.123.107:0/130662225 wait complete. 2026-03-09T20:54:57.615 INFO:teuthology.run_tasks:Running task fs.post_upgrade_checks... 2026-03-09T20:54:57.618 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 2026-03-09T20:54:57.789 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:54:57.818 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:57 vm07.local ceph-mon[112105]: pgmap v238: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:57.818 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:57 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/1910060874' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:54:57.818 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:57 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/130662225' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:54:58.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:57 vm10.local ceph-mon[103526]: pgmap v238: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:58.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:57 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/1910060874' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T20:54:58.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:57 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/130662225' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T20:54:58.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.042+0000 7f63e5500640 1 -- 192.168.123.107:0/3332107706 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63e0108a00 msgr2=0x7f63e0108de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:58.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.042+0000 7f63e5500640 1 --2- 192.168.123.107:0/3332107706 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63e0108a00 0x7f63e0108de0 secure :-1 s=READY pgs=165 cs=0 l=1 rev1=1 crypto rx=0x7f63d40099b0 tx=0x7f63d402f220 comp rx=0 tx=0).stop 2026-03-09T20:54:58.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.043+0000 7f63e5500640 1 -- 192.168.123.107:0/3332107706 shutdown_connections 2026-03-09T20:54:58.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.043+0000 7f63e5500640 1 --2- 192.168.123.107:0/3332107706 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63e0102a00 0x7f63e0102e60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:58.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.043+0000 7f63e5500640 1 --2- 192.168.123.107:0/3332107706 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63e0108a00 0x7f63e0108de0 unknown :-1 s=CLOSED pgs=165 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:58.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.043+0000 7f63e5500640 1 -- 192.168.123.107:0/3332107706 >> 192.168.123.107:0/3332107706 conn(0x7f63e00fe700 msgr2=0x7f63e0100b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:58.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.043+0000 7f63e5500640 1 -- 192.168.123.107:0/3332107706 shutdown_connections 2026-03-09T20:54:58.044 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.044+0000 7f63e5500640 1 -- 192.168.123.107:0/3332107706 wait complete. 2026-03-09T20:54:58.045 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.044+0000 7f63e5500640 1 Processor -- start 2026-03-09T20:54:58.045 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.044+0000 7f63e5500640 1 -- start start 2026-03-09T20:54:58.045 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63e5500640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63e0102a00 0x7f63e01a05b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63e5500640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63e0108a00 0x7f63e01a0af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63deffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63e0102a00 0x7f63e01a05b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63de7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63e0108a00 0x7f63e01a0af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63de7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63e0108a00 0x7f63e01a0af0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I 
am v2:192.168.123.107:45318/0 (socket says 192.168.123.107:45318) 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63deffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63e0102a00 0x7f63e01a05b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:51332/0 (socket says 192.168.123.107:51332) 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63e5500640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f63e01a1080 con 0x7f63e0108a00 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63e5500640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f63e019a6a0 con 0x7f63e0102a00 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63de7fc640 1 -- 192.168.123.107:0/1628354742 learned_addr learned my addr 192.168.123.107:0/1628354742 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63de7fc640 1 -- 192.168.123.107:0/1628354742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63e0102a00 msgr2=0x7f63e01a05b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63de7fc640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63e0102a00 0x7f63e01a05b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:58.046 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63de7fc640 1 -- 192.168.123.107:0/1628354742 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f63d4009660 con 0x7f63e0108a00 2026-03-09T20:54:58.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.045+0000 7f63deffd640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63e0102a00 0x7f63e01a05b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:54:58.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.046+0000 7f63de7fc640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63e0108a00 0x7f63e01a0af0 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7f63cc00cc60 tx=0x7f63cc007590 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:58.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.046+0000 7f63c3fff640 1 -- 192.168.123.107:0/1628354742 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f63cc007e00 con 0x7f63e0108a00 2026-03-09T20:54:58.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.046+0000 7f63c3fff640 1 -- 192.168.123.107:0/1628354742 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f63cc00ce80 con 0x7f63e0108a00 2026-03-09T20:54:58.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.046+0000 7f63c3fff640 1 -- 192.168.123.107:0/1628354742 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f63cc00f660 con 0x7f63e0108a00 2026-03-09T20:54:58.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.046+0000 7f63e5500640 1 -- 192.168.123.107:0/1628354742 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f63e019a980 con 0x7f63e0108a00 2026-03-09T20:54:58.048 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.046+0000 7f63e5500640 1 -- 192.168.123.107:0/1628354742 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f63e019ad90 con 0x7f63e0108a00 2026-03-09T20:54:58.048 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.047+0000 7f63e5500640 1 -- 192.168.123.107:0/1628354742 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f63ac005350 con 0x7f63e0108a00 2026-03-09T20:54:58.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.051+0000 7f63c3fff640 1 -- 192.168.123.107:0/1628354742 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f63cc0040a0 con 0x7f63e0108a00 2026-03-09T20:54:58.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.051+0000 7f63c3fff640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f63b80779b0 0x7f63b8079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:58.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.051+0000 7f63c3fff640 1 -- 192.168.123.107:0/1628354742 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f63cc01d030 con 0x7f63e0108a00 2026-03-09T20:54:58.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.051+0000 7f63c3fff640 1 -- 192.168.123.107:0/1628354742 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f63cc09a770 con 0x7f63e0108a00 2026-03-09T20:54:58.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.051+0000 7f63deffd640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f63b80779b0 0x7f63b8079e70 
unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:58.053 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.052+0000 7f63deffd640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f63b80779b0 0x7f63b8079e70 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f63d40099b0 tx=0x7f63d4005c50 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:58.175 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.174+0000 7f63e5500640 1 -- 192.168.123.107:0/1628354742 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "format": "json"} v 0) v1 -- 0x7f63ac0051c0 con 0x7f63e0108a00 2026-03-09T20:54:58.177 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.176+0000 7f63c3fff640 1 -- 192.168.123.107:0/1628354742 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "format": "json"}]=0 dumped fsmap epoch 37 v37) v1 ==== 94+0+5299 (secure 0 0 0) 0x7f63cc062b60 con 0x7f63e0108a00 2026-03-09T20:54:58.177 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:54:58.177 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":37,"btime":"2026-03-09T20:53:11:032505+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[],"filesystems":[{"mdsmap":{"epoch":37,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:53:11.032483+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":86,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":2,"in":[0,1],"up":{"mds_0":34382,"mds_1":44273},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34382":{"gid":34382,"name":"cephfs.vm07.potfau","rank":0,"incarnation":30,"state":"up:active","state_seq":7,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file 
layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}},"gid_44273":{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":1,"incarnation":35,"state":"up:active","state_seq":8,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}},"gid_44295":{"gid":44295,"name":"cephfs.vm10.qpltwp","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":1,"addr":"192.168.123.110:6825/4027718916","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":4027718916},{"type":"v1","addr":"192.168.123.110:6825","nonce":4027718916}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}}},"gid_44299":{"gid":44299,"name":"cephfs.vm10.hzyuyq","rank":1,"incarnation":0,"state":"up:standby-replay","state_seq":1,"addr":"192.168.123.110:6827/1370091423","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":1370091423},{"type":"v1","addr":"192.168.123.110:6827","nonce":1370091423}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34382,"qdb_cluster":[44273,34382]},"id":1}]} 2026-03-09T20:54:58.177 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 37 2026-03-09T20:54:58.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.178+0000 7f63e5500640 1 -- 192.168.123.107:0/1628354742 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f63b80779b0 msgr2=0x7f63b8079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:58.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.178+0000 7f63e5500640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f63b80779b0 0x7f63b8079e70 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f63d40099b0 tx=0x7f63d4005c50 comp rx=0 tx=0).stop 2026-03-09T20:54:58.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.178+0000 7f63e5500640 1 -- 192.168.123.107:0/1628354742 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63e0108a00 msgr2=0x7f63e01a0af0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:58.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.178+0000 7f63e5500640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63e0108a00 0x7f63e01a0af0 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7f63cc00cc60 tx=0x7f63cc007590 comp rx=0 tx=0).stop 2026-03-09T20:54:58.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.178+0000 7f63e5500640 1 -- 192.168.123.107:0/1628354742 shutdown_connections 2026-03-09T20:54:58.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.179+0000 7f63e5500640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f63b80779b0 0x7f63b8079e70 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:58.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.179+0000 7f63e5500640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f63e0108a00 0x7f63e01a0af0 unknown :-1 s=CLOSED pgs=166 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:58.179 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.179+0000 7f63e5500640 1 --2- 192.168.123.107:0/1628354742 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f63e0102a00 0x7f63e01a05b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:58.180 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.179+0000 7f63e5500640 1 -- 192.168.123.107:0/1628354742 >> 192.168.123.107:0/1628354742 conn(0x7f63e00fe700 msgr2=0x7f63e010b8b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:58.180 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.179+0000 7f63e5500640 1 -- 
192.168.123.107:0/1628354742 shutdown_connections 2026-03-09T20:54:58.180 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.179+0000 7f63e5500640 1 -- 192.168.123.107:0/1628354742 wait complete. 2026-03-09T20:54:58.246 DEBUG:tasks.fs:checking fs fscid=1,name=cephfs state = {'epoch': 8, 'max_mds': 2, 'flags': 50} 2026-03-09T20:54:58.246 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 9 2026-03-09T20:54:58.390 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:54:58.627 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.625+0000 7f82e7577640 1 -- 192.168.123.107:0/3603521492 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f82e80ff200 msgr2=0x7f82e810c7d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:58.627 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.625+0000 7f82e7577640 1 --2- 192.168.123.107:0/3603521492 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f82e80ff200 0x7f82e810c7d0 secure :-1 s=READY pgs=167 cs=0 l=1 rev1=1 crypto rx=0x7f82dc009a30 tx=0x7f82dc02f2f0 comp rx=0 tx=0).stop 2026-03-09T20:54:58.627 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.626+0000 7f82e7577640 1 -- 192.168.123.107:0/3603521492 shutdown_connections 2026-03-09T20:54:58.627 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.626+0000 7f82e7577640 1 --2- 192.168.123.107:0/3603521492 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f82e80ff200 0x7f82e810c7d0 unknown :-1 s=CLOSED pgs=167 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:58.627 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.626+0000 7f82e7577640 1 --2- 192.168.123.107:0/3603521492 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f82e80fe850 0x7f82e80fec30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:58.627 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.626+0000 7f82e7577640 1 -- 192.168.123.107:0/3603521492 >> 192.168.123.107:0/3603521492 conn(0x7f82e80fa4a0 msgr2=0x7f82e80fc8c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:58.627 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.626+0000 7f82e7577640 1 -- 192.168.123.107:0/3603521492 shutdown_connections 2026-03-09T20:54:58.627 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.626+0000 7f82e7577640 1 -- 192.168.123.107:0/3603521492 wait complete. 2026-03-09T20:54:58.628 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.627+0000 7f82e7577640 1 Processor -- start 2026-03-09T20:54:58.628 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.627+0000 7f82e7577640 1 -- start start 2026-03-09T20:54:58.628 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.627+0000 7f82e7577640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f82e80fe850 0x7f82e81a04f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.627+0000 7f82e7577640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f82e80ff200 0x7f82e81a0a30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.627+0000 7f82e7577640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f82e81a1050 con 0x7f82e80ff200 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.627+0000 7f82e7577640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f82e819a5e0 con 0x7f82e80fe850 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.628+0000 7f82e6575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f82e80fe850 0x7f82e81a04f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.628+0000 7f82e5d74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f82e80ff200 0x7f82e81a0a30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.628+0000 7f82e5d74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f82e80ff200 0x7f82e81a0a30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45334/0 (socket says 192.168.123.107:45334) 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.628+0000 7f82e5d74640 1 -- 192.168.123.107:0/903902370 learned_addr learned my addr 192.168.123.107:0/903902370 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.628+0000 7f82e5d74640 1 -- 192.168.123.107:0/903902370 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f82e80fe850 msgr2=0x7f82e81a04f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.628+0000 7f82e5d74640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f82e80fe850 0x7f82e81a04f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.628+0000 7f82e5d74640 1 -- 192.168.123.107:0/903902370 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f82dc009660 con 0x7f82e80ff200 2026-03-09T20:54:58.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.628+0000 7f82e5d74640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f82e80ff200 0x7f82e81a0a30 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7f82dc02f800 tx=0x7f82dc031d60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:58.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.629+0000 7f82cf7fe640 1 -- 192.168.123.107:0/903902370 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f82dc031f00 con 0x7f82e80ff200 2026-03-09T20:54:58.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.629+0000 7f82cf7fe640 1 -- 192.168.123.107:0/903902370 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f82dc004050 con 0x7f82e80ff200 2026-03-09T20:54:58.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.629+0000 7f82cf7fe640 1 -- 192.168.123.107:0/903902370 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f82dc038dd0 con 0x7f82e80ff200 2026-03-09T20:54:58.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.629+0000 7f82e7577640 1 -- 192.168.123.107:0/903902370 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f82e819a860 con 0x7f82e80ff200 2026-03-09T20:54:58.631 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.629+0000 7f82e7577640 1 -- 192.168.123.107:0/903902370 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 
0x7f82e819ac20 con 0x7f82e80ff200 2026-03-09T20:54:58.631 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.629+0000 7f82e6575640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f82e80fe850 0x7f82e81a04f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:54:58.631 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.630+0000 7f82cf7fe640 1 -- 192.168.123.107:0/903902370 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f82dc048050 con 0x7f82e80ff200 2026-03-09T20:54:58.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.631+0000 7f82cf7fe640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f82b8077720 0x7f82b8079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:58.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.631+0000 7f82cf7fe640 1 -- 192.168.123.107:0/903902370 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f82dc0be4e0 con 0x7f82e80ff200 2026-03-09T20:54:58.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.631+0000 7f82e7577640 1 -- 192.168.123.107:0/903902370 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f82b4005350 con 0x7f82e80ff200 2026-03-09T20:54:58.635 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.631+0000 7f82e6575640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f82b8077720 0x7f82b8079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:58.635 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.634+0000 7f82e6575640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f82b8077720 0x7f82b8079be0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f82d0009c50 tx=0x7f82d0009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:58.635 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.634+0000 7f82cf7fe640 1 -- 192.168.123.107:0/903902370 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f82dc086b90 con 0x7f82e80ff200 2026-03-09T20:54:58.754 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:58 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/1628354742' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T20:54:58.754 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.752+0000 7f82e7577640 1 -- 192.168.123.107:0/903902370 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 9, "format": "json"} v 0) v1 -- 0x7f82b4005600 con 0x7f82e80ff200 2026-03-09T20:54:58.757 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.756+0000 7f82cf7fe640 1 -- 192.168.123.107:0/903902370 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 9, "format": "json"}]=0 dumped fsmap epoch 9 v37) v1 ==== 105+0+4939 (secure 0 0 0) 0x7f82dc03d020 con 0x7f82e80ff200 2026-03-09T20:54:58.758 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:54:58.758 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":9,"btime":"1970-01-01T00:00:00:000000+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode 
in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[],"filesystems":[{"mdsmap":{"epoch":9,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:45:07.063847+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":2,"in":[0,1],"up":{"mds_0":14476,"mds_1":24291},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14476":{"gid":14476,"name":"cephfs.vm07.rovdbp","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.107:6827/2216764941","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2216764941},{"type":"v1","addr":"192.168.123.107:6827","nonce":2216764941}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned 
encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}},"gid_14490":{"gid":14490,"name":"cephfs.vm07.potfau","rank":1,"incarnation":0,"state":"up:standby-replay","state_seq":1,"addr":"192.168.123.107:6829/3289699342","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3289699342},{"type":"v1","addr":"192.168.123.107:6829","nonce":3289699342}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}},"gid_14498":{"gid":14498,"name":"cephfs.vm10.hzyuyq","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":1,"addr":"192.168.123.110:6827/3212743251","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":3212743251},{"type":"v1","addr":"192.168.123.110:6827","nonce":3212743251}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}},"gid_24291":{"gid":24291,"name":"cephfs.vm10.qpltwp","rank":1,"incarnation":6,"state":"up:active","state_seq":2,"addr":"192.168.123.110:6825/61492274","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":61492274},{"type":"v1","addr":"192.168.123.110:6825","nonce":61492274}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:54:58.758 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 9 2026-03-09T20:54:58.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 -- 192.168.123.107:0/903902370 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f82b8077720 msgr2=0x7f82b8079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:58.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f82b8077720 0x7f82b8079be0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f82d0009c50 tx=0x7f82d0009290 comp rx=0 tx=0).stop 2026-03-09T20:54:58.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 -- 192.168.123.107:0/903902370 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f82e80ff200 msgr2=0x7f82e81a0a30 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T20:54:58.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f82e80ff200 0x7f82e81a0a30 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7f82dc02f800 tx=0x7f82dc031d60 comp rx=0 tx=0).stop 2026-03-09T20:54:58.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 -- 192.168.123.107:0/903902370 shutdown_connections 2026-03-09T20:54:58.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f82b8077720 0x7f82b8079be0 secure :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f82d0009c50 tx=0x7f82d0009290 comp rx=0 tx=0).stop 2026-03-09T20:54:58.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f82e80ff200 0x7f82e81a0a30 secure :-1 s=CLOSED pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7f82dc02f800 tx=0x7f82dc031d60 comp rx=0 tx=0).stop 2026-03-09T20:54:58.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 --2- 192.168.123.107:0/903902370 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f82e80fe850 0x7f82e81a04f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:58.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 -- 192.168.123.107:0/903902370 >> 192.168.123.107:0/903902370 conn(0x7f82e80fa4a0 msgr2=0x7f82e80fbb80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:58.760 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 -- 192.168.123.107:0/903902370 shutdown_connections 2026-03-09T20:54:58.760 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:58.759+0000 7f82e7577640 1 -- 192.168.123.107:0/903902370 wait complete. 2026-03-09T20:54:58.827 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 10 2026-03-09T20:54:58.970 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:54:59.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:58 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/1628354742' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T20:54:59.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.201+0000 7fb37478c640 1 -- 192.168.123.107:0/2252503513 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb36c073510 msgr2=0x7fb36c0738f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:59.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.201+0000 7fb37478c640 1 --2- 192.168.123.107:0/2252503513 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb36c073510 0x7fb36c0738f0 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7fb35c0099b0 tx=0x7fb35c02f2b0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.202+0000 7fb37478c640 1 -- 192.168.123.107:0/2252503513 shutdown_connections 2026-03-09T20:54:59.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.202+0000 7fb37478c640 1 --2- 192.168.123.107:0/2252503513 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb36c073e30 0x7fb36c10cb80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.202+0000 7fb37478c640 1 --2- 192.168.123.107:0/2252503513 
>> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb36c073510 0x7fb36c0738f0 unknown :-1 s=CLOSED pgs=169 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.202+0000 7fb37478c640 1 -- 192.168.123.107:0/2252503513 >> 192.168.123.107:0/2252503513 conn(0x7fb36c0fc460 msgr2=0x7fb36c0fe880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:59.204 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.203+0000 7fb37478c640 1 -- 192.168.123.107:0/2252503513 shutdown_connections 2026-03-09T20:54:59.204 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.203+0000 7fb37478c640 1 -- 192.168.123.107:0/2252503513 wait complete. 2026-03-09T20:54:59.204 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.203+0000 7fb37478c640 1 Processor -- start 2026-03-09T20:54:59.204 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.203+0000 7fb37478c640 1 -- start start 2026-03-09T20:54:59.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.204+0000 7fb37478c640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb36c073510 0x7fb36c1a06c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:59.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.204+0000 7fb37478c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb36c073e30 0x7fb36c1a0c00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:59.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.204+0000 7fb37478c640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb36c19a7b0 con 0x7fb36c073e30 2026-03-09T20:54:59.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.204+0000 7fb372501640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 
conn(0x7fb36c073510 0x7fb36c1a06c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:59.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.204+0000 7fb372501640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb36c073510 0x7fb36c1a06c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:51376/0 (socket says 192.168.123.107:51376) 2026-03-09T20:54:59.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.204+0000 7fb372501640 1 -- 192.168.123.107:0/278648884 learned_addr learned my addr 192.168.123.107:0/278648884 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:54:59.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.204+0000 7fb371d00640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb36c073e30 0x7fb36c1a0c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:59.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.204+0000 7fb37478c640 1 -- 192.168.123.107:0/278648884 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb36c19a920 con 0x7fb36c073510 2026-03-09T20:54:59.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.205+0000 7fb371d00640 1 -- 192.168.123.107:0/278648884 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb36c073510 msgr2=0x7fb36c1a06c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:59.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.205+0000 7fb371d00640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb36c073510 0x7fb36c1a06c0 unknown 
:-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.205+0000 7fb371d00640 1 -- 192.168.123.107:0/278648884 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb35c009660 con 0x7fb36c073e30 2026-03-09T20:54:59.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.205+0000 7fb371d00640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb36c073e30 0x7fb36c1a0c00 secure :-1 s=READY pgs=170 cs=0 l=1 rev1=1 crypto rx=0x7fb36000e970 tx=0x7fb36000ee40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:59.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.205+0000 7fb35b7fe640 1 -- 192.168.123.107:0/278648884 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb36000ccb0 con 0x7fb36c073e30 2026-03-09T20:54:59.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.205+0000 7fb35b7fe640 1 -- 192.168.123.107:0/278648884 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fb360004590 con 0x7fb36c073e30 2026-03-09T20:54:59.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.205+0000 7fb37478c640 1 -- 192.168.123.107:0/278648884 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb36c19ac00 con 0x7fb36c073e30 2026-03-09T20:54:59.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.206+0000 7fb35b7fe640 1 -- 192.168.123.107:0/278648884 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb360010640 con 0x7fb36c073e30 2026-03-09T20:54:59.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.207+0000 7fb37478c640 1 -- 192.168.123.107:0/278648884 --> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb36c19b150 con 0x7fb36c073e30 2026-03-09T20:54:59.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.207+0000 7fb35b7fe640 1 -- 192.168.123.107:0/278648884 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fb360010860 con 0x7fb36c073e30 2026-03-09T20:54:59.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.207+0000 7fb37478c640 1 -- 192.168.123.107:0/278648884 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb36c074c50 con 0x7fb36c073e30 2026-03-09T20:54:59.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.209+0000 7fb35b7fe640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb340077720 0x7fb340079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:59.212 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.209+0000 7fb35b7fe640 1 -- 192.168.123.107:0/278648884 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fb360014070 con 0x7fb36c073e30 2026-03-09T20:54:59.213 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.212+0000 7fb35b7fe640 1 -- 192.168.123.107:0/278648884 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fb360062d30 con 0x7fb36c073e30 2026-03-09T20:54:59.213 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.212+0000 7fb372501640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb340077720 0x7fb340079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T20:54:59.215 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.214+0000 7fb372501640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb340077720 0x7fb340079be0 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fb35c005ec0 tx=0x7fb35c03a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:59.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.324+0000 7fb37478c640 1 -- 192.168.123.107:0/278648884 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 10, "format": "json"} v 0) v1 -- 0x7fb36c19bcc0 con 0x7fb36c073e30 2026-03-09T20:54:59.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.327+0000 7fb35b7fe640 1 -- 192.168.123.107:0/278648884 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 10, "format": "json"}]=0 dumped fsmap epoch 10 v37) v1 ==== 107+0+4939 (secure 0 0 0) 0x7fb360062480 con 0x7fb36c073e30 2026-03-09T20:54:59.328 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:54:59.329 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":10,"btime":"1970-01-01T00:00:00:000000+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[],"filesystems":[{"mdsmap":{"epoch":10,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:45:08.091502+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":2,"in":[0,1],"up":{"mds_0":14476,"mds_1":24291},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14476":{"gid":14476,"name":"cephfs.vm07.rovdbp","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.107:6827/2216764941","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2216764941},{"type":"v1","addr":"192.168.123.107:6827","nonce":2216764941}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}},"gid_14490":{"gid":14490,"name":"cephfs.vm07.potfau","rank":1,"incarnation":0,"state":"up:standby-replay","state_seq":2,"addr":"192.168.123.107:6829/3289699342","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3289699342},{"type":"v1","addr":"192.168.123.107:6829","nonce":3289699342}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}},"gid_14498":{"gid":14498,"name":"cephfs.vm10.hzyuyq","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":1,"addr":"192.168.123.110:6827/3212743251","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":3212743251},{"type":"v1","addr":"192.168.123.110:6827","nonce":3212743251}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}},"gid_24291":{"gid":24291,"name":"cephfs.vm10.qpltwp","rank":1,"incarnation":6,"state":"up:active","state_seq":3,"addr":"192.168.123.110:6825/61492274","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":61492274},{"type":"v1","addr":"192.168.123.110:6825","nonce":61492274}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:54:59.329 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 10 2026-03-09T20:54:59.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.330+0000 7fb37478c640 1 -- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb340077720 msgr2=0x7fb340079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:59.331 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.330+0000 7fb37478c640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb340077720 0x7fb340079be0 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fb35c005ec0 tx=0x7fb35c03a040 comp rx=0 tx=0).stop 2026-03-09T20:54:59.331 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.330+0000 7fb37478c640 1 -- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb36c073e30 msgr2=0x7fb36c1a0c00 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T20:54:59.331 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.330+0000 7fb37478c640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb36c073e30 0x7fb36c1a0c00 secure :-1 s=READY pgs=170 cs=0 l=1 rev1=1 crypto rx=0x7fb36000e970 tx=0x7fb36000ee40 comp rx=0 tx=0).stop 2026-03-09T20:54:59.331 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.330+0000 7fb37478c640 1 -- 192.168.123.107:0/278648884 shutdown_connections 2026-03-09T20:54:59.331 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.330+0000 7fb37478c640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fb340077720 0x7fb340079be0 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.331 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.331+0000 7fb37478c640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb36c073e30 0x7fb36c1a0c00 unknown :-1 s=CLOSED pgs=170 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.332 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.331+0000 7fb37478c640 1 --2- 192.168.123.107:0/278648884 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fb36c073510 0x7fb36c1a06c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.332 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.331+0000 7fb37478c640 1 -- 192.168.123.107:0/278648884 >> 192.168.123.107:0/278648884 conn(0x7fb36c0fc460 msgr2=0x7fb36c10c180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:59.332 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.331+0000 7fb37478c640 1 -- 192.168.123.107:0/278648884 shutdown_connections 2026-03-09T20:54:59.332 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.331+0000 
7fb37478c640 1 -- 192.168.123.107:0/278648884 wait complete. 2026-03-09T20:54:59.395 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 11 2026-03-09T20:54:59.549 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:54:59.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.776+0000 7fa4c0d8c640 1 -- 192.168.123.107:0/238295980 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa4bc075720 msgr2=0x7fa4bc075b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:59.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.776+0000 7fa4c0d8c640 1 --2- 192.168.123.107:0/238295980 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa4bc075720 0x7fa4bc075b00 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7fa4a40099b0 tx=0x7fa4a402f220 comp rx=0 tx=0).stop 2026-03-09T20:54:59.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.777+0000 7fa4c0d8c640 1 -- 192.168.123.107:0/238295980 shutdown_connections 2026-03-09T20:54:59.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.777+0000 7fa4c0d8c640 1 --2- 192.168.123.107:0/238295980 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4bc076040 0x7fa4bc111330 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.777+0000 7fa4c0d8c640 1 --2- 192.168.123.107:0/238295980 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa4bc075720 0x7fa4bc075b00 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.777+0000 7fa4c0d8c640 1 -- 
192.168.123.107:0/238295980 >> 192.168.123.107:0/238295980 conn(0x7fa4bc0fe710 msgr2=0x7fa4bc100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:59.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.778+0000 7fa4c0d8c640 1 -- 192.168.123.107:0/238295980 shutdown_connections 2026-03-09T20:54:59.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.778+0000 7fa4c0d8c640 1 -- 192.168.123.107:0/238295980 wait complete. 2026-03-09T20:54:59.779 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.778+0000 7fa4c0d8c640 1 Processor -- start 2026-03-09T20:54:59.779 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.778+0000 7fa4c0d8c640 1 -- start start 2026-03-09T20:54:59.779 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.778+0000 7fa4c0d8c640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa4bc075720 0x7fa4bc19ed90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:59.779 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.778+0000 7fa4c0d8c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4bc076040 0x7fa4bc19f2d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:59.779 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.778+0000 7fa4c0d8c640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa4bc19f960 con 0x7fa4bc076040 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.778+0000 7fa4c0d8c640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa4bc1a36d0 con 0x7fa4bc075720 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.779+0000 7fa4ba575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa4bc075720 0x7fa4bc19ed90 unknown :-1 s=BANNER_CONNECTING 
pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.779+0000 7fa4ba575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa4bc075720 0x7fa4bc19ed90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:51396/0 (socket says 192.168.123.107:51396) 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.779+0000 7fa4ba575640 1 -- 192.168.123.107:0/2651005320 learned_addr learned my addr 192.168.123.107:0/2651005320 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.779+0000 7fa4ba575640 1 -- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4bc076040 msgr2=0x7fa4bc19f2d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.779+0000 7fa4b9d74640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4bc076040 0x7fa4bc19f2d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.779+0000 7fa4ba575640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4bc076040 0x7fa4bc19f2d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.779+0000 7fa4ba575640 1 -- 192.168.123.107:0/2651005320 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 
-- 0x7fa4b0009590 con 0x7fa4bc075720 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.779+0000 7fa4b9d74640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4bc076040 0x7fa4bc19f2d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.779+0000 7fa4ba575640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa4bc075720 0x7fa4bc19ed90 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fa4a4009980 tx=0x7fa4a4004290 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.780+0000 7fa4a37fe640 1 -- 192.168.123.107:0/2651005320 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa4a403d070 con 0x7fa4bc075720 2026-03-09T20:54:59.780 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.780+0000 7fa4c0d8c640 1 -- 192.168.123.107:0/2651005320 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa4a4009660 con 0x7fa4bc075720 2026-03-09T20:54:59.781 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.780+0000 7fa4c0d8c640 1 -- 192.168.123.107:0/2651005320 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa4bc1a3cb0 con 0x7fa4bc075720 2026-03-09T20:54:59.781 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.780+0000 7fa4a37fe640 1 -- 192.168.123.107:0/2651005320 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fa4a4004400 con 0x7fa4bc075720 2026-03-09T20:54:59.781 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.780+0000 7fa4a37fe640 1 -- 
192.168.123.107:0/2651005320 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa4a4041620 con 0x7fa4bc075720 2026-03-09T20:54:59.781 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.781+0000 7fa4a17fa640 1 -- 192.168.123.107:0/2651005320 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa4bc076e60 con 0x7fa4bc075720 2026-03-09T20:54:59.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.782+0000 7fa4a37fe640 1 -- 192.168.123.107:0/2651005320 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fa4a402fc90 con 0x7fa4bc075720 2026-03-09T20:54:59.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.782+0000 7fa4a37fe640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa4940778e0 0x7fa494079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:54:59.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.782+0000 7fa4a37fe640 1 -- 192.168.123.107:0/2651005320 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fa4a40bea70 con 0x7fa4bc075720 2026-03-09T20:54:59.783 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.782+0000 7fa4b9d74640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa4940778e0 0x7fa494079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:54:59.784 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.783+0000 7fa4b9d74640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa4940778e0 0x7fa494079da0 secure :-1 
s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fa4bc1a0340 tx=0x7fa4b0009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:54:59.785 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.784+0000 7fa4a37fe640 1 -- 192.168.123.107:0/2651005320 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa4a4087120 con 0x7fa4bc075720 2026-03-09T20:54:59.831 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:59 vm07.local ceph-mon[112105]: pgmap v239: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:54:59.831 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:59 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/903902370' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 9, "format": "json"}]: dispatch 2026-03-09T20:54:59.831 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:54:59 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/278648884' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 10, "format": "json"}]: dispatch 2026-03-09T20:54:59.902 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.901+0000 7fa4a17fa640 1 -- 192.168.123.107:0/2651005320 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 11, "format": "json"} v 0) v1 -- 0x7fa4bc1a3890 con 0x7fa4bc075720 2026-03-09T20:54:59.903 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.902+0000 7fa4a37fe640 1 -- 192.168.123.107:0/2651005320 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 11, "format": "json"}]=0 dumped fsmap epoch 11 v37) v1 ==== 107+0+4938 (secure 0 0 0) 0x7fa4a4086870 con 0x7fa4bc075720 2026-03-09T20:54:59.903 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:54:59.903 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":11,"btime":"1970-01-01T00:00:00:000000+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[],"filesystems":[{"mdsmap":{"epoch":11,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:45:12.822947+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":2,"in":[0,1],"up":{"mds_0":14476,"mds_1":24291},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14476":{"gid":14476,"name":"cephfs.vm07.rovdbp","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.107:6827/2216764941","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2216764941},{"type":"v1","addr":"192.168.123.107:6827","nonce":2216764941}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}},"gid_14490":{"gid":14490,"name":"cephfs.vm07.potfau","rank":1,"incarnation":0,"state":"up:standby-replay","state_seq":2,"addr":"192.168.123.107:6829/3289699342","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3289699342},{"type":"v1","addr":"192.168.123.107:6829","nonce":3289699342}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}},"gid_14498":{"gid":14498,"name":"cephfs.vm10.hzyuyq","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":3,"addr":"192.168.123.110:6827/3212743251","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":3212743251},{"type":"v1","addr":"192.168.123.110:6827","nonce":3212743251}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}},"gid_24291":{"gid":24291,"name":"cephfs.vm10.qpltwp","rank":1,"incarnation":6,"state":"up:active","state_seq":3,"addr":"192.168.123.110:6825/61492274","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":61492274},{"type":"v1","addr":"192.168.123.110:6825","nonce":61492274}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:54:59.904 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 11 2026-03-09T20:54:59.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 -- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa4940778e0 msgr2=0x7fa494079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:59.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa4940778e0 0x7fa494079da0 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fa4bc1a0340 tx=0x7fa4b0009290 comp rx=0 tx=0).stop 2026-03-09T20:54:59.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 -- 192.168.123.107:0/2651005320 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa4bc075720 msgr2=0x7fa4bc19ed90 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:59.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa4bc075720 0x7fa4bc19ed90 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fa4a4009980 tx=0x7fa4a4004290 comp rx=0 tx=0).stop 2026-03-09T20:54:59.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 -- 192.168.123.107:0/2651005320 shutdown_connections 2026-03-09T20:54:59.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa4940778e0 0x7fa494079da0 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4bc076040 0x7fa4bc19f2d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:54:59.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 --2- 192.168.123.107:0/2651005320 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa4bc075720 0x7fa4bc19ed90 secure :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fa4a4009980 tx=0x7fa4a4004290 comp rx=0 tx=0).stop 2026-03-09T20:54:59.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 -- 192.168.123.107:0/2651005320 >> 192.168.123.107:0/2651005320 conn(0x7fa4bc0fe710 msgr2=0x7fa4bc0ffdf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:59.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 -- 192.168.123.107:0/2651005320 shutdown_connections 2026-03-09T20:54:59.906 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:54:59.905+0000 7fa4a17fa640 1 -- 192.168.123.107:0/2651005320 wait complete. 2026-03-09T20:54:59.966 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 12 2026-03-09T20:55:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:59 vm10.local ceph-mon[103526]: pgmap v239: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:55:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:59 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/903902370' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 9, "format": "json"}]: dispatch 2026-03-09T20:55:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:54:59 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/278648884' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 10, "format": "json"}]: dispatch 2026-03-09T20:55:00.123 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:00.382 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.381+0000 7fe66dc47640 1 -- 192.168.123.107:0/2803015283 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe668075720 msgr2=0x7fe668075b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:00.382 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.381+0000 7fe66dc47640 1 --2- 192.168.123.107:0/2803015283 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe668075720 0x7fe668075b00 secure :-1 s=READY pgs=171 cs=0 l=1 rev1=1 crypto rx=0x7fe6580099b0 tx=0x7fe65802f220 comp rx=0 tx=0).stop 2026-03-09T20:55:00.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.381+0000 7fe66dc47640 1 -- 192.168.123.107:0/2803015283 
shutdown_connections 2026-03-09T20:55:00.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.381+0000 7fe66dc47640 1 --2- 192.168.123.107:0/2803015283 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe668076040 0x7fe668111330 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:00.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.381+0000 7fe66dc47640 1 --2- 192.168.123.107:0/2803015283 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe668075720 0x7fe668075b00 unknown :-1 s=CLOSED pgs=171 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:00.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.381+0000 7fe66dc47640 1 -- 192.168.123.107:0/2803015283 >> 192.168.123.107:0/2803015283 conn(0x7fe6680fe710 msgr2=0x7fe668100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:00.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.382+0000 7fe66dc47640 1 -- 192.168.123.107:0/2803015283 shutdown_connections 2026-03-09T20:55:00.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.382+0000 7fe66dc47640 1 -- 192.168.123.107:0/2803015283 wait complete. 
2026-03-09T20:55:00.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.382+0000 7fe66dc47640 1 Processor -- start 2026-03-09T20:55:00.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.382+0000 7fe66dc47640 1 -- start start 2026-03-09T20:55:00.383 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.382+0000 7fe66dc47640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe668075720 0x7fe66819ee70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.382+0000 7fe66dc47640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe668076040 0x7fe66819f3b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.382+0000 7fe66dc47640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe66819f9b0 con 0x7fe668076040 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.382+0000 7fe66dc47640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe66819fb20 con 0x7fe668075720 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.383+0000 7fe6677fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe668075720 0x7fe66819ee70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.383+0000 7fe6677fe640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe668075720 0x7fe66819ee70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:51412/0 (socket says 192.168.123.107:51412) 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.383+0000 7fe6677fe640 1 -- 192.168.123.107:0/4053504421 learned_addr learned my addr 192.168.123.107:0/4053504421 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.383+0000 7fe666ffd640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe668076040 0x7fe66819f3b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.383+0000 7fe6677fe640 1 -- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe668076040 msgr2=0x7fe66819f3b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.383+0000 7fe6677fe640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe668076040 0x7fe66819f3b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.383+0000 7fe6677fe640 1 -- 192.168.123.107:0/4053504421 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe658009660 con 0x7fe668075720 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.383+0000 7fe666ffd640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe668076040 0x7fe66819f3b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.383+0000 7fe6677fe640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe668075720 0x7fe66819ee70 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7fe658002410 tx=0x7fe658004290 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:00.384 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.384+0000 7fe664ff9640 1 -- 192.168.123.107:0/4053504421 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe65803d070 con 0x7fe668075720 2026-03-09T20:55:00.385 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.384+0000 7fe66dc47640 1 -- 192.168.123.107:0/4053504421 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe6681a38a0 con 0x7fe668075720 2026-03-09T20:55:00.385 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.384+0000 7fe66dc47640 1 -- 192.168.123.107:0/4053504421 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe6681a3d90 con 0x7fe668075720 2026-03-09T20:55:00.385 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.384+0000 7fe664ff9640 1 -- 192.168.123.107:0/4053504421 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fe6580043b0 con 0x7fe668075720 2026-03-09T20:55:00.385 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.384+0000 7fe664ff9640 1 -- 192.168.123.107:0/4053504421 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe658041720 con 0x7fe668075720 2026-03-09T20:55:00.386 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.385+0000 7fe64a7fc640 1 -- 192.168.123.107:0/4053504421 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7fe668076e60 con 0x7fe668075720 2026-03-09T20:55:00.388 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.386+0000 7fe664ff9640 1 -- 192.168.123.107:0/4053504421 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fe658038730 con 0x7fe668075720 2026-03-09T20:55:00.388 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.386+0000 7fe664ff9640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe640077890 0x7fe640079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:00.388 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.386+0000 7fe664ff9640 1 -- 192.168.123.107:0/4053504421 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fe6580bec00 con 0x7fe668075720 2026-03-09T20:55:00.388 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.387+0000 7fe666ffd640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe640077890 0x7fe640079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:00.388 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.387+0000 7fe666ffd640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe640077890 0x7fe640079d50 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fe654005f10 tx=0x7fe654005ea0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:00.389 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.388+0000 7fe664ff9640 1 -- 192.168.123.107:0/4053504421 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe6580872b0 con 0x7fe668075720 2026-03-09T20:55:00.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.501+0000 7fe64a7fc640 1 -- 192.168.123.107:0/4053504421 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 12, "format": "json"} v 0) v1 -- 0x7fe668075b00 con 0x7fe668075720 2026-03-09T20:55:00.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.504+0000 7fe664ff9640 1 -- 192.168.123.107:0/4053504421 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 12, "format": "json"}]=0 dumped fsmap epoch 12 v37) v1 ==== 107+0+3374 (secure 0 0 0) 0x7fe658046d40 con 0x7fe668075720 2026-03-09T20:55:00.506 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:00.506 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":12,"btime":"2026-03-09T20:52:07:790605+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[],"filesystems":[{"mdsmap":{"epoch":12,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:07.790604+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":2,"in":[0,1],"up":{"mds_0":14476,"mds_1":24291},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14476":{"gid":14476,"name":"cephfs.vm07.rovdbp","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.107:6827/2216764941","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2216764941},{"type":"v1","addr":"192.168.123.107:6827","nonce":2216764941}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}},"gid_24291":{"gid":24291,"name":"cephfs.vm10.qpltwp","rank":1,"incarnation":6,"state":"up:active","state_seq":3,"addr":"192.168.123.110:6825/61492274","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":61492274},{"type":"v1","addr":"192.168.123.110:6825","nonce":61492274}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":14476,"qdb_cluster":[14476,24291]},"id":1}]} 2026-03-09T20:55:00.506 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 12 2026-03-09T20:55:00.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.507+0000 7fe64a7fc640 1 -- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe640077890 msgr2=0x7fe640079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:00.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.507+0000 7fe64a7fc640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe640077890 0x7fe640079d50 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fe654005f10 tx=0x7fe654005ea0 comp rx=0 tx=0).stop 2026-03-09T20:55:00.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.507+0000 7fe64a7fc640 1 -- 192.168.123.107:0/4053504421 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe668075720 msgr2=0x7fe66819ee70 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:00.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.507+0000 7fe64a7fc640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe668075720 0x7fe66819ee70 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7fe658002410 tx=0x7fe658004290 comp rx=0 tx=0).stop 2026-03-09T20:55:00.508 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.507+0000 7fe64a7fc640 1 -- 192.168.123.107:0/4053504421 shutdown_connections 2026-03-09T20:55:00.509 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.508+0000 7fe64a7fc640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fe640077890 0x7fe640079d50 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:00.509 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.508+0000 7fe64a7fc640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fe668076040 0x7fe66819f3b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:00.509 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.508+0000 7fe64a7fc640 1 --2- 192.168.123.107:0/4053504421 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fe668075720 0x7fe66819ee70 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:00.509 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.508+0000 7fe64a7fc640 1 -- 192.168.123.107:0/4053504421 >> 192.168.123.107:0/4053504421 conn(0x7fe6680fe710 msgr2=0x7fe6680ffe30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:00.509 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.508+0000 7fe64a7fc640 1 -- 192.168.123.107:0/4053504421 shutdown_connections 2026-03-09T20:55:00.509 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.508+0000 7fe64a7fc640 1 -- 192.168.123.107:0/4053504421 wait complete. 2026-03-09T20:55:00.576 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 12 2026-03-09T20:55:00.577 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 13 2026-03-09T20:55:00.725 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:00.761 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:00 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/2651005320' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 11, "format": "json"}]: dispatch 2026-03-09T20:55:00.761 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:00 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/4053504421' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 12, "format": "json"}]: dispatch 2026-03-09T20:55:00.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.953+0000 7f61f44c2640 1 -- 192.168.123.107:0/1792081339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61ec0fea80 msgr2=0x7f61ec0feee0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:00.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.953+0000 7f61f44c2640 1 --2- 192.168.123.107:0/1792081339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61ec0fea80 0x7f61ec0feee0 secure :-1 s=READY pgs=172 cs=0 l=1 rev1=1 crypto rx=0x7f61dc009a00 tx=0x7f61dc02f270 comp rx=0 tx=0).stop 2026-03-09T20:55:00.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.954+0000 7f61f44c2640 1 -- 192.168.123.107:0/1792081339 shutdown_connections 2026-03-09T20:55:00.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.954+0000 7f61f44c2640 1 --2- 
192.168.123.107:0/1792081339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61ec0fea80 0x7f61ec0feee0 unknown :-1 s=CLOSED pgs=172 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:00.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.954+0000 7f61f44c2640 1 --2- 192.168.123.107:0/1792081339 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f61ec106650 0x7f61ec106a30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:00.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.954+0000 7f61f44c2640 1 -- 192.168.123.107:0/1792081339 >> 192.168.123.107:0/1792081339 conn(0x7f61ec0fa8f0 msgr2=0x7f61ec0fcd10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:00.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.954+0000 7f61f44c2640 1 -- 192.168.123.107:0/1792081339 shutdown_connections 2026-03-09T20:55:00.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.954+0000 7f61f44c2640 1 -- 192.168.123.107:0/1792081339 wait complete. 
2026-03-09T20:55:00.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.955+0000 7f61f44c2640 1 Processor -- start 2026-03-09T20:55:00.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.955+0000 7f61f44c2640 1 -- start start 2026-03-09T20:55:00.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.955+0000 7f61f44c2640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f61ec0fea80 0x7f61ec0ffca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:00.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.955+0000 7f61f44c2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61ec106650 0x7f61ec1001e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:00.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.955+0000 7f61f44c2640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f61ec101d00 con 0x7f61ec106650 2026-03-09T20:55:00.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.955+0000 7f61f44c2640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f61ec100750 con 0x7f61ec0fea80 2026-03-09T20:55:00.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.955+0000 7f61f2237640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f61ec0fea80 0x7f61ec0ffca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:00.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.955+0000 7f61f2237640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f61ec0fea80 0x7f61ec0ffca0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:51440/0 (socket says 192.168.123.107:51440) 2026-03-09T20:55:00.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.955+0000 7f61f2237640 1 -- 192.168.123.107:0/4244130037 learned_addr learned my addr 192.168.123.107:0/4244130037 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:00.956 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.955+0000 7f61f1a36640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61ec106650 0x7f61ec1001e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:00.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.956+0000 7f61f2237640 1 -- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61ec106650 msgr2=0x7f61ec1001e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:00.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.956+0000 7f61f2237640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61ec106650 0x7f61ec1001e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:00.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.956+0000 7f61f2237640 1 -- 192.168.123.107:0/4244130037 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f61dc009660 con 0x7f61ec0fea80 2026-03-09T20:55:00.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.956+0000 7f61f1a36640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61ec106650 0x7f61ec1001e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T20:55:00.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.956+0000 7f61f2237640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f61ec0fea80 0x7f61ec0ffca0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f61e000d8d0 tx=0x7f61e000dda0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:00.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.956+0000 7f61d37fe640 1 -- 192.168.123.107:0/4244130037 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f61e0004490 con 0x7f61ec0fea80 2026-03-09T20:55:00.957 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.956+0000 7f61f44c2640 1 -- 192.168.123.107:0/4244130037 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f61ec100a30 con 0x7f61ec0fea80 2026-03-09T20:55:00.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.956+0000 7f61f44c2640 1 -- 192.168.123.107:0/4244130037 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f61ec1a4ad0 con 0x7f61ec0fea80 2026-03-09T20:55:00.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.956+0000 7f61d37fe640 1 -- 192.168.123.107:0/4244130037 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f61e000bd00 con 0x7f61ec0fea80 2026-03-09T20:55:00.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.956+0000 7f61d37fe640 1 -- 192.168.123.107:0/4244130037 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f61e0010460 con 0x7f61ec0fea80 2026-03-09T20:55:00.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.958+0000 7f61f44c2640 1 -- 192.168.123.107:0/4244130037 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f61b4005350 con 0x7f61ec0fea80 2026-03-09T20:55:00.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.958+0000 7f61d37fe640 1 -- 192.168.123.107:0/4244130037 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f61e00027e0 con 0x7f61ec0fea80 2026-03-09T20:55:00.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.958+0000 7f61d37fe640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f61c00778e0 0x7f61c0079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:00.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.958+0000 7f61d37fe640 1 -- 192.168.123.107:0/4244130037 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f61e0099c50 con 0x7f61ec0fea80 2026-03-09T20:55:00.959 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.958+0000 7f61f1a36640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f61c00778e0 0x7f61c0079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:00.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.959+0000 7f61f1a36640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f61c00778e0 0x7f61c0079da0 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f61ec101420 tx=0x7f61dc005e50 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:00.962 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:00.961+0000 7f61d37fe640 1 -- 192.168.123.107:0/4244130037 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f61e0062330 con 0x7f61ec0fea80 2026-03-09T20:55:01.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:00 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/2651005320' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 11, "format": "json"}]: dispatch 2026-03-09T20:55:01.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:00 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/4053504421' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 12, "format": "json"}]: dispatch 2026-03-09T20:55:01.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.070+0000 7f61f44c2640 1 -- 192.168.123.107:0/4244130037 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 13, "format": "json"} v 0) v1 -- 0x7f61b40051c0 con 0x7f61ec0fea80 2026-03-09T20:55:01.075 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.073+0000 7f61d37fe640 1 -- 192.168.123.107:0/4244130037 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 13, "format": "json"}]=0 dumped fsmap epoch 13 v37) v1 ==== 107+0+4939 (secure 0 0 0) 0x7f61e0061a80 con 0x7f61ec0fea80 2026-03-09T20:55:01.075 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:01.075 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":13,"btime":"2026-03-09T20:52:08:802220+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34316,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/3625324292","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3625324292},{"type":"v1","addr":"192.168.123.107:6829","nonce":3625324292}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":13}],"filesystems":[{"mdsmap":{"epoch":13,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:07.883688+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0,1],"up":{"mds_0":14476,"mds_1":24291},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14476":{"gid":14476,"name":"cephfs.vm07.rovdbp","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.107:6827/2216764941","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2216764941},{"type":"v1","addr":"192.168.123.107:6827","nonce":2216764941}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}},"gid_24291":{"gid":24291,"name":"cephfs.vm10.qpltwp","rank":1,"incarnation":6,"state":"up:active","state_seq":3,"addr":"192.168.123.110:6825/61492274","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":61492274},{"type":"v1","addr":"192.168.123.110:6825","nonce":61492274}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":14476,"qdb_cluster":[24291,14476]},"id":1}]} 2026-03-09T20:55:01.075 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 13 2026-03-09T20:55:01.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.076+0000 7f61f44c2640 1 -- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f61c00778e0 msgr2=0x7f61c0079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:01.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.076+0000 7f61f44c2640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f61c00778e0 0x7f61c0079da0 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f61ec101420 tx=0x7f61dc005e50 comp rx=0 tx=0).stop 2026-03-09T20:55:01.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.076+0000 7f61f44c2640 1 -- 192.168.123.107:0/4244130037 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f61ec0fea80 msgr2=0x7f61ec0ffca0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:01.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.076+0000 7f61f44c2640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f61ec0fea80 0x7f61ec0ffca0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f61e000d8d0 tx=0x7f61e000dda0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.077 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.077+0000 7f61f44c2640 1 -- 192.168.123.107:0/4244130037 shutdown_connections 2026-03-09T20:55:01.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.077+0000 7f61f44c2640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f61c00778e0 0x7f61c0079da0 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.077+0000 7f61f44c2640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f61ec106650 0x7f61ec1001e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.077+0000 7f61f44c2640 1 --2- 192.168.123.107:0/4244130037 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f61ec0fea80 0x7f61ec0ffca0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.077+0000 7f61f44c2640 1 -- 192.168.123.107:0/4244130037 >> 192.168.123.107:0/4244130037 conn(0x7f61ec0fa8f0 msgr2=0x7f61ec10a610 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:01.078 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.077+0000 7f61f44c2640 1 -- 192.168.123.107:0/4244130037 shutdown_connections 2026-03-09T20:55:01.078 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.077+0000 7f61f44c2640 1 -- 192.168.123.107:0/4244130037 wait complete. 2026-03-09T20:55:01.133 DEBUG:tasks.fs:max_mds reduced in epoch 13 2026-03-09T20:55:01.133 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 13 2026-03-09T20:55:01.133 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 14 2026-03-09T20:55:01.277 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:01.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.502+0000 7f565d41a640 1 -- 192.168.123.107:0/1662079214 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5658073e30 msgr2=0x7f565810cb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:01.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.502+0000 7f565d41a640 1 --2- 192.168.123.107:0/1662079214 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5658073e30 0x7f565810cb80 secure :-1 s=READY pgs=173 cs=0 l=1 rev1=1 crypto rx=0x7f56440099b0 tx=0x7f564402f2b0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.502+0000 7f565d41a640 1 -- 192.168.123.107:0/1662079214 shutdown_connections 2026-03-09T20:55:01.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.502+0000 7f565d41a640 1 --2- 192.168.123.107:0/1662079214 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5658073e30 0x7f565810cb80 unknown :-1 s=CLOSED pgs=173 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.502+0000 7f565d41a640 1 --2- 192.168.123.107:0/1662079214 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5658073510 
0x7f56580738f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.502+0000 7f565d41a640 1 -- 192.168.123.107:0/1662079214 >> 192.168.123.107:0/1662079214 conn(0x7f56580fc460 msgr2=0x7f56580fe880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:01.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.502+0000 7f565d41a640 1 -- 192.168.123.107:0/1662079214 shutdown_connections 2026-03-09T20:55:01.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.502+0000 7f565d41a640 1 -- 192.168.123.107:0/1662079214 wait complete. 2026-03-09T20:55:01.504 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.503+0000 7f565d41a640 1 Processor -- start 2026-03-09T20:55:01.504 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.503+0000 7f565d41a640 1 -- start start 2026-03-09T20:55:01.504 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.503+0000 7f565d41a640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5658073510 0x7f56581a0660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:01.504 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.503+0000 7f5656ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5658073510 0x7f56581a0660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:01.504 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.504+0000 7f5656ffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5658073510 0x7f56581a0660 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45386/0 (socket says 192.168.123.107:45386) 2026-03-09T20:55:01.504 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.504+0000 7f565d41a640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5658073e30 0x7f56581a0ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:01.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.504+0000 7f565d41a640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f565819a750 con 0x7f5658073510 2026-03-09T20:55:01.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.504+0000 7f565d41a640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f565819a8c0 con 0x7f5658073e30 2026-03-09T20:55:01.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.504+0000 7f5656ffd640 1 -- 192.168.123.107:0/922694420 learned_addr learned my addr 192.168.123.107:0/922694420 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:01.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.504+0000 7f56567fc640 1 --2- 192.168.123.107:0/922694420 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5658073e30 0x7f56581a0ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:01.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.504+0000 7f5656ffd640 1 -- 192.168.123.107:0/922694420 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5658073e30 msgr2=0x7f56581a0ba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:01.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.504+0000 7f5656ffd640 1 --2- 192.168.123.107:0/922694420 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5658073e30 0x7f56581a0ba0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.505 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.504+0000 7f5656ffd640 1 -- 192.168.123.107:0/922694420 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5644009660 con 0x7f5658073510 2026-03-09T20:55:01.505 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.504+0000 7f5656ffd640 1 --2- 192.168.123.107:0/922694420 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5658073510 0x7f56581a0660 secure :-1 s=READY pgs=174 cs=0 l=1 rev1=1 crypto rx=0x7f564c00da30 tx=0x7f564c00df00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:01.507 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.505+0000 7f5633fff640 1 -- 192.168.123.107:0/922694420 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f564c004280 con 0x7f5658073510 2026-03-09T20:55:01.507 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.505+0000 7f5633fff640 1 -- 192.168.123.107:0/922694420 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f564c00be10 con 0x7f5658073510 2026-03-09T20:55:01.507 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.505+0000 7f5633fff640 1 -- 192.168.123.107:0/922694420 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f564c010460 con 0x7f5658073510 2026-03-09T20:55:01.507 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.505+0000 7f565d41a640 1 -- 192.168.123.107:0/922694420 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f565819aba0 con 0x7f5658073510 2026-03-09T20:55:01.507 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.505+0000 7f565d41a640 1 -- 192.168.123.107:0/922694420 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f565819b0f0 con 0x7f5658073510 
2026-03-09T20:55:01.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.506+0000 7f5633fff640 1 -- 192.168.123.107:0/922694420 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f564c0105c0 con 0x7f5658073510 2026-03-09T20:55:01.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.506+0000 7f565d41a640 1 -- 192.168.123.107:0/922694420 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f561c005350 con 0x7f5658073510 2026-03-09T20:55:01.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.508+0000 7f5633fff640 1 --2- 192.168.123.107:0/922694420 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f562c077890 0x7f562c079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:01.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.508+0000 7f5633fff640 1 -- 192.168.123.107:0/922694420 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f564c099c60 con 0x7f5658073510 2026-03-09T20:55:01.510 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.509+0000 7f56567fc640 1 --2- 192.168.123.107:0/922694420 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f562c077890 0x7f562c079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:01.511 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.510+0000 7f5633fff640 1 -- 192.168.123.107:0/922694420 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f564c0632f0 con 0x7f5658073510 2026-03-09T20:55:01.511 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.510+0000 7f56567fc640 1 --2- 
192.168.123.107:0/922694420 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f562c077890 0x7f562c079d50 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f565819bbc0 tx=0x7f564403a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:01.629 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.628+0000 7f565d41a640 1 -- 192.168.123.107:0/922694420 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 14, "format": "json"} v 0) v1 -- 0x7f561c0051c0 con 0x7f5658073510 2026-03-09T20:55:01.630 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.629+0000 7f5633fff640 1 -- 192.168.123.107:0/922694420 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 14, "format": "json"}]=0 dumped fsmap epoch 14 v37) v1 ==== 107+0+4935 (secure 0 0 0) 0x7f564c0026e0 con 0x7f5658073510 2026-03-09T20:55:01.630 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:01.630 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":14,"btime":"2026-03-09T20:52:08:807762+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34316,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/3625324292","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3625324292},{"type":"v1","addr":"192.168.123.107:6829","nonce":3625324292}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":13}],"filesystems":[{"mdsmap":{"epoch":14,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:08.807759+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0,1],"up":{"mds_0":14476,"mds_1":24291},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14476":{"gid":14476,"name":"cephfs.vm07.rovdbp","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.107:6827/2216764941","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2216764941},{"type":"v1","addr":"192.168.123.107:6827","nonce":2216764941}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}},"gid_24291":{"gid":24291,"name":"cephfs.vm10.qpltwp","rank":1,"incarnation":6,"state":"up:stopping","state_seq":3,"addr":"192.168.123.110:6825/61492274","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":61492274},{"type":"v1","addr":"192.168.123.110:6825","nonce":61492274}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":14476,"qdb_cluster":[14476]},"id":1}]} 2026-03-09T20:55:01.630 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 14 2026-03-09T20:55:01.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.631+0000 7f565d41a640 1 -- 192.168.123.107:0/922694420 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f562c077890 msgr2=0x7f562c079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:01.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.631+0000 7f565d41a640 1 --2- 192.168.123.107:0/922694420 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f562c077890 0x7f562c079d50 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f565819bbc0 tx=0x7f564403a040 comp rx=0 tx=0).stop 2026-03-09T20:55:01.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.632+0000 7f565d41a640 1 -- 192.168.123.107:0/922694420 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5658073510 msgr2=0x7f56581a0660 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:01.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.632+0000 7f565d41a640 1 --2- 192.168.123.107:0/922694420 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5658073510 0x7f56581a0660 secure :-1 s=READY pgs=174 cs=0 l=1 rev1=1 crypto rx=0x7f564c00da30 tx=0x7f564c00df00 comp rx=0 tx=0).stop 2026-03-09T20:55:01.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.632+0000 7f565d41a640 1 -- 192.168.123.107:0/922694420 shutdown_connections 2026-03-09T20:55:01.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.632+0000 7f565d41a640 1 --2- 192.168.123.107:0/922694420 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f562c077890 0x7f562c079d50 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.632+0000 7f565d41a640 1 --2- 192.168.123.107:0/922694420 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5658073e30 0x7f56581a0ba0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.632+0000 7f565d41a640 1 --2- 192.168.123.107:0/922694420 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5658073510 0x7f56581a0660 unknown :-1 s=CLOSED pgs=174 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:01.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.632+0000 7f565d41a640 1 -- 192.168.123.107:0/922694420 >> 192.168.123.107:0/922694420 conn(0x7f56580fc460 msgr2=0x7f565810c1a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:01.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.632+0000 7f565d41a640 1 -- 192.168.123.107:0/922694420 shutdown_connections 2026-03-09T20:55:01.633 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:01.632+0000 7f565d41a640 1 -- 192.168.123.107:0/922694420 wait complete. 2026-03-09T20:55:01.701 DEBUG:tasks.fs:max_mds reduced in epoch 14 2026-03-09T20:55:01.701 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 14 2026-03-09T20:55:01.701 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 15 2026-03-09T20:55:01.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:01 vm10.local ceph-mon[103526]: pgmap v240: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:55:01.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:01 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/4244130037' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 13, "format": "json"}]: dispatch 2026-03-09T20:55:01.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:01 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/922694420' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 14, "format": "json"}]: dispatch 2026-03-09T20:55:01.847 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:01.872 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:01 vm07.local ceph-mon[112105]: pgmap v240: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:55:01.872 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:01 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/4244130037' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 13, "format": "json"}]: dispatch 2026-03-09T20:55:01.872 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:01 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/922694420' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 14, "format": "json"}]: dispatch 2026-03-09T20:55:02.088 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.087+0000 7f67f1eba640 1 -- 192.168.123.107:0/50405492 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67ec1029d0 msgr2=0x7f67ec102e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:02.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.087+0000 7f67f1eba640 1 --2- 192.168.123.107:0/50405492 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67ec1029d0 0x7f67ec102e30 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f67d00099b0 tx=0x7f67d002f220 comp rx=0 tx=0).stop 2026-03-09T20:55:02.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.088+0000 7f67f1eba640 1 -- 192.168.123.107:0/50405492 shutdown_connections 2026-03-09T20:55:02.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.088+0000 7f67f1eba640 1 --2- 192.168.123.107:0/50405492 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67ec1029d0 0x7f67ec102e30 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.088+0000 7f67f1eba640 1 --2- 192.168.123.107:0/50405492 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67ec1089d0 0x7f67ec108db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.088+0000 7f67f1eba640 1 -- 192.168.123.107:0/50405492 >> 192.168.123.107:0/50405492 conn(0x7f67ec0fe710 msgr2=0x7f67ec100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:02.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.088+0000 7f67f1eba640 1 -- 192.168.123.107:0/50405492 shutdown_connections 2026-03-09T20:55:02.089 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.088+0000 7f67f1eba640 1 -- 192.168.123.107:0/50405492 wait complete. 2026-03-09T20:55:02.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.089+0000 7f67f1eba640 1 Processor -- start 2026-03-09T20:55:02.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.089+0000 7f67f1eba640 1 -- start start 2026-03-09T20:55:02.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.089+0000 7f67f1eba640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67ec1029d0 0x7f67ec1a0680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:02.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.089+0000 7f67eb7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67ec1029d0 0x7f67ec1a0680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:02.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.090+0000 7f67eb7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67ec1029d0 0x7f67ec1a0680 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45406/0 (socket says 192.168.123.107:45406) 2026-03-09T20:55:02.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.090+0000 7f67f1eba640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67ec1089d0 0x7f67ec1a0bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:02.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.090+0000 7f67f1eba640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f67ec19a770 con 0x7f67ec1029d0 2026-03-09T20:55:02.091 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.090+0000 7f67f1eba640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f67ec19a8e0 con 0x7f67ec1089d0 2026-03-09T20:55:02.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.090+0000 7f67eb7fe640 1 -- 192.168.123.107:0/1002887646 learned_addr learned my addr 192.168.123.107:0/1002887646 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:02.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.090+0000 7f67eaffd640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67ec1089d0 0x7f67ec1a0bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:02.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.090+0000 7f67eaffd640 1 -- 192.168.123.107:0/1002887646 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67ec1029d0 msgr2=0x7f67ec1a0680 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:02.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.090+0000 7f67eaffd640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67ec1029d0 0x7f67ec1a0680 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.090+0000 7f67eaffd640 1 -- 192.168.123.107:0/1002887646 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f67d8009590 con 0x7f67ec1089d0 2026-03-09T20:55:02.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.090+0000 7f67eb7fe640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67ec1029d0 0x7f67ec1a0680 unknown :-1 s=CLOSED pgs=0 cs=0 
l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:55:02.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.091+0000 7f67eaffd640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67ec1089d0 0x7f67ec1a0bc0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f67d002f730 tx=0x7f67d0004290 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:02.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.091+0000 7f67e8ff9640 1 -- 192.168.123.107:0/1002887646 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f67d003d070 con 0x7f67ec1089d0 2026-03-09T20:55:02.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.091+0000 7f67f1eba640 1 -- 192.168.123.107:0/1002887646 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f67d0009660 con 0x7f67ec1089d0 2026-03-09T20:55:02.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.091+0000 7f67e8ff9640 1 -- 192.168.123.107:0/1002887646 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f67d00384e0 con 0x7f67ec1089d0 2026-03-09T20:55:02.092 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.091+0000 7f67f1eba640 1 -- 192.168.123.107:0/1002887646 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f67ec19aec0 con 0x7f67ec1089d0 2026-03-09T20:55:02.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.092+0000 7f67e8ff9640 1 -- 192.168.123.107:0/1002887646 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f67d0041620 con 0x7f67ec1089d0 2026-03-09T20:55:02.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.092+0000 7f67f1eba640 1 -- 192.168.123.107:0/1002887646 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f67b0005350 con 0x7f67ec1089d0 2026-03-09T20:55:02.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.093+0000 7f67e8ff9640 1 -- 192.168.123.107:0/1002887646 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f67d0038980 con 0x7f67ec1089d0 2026-03-09T20:55:02.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.094+0000 7f67e8ff9640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f67c40778e0 0x7f67c4079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:02.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.094+0000 7f67e8ff9640 1 -- 192.168.123.107:0/1002887646 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f67d00be330 con 0x7f67ec1089d0 2026-03-09T20:55:02.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.094+0000 7f67eb7fe640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f67c40778e0 0x7f67c4079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:02.095 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.094+0000 7f67eb7fe640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f67c40778e0 0x7f67c4079da0 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f67d8004750 tx=0x7f67d8009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:02.097 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.096+0000 7f67e8ff9640 1 -- 192.168.123.107:0/1002887646 
<== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f67d0086a90 con 0x7f67ec1089d0 2026-03-09T20:55:02.214 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.213+0000 7f67f1eba640 1 -- 192.168.123.107:0/1002887646 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 15, "format": "json"} v 0) v1 -- 0x7f67b00051c0 con 0x7f67ec1089d0 2026-03-09T20:55:02.215 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.214+0000 7f67e8ff9640 1 -- 192.168.123.107:0/1002887646 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 15, "format": "json"}]=0 dumped fsmap epoch 15 v37) v1 ==== 107+0+4142 (secure 0 0 0) 0x7f67d00861e0 con 0x7f67ec1089d0 2026-03-09T20:55:02.215 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:02.215 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":15,"btime":"2026-03-09T20:52:32:544210+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34316,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/3625324292","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3625324292},{"type":"v1","addr":"192.168.123.107:6829","nonce":3625324292}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default 
file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13}],"filesystems":[{"mdsmap":{"epoch":15,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:32.310264+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor 
table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14476},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_14476":{"gid":14476,"name":"cephfs.vm07.rovdbp","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.107:6827/2216764941","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2216764941},{"type":"v1","addr":"192.168.123.107:6827","nonce":2216764941}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":14476,"qdb_cluster":[14476]},"id":1}]} 2026-03-09T20:55:02.215 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 15 2026-03-09T20:55:02.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.216+0000 7f67f1eba640 1 -- 192.168.123.107:0/1002887646 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f67c40778e0 msgr2=0x7f67c4079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:02.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.216+0000 7f67f1eba640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f67c40778e0 0x7f67c4079da0 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f67d8004750 tx=0x7f67d8009290 comp rx=0 tx=0).stop 2026-03-09T20:55:02.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.216+0000 7f67f1eba640 1 -- 
192.168.123.107:0/1002887646 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67ec1089d0 msgr2=0x7f67ec1a0bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:02.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.216+0000 7f67f1eba640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67ec1089d0 0x7f67ec1a0bc0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f67d002f730 tx=0x7f67d0004290 comp rx=0 tx=0).stop 2026-03-09T20:55:02.217 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.217+0000 7f67f1eba640 1 -- 192.168.123.107:0/1002887646 shutdown_connections 2026-03-09T20:55:02.218 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.217+0000 7f67f1eba640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f67c40778e0 0x7f67c4079da0 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.218 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.217+0000 7f67f1eba640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f67ec1089d0 0x7f67ec1a0bc0 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.218 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.217+0000 7f67f1eba640 1 --2- 192.168.123.107:0/1002887646 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f67ec1029d0 0x7f67ec1a0680 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.218 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.217+0000 7f67f1eba640 1 -- 192.168.123.107:0/1002887646 >> 192.168.123.107:0/1002887646 conn(0x7f67ec0fe710 msgr2=0x7f67ec10ca00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:02.218 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.217+0000 7f67f1eba640 1 -- 192.168.123.107:0/1002887646 shutdown_connections 2026-03-09T20:55:02.218 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.217+0000 7f67f1eba640 1 -- 192.168.123.107:0/1002887646 wait complete. 2026-03-09T20:55:02.277 DEBUG:tasks.fs:max_mds reduced in epoch 15 2026-03-09T20:55:02.277 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 15 2026-03-09T20:55:02.277 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 16 2026-03-09T20:55:02.415 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:02.644 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.642+0000 7f5e0acb8640 1 -- 192.168.123.107:0/4193249917 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5e04106780 msgr2=0x7f5e04106b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:02.645 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.642+0000 7f5e0acb8640 1 --2- 192.168.123.107:0/4193249917 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5e04106780 0x7f5e04106b60 secure :-1 s=READY pgs=175 cs=0 l=1 rev1=1 crypto rx=0x7f5dec0099b0 tx=0x7f5dec02f220 comp rx=0 tx=0).stop 2026-03-09T20:55:02.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.645+0000 7f5e0acb8640 1 -- 192.168.123.107:0/4193249917 shutdown_connections 2026-03-09T20:55:02.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.645+0000 7f5e0acb8640 1 --2- 192.168.123.107:0/4193249917 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5e04100780 0x7f5e04100be0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.646 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.645+0000 7f5e0acb8640 1 --2- 192.168.123.107:0/4193249917 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5e04106780 0x7f5e04106b60 unknown :-1 s=CLOSED pgs=175 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.646 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.645+0000 7f5e0acb8640 1 -- 192.168.123.107:0/4193249917 >> 192.168.123.107:0/4193249917 conn(0x7f5e040fc460 msgr2=0x7f5e040fe880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:02.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.647+0000 7f5e0acb8640 1 -- 192.168.123.107:0/4193249917 shutdown_connections 2026-03-09T20:55:02.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.647+0000 7f5e0acb8640 1 -- 192.168.123.107:0/4193249917 wait complete. 2026-03-09T20:55:02.648 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.648+0000 7f5e0acb8640 1 Processor -- start 2026-03-09T20:55:02.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.648+0000 7f5e0acb8640 1 -- start start 2026-03-09T20:55:02.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.648+0000 7f5e0acb8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5e04100780 0x7f5e041186a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:02.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.648+0000 7f5e0acb8640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5e04106780 0x7f5e0410f660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:02.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.648+0000 7f5e0acb8640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5e0410fba0 con 0x7f5e04100780 2026-03-09T20:55:02.649 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.648+0000 7f5e0acb8640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5e0410fd10 con 0x7f5e04106780 2026-03-09T20:55:02.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.649+0000 7f5e08a2d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5e04100780 0x7f5e041186a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:02.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.649+0000 7f5dfbfff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5e04106780 0x7f5e0410f660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:02.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.649+0000 7f5dfbfff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5e04106780 0x7f5e0410f660 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:51498/0 (socket says 192.168.123.107:51498) 2026-03-09T20:55:02.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.649+0000 7f5dfbfff640 1 -- 192.168.123.107:0/1147269015 learned_addr learned my addr 192.168.123.107:0/1147269015 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:02.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.649+0000 7f5e08a2d640 1 -- 192.168.123.107:0/1147269015 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5e04106780 msgr2=0x7f5e0410f660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:02.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.649+0000 7f5e08a2d640 1 --2- 192.168.123.107:0/1147269015 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5e04106780 0x7f5e0410f660 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.649+0000 7f5e08a2d640 1 -- 192.168.123.107:0/1147269015 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5dec009660 con 0x7f5e04100780 2026-03-09T20:55:02.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.650+0000 7f5e08a2d640 1 --2- 192.168.123.107:0/1147269015 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5e04100780 0x7f5e041186a0 secure :-1 s=READY pgs=176 cs=0 l=1 rev1=1 crypto rx=0x7f5dec02f730 tx=0x7f5dec004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:02.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.650+0000 7f5df9ffb640 1 -- 192.168.123.107:0/1147269015 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5dec03d070 con 0x7f5e04100780 2026-03-09T20:55:02.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.650+0000 7f5df9ffb640 1 -- 192.168.123.107:0/1147269015 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f5dec0384e0 con 0x7f5e04100780 2026-03-09T20:55:02.652 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.651+0000 7f5df9ffb640 1 -- 192.168.123.107:0/1147269015 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5dec041620 con 0x7f5e04100780 2026-03-09T20:55:02.652 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.651+0000 7f5e0acb8640 1 -- 192.168.123.107:0/1147269015 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5e0410ff10 con 0x7f5e04100780 2026-03-09T20:55:02.652 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.651+0000 7f5e0acb8640 1 -- 192.168.123.107:0/1147269015 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5e04110330 con 0x7f5e04100780 2026-03-09T20:55:02.656 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.651+0000 7f5e0acb8640 1 -- 192.168.123.107:0/1147269015 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5e04101ec0 con 0x7f5e04100780 2026-03-09T20:55:02.656 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.652+0000 7f5df9ffb640 1 -- 192.168.123.107:0/1147269015 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f5dec038730 con 0x7f5e04100780 2026-03-09T20:55:02.656 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.653+0000 7f5df9ffb640 1 --2- 192.168.123.107:0/1147269015 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5dd4077720 0x7f5dd4079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:02.656 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.653+0000 7f5df9ffb640 1 -- 192.168.123.107:0/1147269015 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f5dec0be900 con 0x7f5e04100780 2026-03-09T20:55:02.656 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.655+0000 7f5df9ffb640 1 -- 192.168.123.107:0/1147269015 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f5dec087060 con 0x7f5e04100780 2026-03-09T20:55:02.656 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.655+0000 7f5dfbfff640 1 --2- 192.168.123.107:0/1147269015 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5dd4077720 0x7f5dd4079be0 
unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:02.656 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.655+0000 7f5dfbfff640 1 --2- 192.168.123.107:0/1147269015 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5dd4077720 0x7f5dd4079be0 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f5e04110de0 tx=0x7f5df4009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:02.767 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.766+0000 7f5e0acb8640 1 -- 192.168.123.107:0/1147269015 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 16, "format": "json"} v 0) v1 -- 0x7f5e04110b90 con 0x7f5e04100780 2026-03-09T20:55:02.767 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:02 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/1002887646' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 15, "format": "json"}]: dispatch 2026-03-09T20:55:02.769 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.769+0000 7f5df9ffb640 1 -- 192.168.123.107:0/1147269015 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 16, "format": "json"}]=0 dumped fsmap epoch 16 v37) v1 ==== 107+0+4925 (secure 0 0 0) 0x7f5dec038a10 con 0x7f5e04100780 2026-03-09T20:55:02.770 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:02.770 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":16,"btime":"2026-03-09T20:52:33:551802+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34316,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/3625324292","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3625324292},{"type":"v1","addr":"192.168.123.107:6829","nonce":3625324292}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":13},{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":16}],"filesystems":[{"mdsmap":{"epoch":15,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:32.310264+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14476},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_14476":{"gid":14476,"name":"cephfs.vm07.rovdbp","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.107:6827/2216764941","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2216764941},{"type":"v1","addr":"192.168.123.107:6827","nonce":2216764941}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":14476,"qdb_cluster":[14476]},"id":1}]} 2026-03-09T20:55:02.770 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 16 2026-03-09T20:55:02.771 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.771+0000 7f5e0acb8640 1 -- 192.168.123.107:0/1147269015 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5dd4077720 msgr2=0x7f5dd4079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:02.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.771+0000 7f5e0acb8640 1 --2- 192.168.123.107:0/1147269015 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5dd4077720 0x7f5dd4079be0 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f5e04110de0 tx=0x7f5df4009290 comp rx=0 tx=0).stop 2026-03-09T20:55:02.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.771+0000 7f5e0acb8640 1 -- 192.168.123.107:0/1147269015 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5e04100780 msgr2=0x7f5e041186a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:02.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.771+0000 7f5e0acb8640 1 --2- 192.168.123.107:0/1147269015 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5e04100780 0x7f5e041186a0 secure :-1 s=READY pgs=176 cs=0 l=1 rev1=1 crypto rx=0x7f5dec02f730 tx=0x7f5dec004290 comp rx=0 tx=0).stop 2026-03-09T20:55:02.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.771+0000 7f5e0acb8640 1 -- 192.168.123.107:0/1147269015 shutdown_connections 2026-03-09T20:55:02.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.771+0000 7f5e0acb8640 1 --2- 192.168.123.107:0/1147269015 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f5dd4077720 
0x7f5dd4079be0 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.771+0000 7f5e0acb8640 1 --2- 192.168.123.107:0/1147269015 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f5e04106780 0x7f5e0410f660 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.771+0000 7f5e0acb8640 1 --2- 192.168.123.107:0/1147269015 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5e04100780 0x7f5e041186a0 unknown :-1 s=CLOSED pgs=176 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:02.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.771+0000 7f5e0acb8640 1 -- 192.168.123.107:0/1147269015 >> 192.168.123.107:0/1147269015 conn(0x7f5e040fc460 msgr2=0x7f5e0410a720 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:02.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.772+0000 7f5e0acb8640 1 -- 192.168.123.107:0/1147269015 shutdown_connections 2026-03-09T20:55:02.773 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:02.772+0000 7f5e0acb8640 1 -- 192.168.123.107:0/1147269015 wait complete. 2026-03-09T20:55:02.830 DEBUG:tasks.fs:max_mds reduced in epoch 16 2026-03-09T20:55:02.830 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 16 2026-03-09T20:55:02.830 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 17 2026-03-09T20:55:02.980 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:03.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:02 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/1002887646' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 15, "format": "json"}]: dispatch 2026-03-09T20:55:03.228 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.226+0000 7f801e879640 1 -- 192.168.123.107:0/1170569750 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f80180fe7f0 msgr2=0x7f80180fec70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:03.228 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.226+0000 7f801e879640 1 --2- 192.168.123.107:0/1170569750 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f80180fe7f0 0x7f80180fec70 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f80040099b0 tx=0x7f800402f220 comp rx=0 tx=0).stop 2026-03-09T20:55:03.228 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.227+0000 7f801e879640 1 -- 192.168.123.107:0/1170569750 shutdown_connections 2026-03-09T20:55:03.228 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.227+0000 7f801e879640 1 --2- 192.168.123.107:0/1170569750 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f80180fe7f0 0x7f80180fec70 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.228 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.227+0000 7f801e879640 1 --2- 192.168.123.107:0/1170569750 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8018106960 0x7f8018106d40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.228 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.227+0000 7f801e879640 1 -- 192.168.123.107:0/1170569750 >> 192.168.123.107:0/1170569750 conn(0x7f80180fa5e0 msgr2=0x7f80180fca00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:03.228 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.227+0000 7f801e879640 1 -- 192.168.123.107:0/1170569750 shutdown_connections 2026-03-09T20:55:03.228 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.227+0000 7f801e879640 1 -- 192.168.123.107:0/1170569750 wait complete. 2026-03-09T20:55:03.229 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.228+0000 7f801e879640 1 Processor -- start 2026-03-09T20:55:03.229 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.228+0000 7f801e879640 1 -- start start 2026-03-09T20:55:03.229 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.228+0000 7f801e879640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f80180fe7f0 0x7f801819ed50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:03.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.228+0000 7f801e879640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8018106960 0x7f801819f290 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:03.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.228+0000 7f801e879640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f801819f920 con 0x7f8018106960 2026-03-09T20:55:03.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.228+0000 7f801e879640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f80181a3690 con 0x7f80180fe7f0 2026-03-09T20:55:03.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.229+0000 7f80177fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8018106960 0x7f801819f290 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:03.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.229+0000 7f80177fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8018106960 0x7f801819f290 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45434/0 (socket says 192.168.123.107:45434) 2026-03-09T20:55:03.230 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.229+0000 7f80177fe640 1 -- 192.168.123.107:0/3889677519 learned_addr learned my addr 192.168.123.107:0/3889677519 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:03.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.229+0000 7f80177fe640 1 -- 192.168.123.107:0/3889677519 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f80180fe7f0 msgr2=0x7f801819ed50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:03.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.229+0000 7f80177fe640 1 --2- 192.168.123.107:0/3889677519 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f80180fe7f0 0x7f801819ed50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.229+0000 7f80177fe640 1 -- 192.168.123.107:0/3889677519 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8008009590 con 0x7f8018106960 2026-03-09T20:55:03.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.229+0000 7f80177fe640 1 --2- 192.168.123.107:0/3889677519 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8018106960 0x7f801819f290 secure :-1 s=READY pgs=177 cs=0 l=1 rev1=1 crypto rx=0x7f8004002410 tx=0x7f8004004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:03.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.229+0000 7f80157fa640 1 -- 192.168.123.107:0/3889677519 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f800403d070 con 
0x7f8018106960 2026-03-09T20:55:03.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.229+0000 7f80157fa640 1 -- 192.168.123.107:0/3889677519 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f8004038520 con 0x7f8018106960 2026-03-09T20:55:03.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.230+0000 7f80157fa640 1 -- 192.168.123.107:0/3889677519 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8004041690 con 0x7f8018106960 2026-03-09T20:55:03.231 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.230+0000 7f801e879640 1 -- 192.168.123.107:0/3889677519 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8004009660 con 0x7f8018106960 2026-03-09T20:55:03.232 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.231+0000 7f801e879640 1 -- 192.168.123.107:0/3889677519 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f80181a3ba0 con 0x7f8018106960 2026-03-09T20:55:03.235 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.231+0000 7f801e879640 1 -- 192.168.123.107:0/3889677519 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7fe4005350 con 0x7f8018106960 2026-03-09T20:55:03.235 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.232+0000 7f80157fa640 1 -- 192.168.123.107:0/3889677519 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f800402fc90 con 0x7f8018106960 2026-03-09T20:55:03.235 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.232+0000 7f80157fa640 1 --2- 192.168.123.107:0/3889677519 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7fec077680 0x7f7fec079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 
2026-03-09T20:55:03.235 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.232+0000 7f80157fa640 1 -- 192.168.123.107:0/3889677519 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f800408ab10 con 0x7f8018106960 2026-03-09T20:55:03.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.235+0000 7f8017fff640 1 --2- 192.168.123.107:0/3889677519 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7fec077680 0x7f7fec079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:03.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.235+0000 7f8017fff640 1 --2- 192.168.123.107:0/3889677519 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7fec077680 0x7f7fec079b40 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f8008005e00 tx=0x7f8008005d50 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:03.236 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.235+0000 7f80157fa640 1 -- 192.168.123.107:0/3889677519 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f80040c3050 con 0x7f8018106960 2026-03-09T20:55:03.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.344+0000 7f801e879640 1 -- 192.168.123.107:0/3889677519 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 17, "format": "json"} v 0) v1 -- 0x7f7fe40058d0 con 0x7f8018106960 2026-03-09T20:55:03.348 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.347+0000 7f80157fa640 1 -- 192.168.123.107:0/3889677519 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 17, "format": "json"}]=0 dumped 
fsmap epoch 17 v37) v1 ==== 107+0+4124 (secure 0 0 0) 0x7f8004086530 con 0x7f8018106960 2026-03-09T20:55:03.348 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:03.348 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":17,"btime":"2026-03-09T20:52:40:211942+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34316,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/3625324292","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3625324292},{"type":"v1","addr":"192.168.123.107:6829","nonce":3625324292}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable 
ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":16}],"filesystems":[{"mdsmap":{"epoch":17,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:40.211941+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses 
inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{},"failed":[0],"damaged":[],"stopped":[1],"info":{},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:03.348 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 17 2026-03-09T20:55:03.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.349+0000 7f801e879640 1 -- 192.168.123.107:0/3889677519 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7fec077680 msgr2=0x7f7fec079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:03.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.349+0000 7f801e879640 1 --2- 192.168.123.107:0/3889677519 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7fec077680 0x7f7fec079b40 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f8008005e00 tx=0x7f8008005d50 comp rx=0 tx=0).stop 2026-03-09T20:55:03.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.349+0000 7f801e879640 1 -- 192.168.123.107:0/3889677519 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8018106960 msgr2=0x7f801819f290 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:03.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.349+0000 7f801e879640 1 --2- 192.168.123.107:0/3889677519 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8018106960 0x7f801819f290 secure :-1 s=READY pgs=177 cs=0 l=1 rev1=1 crypto rx=0x7f8004002410 tx=0x7f8004004290 comp rx=0 tx=0).stop 2026-03-09T20:55:03.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.349+0000 7f801e879640 1 -- 192.168.123.107:0/3889677519 shutdown_connections 2026-03-09T20:55:03.350 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.350+0000 7f801e879640 1 --2- 192.168.123.107:0/3889677519 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f7fec077680 0x7f7fec079b40 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.350+0000 7f801e879640 1 --2- 192.168.123.107:0/3889677519 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8018106960 0x7f801819f290 unknown :-1 s=CLOSED pgs=177 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.350+0000 7f801e879640 1 --2- 192.168.123.107:0/3889677519 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f80180fe7f0 0x7f801819ed50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.350+0000 7f801e879640 1 -- 192.168.123.107:0/3889677519 >> 192.168.123.107:0/3889677519 conn(0x7f80180fa5e0 msgr2=0x7f801810a7e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:03.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.350+0000 7f801e879640 1 -- 192.168.123.107:0/3889677519 shutdown_connections 2026-03-09T20:55:03.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.350+0000 7f801e879640 1 -- 192.168.123.107:0/3889677519 wait complete. 
2026-03-09T20:55:03.406 DEBUG:tasks.fs:max_mds reduced in epoch 17 2026-03-09T20:55:03.406 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 17 2026-03-09T20:55:03.406 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 18 2026-03-09T20:55:03.558 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:03.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.794+0000 7fc04f8f0640 1 -- 192.168.123.107:0/1359422297 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc048075720 msgr2=0x7fc048075b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:03.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.794+0000 7fc04f8f0640 1 --2- 192.168.123.107:0/1359422297 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc048075720 0x7fc048075b00 secure :-1 s=READY pgs=178 cs=0 l=1 rev1=1 crypto rx=0x7fc0380099b0 tx=0x7fc03802f260 comp rx=0 tx=0).stop 2026-03-09T20:55:03.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.795+0000 7fc04f8f0640 1 -- 192.168.123.107:0/1359422297 shutdown_connections 2026-03-09T20:55:03.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.795+0000 7fc04f8f0640 1 --2- 192.168.123.107:0/1359422297 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc048076040 0x7fc048111330 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.795+0000 7fc04f8f0640 1 --2- 192.168.123.107:0/1359422297 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc048075720 0x7fc048075b00 unknown :-1 s=CLOSED pgs=178 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.796 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.795+0000 7fc04f8f0640 1 -- 192.168.123.107:0/1359422297 >> 192.168.123.107:0/1359422297 conn(0x7fc0480fe710 msgr2=0x7fc048100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:03.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.795+0000 7fc04f8f0640 1 -- 192.168.123.107:0/1359422297 shutdown_connections 2026-03-09T20:55:03.796 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.796+0000 7fc04f8f0640 1 -- 192.168.123.107:0/1359422297 wait complete. 2026-03-09T20:55:03.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.796+0000 7fc04f8f0640 1 Processor -- start 2026-03-09T20:55:03.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.796+0000 7fc04f8f0640 1 -- start start 2026-03-09T20:55:03.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.796+0000 7fc04f8f0640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc048075720 0x7fc04819edf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:03.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.796+0000 7fc04f8f0640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc048076040 0x7fc04819f330 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:03.798 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.796+0000 7fc04f8f0640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc04819f930 con 0x7fc048075720 2026-03-09T20:55:03.798 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.796+0000 7fc04f8f0640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc04819faa0 con 0x7fc048076040 2026-03-09T20:55:03.798 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.797+0000 7fc04d665640 1 --2- >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc048075720 0x7fc04819edf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:03.798 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.797+0000 7fc04d665640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc048075720 0x7fc04819edf0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45462/0 (socket says 192.168.123.107:45462) 2026-03-09T20:55:03.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.797+0000 7fc04d665640 1 -- 192.168.123.107:0/2604009702 learned_addr learned my addr 192.168.123.107:0/2604009702 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:03.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.797+0000 7fc04d665640 1 -- 192.168.123.107:0/2604009702 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc048076040 msgr2=0x7fc04819f330 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:55:03.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.797+0000 7fc04d665640 1 --2- 192.168.123.107:0/2604009702 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc048076040 0x7fc04819f330 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.797+0000 7fc04d665640 1 -- 192.168.123.107:0/2604009702 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc038009660 con 0x7fc048075720 2026-03-09T20:55:03.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.797+0000 7fc04d665640 1 --2- 192.168.123.107:0/2604009702 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc048075720 
0x7fc04819edf0 secure :-1 s=READY pgs=179 cs=0 l=1 rev1=1 crypto rx=0x7fc03802f770 tx=0x7fc0380043d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:03.800 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.798+0000 7fc0367fc640 1 -- 192.168.123.107:0/2604009702 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc03803d070 con 0x7fc048075720 2026-03-09T20:55:03.801 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.798+0000 7fc0367fc640 1 -- 192.168.123.107:0/2604009702 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc03802fc90 con 0x7fc048075720 2026-03-09T20:55:03.801 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.798+0000 7fc0367fc640 1 -- 192.168.123.107:0/2604009702 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc0380417d0 con 0x7fc048075720 2026-03-09T20:55:03.801 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.798+0000 7fc04f8f0640 1 -- 192.168.123.107:0/2604009702 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc0481a3820 con 0x7fc048075720 2026-03-09T20:55:03.801 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.798+0000 7fc04f8f0640 1 -- 192.168.123.107:0/2604009702 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc0481a3d10 con 0x7fc048075720 2026-03-09T20:55:03.801 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.799+0000 7fc0367fc640 1 -- 192.168.123.107:0/2604009702 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc03804b430 con 0x7fc048075720 2026-03-09T20:55:03.802 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.800+0000 7fc0367fc640 1 --2- 192.168.123.107:0/2604009702 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc0180779b0 0x7fc018079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:03.802 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.801+0000 7fc0367fc640 1 -- 192.168.123.107:0/2604009702 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fc0380bf660 con 0x7fc048075720 2026-03-09T20:55:03.802 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.801+0000 7fc04ce64640 1 --2- 192.168.123.107:0/2604009702 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc0180779b0 0x7fc018079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:03.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.801+0000 7fc04f8f0640 1 -- 192.168.123.107:0/2604009702 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc048076e60 con 0x7fc048075720 2026-03-09T20:55:03.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.804+0000 7fc04ce64640 1 --2- 192.168.123.107:0/2604009702 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc0180779b0 0x7fc018079e70 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7fc03c005fd0 tx=0x7fc03c005d00 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:03.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.804+0000 7fc0367fc640 1 -- 192.168.123.107:0/2604009702 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc038087d10 con 0x7fc048075720 2026-03-09T20:55:03.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:03 
vm07.local ceph-mon[112105]: pgmap v241: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:55:03.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:03 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/1147269015' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 16, "format": "json"}]: dispatch 2026-03-09T20:55:03.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:03 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3889677519' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 17, "format": "json"}]: dispatch 2026-03-09T20:55:03.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.923+0000 7fc04f8f0640 1 -- 192.168.123.107:0/2604009702 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 18, "format": "json"} v 0) v1 -- 0x7fc048075b00 con 0x7fc048075720 2026-03-09T20:55:03.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.924+0000 7fc0367fc640 1 -- 192.168.123.107:0/2604009702 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 18, "format": "json"}]=0 dumped fsmap epoch 18 v37) v1 ==== 107+0+4135 (secure 0 0 0) 0x7fc038046090 con 0x7fc048075720 2026-03-09T20:55:03.925 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:03.925 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":18,"btime":"2026-03-09T20:52:40:227717+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":16}],"filesystems":[{"mdsmap":{"epoch":18,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:40.227698+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":34316},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_34316":{"gid":34316,"name":"cephfs.vm07.potfau","rank":0,"incarnation":18,"state":"up:replay","state_seq":1,"addr":"192.168.123.107:6829/3625324292","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3625324292},{"type":"v1","addr":"192.168.123.107:6829","nonce":3625324292}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:03.925 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 18 2026-03-09T20:55:03.927 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.926+0000 7fc04f8f0640 1 -- 192.168.123.107:0/2604009702 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc0180779b0 msgr2=0x7fc018079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:03.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.926+0000 7fc04f8f0640 1 --2- 192.168.123.107:0/2604009702 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc0180779b0 0x7fc018079e70 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7fc03c005fd0 tx=0x7fc03c005d00 comp rx=0 tx=0).stop 2026-03-09T20:55:03.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.927+0000 7fc04f8f0640 1 -- 192.168.123.107:0/2604009702 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc048075720 msgr2=0x7fc04819edf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:03.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.927+0000 7fc04f8f0640 1 --2- 192.168.123.107:0/2604009702 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc048075720 0x7fc04819edf0 secure :-1 s=READY pgs=179 cs=0 l=1 rev1=1 crypto rx=0x7fc03802f770 tx=0x7fc0380043d0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.927+0000 7fc04f8f0640 1 -- 192.168.123.107:0/2604009702 shutdown_connections 2026-03-09T20:55:03.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.927+0000 7fc04f8f0640 1 --2- 192.168.123.107:0/2604009702 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc0180779b0 
0x7fc018079e70 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.927+0000 7fc04f8f0640 1 --2- 192.168.123.107:0/2604009702 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc048076040 0x7fc04819f330 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.927+0000 7fc04f8f0640 1 --2- 192.168.123.107:0/2604009702 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc048075720 0x7fc04819edf0 unknown :-1 s=CLOSED pgs=179 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:03.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.927+0000 7fc04f8f0640 1 -- 192.168.123.107:0/2604009702 >> 192.168.123.107:0/2604009702 conn(0x7fc0480fe710 msgr2=0x7fc0480ffdf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:03.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.927+0000 7fc04f8f0640 1 -- 192.168.123.107:0/2604009702 shutdown_connections 2026-03-09T20:55:03.928 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:03.927+0000 7fc04f8f0640 1 -- 192.168.123.107:0/2604009702 wait complete. 
2026-03-09T20:55:03.972 DEBUG:tasks.fs:max_mds reduced in epoch 18 2026-03-09T20:55:03.972 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 18 2026-03-09T20:55:03.972 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 19 2026-03-09T20:55:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:03 vm10.local ceph-mon[103526]: pgmap v241: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:55:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:03 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/1147269015' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 16, "format": "json"}]: dispatch 2026-03-09T20:55:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:03 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3889677519' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 17, "format": "json"}]: dispatch 2026-03-09T20:55:04.111 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:04.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.338+0000 7fbe40302640 1 -- 192.168.123.107:0/2703132132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbe381089d0 msgr2=0x7fbe38108db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:04.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.338+0000 7fbe40302640 1 --2- 192.168.123.107:0/2703132132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbe381089d0 0x7fbe38108db0 secure :-1 s=READY pgs=180 cs=0 l=1 rev1=1 crypto rx=0x7fbe20009a00 tx=0x7fbe2002f280 comp rx=0 tx=0).stop 2026-03-09T20:55:04.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 -- 
192.168.123.107:0/2703132132 shutdown_connections 2026-03-09T20:55:04.339 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 --2- 192.168.123.107:0/2703132132 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbe381029d0 0x7fbe38102e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:04.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 --2- 192.168.123.107:0/2703132132 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbe381089d0 0x7fbe38108db0 unknown :-1 s=CLOSED pgs=180 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:04.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 -- 192.168.123.107:0/2703132132 >> 192.168.123.107:0/2703132132 conn(0x7fbe380fe710 msgr2=0x7fbe38100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:04.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 -- 192.168.123.107:0/2703132132 shutdown_connections 2026-03-09T20:55:04.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 -- 192.168.123.107:0/2703132132 wait complete. 
2026-03-09T20:55:04.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 Processor -- start 2026-03-09T20:55:04.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 -- start start 2026-03-09T20:55:04.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbe381029d0 0x7fbe381a06c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:04.340 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbe381089d0 0x7fbe381a0c00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbe3819a7b0 con 0x7fbe381029d0 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.339+0000 7fbe40302640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbe3819a920 con 0x7fbe381089d0 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.340+0000 7fbe3d876640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbe381089d0 0x7fbe381a0c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.340+0000 7fbe3d876640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbe381089d0 0x7fbe381a0c00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:51574/0 (socket says 192.168.123.107:51574) 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.340+0000 7fbe3d876640 1 -- 192.168.123.107:0/717686002 learned_addr learned my addr 192.168.123.107:0/717686002 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.340+0000 7fbe3e077640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbe381029d0 0x7fbe381a06c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.340+0000 7fbe3d876640 1 -- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbe381029d0 msgr2=0x7fbe381a06c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.340+0000 7fbe3d876640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbe381029d0 0x7fbe381a06c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.340+0000 7fbe3d876640 1 -- 192.168.123.107:0/717686002 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbe20009660 con 0x7fbe381089d0 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.340+0000 7fbe3e077640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbe381029d0 0x7fbe381a06c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.340+0000 7fbe3d876640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbe381089d0 0x7fbe381a0c00 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fbe2800e9b0 tx=0x7fbe2800ee80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:04.341 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.341+0000 7fbe2f7fe640 1 -- 192.168.123.107:0/717686002 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbe2800cd90 con 0x7fbe381089d0 2026-03-09T20:55:04.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.341+0000 7fbe40302640 1 -- 192.168.123.107:0/717686002 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbe3819ac00 con 0x7fbe381089d0 2026-03-09T20:55:04.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.341+0000 7fbe40302640 1 -- 192.168.123.107:0/717686002 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbe3819b150 con 0x7fbe381089d0 2026-03-09T20:55:04.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.341+0000 7fbe2f7fe640 1 -- 192.168.123.107:0/717686002 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fbe28004590 con 0x7fbe381089d0 2026-03-09T20:55:04.342 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.341+0000 7fbe2f7fe640 1 -- 192.168.123.107:0/717686002 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbe28010640 con 0x7fbe381089d0 2026-03-09T20:55:04.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.342+0000 7fbe2f7fe640 1 -- 192.168.123.107:0/717686002 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fbe280040d0 con 
0x7fbe381089d0 2026-03-09T20:55:04.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.343+0000 7fbe2f7fe640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbe0c0778e0 0x7fbe0c079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:04.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.343+0000 7fbe3e077640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbe0c0778e0 0x7fbe0c079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:04.344 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.344+0000 7fbe3e077640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbe0c0778e0 0x7fbe0c079da0 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7fbe200040c0 tx=0x7fbe20002e80 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:04.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.344+0000 7fbe2f7fe640 1 -- 192.168.123.107:0/717686002 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fbe28014070 con 0x7fbe381089d0 2026-03-09T20:55:04.345 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.344+0000 7fbe40302640 1 -- 192.168.123.107:0/717686002 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbe38104110 con 0x7fbe381089d0 2026-03-09T20:55:04.348 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.347+0000 7fbe2f7fe640 1 -- 192.168.123.107:0/717686002 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 
==== 72+0+195034 (secure 0 0 0) 0x7fbe28062a40 con 0x7fbe381089d0 2026-03-09T20:55:04.459 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.458+0000 7fbe40302640 1 -- 192.168.123.107:0/717686002 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 19, "format": "json"} v 0) v1 -- 0x7fbe3819bcc0 con 0x7fbe381089d0 2026-03-09T20:55:04.463 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.462+0000 7fbe2f7fe640 1 -- 192.168.123.107:0/717686002 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 19, "format": "json"}]=0 dumped fsmap epoch 19 v37) v1 ==== 107+0+4139 (secure 0 0 0) 0x7fbe28062190 con 0x7fbe381089d0 2026-03-09T20:55:04.463 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:04.463 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":19,"btime":"2026-03-09T20:52:45:094372+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned 
encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":16}],"filesystems":[{"mdsmap":{"epoch":19,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:44.102125+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":34316},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_34316":{"gid":34316,"name":"cephfs.vm07.potfau","rank":0,"incarnation":18,"state":"up:reconnect","state_seq":11,"addr":"192.168.123.107:6829/3625324292","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3625324292},{"type":"v1","addr":"192.168.123.107:6829","nonce":3625324292}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:04.463 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 19 2026-03-09T20:55:04.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.464+0000 7fbe40302640 1 -- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbe0c0778e0 msgr2=0x7fbe0c079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:04.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.464+0000 7fbe40302640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbe0c0778e0 0x7fbe0c079da0 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7fbe200040c0 tx=0x7fbe20002e80 comp rx=0 tx=0).stop 2026-03-09T20:55:04.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.464+0000 7fbe40302640 1 -- 192.168.123.107:0/717686002 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbe381089d0 msgr2=0x7fbe381a0c00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:04.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.464+0000 7fbe40302640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbe381089d0 0x7fbe381a0c00 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fbe2800e9b0 tx=0x7fbe2800ee80 comp rx=0 tx=0).stop 2026-03-09T20:55:04.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.464+0000 7fbe40302640 1 -- 192.168.123.107:0/717686002 shutdown_connections 2026-03-09T20:55:04.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.464+0000 7fbe40302640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fbe0c0778e0 0x7fbe0c079da0 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:04.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.464+0000 7fbe40302640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fbe381089d0 0x7fbe381a0c00 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:04.465 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.464+0000 7fbe40302640 1 --2- 192.168.123.107:0/717686002 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbe381029d0 0x7fbe381a06c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:04.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.464+0000 7fbe40302640 1 -- 192.168.123.107:0/717686002 >> 192.168.123.107:0/717686002 conn(0x7fbe380fe710 msgr2=0x7fbe3810c970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:04.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.465+0000 7fbe40302640 1 -- 
192.168.123.107:0/717686002 shutdown_connections 2026-03-09T20:55:04.466 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.465+0000 7fbe40302640 1 -- 192.168.123.107:0/717686002 wait complete. 2026-03-09T20:55:04.525 DEBUG:tasks.fs:max_mds reduced in epoch 19 2026-03-09T20:55:04.525 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 19 2026-03-09T20:55:04.525 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 20 2026-03-09T20:55:04.680 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:04.720 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:04 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/2604009702' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 18, "format": "json"}]: dispatch 2026-03-09T20:55:04.720 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:04 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/717686002' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 19, "format": "json"}]: dispatch 2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.941+0000 7f8a44126640 1 -- 192.168.123.107:0/2154558025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8a3c0ff640 msgr2=0x7f8a3c0ffaa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.941+0000 7f8a44126640 1 --2- 192.168.123.107:0/2154558025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8a3c0ff640 0x7f8a3c0ffaa0 secure :-1 s=READY pgs=181 cs=0 l=1 rev1=1 crypto rx=0x7f8a300099b0 tx=0x7f8a3002f240 comp rx=0 tx=0).stop 2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.941+0000 7f8a44126640 1 -- 192.168.123.107:0/2154558025 shutdown_connections 2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.941+0000 7f8a44126640 1 --2- 192.168.123.107:0/2154558025 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8a3c0ff640 0x7f8a3c0ffaa0 unknown :-1 s=CLOSED pgs=181 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.941+0000 7f8a44126640 1 --2- 192.168.123.107:0/2154558025 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8a3c105640 0x7f8a3c105a20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.941+0000 7f8a44126640 1 -- 192.168.123.107:0/2154558025 >> 192.168.123.107:0/2154558025 conn(0x7f8a3c0fb340 msgr2=0x7f8a3c0fd760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.941+0000 7f8a44126640 1 -- 192.168.123.107:0/2154558025 shutdown_connections 
2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.941+0000 7f8a44126640 1 -- 192.168.123.107:0/2154558025 wait complete. 2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.942+0000 7f8a44126640 1 Processor -- start 2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.942+0000 7f8a44126640 1 -- start start 2026-03-09T20:55:04.943 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.942+0000 7f8a44126640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8a3c0ff640 0x7f8a3c1973d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:04.944 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a41e9b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8a3c0ff640 0x7f8a3c1973d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a44126640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8a3c105640 0x7f8a3c197910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a41e9b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8a3c0ff640 0x7f8a3c1973d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:51590/0 (socket says 192.168.123.107:51590) 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a41e9b640 1 -- 192.168.123.107:0/3009061581 learned_addr learned my addr 192.168.123.107:0/3009061581 (peer_addr_for_me v2:192.168.123.107:0/0) 
2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8a3c197fa0 con 0x7f8a3c105640 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8a3c19bc70 con 0x7f8a3c0ff640 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a4169a640 1 --2- 192.168.123.107:0/3009061581 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8a3c105640 0x7f8a3c197910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a41e9b640 1 -- 192.168.123.107:0/3009061581 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8a3c105640 msgr2=0x7f8a3c197910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a41e9b640 1 --2- 192.168.123.107:0/3009061581 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8a3c105640 0x7f8a3c197910 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a41e9b640 1 -- 192.168.123.107:0/3009061581 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8a30009660 con 0x7f8a3c0ff640 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a41e9b640 1 --2- 192.168.123.107:0/3009061581 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8a3c0ff640 0x7f8a3c1973d0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f8a2c00e990 tx=0x7f8a2c00ee60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.944+0000 7f8a2affd640 1 -- 192.168.123.107:0/3009061581 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8a2c00cd30 con 0x7f8a3c0ff640 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.945+0000 7f8a2affd640 1 -- 192.168.123.107:0/3009061581 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f8a2c00ce90 con 0x7f8a3c0ff640 2026-03-09T20:55:04.945 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.945+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8a3c19bef0 con 0x7f8a3c0ff640 2026-03-09T20:55:04.946 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.945+0000 7f8a2affd640 1 -- 192.168.123.107:0/3009061581 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8a2c011860 con 0x7f8a3c0ff640 2026-03-09T20:55:04.946 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.945+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8a3c19c3e0 con 0x7f8a3c0ff640 2026-03-09T20:55:04.948 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.946+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8a3c100d80 con 0x7f8a3c0ff640 2026-03-09T20:55:04.948 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.946+0000 
7f8a2affd640 1 -- 192.168.123.107:0/3009061581 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f8a2c020050 con 0x7f8a3c0ff640 2026-03-09T20:55:04.948 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.947+0000 7f8a2affd640 1 --2- 192.168.123.107:0/3009061581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8a18077680 0x7f8a18079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:04.948 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.947+0000 7f8a2affd640 1 -- 192.168.123.107:0/3009061581 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f8a2c09bb80 con 0x7f8a3c0ff640 2026-03-09T20:55:04.948 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.947+0000 7f8a4169a640 1 --2- 192.168.123.107:0/3009061581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8a18077680 0x7f8a18079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:04.948 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.947+0000 7f8a4169a640 1 --2- 192.168.123.107:0/3009061581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8a18077680 0x7f8a18079b40 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f8a3c1988e0 tx=0x7f8a3003a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:04.950 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:04.949+0000 7f8a2affd640 1 -- 192.168.123.107:0/3009061581 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8a2c0641b0 con 0x7f8a3c0ff640 2026-03-09T20:55:05.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 
20:55:04 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/2604009702' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 18, "format": "json"}]: dispatch 2026-03-09T20:55:05.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:04 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/717686002' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 19, "format": "json"}]: dispatch 2026-03-09T20:55:05.061 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.060+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 20, "format": "json"} v 0) v1 -- 0x7f8a3c10c9c0 con 0x7f8a3c0ff640 2026-03-09T20:55:05.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.063+0000 7f8a2affd640 1 -- 192.168.123.107:0/3009061581 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 20, "format": "json"}]=0 dumped fsmap epoch 20 v37) v1 ==== 107+0+4987 (secure 0 0 0) 0x7f8a2c063900 con 0x7f8a3c0ff640 2026-03-09T20:55:05.064 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:05.064 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":20,"btime":"2026-03-09T20:52:46:345903+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":16},{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20}],"filesystems":[{"mdsmap":{"epoch":20,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:45.349388+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":34316},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_34316":{"gid":34316,"name":"cephfs.vm07.potfau","rank":0,"incarnation":18,"state":"up:rejoin","state_seq":12,"addr":"192.168.123.107:6829/3625324292","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3625324292},{"type":"v1","addr":"192.168.123.107:6829","nonce":3625324292}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:05.064 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 20 2026-03-09T20:55:05.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.065+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8a18077680 msgr2=0x7f8a18079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.065+0000 7f8a44126640 1 --2- 192.168.123.107:0/3009061581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8a18077680 0x7f8a18079b40 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f8a3c1988e0 tx=0x7f8a3003a040 comp rx=0 tx=0).stop 2026-03-09T20:55:05.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.066+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8a3c0ff640 msgr2=0x7f8a3c1973d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.066+0000 7f8a44126640 1 --2- 192.168.123.107:0/3009061581 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8a3c0ff640 0x7f8a3c1973d0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f8a2c00e990 tx=0x7f8a2c00ee60 comp rx=0 tx=0).stop 2026-03-09T20:55:05.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.066+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 shutdown_connections 2026-03-09T20:55:05.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.066+0000 7f8a44126640 1 --2- 192.168.123.107:0/3009061581 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8a18077680 0x7f8a18079b40 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:05.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.066+0000 7f8a44126640 1 --2- 192.168.123.107:0/3009061581 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8a3c105640 0x7f8a3c197910 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:05.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.066+0000 7f8a44126640 1 --2- 192.168.123.107:0/3009061581 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8a3c0ff640 0x7f8a3c1973d0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:05.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.066+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 >> 192.168.123.107:0/3009061581 conn(0x7f8a3c0fb340 msgr2=0x7f8a3c103150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:05.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.066+0000 7f8a44126640 1 -- 
192.168.123.107:0/3009061581 shutdown_connections 2026-03-09T20:55:05.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.066+0000 7f8a44126640 1 -- 192.168.123.107:0/3009061581 wait complete. 2026-03-09T20:55:05.128 DEBUG:tasks.fs:max_mds reduced in epoch 20 2026-03-09T20:55:05.128 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 20 2026-03-09T20:55:05.128 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 21 2026-03-09T20:55:05.280 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:05.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.498+0000 7f816d6b2640 1 -- 192.168.123.107:0/3113159306 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81680ff3a0 msgr2=0x7f816810c970 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.498+0000 7f816d6b2640 1 --2- 192.168.123.107:0/3113159306 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81680ff3a0 0x7f816810c970 secure :-1 s=READY pgs=182 cs=0 l=1 rev1=1 crypto rx=0x7f815c009a30 tx=0x7f815c02f2f0 comp rx=0 tx=0).stop 2026-03-09T20:55:05.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.499+0000 7f816d6b2640 1 -- 192.168.123.107:0/3113159306 shutdown_connections 2026-03-09T20:55:05.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.499+0000 7f816d6b2640 1 --2- 192.168.123.107:0/3113159306 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81680ff3a0 0x7f816810c970 unknown :-1 s=CLOSED pgs=182 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:05.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.499+0000 7f816d6b2640 1 --2- 192.168.123.107:0/3113159306 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f81680fe9f0 0x7f81680fedd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:05.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.499+0000 7f816d6b2640 1 -- 192.168.123.107:0/3113159306 >> 192.168.123.107:0/3113159306 conn(0x7f81680fa5e0 msgr2=0x7f81680fca00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:05.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.499+0000 7f816d6b2640 1 -- 192.168.123.107:0/3113159306 shutdown_connections 2026-03-09T20:55:05.500 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.499+0000 7f816d6b2640 1 -- 192.168.123.107:0/3113159306 wait complete. 2026-03-09T20:55:05.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.500+0000 7f816d6b2640 1 Processor -- start 2026-03-09T20:55:05.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.500+0000 7f816d6b2640 1 -- start start 2026-03-09T20:55:05.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.500+0000 7f816d6b2640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f81680fe9f0 0x7f81681a06f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:05.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.500+0000 7f816d6b2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81680ff3a0 0x7f81681a0c30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:05.501 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.500+0000 7f816d6b2640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f81681a12c0 con 0x7f81680ff3a0 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.500+0000 7f816d6b2640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f816819a840 con 0x7f81680fe9f0 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.501+0000 7f81667fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81680ff3a0 0x7f81681a0c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.501+0000 7f81667fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81680ff3a0 0x7f81681a0c30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45524/0 (socket says 192.168.123.107:45524) 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.501+0000 7f81667fc640 1 -- 192.168.123.107:0/928088218 learned_addr learned my addr 192.168.123.107:0/928088218 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.501+0000 7f81667fc640 1 -- 192.168.123.107:0/928088218 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f81680fe9f0 msgr2=0x7f81681a06f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.501+0000 7f81667fc640 1 --2- 192.168.123.107:0/928088218 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f81680fe9f0 0x7f81681a06f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.501+0000 7f81667fc640 1 -- 192.168.123.107:0/928088218 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8150009590 con 0x7f81680ff3a0 2026-03-09T20:55:05.502 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.501+0000 7f81667fc640 1 --2- 192.168.123.107:0/928088218 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81680ff3a0 0x7f81681a0c30 secure :-1 s=READY pgs=183 cs=0 l=1 rev1=1 crypto rx=0x7f815c02f800 tx=0x7f815c031d60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.501+0000 7f8147fff640 1 -- 192.168.123.107:0/928088218 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f815c031f00 con 0x7f81680ff3a0 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.501+0000 7f8147fff640 1 -- 192.168.123.107:0/928088218 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f815c004050 con 0x7f81680ff3a0 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.501+0000 7f8147fff640 1 -- 192.168.123.107:0/928088218 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f815c038dd0 con 0x7f81680ff3a0 2026-03-09T20:55:05.502 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.502+0000 7f816d6b2640 1 -- 192.168.123.107:0/928088218 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f815c009660 con 0x7f81680ff3a0 2026-03-09T20:55:05.503 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.502+0000 7f816d6b2640 1 -- 192.168.123.107:0/928088218 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f816819acf0 con 0x7f81680ff3a0 2026-03-09T20:55:05.504 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.503+0000 7f816d6b2640 1 -- 192.168.123.107:0/928088218 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f812c005350 
con 0x7f81680ff3a0 2026-03-09T20:55:05.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.505+0000 7f8147fff640 1 -- 192.168.123.107:0/928088218 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f815c048050 con 0x7f81680ff3a0 2026-03-09T20:55:05.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.505+0000 7f8147fff640 1 --2- 192.168.123.107:0/928088218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f813c077720 0x7f813c079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:05.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.505+0000 7f8147fff640 1 -- 192.168.123.107:0/928088218 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f815c050080 con 0x7f81680ff3a0 2026-03-09T20:55:05.506 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.505+0000 7f8166ffd640 1 --2- 192.168.123.107:0/928088218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f813c077720 0x7f813c079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:05.507 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.506+0000 7f8166ffd640 1 --2- 192.168.123.107:0/928088218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f813c077720 0x7f813c079be0 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f8150005e00 tx=0x7f8150005d50 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:05.507 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.506+0000 7f8147fff640 1 -- 192.168.123.107:0/928088218 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 
0) 0x7f815c087000 con 0x7f81680ff3a0 2026-03-09T20:55:05.619 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.618+0000 7f816d6b2640 1 -- 192.168.123.107:0/928088218 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 21, "format": "json"} v 0) v1 -- 0x7f812c0058d0 con 0x7f81680ff3a0 2026-03-09T20:55:05.621 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.620+0000 7f8147fff640 1 -- 192.168.123.107:0/928088218 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 21, "format": "json"}]=0 dumped fsmap epoch 21 v37) v1 ==== 107+0+4996 (secure 0 0 0) 0x7f815c036370 con 0x7f81680ff3a0 2026-03-09T20:55:05.621 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:05.621 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":21,"btime":"2026-03-09T20:52:47:379174+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in 
omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":16},{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":20}],"filesystems":[{"mdsmap":{"epoch":21,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:47.379172+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":34316},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_34316":{"gid":34316,"name":"cephfs.vm07.potfau","rank":0,"incarnation":18,"state":"up:active","state_seq":13,"addr":"192.168.123.107:6829/3625324292","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3625324292},{"type":"v1","addr":"192.168.123.107:6829","nonce":3625324292}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34316,"qdb_cluster":[34316]},"id":1}]} 2026-03-09T20:55:05.621 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 21 2026-03-09T20:55:05.623 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.622+0000 7f816d6b2640 1 -- 192.168.123.107:0/928088218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f813c077720 msgr2=0x7f813c079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.623 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.622+0000 7f816d6b2640 1 --2- 192.168.123.107:0/928088218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f813c077720 0x7f813c079be0 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f8150005e00 tx=0x7f8150005d50 comp rx=0 tx=0).stop 2026-03-09T20:55:05.623 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.623+0000 7f816d6b2640 1 -- 192.168.123.107:0/928088218 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81680ff3a0 msgr2=0x7f81681a0c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.624 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.623+0000 7f816d6b2640 1 --2- 192.168.123.107:0/928088218 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81680ff3a0 0x7f81681a0c30 secure :-1 s=READY pgs=183 cs=0 l=1 rev1=1 crypto rx=0x7f815c02f800 tx=0x7f815c031d60 comp rx=0 tx=0).stop 2026-03-09T20:55:05.624 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.623+0000 7f816d6b2640 1 -- 192.168.123.107:0/928088218 shutdown_connections 2026-03-09T20:55:05.624 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.623+0000 7f816d6b2640 1 --2- 192.168.123.107:0/928088218 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f813c077720 
0x7f813c079be0 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:05.624 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.623+0000 7f816d6b2640 1 --2- 192.168.123.107:0/928088218 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f81680ff3a0 0x7f81681a0c30 unknown :-1 s=CLOSED pgs=183 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:05.624 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.623+0000 7f816d6b2640 1 --2- 192.168.123.107:0/928088218 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f81680fe9f0 0x7f81681a06f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:05.624 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.623+0000 7f816d6b2640 1 -- 192.168.123.107:0/928088218 >> 192.168.123.107:0/928088218 conn(0x7f81680fa5e0 msgr2=0x7f81680fbd60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:05.624 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.624+0000 7f816d6b2640 1 -- 192.168.123.107:0/928088218 shutdown_connections 2026-03-09T20:55:05.625 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:05.624+0000 7f816d6b2640 1 -- 192.168.123.107:0/928088218 wait complete. 
2026-03-09T20:55:05.692 DEBUG:tasks.fs:max_mds reduced in epoch 21 2026-03-09T20:55:05.692 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 21 2026-03-09T20:55:05.692 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 22 2026-03-09T20:55:05.832 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:05.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:05 vm07.local ceph-mon[112105]: pgmap v242: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:55:05.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:05 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3009061581' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 20, "format": "json"}]: dispatch 2026-03-09T20:55:05.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:05 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/928088218' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 21, "format": "json"}]: dispatch 2026-03-09T20:55:06.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:05 vm10.local ceph-mon[103526]: pgmap v242: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:55:06.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:05 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3009061581' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 20, "format": "json"}]: dispatch 2026-03-09T20:55:06.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:05 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/928088218' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 21, "format": "json"}]: dispatch 2026-03-09T20:55:06.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.078+0000 7f9de331d640 1 -- 192.168.123.107:0/255783876 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ddc1029d0 msgr2=0x7f9ddc102e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:06.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.078+0000 7f9de331d640 1 --2- 192.168.123.107:0/255783876 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ddc1029d0 0x7f9ddc102e30 secure :-1 s=READY pgs=184 cs=0 l=1 rev1=1 crypto rx=0x7f9dd800b0a0 tx=0x7f9dd802f4c0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.079+0000 7f9de331d640 1 -- 192.168.123.107:0/255783876 shutdown_connections 2026-03-09T20:55:06.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.079+0000 7f9de331d640 1 --2- 192.168.123.107:0/255783876 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ddc1029d0 0x7f9ddc102e30 unknown :-1 s=CLOSED pgs=184 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.079+0000 7f9de331d640 1 --2- 192.168.123.107:0/255783876 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ddc1089d0 0x7f9ddc108db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.079+0000 7f9de331d640 1 -- 192.168.123.107:0/255783876 >> 192.168.123.107:0/255783876 conn(0x7f9ddc0fe710 msgr2=0x7f9ddc100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:06.080 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.079+0000 7f9de331d640 1 -- 192.168.123.107:0/255783876 shutdown_connections 2026-03-09T20:55:06.080 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.080+0000 7f9de331d640 1 -- 192.168.123.107:0/255783876 wait complete. 2026-03-09T20:55:06.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.080+0000 7f9de331d640 1 Processor -- start 2026-03-09T20:55:06.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.080+0000 7f9de331d640 1 -- start start 2026-03-09T20:55:06.081 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.081+0000 7f9de331d640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ddc1089d0 0x7f9ddc1a0400 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:06.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.081+0000 7f9de1092640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ddc1089d0 0x7f9ddc1a0400 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:06.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.081+0000 7f9de1092640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ddc1089d0 0x7f9ddc1a0400 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45538/0 (socket says 192.168.123.107:45538) 2026-03-09T20:55:06.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.081+0000 7f9de331d640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ddc1a0940 0x7f9ddc19a5a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:06.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.081+0000 7f9de1092640 1 -- 192.168.123.107:0/604879805 learned_addr learned my addr 192.168.123.107:0/604879805 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:06.082 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.081+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9ddc1a0f60 con 0x7f9ddc1089d0 2026-03-09T20:55:06.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.081+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9ddc1a10a0 con 0x7f9ddc1a0940 2026-03-09T20:55:06.082 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.081+0000 7f9de0891640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ddc1a0940 0x7f9ddc19a5a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:06.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.082+0000 7f9de0891640 1 -- 192.168.123.107:0/604879805 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ddc1089d0 msgr2=0x7f9ddc1a0400 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:06.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.082+0000 7f9de0891640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ddc1089d0 0x7f9ddc1a0400 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.082+0000 7f9de0891640 1 -- 192.168.123.107:0/604879805 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9dd8009d00 con 0x7f9ddc1a0940 2026-03-09T20:55:06.083 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.082+0000 7f9de1092640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ddc1089d0 
0x7f9ddc1a0400 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:55:06.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.083+0000 7f9de0891640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ddc1a0940 0x7f9ddc19a5a0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f9dd800b070 tx=0x7f9dd8009390 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:06.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.083+0000 7f9dca7fc640 1 -- 192.168.123.107:0/604879805 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9dd8002c70 con 0x7f9ddc1a0940 2026-03-09T20:55:06.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.083+0000 7f9dca7fc640 1 -- 192.168.123.107:0/604879805 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9dd8002dd0 con 0x7f9ddc1a0940 2026-03-09T20:55:06.084 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.083+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9ddc19aba0 con 0x7f9ddc1a0940 2026-03-09T20:55:06.085 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.083+0000 7f9dca7fc640 1 -- 192.168.123.107:0/604879805 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9dd80409f0 con 0x7f9ddc1a0940 2026-03-09T20:55:06.085 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.083+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9ddc19b0f0 con 0x7f9ddc1a0940 2026-03-09T20:55:06.086 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.085+0000 7f9dca7fc640 1 -- 
192.168.123.107:0/604879805 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9dd8004700 con 0x7f9ddc1a0940 2026-03-09T20:55:06.086 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.085+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9da4005350 con 0x7f9ddc1a0940 2026-03-09T20:55:06.086 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.085+0000 7f9dca7fc640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9db8077890 0x7f9db8079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:06.086 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.085+0000 7f9de1092640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9db8077890 0x7f9db8079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:06.086 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.085+0000 7f9dca7fc640 1 -- 192.168.123.107:0/604879805 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f9dd80be5a0 con 0x7f9ddc1a0940 2026-03-09T20:55:06.087 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.086+0000 7f9de1092640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9db8077890 0x7f9db8079d50 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f9dcc0059c0 tx=0x7f9dcc00a430 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:06.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.088+0000 7f9dca7fc640 1 -- 
192.168.123.107:0/604879805 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9dd8086c80 con 0x7f9ddc1a0940 2026-03-09T20:55:06.205 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.203+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 22, "format": "json"} v 0) v1 -- 0x7f9da40051c0 con 0x7f9ddc1a0940 2026-03-09T20:55:06.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.206+0000 7f9dca7fc640 1 -- 192.168.123.107:0/604879805 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 22, "format": "json"}]=0 dumped fsmap epoch 22 v37) v1 ==== 107+0+4192 (secure 0 0 0) 0x7f9dd80863d0 con 0x7f9ddc1a0940 2026-03-09T20:55:06.207 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:06.207 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":22,"btime":"2026-03-09T20:52:49:559534+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable 
ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":16},{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":20}],"filesystems":[{"mdsmap":{"epoch":22,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:49.559525+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{},"failed":[0],"damaged":[],"stopped":[1],"info":{},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:06.207 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 22 2026-03-09T20:55:06.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.208+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9db8077890 msgr2=0x7f9db8079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:06.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.208+0000 7f9de331d640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9db8077890 0x7f9db8079d50 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto 
rx=0x7f9dcc0059c0 tx=0x7f9dcc00a430 comp rx=0 tx=0).stop 2026-03-09T20:55:06.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.208+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ddc1a0940 msgr2=0x7f9ddc19a5a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:06.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.208+0000 7f9de331d640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ddc1a0940 0x7f9ddc19a5a0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f9dd800b070 tx=0x7f9dd8009390 comp rx=0 tx=0).stop 2026-03-09T20:55:06.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.208+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 shutdown_connections 2026-03-09T20:55:06.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.208+0000 7f9de331d640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9db8077890 0x7f9db8079d50 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.208+0000 7f9de331d640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9ddc1a0940 0x7f9ddc19a5a0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.208+0000 7f9de331d640 1 --2- 192.168.123.107:0/604879805 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9ddc1089d0 0x7f9ddc1a0400 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.209+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 >> 192.168.123.107:0/604879805 
conn(0x7f9ddc0fe710 msgr2=0x7f9ddc0feaf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:06.210 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.209+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 shutdown_connections 2026-03-09T20:55:06.210 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.209+0000 7f9de331d640 1 -- 192.168.123.107:0/604879805 wait complete. 2026-03-09T20:55:06.274 DEBUG:tasks.fs:max_mds reduced in epoch 22 2026-03-09T20:55:06.274 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 22 2026-03-09T20:55:06.274 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 23 2026-03-09T20:55:06.424 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:06.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.671+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/1350080189 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce0073a50 msgr2=0x7f8ce0073e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:06.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.671+0000 7f8ce4c4b640 1 --2- 192.168.123.107:0/1350080189 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce0073a50 0x7f8ce0073e90 secure :-1 s=READY pgs=185 cs=0 l=1 rev1=1 crypto rx=0x7f8cc80099e0 tx=0x7f8cc802f2f0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.672+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/1350080189 shutdown_connections 2026-03-09T20:55:06.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.672+0000 7f8ce4c4b640 1 --2- 192.168.123.107:0/1350080189 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce0073a50 0x7f8ce0073e90 unknown :-1 s=CLOSED pgs=185 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.672+0000 7f8ce4c4b640 1 --2- 192.168.123.107:0/1350080189 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8ce0104650 0x7f8ce0073510 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.672+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/1350080189 >> 192.168.123.107:0/1350080189 conn(0x7f8ce00fc460 msgr2=0x7f8ce00fe880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:06.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.672+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/1350080189 shutdown_connections 2026-03-09T20:55:06.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.672+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/1350080189 wait complete. 2026-03-09T20:55:06.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.672+0000 7f8ce4c4b640 1 Processor -- start 2026-03-09T20:55:06.673 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.673+0000 7f8ce4c4b640 1 -- start start 2026-03-09T20:55:06.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.673+0000 7f8ce4c4b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8ce0073a50 0x7f8ce019ee70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:06.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.673+0000 7f8ce4c4b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce0104650 0x7f8ce019f3b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:06.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.673+0000 7f8ce4c4b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8ce019fa40 con 0x7f8ce0104650 
2026-03-09T20:55:06.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.673+0000 7f8ce4c4b640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8ce01a37b0 con 0x7f8ce0073a50 2026-03-09T20:55:06.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.673+0000 7f8cddd74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce0104650 0x7f8ce019f3b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:06.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.673+0000 7f8cde575640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8ce0073a50 0x7f8ce019ee70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:06.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.673+0000 7f8cddd74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce0104650 0x7f8ce019f3b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:45548/0 (socket says 192.168.123.107:45548) 2026-03-09T20:55:06.674 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.673+0000 7f8cddd74640 1 -- 192.168.123.107:0/3623375524 learned_addr learned my addr 192.168.123.107:0/3623375524 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:06.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.674+0000 7f8cde575640 1 -- 192.168.123.107:0/3623375524 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce0104650 msgr2=0x7f8ce019f3b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:06.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.674+0000 7f8cde575640 1 --2- 
192.168.123.107:0/3623375524 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce0104650 0x7f8ce019f3b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.674+0000 7f8cde575640 1 -- 192.168.123.107:0/3623375524 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8cc8009660 con 0x7f8ce0073a50 2026-03-09T20:55:06.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.674+0000 7f8cddd74640 1 --2- 192.168.123.107:0/3623375524 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce0104650 0x7f8ce019f3b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:55:06.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.674+0000 7f8cde575640 1 --2- 192.168.123.107:0/3623375524 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8ce0073a50 0x7f8ce019ee70 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f8cd400e970 tx=0x7f8cd400ee40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:06.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.674+0000 7f8cc77fe640 1 -- 192.168.123.107:0/3623375524 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8cd400ccb0 con 0x7f8ce0073a50 2026-03-09T20:55:06.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.674+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/3623375524 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8ce01a3a90 con 0x7f8ce0073a50 2026-03-09T20:55:06.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.674+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/3623375524 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- 
mon_subscribe({osdmap=0}) v3 -- 0x7f8ce01a3fe0 con 0x7f8ce0073a50 2026-03-09T20:55:06.676 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.675+0000 7f8cc77fe640 1 -- 192.168.123.107:0/3623375524 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f8cd4004590 con 0x7f8ce0073a50 2026-03-09T20:55:06.676 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.675+0000 7f8cc77fe640 1 -- 192.168.123.107:0/3623375524 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8cd4010640 con 0x7f8ce0073a50 2026-03-09T20:55:06.676 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.675+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/3623375524 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8ce0074c50 con 0x7f8ce0073a50 2026-03-09T20:55:06.677 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.676+0000 7f8cc77fe640 1 -- 192.168.123.107:0/3623375524 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f8cd40107a0 con 0x7f8ce0073a50 2026-03-09T20:55:06.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.677+0000 7f8cc77fe640 1 --2- 192.168.123.107:0/3623375524 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8cb8077890 0x7f8cb8079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:06.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.677+0000 7f8cc77fe640 1 -- 192.168.123.107:0/3623375524 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f8cd4014070 con 0x7f8ce0073a50 2026-03-09T20:55:06.678 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.677+0000 7f8cddd74640 1 --2- 192.168.123.107:0/3623375524 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8cb8077890 0x7f8cb8079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:06.679 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.678+0000 7f8cddd74640 1 --2- 192.168.123.107:0/3623375524 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8cb8077890 0x7f8cb8079d50 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7f8ce01a0420 tx=0x7f8cc803a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:06.679 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.678+0000 7f8cc77fe640 1 -- 192.168.123.107:0/3623375524 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8cd4062c70 con 0x7f8ce0073a50 2026-03-09T20:55:06.785 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:06 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/604879805' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 22, "format": "json"}]: dispatch 2026-03-09T20:55:06.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:06 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/604879805' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 22, "format": "json"}]: dispatch 2026-03-09T20:55:06.805 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.804+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/3623375524 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 23, "format": "json"} v 0) v1 -- 0x7f8ce0073e90 con 0x7f8ce0073a50 2026-03-09T20:55:06.806 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.805+0000 7f8cc77fe640 1 -- 192.168.123.107:0/3623375524 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 23, "format": "json"}]=0 dumped fsmap epoch 23 v37) v1 ==== 107+0+4203 (secure 0 0 0) 0x7f8cd40623c0 con 0x7f8ce0073a50 2026-03-09T20:55:06.806 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:06.806 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":23,"btime":"2026-03-09T20:52:49:568087+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate 
object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":16},{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20}],"filesystems":[{"mdsmap":{"epoch":23,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:49.568068+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline 
data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":44247},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_44247":{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":0,"incarnation":23,"state":"up:replay","state_seq":1,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:06.806 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 23 2026-03-09T20:55:06.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.807+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/3623375524 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8cb8077890 msgr2=0x7f8cb8079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:06.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.807+0000 7f8ce4c4b640 1 --2- 192.168.123.107:0/3623375524 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8cb8077890 0x7f8cb8079d50 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7f8ce01a0420 tx=0x7f8cc803a040 comp rx=0 tx=0).stop 2026-03-09T20:55:06.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.808+0000 
7f8ce4c4b640 1 -- 192.168.123.107:0/3623375524 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8ce0073a50 msgr2=0x7f8ce019ee70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:06.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.808+0000 7f8ce4c4b640 1 --2- 192.168.123.107:0/3623375524 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8ce0073a50 0x7f8ce019ee70 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f8cd400e970 tx=0x7f8cd400ee40 comp rx=0 tx=0).stop 2026-03-09T20:55:06.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.808+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/3623375524 shutdown_connections 2026-03-09T20:55:06.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.808+0000 7f8ce4c4b640 1 --2- 192.168.123.107:0/3623375524 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f8cb8077890 0x7f8cb8079d50 unknown :-1 s=CLOSED pgs=131 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.808+0000 7f8ce4c4b640 1 --2- 192.168.123.107:0/3623375524 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce0104650 0x7f8ce019f3b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.808+0000 7f8ce4c4b640 1 --2- 192.168.123.107:0/3623375524 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f8ce0073a50 0x7f8ce019ee70 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:06.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.808+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/3623375524 >> 192.168.123.107:0/3623375524 conn(0x7f8ce00fc460 msgr2=0x7f8ce00fd6f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:06.809 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.808+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/3623375524 shutdown_connections 2026-03-09T20:55:06.809 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:06.808+0000 7f8ce4c4b640 1 -- 192.168.123.107:0/3623375524 wait complete. 2026-03-09T20:55:06.873 DEBUG:tasks.fs:max_mds reduced in epoch 23 2026-03-09T20:55:06.873 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 23 2026-03-09T20:55:06.873 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 24 2026-03-09T20:55:07.027 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:07.301 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.299+0000 7f04675d5640 1 -- 192.168.123.107:0/1414165765 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0460076040 msgr2=0x7f0460111330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:07.301 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.299+0000 7f04675d5640 1 --2- 192.168.123.107:0/1414165765 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0460076040 0x7f0460111330 secure :-1 s=READY pgs=186 cs=0 l=1 rev1=1 crypto rx=0x7f04500099b0 tx=0x7f045002f220 comp rx=0 tx=0).stop 2026-03-09T20:55:07.302 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.300+0000 7f04675d5640 1 -- 192.168.123.107:0/1414165765 shutdown_connections 2026-03-09T20:55:07.302 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.300+0000 7f04675d5640 1 --2- 192.168.123.107:0/1414165765 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0460076040 0x7f0460111330 unknown :-1 s=CLOSED pgs=186 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:07.302 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.300+0000 7f04675d5640 1 --2- 192.168.123.107:0/1414165765 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0460075720 0x7f0460075b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:07.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.300+0000 7f04675d5640 1 -- 192.168.123.107:0/1414165765 >> 192.168.123.107:0/1414165765 conn(0x7f04600fe710 msgr2=0x7f0460100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:07.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.302+0000 7f04675d5640 1 -- 192.168.123.107:0/1414165765 shutdown_connections 2026-03-09T20:55:07.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.302+0000 7f04675d5640 1 -- 192.168.123.107:0/1414165765 wait complete. 2026-03-09T20:55:07.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.302+0000 7f04675d5640 1 Processor -- start 2026-03-09T20:55:07.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.303+0000 7f04675d5640 1 -- start start 2026-03-09T20:55:07.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.303+0000 7f04675d5640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0460075720 0x7f046019ee00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:07.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.303+0000 7f04675d5640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0460076040 0x7f046019f340 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:07.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.303+0000 7f04675d5640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f046019f9d0 con 0x7f0460076040 2026-03-09T20:55:07.304 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.303+0000 7f04675d5640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f04601a3740 con 0x7f0460075720 2026-03-09T20:55:07.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.303+0000 7f046534a640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0460075720 0x7f046019ee00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:07.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.303+0000 7f046534a640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0460075720 0x7f046019ee00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:51666/0 (socket says 192.168.123.107:51666) 2026-03-09T20:55:07.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.303+0000 7f046534a640 1 -- 192.168.123.107:0/4126467436 learned_addr learned my addr 192.168.123.107:0/4126467436 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:07.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.303+0000 7f0464b49640 1 --2- 192.168.123.107:0/4126467436 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0460076040 0x7f046019f340 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:07.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.304+0000 7f046534a640 1 -- 192.168.123.107:0/4126467436 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0460076040 msgr2=0x7f046019f340 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:07.304 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.304+0000 7f046534a640 1 --2- 
192.168.123.107:0/4126467436 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0460076040 0x7f046019f340 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:07.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.304+0000 7f046534a640 1 -- 192.168.123.107:0/4126467436 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0450009660 con 0x7f0460075720 2026-03-09T20:55:07.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.304+0000 7f0464b49640 1 --2- 192.168.123.107:0/4126467436 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0460076040 0x7f046019f340 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T20:55:07.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.304+0000 7f046534a640 1 --2- 192.168.123.107:0/4126467436 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0460075720 0x7f046019ee00 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f045400e9b0 tx=0x7f045400ee80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:07.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.304+0000 7f044e7fc640 1 -- 192.168.123.107:0/4126467436 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f045400cd90 con 0x7f0460075720 2026-03-09T20:55:07.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.304+0000 7f04675d5640 1 -- 192.168.123.107:0/4126467436 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f04601a3a20 con 0x7f0460075720 2026-03-09T20:55:07.305 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.304+0000 7f04675d5640 1 -- 192.168.123.107:0/4126467436 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- 
mon_subscribe({osdmap=0}) v3 -- 0x7f04601a3f70 con 0x7f0460075720 2026-03-09T20:55:07.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.305+0000 7f044e7fc640 1 -- 192.168.123.107:0/4126467436 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0454004590 con 0x7f0460075720 2026-03-09T20:55:07.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.305+0000 7f044e7fc640 1 -- 192.168.123.107:0/4126467436 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0454010640 con 0x7f0460075720 2026-03-09T20:55:07.306 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.305+0000 7f04675d5640 1 -- 192.168.123.107:0/4126467436 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0460076e60 con 0x7f0460075720 2026-03-09T20:55:07.308 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.307+0000 7f044e7fc640 1 -- 192.168.123.107:0/4126467436 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f04540040d0 con 0x7f0460075720 2026-03-09T20:55:07.308 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.307+0000 7f044e7fc640 1 --2- 192.168.123.107:0/4126467436 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f043c0778e0 0x7f043c079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:07.308 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.307+0000 7f044e7fc640 1 -- 192.168.123.107:0/4126467436 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f0454014070 con 0x7f0460075720 2026-03-09T20:55:07.308 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.307+0000 7f0464b49640 1 --2- 192.168.123.107:0/4126467436 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f043c0778e0 0x7f043c079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:07.309 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.308+0000 7f0464b49640 1 --2- 192.168.123.107:0/4126467436 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f043c0778e0 0x7f043c079da0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7f04601a03b0 tx=0x7f045003a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:07.310 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.309+0000 7f044e7fc640 1 -- 192.168.123.107:0/4126467436 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0454062b50 con 0x7f0460075720 2026-03-09T20:55:07.427 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.426+0000 7f04675d5640 1 -- 192.168.123.107:0/4126467436 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 24, "format": "json"} v 0) v1 -- 0x7f0460075b00 con 0x7f0460075720 2026-03-09T20:55:07.428 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.427+0000 7f044e7fc640 1 -- 192.168.123.107:0/4126467436 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 24, "format": "json"}]=0 dumped fsmap epoch 24 v37) v1 ==== 107+0+5055 (secure 0 0 0) 0x7f04540622a0 con 0x7f0460075720 2026-03-09T20:55:07.428 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:07.429 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":24,"btime":"2026-03-09T20:52:53:656661+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default 
file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34382,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":24},{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":16},{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20}],"filesystems":[{"mdsmap":{"epoch":24,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:53.350362+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":44247},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_44247":{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":0,"incarnation":23,"state":"up:reconnect","state_seq":13,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:07.429 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 24 2026-03-09T20:55:07.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.430+0000 7f04675d5640 1 -- 192.168.123.107:0/4126467436 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f043c0778e0 msgr2=0x7f043c079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:07.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.430+0000 7f04675d5640 1 --2- 192.168.123.107:0/4126467436 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f043c0778e0 0x7f043c079da0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7f04601a03b0 tx=0x7f045003a040 comp rx=0 tx=0).stop 2026-03-09T20:55:07.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.430+0000 7f04675d5640 1 -- 192.168.123.107:0/4126467436 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0460075720 msgr2=0x7f046019ee00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:07.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.430+0000 7f04675d5640 1 --2- 192.168.123.107:0/4126467436 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0460075720 0x7f046019ee00 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f045400e9b0 tx=0x7f045400ee80 comp rx=0 tx=0).stop 2026-03-09T20:55:07.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.431+0000 7f04675d5640 1 -- 192.168.123.107:0/4126467436 shutdown_connections 2026-03-09T20:55:07.431 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.431+0000 7f04675d5640 1 --2- 192.168.123.107:0/4126467436 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f043c0778e0 0x7f043c079da0 unknown :-1 s=CLOSED pgs=132 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:07.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.431+0000 7f04675d5640 1 --2- 192.168.123.107:0/4126467436 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0460076040 0x7f046019f340 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:07.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.431+0000 7f04675d5640 1 --2- 192.168.123.107:0/4126467436 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f0460075720 0x7f046019ee00 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:07.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.431+0000 7f04675d5640 1 -- 192.168.123.107:0/4126467436 >> 192.168.123.107:0/4126467436 conn(0x7f04600fe710 msgr2=0x7f04600ffdf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:07.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.431+0000 7f04675d5640 1 -- 
192.168.123.107:0/4126467436 shutdown_connections 2026-03-09T20:55:07.432 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.431+0000 7f04675d5640 1 -- 192.168.123.107:0/4126467436 wait complete. 2026-03-09T20:55:07.493 DEBUG:tasks.fs:max_mds reduced in epoch 24 2026-03-09T20:55:07.493 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 24 2026-03-09T20:55:07.493 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 25 2026-03-09T20:55:07.641 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:07.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.892+0000 7f9e04a67640 1 -- 192.168.123.107:0/3575789108 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e00106760 msgr2=0x7f9e00106b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:07.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.892+0000 7f9e04a67640 1 --2- 192.168.123.107:0/3575789108 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e00106760 0x7f9e00106b40 secure :-1 s=READY pgs=187 cs=0 l=1 rev1=1 crypto rx=0x7f9dec0099b0 tx=0x7f9dec02f260 comp rx=0 tx=0).stop 2026-03-09T20:55:07.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.893+0000 7f9e04a67640 1 -- 192.168.123.107:0/3575789108 shutdown_connections 2026-03-09T20:55:07.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.893+0000 7f9e04a67640 1 --2- 192.168.123.107:0/3575789108 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e00100760 0x7f9e00100bc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:07.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.893+0000 7f9e04a67640 1 --2- 192.168.123.107:0/3575789108 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e00106760 0x7f9e00106b40 unknown :-1 s=CLOSED pgs=187 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:07.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.893+0000 7f9e04a67640 1 -- 192.168.123.107:0/3575789108 >> 192.168.123.107:0/3575789108 conn(0x7f9e000fc480 msgr2=0x7f9e000fe8a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:07.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.893+0000 7f9e04a67640 1 -- 192.168.123.107:0/3575789108 shutdown_connections 2026-03-09T20:55:07.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.894+0000 7f9e04a67640 1 -- 192.168.123.107:0/3575789108 wait complete. 2026-03-09T20:55:07.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.894+0000 7f9e04a67640 1 Processor -- start 2026-03-09T20:55:07.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.894+0000 7f9e04a67640 1 -- start start 2026-03-09T20:55:07.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.894+0000 7f9e04a67640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e00100760 0x7f9e0006d940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:07.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.894+0000 7f9e04a67640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e00106760 0x7f9e0006de80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:07.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.894+0000 7f9e04a67640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9e0006e450 con 0x7f9e00106760 2026-03-09T20:55:07.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.894+0000 7f9e04a67640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f9e0006e5c0 con 0x7f9e00100760 2026-03-09T20:55:07.896 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9dfdd74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e00106760 0x7f9e0006de80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:07.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9dfdd74640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e00106760 0x7f9e0006de80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:52158/0 (socket says 192.168.123.107:52158) 2026-03-09T20:55:07.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9dfdd74640 1 -- 192.168.123.107:0/63155736 learned_addr learned my addr 192.168.123.107:0/63155736 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:07.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9dfe575640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e00100760 0x7f9e0006d940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:07.897 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:07 vm07.local ceph-mon[112105]: pgmap v243: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:55:07.897 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:07 vm07.local ceph-mon[112105]: from='client.? 
192.168.123.107:0/3623375524' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 23, "format": "json"}]: dispatch 2026-03-09T20:55:07.897 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:07 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/4126467436' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 24, "format": "json"}]: dispatch 2026-03-09T20:55:07.897 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9dfdd74640 1 -- 192.168.123.107:0/63155736 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e00100760 msgr2=0x7f9e0006d940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:07.898 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9dfdd74640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e00100760 0x7f9e0006d940 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:07.898 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9dfdd74640 1 -- 192.168.123.107:0/63155736 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9dec009660 con 0x7f9e00106760 2026-03-09T20:55:07.898 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9dfdd74640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e00106760 0x7f9e0006de80 secure :-1 s=READY pgs=188 cs=0 l=1 rev1=1 crypto rx=0x7f9df400e990 tx=0x7f9df400ee60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:07.899 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9deb7fe640 1 -- 192.168.123.107:0/63155736 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9df400cd30 con 0x7f9e00106760 2026-03-09T20:55:07.899 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9deb7fe640 1 -- 192.168.123.107:0/63155736 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9df400ce90 con 0x7f9e00106760 2026-03-09T20:55:07.899 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.895+0000 7f9deb7fe640 1 -- 192.168.123.107:0/63155736 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9df4010640 con 0x7f9e00106760 2026-03-09T20:55:07.899 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.898+0000 7f9e04a67640 1 -- 192.168.123.107:0/63155736 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9e001a6c20 con 0x7f9e00106760 2026-03-09T20:55:07.899 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.898+0000 7f9e04a67640 1 -- 192.168.123.107:0/63155736 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9e001a7140 con 0x7f9e00106760 2026-03-09T20:55:07.901 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.900+0000 7f9e04a67640 1 -- 192.168.123.107:0/63155736 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9dcc005350 con 0x7f9e00106760 2026-03-09T20:55:07.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.904+0000 7f9deb7fe640 1 -- 192.168.123.107:0/63155736 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9df4002900 con 0x7f9e00106760 2026-03-09T20:55:07.905 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.904+0000 7f9deb7fe640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9ddc0778e0 0x7f9ddc079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:07.905 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.905+0000 7f9deb7fe640 1 -- 192.168.123.107:0/63155736 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f9df401d070 con 0x7f9e00106760 2026-03-09T20:55:07.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.905+0000 7f9deb7fe640 1 -- 192.168.123.107:0/63155736 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9df409a1e0 con 0x7f9e00106760 2026-03-09T20:55:07.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.905+0000 7f9dfe575640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9ddc0778e0 0x7f9ddc079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:07.906 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:07.905+0000 7f9dfe575640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9ddc0778e0 0x7f9ddc079da0 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7f9dec002410 tx=0x7f9dec03a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:08.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.018+0000 7f9e04a67640 1 -- 192.168.123.107:0/63155736 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 25, "format": "json"} v 0) v1 -- 0x7f9dcc0058d0 con 0x7f9e00106760 2026-03-09T20:55:08.019 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.018+0000 7f9deb7fe640 1 -- 192.168.123.107:0/63155736 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 25, "format": "json"}]=0 dumped fsmap epoch 25 v37) v1 ==== 
107+0+5052 (secure 0 0 0) 0x7f9df4062640 con 0x7f9e00106760 2026-03-09T20:55:08.020 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:08.020 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":25,"btime":"2026-03-09T20:52:54:669363+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34382,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":24},{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base 
v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":16},{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20}],"filesystems":[{"mdsmap":{"epoch":25,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:53.674051+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate 
object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":44247},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_44247":{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":0,"incarnation":23,"state":"up:rejoin","state_seq":14,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:08.020 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 25 2026-03-09T20:55:08.022 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.021+0000 7f9e04a67640 1 -- 192.168.123.107:0/63155736 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9ddc0778e0 msgr2=0x7f9ddc079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:08.022 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.021+0000 7f9e04a67640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9ddc0778e0 0x7f9ddc079da0 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7f9dec002410 tx=0x7f9dec03a040 
comp rx=0 tx=0).stop 2026-03-09T20:55:08.022 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.021+0000 7f9e04a67640 1 -- 192.168.123.107:0/63155736 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e00106760 msgr2=0x7f9e0006de80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:08.022 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.021+0000 7f9e04a67640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e00106760 0x7f9e0006de80 secure :-1 s=READY pgs=188 cs=0 l=1 rev1=1 crypto rx=0x7f9df400e990 tx=0x7f9df400ee60 comp rx=0 tx=0).stop 2026-03-09T20:55:08.022 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.022+0000 7f9e04a67640 1 -- 192.168.123.107:0/63155736 shutdown_connections 2026-03-09T20:55:08.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.022+0000 7f9e04a67640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9ddc0778e0 0x7f9ddc079da0 unknown :-1 s=CLOSED pgs=133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.022+0000 7f9e04a67640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9e00106760 0x7f9e0006de80 unknown :-1 s=CLOSED pgs=188 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.022+0000 7f9e04a67640 1 --2- 192.168.123.107:0/63155736 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9e00100760 0x7f9e0006d940 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.022+0000 7f9e04a67640 1 -- 192.168.123.107:0/63155736 >> 192.168.123.107:0/63155736 conn(0x7f9e000fc480 msgr2=0x7f9e001095d0 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:08.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.022+0000 7f9e04a67640 1 -- 192.168.123.107:0/63155736 shutdown_connections 2026-03-09T20:55:08.023 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.022+0000 7f9e04a67640 1 -- 192.168.123.107:0/63155736 wait complete. 2026-03-09T20:55:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:07 vm10.local ceph-mon[103526]: pgmap v243: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:55:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:07 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3623375524' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 23, "format": "json"}]: dispatch 2026-03-09T20:55:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:07 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/4126467436' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 24, "format": "json"}]: dispatch 2026-03-09T20:55:08.088 DEBUG:tasks.fs:max_mds reduced in epoch 25 2026-03-09T20:55:08.088 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 25 2026-03-09T20:55:08.088 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 26 2026-03-09T20:55:08.240 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:08.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.477+0000 7f6b03ce4640 1 -- 192.168.123.107:0/2088561991 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6afc101980 msgr2=0x7f6afc101d60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:08.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.477+0000 7f6b03ce4640 1 
--2- 192.168.123.107:0/2088561991 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6afc101980 0x7f6afc101d60 secure :-1 s=READY pgs=189 cs=0 l=1 rev1=1 crypto rx=0x7f6ae40099e0 tx=0x7f6ae402f2f0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.478+0000 7f6b03ce4640 1 -- 192.168.123.107:0/2088561991 shutdown_connections 2026-03-09T20:55:08.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.478+0000 7f6b03ce4640 1 --2- 192.168.123.107:0/2088561991 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6afc1022a0 0x7f6afc10a790 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.478+0000 7f6b03ce4640 1 --2- 192.168.123.107:0/2088561991 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6afc101980 0x7f6afc101d60 unknown :-1 s=CLOSED pgs=189 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.478+0000 7f6b03ce4640 1 -- 192.168.123.107:0/2088561991 >> 192.168.123.107:0/2088561991 conn(0x7f6afc0fb340 msgr2=0x7f6afc0fd760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:08.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.478+0000 7f6b03ce4640 1 -- 192.168.123.107:0/2088561991 shutdown_connections 2026-03-09T20:55:08.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.478+0000 7f6b03ce4640 1 -- 192.168.123.107:0/2088561991 wait complete. 
2026-03-09T20:55:08.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b03ce4640 1 Processor -- start 2026-03-09T20:55:08.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b03ce4640 1 -- start start 2026-03-09T20:55:08.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b03ce4640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6afc1022a0 0x7f6afc0ff490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b03ce4640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6afc0ff9d0 0x7f6afc0ffe30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b03ce4640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6afc101450 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b03ce4640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6afc101590 con 0x7f6afc1022a0 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b01258640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6afc0ff9d0 0x7f6afc0ffe30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b01258640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6afc0ff9d0 0x7f6afc0ffe30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:52186/0 (socket says 192.168.123.107:52186) 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b01258640 1 -- 192.168.123.107:0/1192670712 learned_addr learned my addr 192.168.123.107:0/1192670712 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b01258640 1 -- 192.168.123.107:0/1192670712 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6afc1022a0 msgr2=0x7f6afc0ff490 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b01258640 1 --2- 192.168.123.107:0/1192670712 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6afc1022a0 0x7f6afc0ff490 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.479+0000 7f6b01258640 1 -- 192.168.123.107:0/1192670712 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6ae4009660 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.480+0000 7f6b01258640 1 --2- 192.168.123.107:0/1192670712 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6afc0ff9d0 0x7f6afc0ffe30 secure :-1 s=READY pgs=190 cs=0 l=1 rev1=1 crypto rx=0x7f6aec00e9b0 tx=0x7f6aec00ee80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:08.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.480+0000 7f6af2ffd640 1 -- 192.168.123.107:0/1192670712 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6aec00cd90 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.482 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.480+0000 7f6af2ffd640 1 -- 
192.168.123.107:0/1192670712 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f6aec004590 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.482 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.480+0000 7f6af2ffd640 1 -- 192.168.123.107:0/1192670712 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6aec010640 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.482 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.481+0000 7f6b03ce4640 1 -- 192.168.123.107:0/1192670712 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6afc071b20 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.482 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.481+0000 7f6b03ce4640 1 -- 192.168.123.107:0/1192670712 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6afc071ff0 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.482+0000 7f6af2ffd640 1 -- 192.168.123.107:0/1192670712 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f6aec0026e0 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.482+0000 7f6b03ce4640 1 -- 192.168.123.107:0/1192670712 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6ac4005350 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.484 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.482+0000 7f6af2ffd640 1 --2- 192.168.123.107:0/1192670712 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6ad80776d0 0x7f6ad8079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:08.484 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.483+0000 7f6af2ffd640 1 -- 
192.168.123.107:0/1192670712 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f6aec01d030 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.484 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.483+0000 7f6b01a59640 1 --2- 192.168.123.107:0/1192670712 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6ad80776d0 0x7f6ad8079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:08.484 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.483+0000 7f6b01a59640 1 --2- 192.168.123.107:0/1192670712 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6ad80776d0 0x7f6ad8079b90 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f6ae4008000 tx=0x7f6ae40023d0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:08.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.486+0000 7f6af2ffd640 1 -- 192.168.123.107:0/1192670712 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f6aec062330 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.605 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.604+0000 7f6b03ce4640 1 -- 192.168.123.107:0/1192670712 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 26, "format": "json"} v 0) v1 -- 0x7f6ac4005600 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.607 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.606+0000 7f6af2ffd640 1 -- 192.168.123.107:0/1192670712 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 26, "format": "json"}]=0 dumped fsmap epoch 26 v37) v1 ==== 107+0+5061 (secure 0 0 0) 0x7f6aec061a80 con 0x7f6afc0ff9d0 2026-03-09T20:55:08.607 
INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:08.607 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":26,"btime":"2026-03-09T20:52:55:724350+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34382,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":24},{"gid":44269,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/1063035280","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":1063035280},{"type":"v1","addr":"192.168.123.110:6825","nonce":1063035280}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on 
dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":16},{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20}],"filesystems":[{"mdsmap":{"epoch":26,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:55.724349+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in 
omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":44247},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_44247":{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":0,"incarnation":23,"state":"up:active","state_seq":15,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":44247,"qdb_cluster":[44247]},"id":1}]} 2026-03-09T20:55:08.607 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 26 2026-03-09T20:55:08.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 -- 192.168.123.107:0/1192670712 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6ad80776d0 msgr2=0x7f6ad8079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:08.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 --2- 192.168.123.107:0/1192670712 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6ad80776d0 0x7f6ad8079b90 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f6ae4008000 tx=0x7f6ae40023d0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.610 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 -- 192.168.123.107:0/1192670712 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6afc0ff9d0 msgr2=0x7f6afc0ffe30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:08.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 --2- 192.168.123.107:0/1192670712 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6afc0ff9d0 0x7f6afc0ffe30 secure :-1 s=READY pgs=190 cs=0 l=1 rev1=1 crypto rx=0x7f6aec00e9b0 tx=0x7f6aec00ee80 comp rx=0 tx=0).stop 2026-03-09T20:55:08.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 -- 192.168.123.107:0/1192670712 shutdown_connections 2026-03-09T20:55:08.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 --2- 192.168.123.107:0/1192670712 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6ad80776d0 0x7f6ad8079b90 secure :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f6ae4008000 tx=0x7f6ae40023d0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 --2- 192.168.123.107:0/1192670712 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6afc0ff9d0 0x7f6afc0ffe30 unknown :-1 s=CLOSED pgs=190 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 --2- 192.168.123.107:0/1192670712 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6afc1022a0 0x7f6afc0ff490 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:08.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 -- 192.168.123.107:0/1192670712 >> 192.168.123.107:0/1192670712 conn(0x7f6afc0fb340 msgr2=0x7f6afc109f40 unknown 
:-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:08.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 -- 192.168.123.107:0/1192670712 shutdown_connections 2026-03-09T20:55:08.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:08.609+0000 7f6b03ce4640 1 -- 192.168.123.107:0/1192670712 wait complete. 2026-03-09T20:55:08.680 DEBUG:tasks.fs:max_mds reduced in epoch 26 2026-03-09T20:55:08.680 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 26 2026-03-09T20:55:08.680 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 27 2026-03-09T20:55:08.830 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:08.865 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:08 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/63155736' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 25, "format": "json"}]: dispatch 2026-03-09T20:55:08.865 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:08 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/1192670712' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 26, "format": "json"}]: dispatch 2026-03-09T20:55:09.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:08 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/63155736' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 25, "format": "json"}]: dispatch 2026-03-09T20:55:09.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:08 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/1192670712' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 26, "format": "json"}]: dispatch 2026-03-09T20:55:09.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.066+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2107772671 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f34f8106780 msgr2=0x7f34f8106b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:09.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.066+0000 7f34fd2e1640 1 --2- 192.168.123.107:0/2107772671 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f34f8106780 0x7f34f8106b60 secure :-1 s=READY pgs=191 cs=0 l=1 rev1=1 crypto rx=0x7f34e00099b0 tx=0x7f34e002f220 comp rx=0 tx=0).stop 2026-03-09T20:55:09.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.066+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2107772671 shutdown_connections 2026-03-09T20:55:09.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.066+0000 7f34fd2e1640 1 --2- 192.168.123.107:0/2107772671 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f34f8100780 0x7f34f8100be0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.066+0000 7f34fd2e1640 1 --2- 192.168.123.107:0/2107772671 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f34f8106780 0x7f34f8106b60 unknown :-1 s=CLOSED pgs=191 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.066+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2107772671 >> 192.168.123.107:0/2107772671 conn(0x7f34f80fc460 msgr2=0x7f34f80fe880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:09.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.067+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2107772671 shutdown_connections 
2026-03-09T20:55:09.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.067+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2107772671 wait complete. 2026-03-09T20:55:09.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.067+0000 7f34fd2e1640 1 Processor -- start 2026-03-09T20:55:09.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34fd2e1640 1 -- start start 2026-03-09T20:55:09.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34fd2e1640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f34f8100780 0x7f34f819b3d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:09.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34fd2e1640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f34f819b910 0x7f34f8196480 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:09.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34fd2e1640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f34f819bec0 con 0x7f34f8100780 2026-03-09T20:55:09.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34fd2e1640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f34f81969c0 con 0x7f34f819b910 2026-03-09T20:55:09.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34f67fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f34f819b910 0x7f34f8196480 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:09.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34f67fc640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f34f819b910 
0x7f34f8196480 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:44398/0 (socket says 192.168.123.107:44398) 2026-03-09T20:55:09.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34f67fc640 1 -- 192.168.123.107:0/2211973636 learned_addr learned my addr 192.168.123.107:0/2211973636 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:09.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34f67fc640 1 -- 192.168.123.107:0/2211973636 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f34f8100780 msgr2=0x7f34f819b3d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:55:09.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34f67fc640 1 --2- 192.168.123.107:0/2211973636 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f34f8100780 0x7f34f819b3d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.068+0000 7f34f67fc640 1 -- 192.168.123.107:0/2211973636 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f34e0009660 con 0x7f34f819b910 2026-03-09T20:55:09.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.069+0000 7f34f67fc640 1 --2- 192.168.123.107:0/2211973636 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f34f819b910 0x7f34f8196480 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f34ec00e9b0 tx=0x7f34ec00ee80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:09.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.069+0000 7f34d7fff640 1 -- 192.168.123.107:0/2211973636 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 
0x7f34ec00cd90 con 0x7f34f819b910 2026-03-09T20:55:09.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.069+0000 7f34d7fff640 1 -- 192.168.123.107:0/2211973636 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f34ec004590 con 0x7f34f819b910 2026-03-09T20:55:09.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.069+0000 7f34d7fff640 1 -- 192.168.123.107:0/2211973636 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f34ec010640 con 0x7f34f819b910 2026-03-09T20:55:09.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.069+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2211973636 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f34f8196c20 con 0x7f34f819b910 2026-03-09T20:55:09.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.069+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2211973636 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f34f8197140 con 0x7f34f819b910 2026-03-09T20:55:09.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.070+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2211973636 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f34f8101ec0 con 0x7f34f819b910 2026-03-09T20:55:09.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.071+0000 7f34d7fff640 1 -- 192.168.123.107:0/2211973636 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f34ec0040d0 con 0x7f34f819b910 2026-03-09T20:55:09.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.071+0000 7f34d7fff640 1 --2- 192.168.123.107:0/2211973636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f34cc0779b0 0x7f34cc079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0).connect 2026-03-09T20:55:09.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.071+0000 7f34d7fff640 1 -- 192.168.123.107:0/2211973636 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f34ec01d030 con 0x7f34f819b910 2026-03-09T20:55:09.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.071+0000 7f34f6ffd640 1 --2- 192.168.123.107:0/2211973636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f34cc0779b0 0x7f34cc079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:09.072 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.071+0000 7f34f6ffd640 1 --2- 192.168.123.107:0/2211973636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f34cc0779b0 0x7f34cc079e70 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f34e002f730 tx=0x7f34e00023d0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:09.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.073+0000 7f34d7fff640 1 -- 192.168.123.107:0/2211973636 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f34ec062520 con 0x7f34f819b910 2026-03-09T20:55:09.190 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.189+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2211973636 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 27, "format": "json"} v 0) v1 -- 0x7f34f810da50 con 0x7f34f819b910 2026-03-09T20:55:09.191 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.190+0000 7f34d7fff640 1 -- 192.168.123.107:0/2211973636 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 27, "format": 
"json"}]=0 dumped fsmap epoch 27 v37) v1 ==== 107+0+4278 (secure 0 0 0) 0x7f34ec061c70 con 0x7f34f819b910 2026-03-09T20:55:09.191 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:09.191 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":27,"btime":"2026-03-09T20:52:57:942060+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34382,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":24},{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20}],"filesystems":[{"mdsmap":{"epoch":26,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:55.724349+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":44247},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_44247":{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":0,"incarnation":23,"state":"up:active","state_seq":15,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":44247,"qdb_cluster":[44247]},"id":1}]} 2026-03-09T20:55:09.191 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 27 2026-03-09T20:55:09.194 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.193+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2211973636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f34cc0779b0 msgr2=0x7f34cc079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:09.194 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.193+0000 7f34fd2e1640 1 --2- 192.168.123.107:0/2211973636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f34cc0779b0 0x7f34cc079e70 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f34e002f730 tx=0x7f34e00023d0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.194 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.193+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2211973636 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f34f819b910 msgr2=0x7f34f8196480 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:09.194 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.193+0000 7f34fd2e1640 1 --2- 192.168.123.107:0/2211973636 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f34f819b910 0x7f34f8196480 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f34ec00e9b0 tx=0x7f34ec00ee80 comp rx=0 tx=0).stop 2026-03-09T20:55:09.194 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.193+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2211973636 shutdown_connections 2026-03-09T20:55:09.194 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.193+0000 7f34fd2e1640 1 --2- 192.168.123.107:0/2211973636 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f34cc0779b0 0x7f34cc079e70 unknown :-1 s=CLOSED pgs=135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.194 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.193+0000 7f34fd2e1640 1 --2- 192.168.123.107:0/2211973636 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f34f819b910 0x7f34f8196480 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.194 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.193+0000 7f34fd2e1640 1 --2- 192.168.123.107:0/2211973636 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f34f8100780 0x7f34f819b3d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.194 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.193+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2211973636 >> 192.168.123.107:0/2211973636 conn(0x7f34f80fc460 msgr2=0x7f34f80733a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:09.194 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.193+0000 7f34fd2e1640 1 -- 
192.168.123.107:0/2211973636 shutdown_connections 2026-03-09T20:55:09.195 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.194+0000 7f34fd2e1640 1 -- 192.168.123.107:0/2211973636 wait complete. 2026-03-09T20:55:09.233 DEBUG:tasks.fs:max_mds reduced in epoch 27 2026-03-09T20:55:09.233 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 27 2026-03-09T20:55:09.233 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 28 2026-03-09T20:55:09.381 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:09.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.630+0000 7fa69369b640 1 -- 192.168.123.107:0/4068572134 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa68c108a00 msgr2=0x7fa68c108de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:09.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.630+0000 7fa69369b640 1 --2- 192.168.123.107:0/4068572134 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa68c108a00 0x7fa68c108de0 secure :-1 s=READY pgs=192 cs=0 l=1 rev1=1 crypto rx=0x7fa67c0099b0 tx=0x7fa67c02f220 comp rx=0 tx=0).stop 2026-03-09T20:55:09.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.631+0000 7fa69369b640 1 -- 192.168.123.107:0/4068572134 shutdown_connections 2026-03-09T20:55:09.632 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.631+0000 7fa69369b640 1 --2- 192.168.123.107:0/4068572134 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa68c102a00 0x7fa68c102e60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.631+0000 7fa69369b640 1 --2- 192.168.123.107:0/4068572134 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa68c108a00 0x7fa68c108de0 unknown :-1 s=CLOSED pgs=192 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.631+0000 7fa69369b640 1 -- 192.168.123.107:0/4068572134 >> 192.168.123.107:0/4068572134 conn(0x7fa68c0fe700 msgr2=0x7fa68c100b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:09.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.632+0000 7fa69369b640 1 -- 192.168.123.107:0/4068572134 shutdown_connections 2026-03-09T20:55:09.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.632+0000 7fa69369b640 1 -- 192.168.123.107:0/4068572134 wait complete. 2026-03-09T20:55:09.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.632+0000 7fa69369b640 1 Processor -- start 2026-03-09T20:55:09.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.632+0000 7fa69369b640 1 -- start start 2026-03-09T20:55:09.633 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa69369b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa68c102a00 0x7fa68c1a0650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:09.634 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa69369b640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa68c108a00 0x7fa68c1a0b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:09.634 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa69369b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa68c19a740 con 0x7fa68c102a00 2026-03-09T20:55:09.634 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa69369b640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7fa68c19a8b0 con 0x7fa68c108a00 2026-03-09T20:55:09.636 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa691410640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa68c102a00 0x7fa68c1a0650 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:09.639 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa691410640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa68c102a00 0x7fa68c1a0650 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:52210/0 (socket says 192.168.123.107:52210) 2026-03-09T20:55:09.639 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa691410640 1 -- 192.168.123.107:0/3974399680 learned_addr learned my addr 192.168.123.107:0/3974399680 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:09.639 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa691410640 1 -- 192.168.123.107:0/3974399680 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa68c108a00 msgr2=0x7fa68c1a0b90 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T20:55:09.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa691410640 1 --2- 192.168.123.107:0/3974399680 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa68c108a00 0x7fa68c1a0b90 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa691410640 1 -- 192.168.123.107:0/3974399680 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa67c009660 con 0x7fa68c102a00 2026-03-09T20:55:09.640 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa691410640 1 --2- 192.168.123.107:0/3974399680 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa68c102a00 0x7fa68c1a0650 secure :-1 s=READY pgs=193 cs=0 l=1 rev1=1 crypto rx=0x7fa67c002410 tx=0x7fa67c004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:09.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa6827fc640 1 -- 192.168.123.107:0/3974399680 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa67c004430 con 0x7fa68c102a00 2026-03-09T20:55:09.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa6827fc640 1 -- 192.168.123.107:0/3974399680 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fa67c02fc90 con 0x7fa68c102a00 2026-03-09T20:55:09.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa6827fc640 1 -- 192.168.123.107:0/3974399680 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa67c04b650 con 0x7fa68c102a00 2026-03-09T20:55:09.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.633+0000 7fa69369b640 1 -- 192.168.123.107:0/3974399680 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa68c19ab30 con 0x7fa68c102a00 2026-03-09T20:55:09.640 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.634+0000 7fa69369b640 1 -- 192.168.123.107:0/3974399680 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa68c19b020 con 0x7fa68c102a00 2026-03-09T20:55:09.641 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.635+0000 7fa69369b640 1 -- 192.168.123.107:0/3974399680 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 
0x7fa68c104140 con 0x7fa68c102a00 2026-03-09T20:55:09.641 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.638+0000 7fa6827fc640 1 -- 192.168.123.107:0/3974399680 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fa67c04b7b0 con 0x7fa68c102a00 2026-03-09T20:55:09.641 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.639+0000 7fa6827fc640 1 --2- 192.168.123.107:0/3974399680 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa6680779b0 0x7fa668079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:09.641 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.639+0000 7fa6827fc640 1 -- 192.168.123.107:0/3974399680 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fa67c0c8590 con 0x7fa68c102a00 2026-03-09T20:55:09.641 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.639+0000 7fa6827fc640 1 -- 192.168.123.107:0/3974399680 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa67c02fe00 con 0x7fa68c102a00 2026-03-09T20:55:09.641 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.640+0000 7fa690c0f640 1 --2- 192.168.123.107:0/3974399680 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa6680779b0 0x7fa668079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:09.641 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.640+0000 7fa690c0f640 1 --2- 192.168.123.107:0/3974399680 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa6680779b0 0x7fa668079e70 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7fa68c19bbb0 tx=0x7fa674005c80 comp rx=0 tx=0).ready 
entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:09.697 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:09 vm07.local ceph-mon[112105]: pgmap v244: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.0 KiB/s rd, 3 op/s 2026-03-09T20:55:09.697 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:09 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/2211973636' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 27, "format": "json"}]: dispatch 2026-03-09T20:55:09.697 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:55:09.762 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.760+0000 7fa69369b640 1 -- 192.168.123.107:0/3974399680 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 28, "format": "json"} v 0) v1 -- 0x7fa68c19bd10 con 0x7fa68c102a00 2026-03-09T20:55:09.762 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.761+0000 7fa6827fc640 1 -- 192.168.123.107:0/3974399680 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 28, "format": "json"}]=0 dumped fsmap epoch 28 v37) v1 ==== 107+0+5129 (secure 0 0 0) 0x7fa67c090c40 con 0x7fa68c102a00 2026-03-09T20:55:09.762 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:09.762 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":28,"btime":"2026-03-09T20:52:59:802993+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout 
v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34382,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":24},{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":20},{"gid":44295,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/4027718916","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":4027718916},{"type":"v1","addr":"192.168.123.110:6825","nonce":4027718916}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":28}],"filesystems":[{"mdsmap":{"epoch":26,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:52:55.724349+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":44247},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_44247":{"gid":44247,"name":"cephfs.vm10.hzyuyq","rank":0,"incarnation":23,"state":"up:active","state_seq":15,"addr":"192.168.123.110:6827/2699915815","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":2699915815},{"type":"v1","addr":"192.168.123.110:6827","nonce":2699915815}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":44247,"qdb_cluster":[44247]},"id":1}]} 2026-03-09T20:55:09.763 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 28 2026-03-09T20:55:09.765 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.764+0000 7fa69369b640 1 -- 192.168.123.107:0/3974399680 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa6680779b0 msgr2=0x7fa668079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:09.765 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.764+0000 7fa69369b640 1 --2- 192.168.123.107:0/3974399680 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa6680779b0 0x7fa668079e70 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7fa68c19bbb0 tx=0x7fa674005c80 comp rx=0 tx=0).stop 2026-03-09T20:55:09.765 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.764+0000 7fa69369b640 1 -- 192.168.123.107:0/3974399680 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa68c102a00 msgr2=0x7fa68c1a0650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:09.765 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.764+0000 7fa69369b640 1 --2- 192.168.123.107:0/3974399680 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa68c102a00 0x7fa68c1a0650 secure :-1 s=READY pgs=193 cs=0 l=1 rev1=1 crypto rx=0x7fa67c002410 tx=0x7fa67c004290 comp rx=0 tx=0).stop 2026-03-09T20:55:09.765 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.764+0000 7fa69369b640 1 -- 192.168.123.107:0/3974399680 shutdown_connections 2026-03-09T20:55:09.765 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.764+0000 7fa69369b640 1 --2- 192.168.123.107:0/3974399680 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fa6680779b0 0x7fa668079e70 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.764+0000 7fa69369b640 1 --2- 192.168.123.107:0/3974399680 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fa68c108a00 0x7fa68c1a0b90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.764+0000 7fa69369b640 1 --2- 192.168.123.107:0/3974399680 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa68c102a00 0x7fa68c1a0650 unknown :-1 s=CLOSED pgs=193 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:09.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.764+0000 7fa69369b640 1 -- 192.168.123.107:0/3974399680 >> 192.168.123.107:0/3974399680 conn(0x7fa68c0fe700 msgr2=0x7fa68c1075f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:09.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.765+0000 7fa69369b640 1 -- 
192.168.123.107:0/3974399680 shutdown_connections 2026-03-09T20:55:09.766 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:09.765+0000 7fa69369b640 1 -- 192.168.123.107:0/3974399680 wait complete. 2026-03-09T20:55:09.807 DEBUG:tasks.fs:max_mds reduced in epoch 28 2026-03-09T20:55:09.807 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 28 2026-03-09T20:55:09.807 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 29 2026-03-09T20:55:09.964 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:09 vm10.local ceph-mon[103526]: pgmap v244: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.0 KiB/s rd, 3 op/s 2026-03-09T20:55:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:09 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/2211973636' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 27, "format": "json"}]: dispatch 2026-03-09T20:55:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:55:10.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.286+0000 7f4e62880640 1 -- 192.168.123.107:0/3365007550 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4e5c072af0 msgr2=0x7f4e5c10ba70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:10.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.286+0000 7f4e62880640 1 --2- 192.168.123.107:0/3365007550 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4e5c072af0 0x7f4e5c10ba70 secure :-1 s=READY pgs=194 cs=0 l=1 rev1=1 crypto rx=0x7f4e5400b0a0 tx=0x7f4e5402f4c0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.286+0000 7f4e62880640 1 -- 192.168.123.107:0/3365007550 shutdown_connections 2026-03-09T20:55:10.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.286+0000 7f4e62880640 1 --2- 192.168.123.107:0/3365007550 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4e5c072af0 0x7f4e5c10ba70 unknown :-1 s=CLOSED pgs=194 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.286+0000 7f4e62880640 1 --2- 192.168.123.107:0/3365007550 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4e5c072140 0x7f4e5c072520 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.286+0000 7f4e62880640 1 -- 192.168.123.107:0/3365007550 >> 192.168.123.107:0/3365007550 conn(0x7f4e5c06c7e0 
msgr2=0x7f4e5c06cbf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:10.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.286+0000 7f4e62880640 1 -- 192.168.123.107:0/3365007550 shutdown_connections 2026-03-09T20:55:10.288 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.286+0000 7f4e62880640 1 -- 192.168.123.107:0/3365007550 wait complete. 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.287+0000 7f4e62880640 1 Processor -- start 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.287+0000 7f4e62880640 1 -- start start 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.287+0000 7f4e62880640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4e5c072140 0x7f4e5c07d470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.287+0000 7f4e62880640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4e5c084390 0x7f4e5c07d9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.287+0000 7f4e62880640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4e5c07e090 con 0x7f4e5c084390 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.287+0000 7f4e62880640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4e5c07e1d0 con 0x7f4e5c072140 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.287+0000 7f4e5bfff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4e5c072140 0x7f4e5c07d470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.287+0000 7f4e5bfff640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4e5c072140 0x7f4e5c07d470 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:44432/0 (socket says 192.168.123.107:44432) 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.287+0000 7f4e5bfff640 1 -- 192.168.123.107:0/3735877268 learned_addr learned my addr 192.168.123.107:0/3735877268 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.287+0000 7f4e5b7fe640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4e5c084390 0x7f4e5c07d9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.288+0000 7f4e5bfff640 1 -- 192.168.123.107:0/3735877268 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4e5c084390 msgr2=0x7f4e5c07d9b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.288+0000 7f4e5bfff640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4e5c084390 0x7f4e5c07d9b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.288+0000 7f4e5bfff640 1 -- 192.168.123.107:0/3735877268 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4e54009d00 con 0x7f4e5c072140 
2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.288+0000 7f4e5b7fe640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4e5c084390 0x7f4e5c07d9b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.288+0000 7f4e5bfff640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4e5c072140 0x7f4e5c07d470 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f4e4c00c910 tx=0x7f4e4c00cde0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:10.289 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.288+0000 7f4e597fa640 1 -- 192.168.123.107:0/3735877268 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4e4c007c20 con 0x7f4e5c072140 2026-03-09T20:55:10.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.288+0000 7f4e597fa640 1 -- 192.168.123.107:0/3735877268 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f4e4c007d80 con 0x7f4e5c072140 2026-03-09T20:55:10.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.288+0000 7f4e62880640 1 -- 192.168.123.107:0/3735877268 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4e5c081f90 con 0x7f4e5c072140 2026-03-09T20:55:10.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.288+0000 7f4e62880640 1 -- 192.168.123.107:0/3735877268 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4e5c0824b0 con 0x7f4e5c072140 2026-03-09T20:55:10.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.289+0000 7f4e597fa640 1 -- 192.168.123.107:0/3735877268 <== mon.1 
v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4e4c01d440 con 0x7f4e5c072140 2026-03-09T20:55:10.290 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.289+0000 7f4e62880640 1 -- 192.168.123.107:0/3735877268 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4e28005350 con 0x7f4e5c072140 2026-03-09T20:55:10.291 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.290+0000 7f4e597fa640 1 -- 192.168.123.107:0/3735877268 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f4e4c005000 con 0x7f4e5c072140 2026-03-09T20:55:10.292 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.291+0000 7f4e597fa640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4e40077980 0x7f4e40079e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:10.292 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.291+0000 7f4e5b7fe640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4e40077980 0x7f4e40079e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:10.292 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.291+0000 7f4e597fa640 1 -- 192.168.123.107:0/3735877268 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f4e4c0a2b40 con 0x7f4e5c072140 2026-03-09T20:55:10.292 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.291+0000 7f4e5b7fe640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4e40077980 0x7f4e40079e40 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto 
rx=0x7f4e5402f9d0 tx=0x7f4e5403a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:10.294 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.293+0000 7f4e597fa640 1 -- 192.168.123.107:0/3735877268 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f4e4c06b1f0 con 0x7f4e5c072140 2026-03-09T20:55:10.411 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.410+0000 7f4e62880640 1 -- 192.168.123.107:0/3735877268 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 29, "format": "json"} v 0) v1 -- 0x7f4e280058d0 con 0x7f4e5c072140 2026-03-09T20:55:10.412 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.411+0000 7f4e597fa640 1 -- 192.168.123.107:0/3735877268 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 29, "format": "json"}]=0 dumped fsmap epoch 29 v37) v1 ==== 107+0+4325 (secure 0 0 0) 0x7f4e4c06a940 con 0x7f4e5c072140 2026-03-09T20:55:10.413 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:10.413 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":29,"btime":"2026-03-09T20:53:02:003816+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34382,"name":"cephfs.vm07.potfau","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":24},{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":20},{"gid":44295,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/4027718916","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":4027718916},{"type":"v1","addr":"192.168.123.110:6825","nonce":4027718916}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":28}],"filesystems":[{"mdsmap":{"epoch":29,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:53:02.003814+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":86,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{},"failed":[0],"damaged":[],"stopped":[1],"info":{},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:10.413 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 29 2026-03-09T20:55:10.415 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.414+0000 7f4e62880640 1 -- 192.168.123.107:0/3735877268 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4e40077980 msgr2=0x7f4e40079e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:10.415 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.414+0000 7f4e62880640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4e40077980 0x7f4e40079e40 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7f4e5402f9d0 tx=0x7f4e5403a040 comp rx=0 tx=0).stop 2026-03-09T20:55:10.415 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.414+0000 7f4e62880640 1 -- 192.168.123.107:0/3735877268 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4e5c072140 msgr2=0x7f4e5c07d470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:10.415 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.414+0000 7f4e62880640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4e5c072140 0x7f4e5c07d470 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f4e4c00c910 tx=0x7f4e4c00cde0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.415+0000 7f4e62880640 1 -- 192.168.123.107:0/3735877268 shutdown_connections 2026-03-09T20:55:10.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.415+0000 7f4e62880640 1 --2- 192.168.123.107:0/3735877268 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4e40077980 0x7f4e40079e40 unknown :-1 s=CLOSED pgs=137 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.415+0000 7f4e62880640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4e5c084390 0x7f4e5c07d9b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.415+0000 7f4e62880640 1 --2- 192.168.123.107:0/3735877268 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4e5c072140 0x7f4e5c07d470 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.415+0000 7f4e62880640 1 -- 192.168.123.107:0/3735877268 >> 192.168.123.107:0/3735877268 conn(0x7f4e5c06c7e0 msgr2=0x7f4e5c10a9e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:10.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.415+0000 7f4e62880640 1 -- 192.168.123.107:0/3735877268 shutdown_connections 2026-03-09T20:55:10.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.415+0000 7f4e62880640 1 -- 192.168.123.107:0/3735877268 wait complete. 
2026-03-09T20:55:10.461 DEBUG:tasks.fs:max_mds reduced in epoch 29 2026-03-09T20:55:10.461 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 29 2026-03-09T20:55:10.461 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 30 2026-03-09T20:55:10.616 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:10.711 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:10 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3974399680' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 28, "format": "json"}]: dispatch 2026-03-09T20:55:10.712 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:10 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3735877268' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 29, "format": "json"}]: dispatch 2026-03-09T20:55:10.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.878+0000 7f4a5f435640 1 -- 192.168.123.107:0/366446187 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4a581089d0 msgr2=0x7f4a58108db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:10.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.878+0000 7f4a5f435640 1 --2- 192.168.123.107:0/366446187 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4a581089d0 0x7f4a58108db0 secure :-1 s=READY pgs=195 cs=0 l=1 rev1=1 crypto rx=0x7f4a4c0099b0 tx=0x7f4a4c02f220 comp rx=0 tx=0).stop 2026-03-09T20:55:10.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.879+0000 7f4a5f435640 1 -- 192.168.123.107:0/366446187 shutdown_connections 2026-03-09T20:55:10.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.879+0000 7f4a5f435640 1 --2- 192.168.123.107:0/366446187 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4a581029d0 0x7f4a58102e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.879+0000 7f4a5f435640 1 --2- 192.168.123.107:0/366446187 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4a581089d0 0x7f4a58108db0 unknown :-1 s=CLOSED pgs=195 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.879+0000 7f4a5f435640 1 -- 192.168.123.107:0/366446187 >> 192.168.123.107:0/366446187 conn(0x7f4a580fe710 msgr2=0x7f4a58100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:10.880 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.880+0000 7f4a5f435640 1 -- 192.168.123.107:0/366446187 shutdown_connections 2026-03-09T20:55:10.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.880+0000 7f4a5f435640 1 -- 192.168.123.107:0/366446187 wait complete. 
2026-03-09T20:55:10.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.880+0000 7f4a5f435640 1 Processor -- start 2026-03-09T20:55:10.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.880+0000 7f4a5f435640 1 -- start start 2026-03-09T20:55:10.881 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.880+0000 7f4a5f435640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4a581029d0 0x7f4a58116780 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.880+0000 7f4a5f435640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4a58116cc0 0x7f4a5810f810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.880+0000 7f4a5f435640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4a581172c0 con 0x7f4a581029d0 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.880+0000 7f4a5f435640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4a5810fd50 con 0x7f4a58116cc0 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.881+0000 7f4a5c9a9640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4a58116cc0 0x7f4a5810f810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.881+0000 7f4a5c9a9640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4a58116cc0 0x7f4a5810f810 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am 
v2:192.168.123.107:44452/0 (socket says 192.168.123.107:44452) 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.881+0000 7f4a5c9a9640 1 -- 192.168.123.107:0/1324571972 learned_addr learned my addr 192.168.123.107:0/1324571972 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.881+0000 7f4a5c9a9640 1 -- 192.168.123.107:0/1324571972 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4a581029d0 msgr2=0x7f4a58116780 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.881+0000 7f4a5c9a9640 1 --2- 192.168.123.107:0/1324571972 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4a581029d0 0x7f4a58116780 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.881+0000 7f4a5c9a9640 1 -- 192.168.123.107:0/1324571972 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4a4c009660 con 0x7f4a58116cc0 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.881+0000 7f4a5c9a9640 1 --2- 192.168.123.107:0/1324571972 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4a58116cc0 0x7f4a5810f810 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f4a4800d8d0 tx=0x7f4a4800dda0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:10.882 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.881+0000 7f4a467fc640 1 -- 192.168.123.107:0/1324571972 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4a48004490 con 0x7f4a58116cc0 2026-03-09T20:55:10.883 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.881+0000 7f4a467fc640 1 -- 
192.168.123.107:0/1324571972 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f4a4800bd00 con 0x7f4a58116cc0 2026-03-09T20:55:10.883 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.882+0000 7f4a467fc640 1 -- 192.168.123.107:0/1324571972 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4a4800b840 con 0x7f4a58116cc0 2026-03-09T20:55:10.883 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.882+0000 7f4a5f435640 1 -- 192.168.123.107:0/1324571972 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4a5810fef0 con 0x7f4a58116cc0 2026-03-09T20:55:10.883 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.882+0000 7f4a5f435640 1 -- 192.168.123.107:0/1324571972 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4a581103c0 con 0x7f4a58116cc0 2026-03-09T20:55:10.884 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.883+0000 7f4a5f435640 1 -- 192.168.123.107:0/1324571972 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4a58110690 con 0x7f4a58116cc0 2026-03-09T20:55:10.884 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.883+0000 7f4a467fc640 1 -- 192.168.123.107:0/1324571972 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f4a480027e0 con 0x7f4a58116cc0 2026-03-09T20:55:10.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.884+0000 7f4a467fc640 1 --2- 192.168.123.107:0/1324571972 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4a340778e0 0x7f4a34079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:10.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.884+0000 7f4a467fc640 1 -- 
192.168.123.107:0/1324571972 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f4a48099b50 con 0x7f4a58116cc0 2026-03-09T20:55:10.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.884+0000 7f4a5d1aa640 1 --2- 192.168.123.107:0/1324571972 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4a340778e0 0x7f4a34079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:10.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.884+0000 7f4a5d1aa640 1 --2- 192.168.123.107:0/1324571972 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4a340778e0 0x7f4a34079da0 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f4a4c02f730 tx=0x7f4a4c0023d0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:10.887 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:10.886+0000 7f4a467fc640 1 -- 192.168.123.107:0/1324571972 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f4a48062100 con 0x7f4a58116cc0 2026-03-09T20:55:11.002 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.001+0000 7f4a5f435640 1 -- 192.168.123.107:0/1324571972 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 30, "format": "json"} v 0) v1 -- 0x7f4a58104110 con 0x7f4a58116cc0 2026-03-09T20:55:11.004 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.003+0000 7f4a467fc640 1 -- 192.168.123.107:0/1324571972 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 30, "format": "json"}]=0 dumped fsmap epoch 30 v37) v1 ==== 107+0+4404 (secure 0 0 0) 0x7f4a48061850 con 0x7f4a58116cc0 2026-03-09T20:55:11.004 
INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:11.004 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":30,"btime":"2026-03-09T20:53:02:010715+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20},{"gid":44295,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/4027718916","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":4027718916},{"type":"v1","addr":"192.168.123.110:6825","nonce":4027718916}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on 
dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":28}],"filesystems":[{"mdsmap":{"epoch":30,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:53:02.010709+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":86,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34382},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_34382":{"gid":34382,"name":"cephfs.vm07.potfau","rank":0,"incarnation":30,"state":"up:replay","state_seq":1,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable 
ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:11.004 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 30 2026-03-09T20:55:11.006 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.005+0000 7f4a5f435640 1 -- 192.168.123.107:0/1324571972 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4a340778e0 msgr2=0x7f4a34079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:11.006 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.005+0000 7f4a5f435640 1 --2- 192.168.123.107:0/1324571972 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4a340778e0 0x7f4a34079da0 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f4a4c02f730 tx=0x7f4a4c0023d0 comp rx=0 tx=0).stop 2026-03-09T20:55:11.006 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.006+0000 7f4a5f435640 1 -- 192.168.123.107:0/1324571972 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4a58116cc0 msgr2=0x7f4a5810f810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:11.007 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.006+0000 7f4a5f435640 1 --2- 192.168.123.107:0/1324571972 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4a58116cc0 0x7f4a5810f810 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f4a4800d8d0 tx=0x7f4a4800dda0 comp rx=0 tx=0).stop 2026-03-09T20:55:11.007 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.006+0000 7f4a5f435640 1 -- 192.168.123.107:0/1324571972 shutdown_connections 2026-03-09T20:55:11.007 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.006+0000 7f4a5f435640 1 --2- 192.168.123.107:0/1324571972 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f4a340778e0 0x7f4a34079da0 unknown :-1 s=CLOSED pgs=138 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:11.007 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.006+0000 7f4a5f435640 1 --2- 192.168.123.107:0/1324571972 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f4a58116cc0 0x7f4a5810f810 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:11.007 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.006+0000 7f4a5f435640 1 --2- 192.168.123.107:0/1324571972 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4a581029d0 0x7f4a58116780 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:11.007 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.006+0000 7f4a5f435640 1 -- 192.168.123.107:0/1324571972 >> 192.168.123.107:0/1324571972 conn(0x7f4a580fe710 msgr2=0x7f4a58109830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:11.007 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.006+0000 7f4a5f435640 1 -- 192.168.123.107:0/1324571972 shutdown_connections 2026-03-09T20:55:11.007 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.007+0000 7f4a5f435640 1 -- 192.168.123.107:0/1324571972 wait complete. 2026-03-09T20:55:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:10 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/3974399680' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 28, "format": "json"}]: dispatch 2026-03-09T20:55:11.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:10 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3735877268' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 29, "format": "json"}]: dispatch 2026-03-09T20:55:11.072 DEBUG:tasks.fs:max_mds reduced in epoch 30 2026-03-09T20:55:11.073 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 30 2026-03-09T20:55:11.073 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 31 2026-03-09T20:55:11.225 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:11.477 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.476+0000 7f6ecd5df640 1 -- 192.168.123.107:0/762637929 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ec8076040 msgr2=0x7f6ec8111330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:11.478 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.476+0000 7f6ecd5df640 1 --2- 192.168.123.107:0/762637929 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ec8076040 0x7f6ec8111330 secure :-1 s=READY pgs=196 cs=0 l=1 rev1=1 crypto rx=0x7f6eb40099b0 tx=0x7f6eb402f220 comp rx=0 tx=0).stop 2026-03-09T20:55:11.478 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.477+0000 7f6ecd5df640 1 -- 192.168.123.107:0/762637929 shutdown_connections 2026-03-09T20:55:11.478 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.477+0000 7f6ecd5df640 1 --2- 192.168.123.107:0/762637929 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ec8076040 0x7f6ec8111330 unknown :-1 s=CLOSED pgs=196 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T20:55:11.478 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.477+0000 7f6ecd5df640 1 --2- 192.168.123.107:0/762637929 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6ec8075720 0x7f6ec8075b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:11.478 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.477+0000 7f6ecd5df640 1 -- 192.168.123.107:0/762637929 >> 192.168.123.107:0/762637929 conn(0x7f6ec80fe710 msgr2=0x7f6ec8100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:11.478 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.477+0000 7f6ecd5df640 1 -- 192.168.123.107:0/762637929 shutdown_connections 2026-03-09T20:55:11.478 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.477+0000 7f6ecd5df640 1 -- 192.168.123.107:0/762637929 wait complete. 2026-03-09T20:55:11.478 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.477+0000 7f6ecd5df640 1 Processor -- start 2026-03-09T20:55:11.478 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.477+0000 7f6ecd5df640 1 -- start start 2026-03-09T20:55:11.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.478+0000 7f6ecd5df640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6ec8075720 0x7f6ec819eda0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:11.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.478+0000 7f6ecd5df640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ec8076040 0x7f6ec819f2e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:11.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.478+0000 7f6ecd5df640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6ec819f970 con 0x7f6ec8076040 2026-03-09T20:55:11.479 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.478+0000 7f6ecd5df640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6ec81a36e0 con 0x7f6ec8075720 2026-03-09T20:55:11.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.478+0000 7f6ec6ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6ec8075720 0x7f6ec819eda0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:11.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.478+0000 7f6ec67fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ec8076040 0x7f6ec819f2e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:11.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.478+0000 7f6ec6ffd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6ec8075720 0x7f6ec819eda0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:44462/0 (socket says 192.168.123.107:44462) 2026-03-09T20:55:11.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.478+0000 7f6ec6ffd640 1 -- 192.168.123.107:0/2657143130 learned_addr learned my addr 192.168.123.107:0/2657143130 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:11.479 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.478+0000 7f6ec6ffd640 1 -- 192.168.123.107:0/2657143130 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ec8076040 msgr2=0x7f6ec819f2e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:11.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.479+0000 7f6ec6ffd640 1 --2- 192.168.123.107:0/2657143130 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ec8076040 0x7f6ec819f2e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:11.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.479+0000 7f6ec6ffd640 1 -- 192.168.123.107:0/2657143130 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6eb4009660 con 0x7f6ec8075720 2026-03-09T20:55:11.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.479+0000 7f6ec67fc640 1 --2- 192.168.123.107:0/2657143130 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ec8076040 0x7f6ec819f2e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:55:11.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.479+0000 7f6ec6ffd640 1 --2- 192.168.123.107:0/2657143130 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6ec8075720 0x7f6ec819eda0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f6ebc00e990 tx=0x7f6ebc00ee60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:11.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.479+0000 7f6ea3fff640 1 -- 192.168.123.107:0/2657143130 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6ebc00cd30 con 0x7f6ec8075720 2026-03-09T20:55:11.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.479+0000 7f6ecd5df640 1 -- 192.168.123.107:0/2657143130 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6ec81a39c0 con 0x7f6ec8075720 2026-03-09T20:55:11.480 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.479+0000 7f6ecd5df640 1 -- 192.168.123.107:0/2657143130 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6ec81a3f10 
con 0x7f6ec8075720 2026-03-09T20:55:11.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.480+0000 7f6ea3fff640 1 -- 192.168.123.107:0/2657143130 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f6ebc00ce90 con 0x7f6ec8075720 2026-03-09T20:55:11.481 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.480+0000 7f6ea3fff640 1 -- 192.168.123.107:0/2657143130 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6ebc010640 con 0x7f6ec8075720 2026-03-09T20:55:11.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.481+0000 7f6ea3fff640 1 -- 192.168.123.107:0/2657143130 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f6ebc0040d0 con 0x7f6ec8075720 2026-03-09T20:55:11.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.481+0000 7f6ecd5df640 1 -- 192.168.123.107:0/2657143130 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6ec8076e60 con 0x7f6ec8075720 2026-03-09T20:55:11.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.481+0000 7f6ea3fff640 1 --2- 192.168.123.107:0/2657143130 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6e9c077890 0x7f6e9c079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:11.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.482+0000 7f6ea3fff640 1 -- 192.168.123.107:0/2657143130 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f6ebc014070 con 0x7f6ec8075720 2026-03-09T20:55:11.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.482+0000 7f6ec67fc640 1 --2- 192.168.123.107:0/2657143130 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6e9c077890 0x7f6e9c079d50 unknown 
:-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:11.483 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.482+0000 7f6ec67fc640 1 --2- 192.168.123.107:0/2657143130 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6e9c077890 0x7f6e9c079d50 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7f6ec81a0350 tx=0x7f6eb403a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:11.486 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.485+0000 7f6ea3fff640 1 -- 192.168.123.107:0/2657143130 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f6ebc0628f0 con 0x7f6ec8075720 2026-03-09T20:55:11.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.600+0000 7f6ecd5df640 1 -- 192.168.123.107:0/2657143130 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 31, "format": "json"} v 0) v1 -- 0x7f6ec8075b00 con 0x7f6ec8075720 2026-03-09T20:55:11.605 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.604+0000 7f6ea3fff640 1 -- 192.168.123.107:0/2657143130 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 31, "format": "json"}]=0 dumped fsmap epoch 31 v37) v1 ==== 107+0+4407 (secure 0 0 0) 0x7f6ebc062040 con 0x7f6ec8075720 2026-03-09T20:55:11.605 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:11.605 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":31,"btime":"2026-03-09T20:53:06:060943+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned 
encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20},{"gid":44295,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/4027718916","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":4027718916},{"type":"v1","addr":"192.168.123.110:6825","nonce":4027718916}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":28}],"filesystems":[{"mdsmap":{"epoch":31,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:53:05.629171+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":86,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34382},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_34382":{"gid":34382,"name":"cephfs.vm07.potfau","rank":0,"incarnation":30,"state":"up:reconnect","state_seq":5,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log 
segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:11.605 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 31 2026-03-09T20:55:11.608 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.607+0000 7f6ecd5df640 1 -- 192.168.123.107:0/2657143130 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6e9c077890 msgr2=0x7f6e9c079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:11.608 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.607+0000 7f6ecd5df640 1 --2- 192.168.123.107:0/2657143130 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6e9c077890 0x7f6e9c079d50 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7f6ec81a0350 tx=0x7f6eb403a040 comp rx=0 tx=0).stop 2026-03-09T20:55:11.608 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.607+0000 7f6ecd5df640 1 -- 192.168.123.107:0/2657143130 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6ec8075720 msgr2=0x7f6ec819eda0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:11.608 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.607+0000 7f6ecd5df640 1 --2- 192.168.123.107:0/2657143130 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6ec8075720 0x7f6ec819eda0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f6ebc00e990 tx=0x7f6ebc00ee60 comp rx=0 tx=0).stop 2026-03-09T20:55:11.608 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.607+0000 7f6ecd5df640 1 -- 192.168.123.107:0/2657143130 shutdown_connections 2026-03-09T20:55:11.608 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.608+0000 7f6ecd5df640 1 --2- 192.168.123.107:0/2657143130 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f6e9c077890 0x7f6e9c079d50 unknown :-1 s=CLOSED pgs=139 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:11.608 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.608+0000 7f6ecd5df640 1 --2- 192.168.123.107:0/2657143130 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6ec8076040 0x7f6ec819f2e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:11.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.608+0000 7f6ecd5df640 1 --2- 192.168.123.107:0/2657143130 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f6ec8075720 0x7f6ec819eda0 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:11.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.608+0000 7f6ecd5df640 1 -- 192.168.123.107:0/2657143130 >> 192.168.123.107:0/2657143130 conn(0x7f6ec80fe710 msgr2=0x7f6ec80ffd90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:11.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.608+0000 7f6ecd5df640 1 -- 192.168.123.107:0/2657143130 shutdown_connections 2026-03-09T20:55:11.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:11.608+0000 7f6ecd5df640 1 -- 192.168.123.107:0/2657143130 wait complete. 
2026-03-09T20:55:11.677 DEBUG:tasks.fs:max_mds reduced in epoch 31 2026-03-09T20:55:11.677 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 31 2026-03-09T20:55:11.677 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 32 2026-03-09T20:55:11.822 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:11.860 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:11 vm07.local ceph-mon[112105]: pgmap v245: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.0 KiB/s rd, 4 op/s 2026-03-09T20:55:11.860 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:11 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/1324571972' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 30, "format": "json"}]: dispatch 2026-03-09T20:55:11.860 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:11 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/2657143130' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 31, "format": "json"}]: dispatch 2026-03-09T20:55:11.938 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:11 vm10.local ceph-mon[103526]: pgmap v245: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.0 KiB/s rd, 4 op/s 2026-03-09T20:55:11.938 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:11 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/1324571972' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 30, "format": "json"}]: dispatch 2026-03-09T20:55:11.938 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:11 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/2657143130' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 31, "format": "json"}]: dispatch 2026-03-09T20:55:12.065 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.064+0000 7fc4955d8640 1 -- 192.168.123.107:0/4266285667 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc4901029d0 msgr2=0x7fc490102e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:12.065 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.064+0000 7fc4955d8640 1 --2- 192.168.123.107:0/4266285667 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc4901029d0 0x7fc490102e30 secure :-1 s=READY pgs=197 cs=0 l=1 rev1=1 crypto rx=0x7fc4840099b0 tx=0x7fc48402f220 comp rx=0 tx=0).stop 2026-03-09T20:55:12.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.065+0000 7fc4955d8640 1 -- 192.168.123.107:0/4266285667 shutdown_connections 2026-03-09T20:55:12.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.065+0000 7fc4955d8640 1 --2- 192.168.123.107:0/4266285667 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc4901029d0 0x7fc490102e30 unknown :-1 s=CLOSED pgs=197 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.065+0000 7fc4955d8640 1 --2- 192.168.123.107:0/4266285667 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc4901089d0 0x7fc490108db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.065+0000 7fc4955d8640 1 -- 192.168.123.107:0/4266285667 >> 192.168.123.107:0/4266285667 conn(0x7fc4900fe710 msgr2=0x7fc490100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:12.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.065+0000 7fc4955d8640 1 -- 192.168.123.107:0/4266285667 shutdown_connections 
2026-03-09T20:55:12.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.065+0000 7fc4955d8640 1 -- 192.168.123.107:0/4266285667 wait complete. 2026-03-09T20:55:12.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.066+0000 7fc4955d8640 1 Processor -- start 2026-03-09T20:55:12.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.066+0000 7fc4955d8640 1 -- start start 2026-03-09T20:55:12.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.066+0000 7fc4955d8640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc4901029d0 0x7fc490075700 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:12.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.066+0000 7fc4955d8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc4901089d0 0x7fc490075c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:12.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.066+0000 7fc4955d8640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc490079780 con 0x7fc4901089d0 2026-03-09T20:55:12.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.066+0000 7fc4955d8640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc4900798f0 con 0x7fc4901029d0 2026-03-09T20:55:12.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.066+0000 7fc48effd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc4901029d0 0x7fc490075700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:12.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.066+0000 7fc48effd640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc4901029d0 
0x7fc490075700 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.110:3300/0 says I am v2:192.168.123.107:44480/0 (socket says 192.168.123.107:44480) 2026-03-09T20:55:12.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.066+0000 7fc48effd640 1 -- 192.168.123.107:0/1407691430 learned_addr learned my addr 192.168.123.107:0/1407691430 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:12.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.066+0000 7fc48e7fc640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc4901089d0 0x7fc490075c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:12.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.067+0000 7fc48effd640 1 -- 192.168.123.107:0/1407691430 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc4901089d0 msgr2=0x7fc490075c40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:12.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.067+0000 7fc48effd640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc4901089d0 0x7fc490075c40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.067+0000 7fc48effd640 1 -- 192.168.123.107:0/1407691430 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc484009660 con 0x7fc4901029d0 2026-03-09T20:55:12.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.067+0000 7fc48e7fc640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc4901089d0 0x7fc490075c40 unknown :-1 s=CLOSED 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T20:55:12.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.067+0000 7fc48effd640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc4901029d0 0x7fc490075700 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fc47800d8d0 tx=0x7fc47800dda0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:12.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.067+0000 7fc46ffff640 1 -- 192.168.123.107:0/1407691430 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc478004490 con 0x7fc4901029d0 2026-03-09T20:55:12.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.067+0000 7fc4955d8640 1 -- 192.168.123.107:0/1407691430 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc490076240 con 0x7fc4901029d0 2026-03-09T20:55:12.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.067+0000 7fc4955d8640 1 -- 192.168.123.107:0/1407691430 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc4901a9260 con 0x7fc4901029d0 2026-03-09T20:55:12.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.067+0000 7fc46ffff640 1 -- 192.168.123.107:0/1407691430 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fc47800bd00 con 0x7fc4901029d0 2026-03-09T20:55:12.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.067+0000 7fc46ffff640 1 -- 192.168.123.107:0/1407691430 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc478010460 con 0x7fc4901029d0 2026-03-09T20:55:12.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.069+0000 7fc46ffff640 1 -- 192.168.123.107:0/1407691430 <== mon.1 
v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc4780027e0 con 0x7fc4901029d0 2026-03-09T20:55:12.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.069+0000 7fc4955d8640 1 -- 192.168.123.107:0/1407691430 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc45c005350 con 0x7fc4901029d0 2026-03-09T20:55:12.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.069+0000 7fc46ffff640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc468077890 0x7fc468079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:12.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.069+0000 7fc46ffff640 1 -- 192.168.123.107:0/1407691430 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fc478099d20 con 0x7fc4901029d0 2026-03-09T20:55:12.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.070+0000 7fc48e7fc640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc468077890 0x7fc468079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:12.071 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.070+0000 7fc48e7fc640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc468077890 0x7fc468079d50 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7fc490076e80 tx=0x7fc484005c50 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:12.073 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.072+0000 7fc46ffff640 1 -- 192.168.123.107:0/1407691430 <== 
mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc478062350 con 0x7fc4901029d0 2026-03-09T20:55:12.185 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.184+0000 7fc4955d8640 1 -- 192.168.123.107:0/1407691430 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 32, "format": "json"} v 0) v1 -- 0x7fc45c0051c0 con 0x7fc4901029d0 2026-03-09T20:55:12.186 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.185+0000 7fc46ffff640 1 -- 192.168.123.107:0/1407691430 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 32, "format": "json"}]=0 dumped fsmap epoch 32 v37) v1 ==== 107+0+5255 (secure 0 0 0) 0x7fc478061aa0 con 0x7fc4901029d0 2026-03-09T20:55:12.186 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:12.186 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":32,"btime":"2026-03-09T20:53:07:557854+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file 
layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20},{"gid":44295,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/4027718916","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":4027718916},{"type":"v1","addr":"192.168.123.110:6825","nonce":4027718916}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":28},{"gid":44299,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/1370091423","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":1370091423},{"type":"v1","addr":"192.168.123.110:6827","nonce":1370091423}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log 
segments","feature_12":"quiesce subvolumes"}},"epoch":32}],"filesystems":[{"mdsmap":{"epoch":32,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:53:06.565349+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":86,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34382},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_34382":{"gid":34382,"name":"cephfs.vm07.potfau","rank":0,"incarnation":30,"state":"up:rejoin","state_seq":6,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T20:55:12.187 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 32 2026-03-09T20:55:12.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.187+0000 7fc4955d8640 1 -- 192.168.123.107:0/1407691430 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc468077890 msgr2=0x7fc468079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:12.188 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.187+0000 7fc4955d8640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc468077890 0x7fc468079d50 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7fc490076e80 tx=0x7fc484005c50 comp rx=0 tx=0).stop 2026-03-09T20:55:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.188+0000 7fc4955d8640 1 -- 192.168.123.107:0/1407691430 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc4901029d0 msgr2=0x7fc490075700 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.188+0000 7fc4955d8640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc4901029d0 0x7fc490075700 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fc47800d8d0 tx=0x7fc47800dda0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.188+0000 7fc4955d8640 1 -- 192.168.123.107:0/1407691430 shutdown_connections 2026-03-09T20:55:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.188+0000 7fc4955d8640 1 --2- 192.168.123.107:0/1407691430 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fc468077890 0x7fc468079d50 unknown :-1 s=CLOSED pgs=140 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.188+0000 7fc4955d8640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc4901089d0 0x7fc490075c40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.188+0000 7fc4955d8640 1 --2- 192.168.123.107:0/1407691430 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fc4901029d0 0x7fc490075700 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.188+0000 7fc4955d8640 1 -- 192.168.123.107:0/1407691430 >> 192.168.123.107:0/1407691430 conn(0x7fc4900fe710 msgr2=0x7fc49010c990 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.188+0000 7fc4955d8640 1 -- 192.168.123.107:0/1407691430 shutdown_connections 2026-03-09T20:55:12.189 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.188+0000 7fc4955d8640 1 -- 192.168.123.107:0/1407691430 wait complete. 
2026-03-09T20:55:12.233 DEBUG:tasks.fs:max_mds reduced in epoch 32 2026-03-09T20:55:12.233 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 32 2026-03-09T20:55:12.233 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 33 2026-03-09T20:55:12.381 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:12.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.607+0000 7fcd7618b640 1 -- 192.168.123.107:0/4278906651 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcd70076040 msgr2=0x7fcd70111330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:12.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.607+0000 7fcd7618b640 1 --2- 192.168.123.107:0/4278906651 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcd70076040 0x7fcd70111330 secure :-1 s=READY pgs=198 cs=0 l=1 rev1=1 crypto rx=0x7fcd5c0099b0 tx=0x7fcd5c02f220 comp rx=0 tx=0).stop 2026-03-09T20:55:12.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.608+0000 7fcd7618b640 1 -- 192.168.123.107:0/4278906651 shutdown_connections 2026-03-09T20:55:12.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.608+0000 7fcd7618b640 1 --2- 192.168.123.107:0/4278906651 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcd70076040 0x7fcd70111330 unknown :-1 s=CLOSED pgs=198 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.608+0000 7fcd7618b640 1 --2- 192.168.123.107:0/4278906651 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcd70075720 0x7fcd70075b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.609 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.608+0000 7fcd7618b640 1 -- 192.168.123.107:0/4278906651 >> 192.168.123.107:0/4278906651 conn(0x7fcd700fe710 msgr2=0x7fcd70100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:12.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.608+0000 7fcd7618b640 1 -- 192.168.123.107:0/4278906651 shutdown_connections 2026-03-09T20:55:12.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.608+0000 7fcd7618b640 1 -- 192.168.123.107:0/4278906651 wait complete. 2026-03-09T20:55:12.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.608+0000 7fcd7618b640 1 Processor -- start 2026-03-09T20:55:12.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.608+0000 7fcd7618b640 1 -- start start 2026-03-09T20:55:12.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.609+0000 7fcd7618b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcd70075720 0x7fcd7019eda0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:12.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.609+0000 7fcd6f7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcd70075720 0x7fcd7019eda0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:12.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.609+0000 7fcd6f7fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcd70075720 0x7fcd7019eda0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:52300/0 (socket says 192.168.123.107:52300) 2026-03-09T20:55:12.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.609+0000 7fcd7618b640 1 --2- >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcd70076040 0x7fcd7019f2e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:12.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.609+0000 7fcd7618b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcd7019f970 con 0x7fcd70075720 2026-03-09T20:55:12.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.609+0000 7fcd7618b640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcd701a36e0 con 0x7fcd70076040 2026-03-09T20:55:12.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.609+0000 7fcd6f7fe640 1 -- 192.168.123.107:0/1297212690 learned_addr learned my addr 192.168.123.107:0/1297212690 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:12.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.609+0000 7fcd6effd640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcd70076040 0x7fcd7019f2e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:12.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.609+0000 7fcd6effd640 1 -- 192.168.123.107:0/1297212690 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcd70075720 msgr2=0x7fcd7019eda0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:12.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.610+0000 7fcd6effd640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcd70075720 0x7fcd7019eda0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.610+0000 7fcd6effd640 1 -- 
192.168.123.107:0/1297212690 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcd5c009660 con 0x7fcd70076040 2026-03-09T20:55:12.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.610+0000 7fcd6f7fe640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcd70075720 0x7fcd7019eda0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:55:12.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.610+0000 7fcd6effd640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcd70076040 0x7fcd7019f2e0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7fcd5c0099b0 tx=0x7fcd5c004290 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:12.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.610+0000 7fcd6cff9640 1 -- 192.168.123.107:0/1297212690 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcd5c03d070 con 0x7fcd70076040 2026-03-09T20:55:12.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.610+0000 7fcd7618b640 1 -- 192.168.123.107:0/1297212690 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcd701a3960 con 0x7fcd70076040 2026-03-09T20:55:12.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.610+0000 7fcd7618b640 1 -- 192.168.123.107:0/1297212690 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcd701a3e50 con 0x7fcd70076040 2026-03-09T20:55:12.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.610+0000 7fcd6cff9640 1 -- 192.168.123.107:0/1297212690 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fcd5c038730 con 0x7fcd70076040 
2026-03-09T20:55:12.611 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.610+0000 7fcd6cff9640 1 -- 192.168.123.107:0/1297212690 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcd5c041620 con 0x7fcd70076040 2026-03-09T20:55:12.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.611+0000 7fcd7618b640 1 -- 192.168.123.107:0/1297212690 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcd34005350 con 0x7fcd70076040 2026-03-09T20:55:12.612 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.611+0000 7fcd6cff9640 1 -- 192.168.123.107:0/1297212690 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fcd5c0417e0 con 0x7fcd70076040 2026-03-09T20:55:12.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.612+0000 7fcd6cff9640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fcd40077890 0x7fcd40079d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:12.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.612+0000 7fcd6f7fe640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fcd40077890 0x7fcd40079d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:12.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.612+0000 7fcd6cff9640 1 -- 192.168.123.107:0/1297212690 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fcd5c0be310 con 0x7fcd70076040 2026-03-09T20:55:12.613 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.612+0000 7fcd6f7fe640 1 --2- 192.168.123.107:0/1297212690 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fcd40077890 0x7fcd40079d50 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7fcd54005fd0 tx=0x7fcd540074e0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:12.615 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.614+0000 7fcd6cff9640 1 -- 192.168.123.107:0/1297212690 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fcd5c0869f0 con 0x7fcd70076040 2026-03-09T20:55:12.712 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:12 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/1407691430' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 32, "format": "json"}]: dispatch 2026-03-09T20:55:12.739 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.737+0000 7fcd7618b640 1 -- 192.168.123.107:0/1297212690 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 33, "format": "json"} v 0) v1 -- 0x7fcd340051c0 con 0x7fcd70076040 2026-03-09T20:55:12.739 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.738+0000 7fcd6cff9640 1 -- 192.168.123.107:0/1297212690 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 33, "format": "json"}]=0 dumped fsmap epoch 33 v37) v1 ==== 107+0+5264 (secure 0 0 0) 0x7fcd5c086140 con 0x7fcd70076040 2026-03-09T20:55:12.739 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:12.739 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":33,"btime":"2026-03-09T20:53:08:568765+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in 
omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20},{"gid":44295,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/4027718916","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":4027718916},{"type":"v1","addr":"192.168.123.110:6825","nonce":4027718916}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":28},{"gid":44299,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/1370091423","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":1370091423},{"type":"v1","addr":"192.168.123.110:6827","nonce":1370091423}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":32}],"filesystems":[{"mdsmap":{"epoch":33,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:53:08.568764+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":86,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34382},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_34382":{"gid":34382,"name":"cephfs.vm07.potfau","rank":0,"incarnation":30,"state":"up:active","state_seq":7,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34382,"qdb_cluster":[34382]},"id":1}]} 2026-03-09T20:55:12.740 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 33 2026-03-09T20:55:12.741 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.740+0000 7fcd7618b640 1 -- 192.168.123.107:0/1297212690 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fcd40077890 msgr2=0x7fcd40079d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:12.741 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.740+0000 7fcd7618b640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fcd40077890 0x7fcd40079d50 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7fcd54005fd0 tx=0x7fcd540074e0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.742 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.741+0000 
7fcd7618b640 1 -- 192.168.123.107:0/1297212690 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcd70076040 msgr2=0x7fcd7019f2e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:12.742 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.741+0000 7fcd7618b640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcd70076040 0x7fcd7019f2e0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7fcd5c0099b0 tx=0x7fcd5c004290 comp rx=0 tx=0).stop 2026-03-09T20:55:12.742 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.741+0000 7fcd7618b640 1 -- 192.168.123.107:0/1297212690 shutdown_connections 2026-03-09T20:55:12.742 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.741+0000 7fcd7618b640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fcd40077890 0x7fcd40079d50 unknown :-1 s=CLOSED pgs=141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.742 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.741+0000 7fcd7618b640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fcd70076040 0x7fcd7019f2e0 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.742 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.741+0000 7fcd7618b640 1 --2- 192.168.123.107:0/1297212690 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcd70075720 0x7fcd7019eda0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:12.742 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.741+0000 7fcd7618b640 1 -- 192.168.123.107:0/1297212690 >> 192.168.123.107:0/1297212690 conn(0x7fcd700fe710 msgr2=0x7fcd700ffd90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:12.742 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.741+0000 7fcd7618b640 1 -- 192.168.123.107:0/1297212690 shutdown_connections 2026-03-09T20:55:12.742 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:12.742+0000 7fcd7618b640 1 -- 192.168.123.107:0/1297212690 wait complete. 2026-03-09T20:55:12.804 DEBUG:tasks.fs:max_mds reduced in epoch 33 2026-03-09T20:55:12.804 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 33 2026-03-09T20:55:12.804 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 34 2026-03-09T20:55:12.952 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:13.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:12 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/1407691430' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 32, "format": "json"}]: dispatch 2026-03-09T20:55:13.199 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.197+0000 7ffa6927c640 1 -- 192.168.123.107:0/1779982419 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffa64108800 msgr2=0x7ffa64108be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:13.199 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.197+0000 7ffa6927c640 1 --2- 192.168.123.107:0/1779982419 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffa64108800 0x7ffa64108be0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7ffa480098e0 tx=0x7ffa4802f190 comp rx=0 tx=0).stop 2026-03-09T20:55:13.199 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.198+0000 7ffa6927c640 1 -- 192.168.123.107:0/1779982419 shutdown_connections 2026-03-09T20:55:13.199 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.198+0000 7ffa6927c640 1 --2- 
192.168.123.107:0/1779982419 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffa64102800 0x7ffa64102c60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.199 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.198+0000 7ffa6927c640 1 --2- 192.168.123.107:0/1779982419 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffa64108800 0x7ffa64108be0 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.199 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.198+0000 7ffa6927c640 1 -- 192.168.123.107:0/1779982419 >> 192.168.123.107:0/1779982419 conn(0x7ffa640fe540 msgr2=0x7ffa64100960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:13.199 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.198+0000 7ffa6927c640 1 -- 192.168.123.107:0/1779982419 shutdown_connections 2026-03-09T20:55:13.199 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.198+0000 7ffa6927c640 1 -- 192.168.123.107:0/1779982419 wait complete. 
2026-03-09T20:55:13.199 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa6927c640 1 Processor -- start 2026-03-09T20:55:13.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa6927c640 1 -- start start 2026-03-09T20:55:13.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa6927c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffa64102800 0x7ffa641a37b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:13.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa6927c640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffa64108800 0x7ffa641a3cf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:13.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa6927c640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ffa64078340 con 0x7ffa64102800 2026-03-09T20:55:13.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa6927c640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ffa640784b0 con 0x7ffa64108800 2026-03-09T20:55:13.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa63fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffa64102800 0x7ffa641a37b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:13.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa63fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffa64102800 0x7ffa641a37b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:52322/0 (socket says 192.168.123.107:52322) 2026-03-09T20:55:13.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa63fff640 1 -- 192.168.123.107:0/4124784325 learned_addr learned my addr 192.168.123.107:0/4124784325 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:13.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa63fff640 1 -- 192.168.123.107:0/4124784325 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffa64108800 msgr2=0x7ffa641a3cf0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T20:55:13.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa63fff640 1 --2- 192.168.123.107:0/4124784325 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffa64108800 0x7ffa641a3cf0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.201 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa63fff640 1 -- 192.168.123.107:0/4124784325 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ffa50009660 con 0x7ffa64102800 2026-03-09T20:55:13.201 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.199+0000 7ffa63fff640 1 --2- 192.168.123.107:0/4124784325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffa64102800 0x7ffa641a37b0 secure :-1 s=READY pgs=199 cs=0 l=1 rev1=1 crypto rx=0x7ffa48002410 tx=0x7ffa48002e00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:13.201 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.200+0000 7ffa617fa640 1 -- 192.168.123.107:0/4124784325 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ffa4803d070 con 0x7ffa64102800 2026-03-09T20:55:13.201 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.200+0000 7ffa617fa640 1 -- 
192.168.123.107:0/4124784325 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7ffa4802fbc0 con 0x7ffa64102800 2026-03-09T20:55:13.201 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.200+0000 7ffa6927c640 1 -- 192.168.123.107:0/4124784325 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ffa48009590 con 0x7ffa64102800 2026-03-09T20:55:13.202 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.200+0000 7ffa6927c640 1 -- 192.168.123.107:0/4124784325 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ffa64078a30 con 0x7ffa64102800 2026-03-09T20:55:13.202 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.202+0000 7ffa617fa640 1 -- 192.168.123.107:0/4124784325 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ffa48041600 con 0x7ffa64102800 2026-03-09T20:55:13.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.202+0000 7ffa6927c640 1 -- 192.168.123.107:0/4124784325 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ffa2c005350 con 0x7ffa64102800 2026-03-09T20:55:13.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.203+0000 7ffa617fa640 1 -- 192.168.123.107:0/4124784325 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ffa4804b430 con 0x7ffa64102800 2026-03-09T20:55:13.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.203+0000 7ffa617fa640 1 --2- 192.168.123.107:0/4124784325 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ffa38077720 0x7ffa38079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:13.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.203+0000 7ffa617fa640 1 -- 
192.168.123.107:0/4124784325 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7ffa480bf4c0 con 0x7ffa64102800 2026-03-09T20:55:13.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.205+0000 7ffa637fe640 1 --2- 192.168.123.107:0/4124784325 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ffa38077720 0x7ffa38079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:13.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.205+0000 7ffa637fe640 1 --2- 192.168.123.107:0/4124784325 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ffa38077720 0x7ffa38079be0 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7ffa6410d690 tx=0x7ffa50009340 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:13.206 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.205+0000 7ffa617fa640 1 -- 192.168.123.107:0/4124784325 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ffa48087b70 con 0x7ffa64102800 2026-03-09T20:55:13.316 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.315+0000 7ffa6927c640 1 -- 192.168.123.107:0/4124784325 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 34, "format": "json"} v 0) v1 -- 0x7ffa2c005600 con 0x7ffa64102800 2026-03-09T20:55:13.319 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.318+0000 7ffa617fa640 1 -- 192.168.123.107:0/4124784325 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 34, "format": "json"}]=0 dumped fsmap epoch 34 v37) v1 ==== 107+0+5264 (secure 0 0 0) 0x7ffa480872c0 con 0x7ffa64102800 2026-03-09T20:55:13.319 
INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:13.319 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":34,"btime":"2026-03-09T20:53:10:012106+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":20},{"gid":44295,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/4027718916","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":4027718916},{"type":"v1","addr":"192.168.123.110:6825","nonce":4027718916}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on 
dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":28},{"gid":44299,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/1370091423","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":1370091423},{"type":"v1","addr":"192.168.123.110:6827","nonce":1370091423}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":32}],"filesystems":[{"mdsmap":{"epoch":34,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:53:09.014127+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":86,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds 
uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":2,"in":[0],"up":{"mds_0":34382},"failed":[],"damaged":[],"stopped":[1],"info":{"gid_34382":{"gid":34382,"name":"cephfs.vm07.potfau","rank":0,"incarnation":30,"state":"up:active","state_seq":7,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34382,"qdb_cluster":[34382]},"id":1}]} 2026-03-09T20:55:13.319 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 34 2026-03-09T20:55:13.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.320+0000 7ffa6927c640 1 -- 192.168.123.107:0/4124784325 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ffa38077720 msgr2=0x7ffa38079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:13.321 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.320+0000 7ffa6927c640 1 --2- 192.168.123.107:0/4124784325 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] 
conn(0x7ffa38077720 0x7ffa38079be0 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7ffa6410d690 tx=0x7ffa50009340 comp rx=0 tx=0).stop 2026-03-09T20:55:13.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.321+0000 7ffa6927c640 1 -- 192.168.123.107:0/4124784325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffa64102800 msgr2=0x7ffa641a37b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:13.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.321+0000 7ffa6927c640 1 --2- 192.168.123.107:0/4124784325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffa64102800 0x7ffa641a37b0 secure :-1 s=READY pgs=199 cs=0 l=1 rev1=1 crypto rx=0x7ffa48002410 tx=0x7ffa48002e00 comp rx=0 tx=0).stop 2026-03-09T20:55:13.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.321+0000 7ffa6927c640 1 -- 192.168.123.107:0/4124784325 shutdown_connections 2026-03-09T20:55:13.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.321+0000 7ffa6927c640 1 --2- 192.168.123.107:0/4124784325 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7ffa38077720 0x7ffa38079be0 unknown :-1 s=CLOSED pgs=142 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.321+0000 7ffa6927c640 1 --2- 192.168.123.107:0/4124784325 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7ffa64108800 0x7ffa641a3cf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.321+0000 7ffa6927c640 1 --2- 192.168.123.107:0/4124784325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ffa64102800 0x7ffa641a37b0 unknown :-1 s=CLOSED pgs=199 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.322 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.321+0000 7ffa6927c640 1 -- 192.168.123.107:0/4124784325 >> 192.168.123.107:0/4124784325 conn(0x7ffa640fe540 msgr2=0x7ffa6410b6b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:13.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.321+0000 7ffa6927c640 1 -- 192.168.123.107:0/4124784325 shutdown_connections 2026-03-09T20:55:13.322 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.321+0000 7ffa6927c640 1 -- 192.168.123.107:0/4124784325 wait complete. 2026-03-09T20:55:13.377 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 34 2026-03-09T20:55:13.377 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 35 2026-03-09T20:55:13.525 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:13.770 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.768+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/2951940622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9fdc075720 msgr2=0x7f9fdc075b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:13.770 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.768+0000 7f9fe1aa3640 1 --2- 192.168.123.107:0/2951940622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9fdc075720 0x7f9fdc075b00 secure :-1 s=READY pgs=200 cs=0 l=1 rev1=1 crypto rx=0x7f9fcc0099b0 tx=0x7f9fcc02f220 comp rx=0 tx=0).stop 2026-03-09T20:55:13.770 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.769+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/2951940622 shutdown_connections 2026-03-09T20:55:13.770 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.769+0000 7f9fe1aa3640 1 --2- 192.168.123.107:0/2951940622 >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9fdc076040 0x7f9fdc111330 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.770 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.769+0000 7f9fe1aa3640 1 --2- 192.168.123.107:0/2951940622 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9fdc075720 0x7f9fdc075b00 unknown :-1 s=CLOSED pgs=200 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.770 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.769+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/2951940622 >> 192.168.123.107:0/2951940622 conn(0x7f9fdc0fe710 msgr2=0x7f9fdc100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:13.771 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.769+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/2951940622 shutdown_connections 2026-03-09T20:55:13.771 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.770+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/2951940622 wait complete. 
2026-03-09T20:55:13.771 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.770+0000 7f9fe1aa3640 1 Processor -- start 2026-03-09T20:55:13.771 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.770+0000 7f9fe1aa3640 1 -- start start 2026-03-09T20:55:13.771 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.770+0000 7f9fe1aa3640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9fdc075720 0x7f9fdc19ee10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:13.771 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.770+0000 7f9fe1aa3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9fdc076040 0x7f9fdc19f350 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:13.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fe1aa3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9fdc19f9e0 con 0x7f9fdc076040 2026-03-09T20:55:13.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fe1aa3640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9fdc1a3750 con 0x7f9fdc075720 2026-03-09T20:55:13.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fdaffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9fdc076040 0x7f9fdc19f350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:13.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fdaffd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9fdc076040 0x7f9fdc19f350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am 
v2:192.168.123.107:52336/0 (socket says 192.168.123.107:52336) 2026-03-09T20:55:13.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fdaffd640 1 -- 192.168.123.107:0/3216242317 learned_addr learned my addr 192.168.123.107:0/3216242317 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:13.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fdb7fe640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9fdc075720 0x7f9fdc19ee10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:13.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fdaffd640 1 -- 192.168.123.107:0/3216242317 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9fdc075720 msgr2=0x7f9fdc19ee10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:13.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fdaffd640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9fdc075720 0x7f9fdc19ee10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fdaffd640 1 -- 192.168.123.107:0/3216242317 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9fcc009660 con 0x7f9fdc076040 2026-03-09T20:55:13.772 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fdaffd640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9fdc076040 0x7f9fdc19f350 secure :-1 s=READY pgs=201 cs=0 l=1 rev1=1 crypto rx=0x7f9fc800da40 tx=0x7f9fc800df10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T20:55:13.773 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fd8ff9640 1 -- 192.168.123.107:0/3216242317 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9fc8004280 con 0x7f9fdc076040 2026-03-09T20:55:13.773 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fd8ff9640 1 -- 192.168.123.107:0/3216242317 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f9fc800be10 con 0x7f9fdc076040 2026-03-09T20:55:13.773 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fd8ff9640 1 -- 192.168.123.107:0/3216242317 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9fc8005230 con 0x7f9fdc076040 2026-03-09T20:55:13.773 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/3216242317 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9fdc1a3a30 con 0x7f9fdc076040 2026-03-09T20:55:13.773 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.771+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/3216242317 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9fdc1a3f80 con 0x7f9fdc076040 2026-03-09T20:55:13.776 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.772+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/3216242317 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9fa0005350 con 0x7f9fdc076040 2026-03-09T20:55:13.777 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.776+0000 7f9fd8ff9640 1 -- 192.168.123.107:0/3216242317 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9fc80043e0 con 0x7f9fdc076040 2026-03-09T20:55:13.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.777+0000 
7f9fd8ff9640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9fb00779b0 0x7f9fb0079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:13.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.777+0000 7f9fd8ff9640 1 -- 192.168.123.107:0/3216242317 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f9fc8099eb0 con 0x7f9fdc076040 2026-03-09T20:55:13.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.777+0000 7f9fdb7fe640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9fb00779b0 0x7f9fb0079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:13.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.777+0000 7f9fd8ff9640 1 -- 192.168.123.107:0/3216242317 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9fc809a330 con 0x7f9fdc076040 2026-03-09T20:55:13.778 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.778+0000 7f9fdb7fe640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9fb00779b0 0x7f9fb0079e70 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f9fcc004870 tx=0x7f9fcc0047e0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:13 vm07.local ceph-mon[112105]: pgmap v246: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.6 KiB/s rd, 3 op/s 2026-03-09T20:55:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:13 vm07.local ceph-mon[112105]: 
from='client.? 192.168.123.107:0/1297212690' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 33, "format": "json"}]: dispatch 2026-03-09T20:55:13.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:13 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/4124784325' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 34, "format": "json"}]: dispatch 2026-03-09T20:55:13.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.890+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/3216242317 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 35, "format": "json"} v 0) v1 -- 0x7f9fa00051c0 con 0x7f9fdc076040 2026-03-09T20:55:13.892 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.891+0000 7f9fd8ff9640 1 -- 192.168.123.107:0/3216242317 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 35, "format": "json"}]=0 dumped fsmap epoch 35 v37) v1 ==== 107+0+5281 (secure 0 0 0) 0x7f9fc8062560 con 0x7f9fdc076040 2026-03-09T20:55:13.892 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:13.892 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":35,"btime":"2026-03-09T20:53:10:016888+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44295,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/4027718916","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":4027718916},{"type":"v1","addr":"192.168.123.110:6825","nonce":4027718916}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":28},{"gid":44299,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/1370091423","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":1370091423},{"type":"v1","addr":"192.168.123.110:6827","nonce":1370091423}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":32}],"filesystems":[{"mdsmap":{"epoch":35,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:53:10.016868+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":86,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":2,"in":[0,1],"up":{"mds_0":34382,"mds_1":44273},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34382":{"gid":34382,"name":"cephfs.vm07.potfau","rank":0,"incarnation":30,"state":"up:active","state_seq":7,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log 
segments","feature_12":"quiesce subvolumes"}}},"gid_44273":{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":1,"incarnation":35,"state":"up:starting","state_seq":1,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34382,"qdb_cluster":[34382]},"id":1}]} 2026-03-09T20:55:13.892 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 35 2026-03-09T20:55:13.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.893+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/3216242317 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9fb00779b0 msgr2=0x7f9fb0079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:13.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.893+0000 7f9fe1aa3640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9fb00779b0 0x7f9fb0079e70 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f9fcc004870 tx=0x7f9fcc0047e0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.893+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/3216242317 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9fdc076040 msgr2=0x7f9fdc19f350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:13.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.894+0000 7f9fe1aa3640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9fdc076040 0x7f9fdc19f350 secure :-1 s=READY pgs=201 cs=0 l=1 rev1=1 crypto rx=0x7f9fc800da40 tx=0x7f9fc800df10 comp rx=0 tx=0).stop 2026-03-09T20:55:13.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.894+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/3216242317 shutdown_connections 2026-03-09T20:55:13.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.894+0000 7f9fe1aa3640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f9fb00779b0 0x7f9fb0079e70 unknown :-1 s=CLOSED pgs=143 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.894+0000 7f9fe1aa3640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9fdc076040 0x7f9fdc19f350 unknown :-1 s=CLOSED pgs=201 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.894+0000 7f9fe1aa3640 1 --2- 192.168.123.107:0/3216242317 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f9fdc075720 0x7f9fdc19ee10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:13.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.894+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/3216242317 >> 192.168.123.107:0/3216242317 conn(0x7f9fdc0fe710 msgr2=0x7f9fdc0ffe60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:13.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.894+0000 7f9fe1aa3640 1 -- 
192.168.123.107:0/3216242317 shutdown_connections 2026-03-09T20:55:13.895 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:13.894+0000 7f9fe1aa3640 1 -- 192.168.123.107:0/3216242317 wait complete. 2026-03-09T20:55:13.959 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 35 2026-03-09T20:55:13.960 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph fs dump --format=json 36 2026-03-09T20:55:14.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:13 vm10.local ceph-mon[103526]: pgmap v246: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.6 KiB/s rd, 3 op/s 2026-03-09T20:55:14.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:13 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/1297212690' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 33, "format": "json"}]: dispatch 2026-03-09T20:55:14.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:13 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/4124784325' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 34, "format": "json"}]: dispatch 2026-03-09T20:55:14.110 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:14.360 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.359+0000 7f31041ae640 1 -- 192.168.123.107:0/135790376 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f30fc1089d0 msgr2=0x7f30fc108db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:14.360 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.359+0000 7f31041ae640 1 --2- 192.168.123.107:0/135790376 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f30fc1089d0 0x7f30fc108db0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f30f00099b0 tx=0x7f30f002f220 comp rx=0 tx=0).stop 2026-03-09T20:55:14.361 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.360+0000 7f31041ae640 1 -- 192.168.123.107:0/135790376 shutdown_connections 2026-03-09T20:55:14.361 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.360+0000 7f31041ae640 1 --2- 192.168.123.107:0/135790376 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30fc1029d0 0x7f30fc102e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:14.361 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.360+0000 7f31041ae640 1 --2- 192.168.123.107:0/135790376 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f30fc1089d0 0x7f30fc108db0 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:14.361 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.360+0000 7f31041ae640 1 -- 192.168.123.107:0/135790376 >> 192.168.123.107:0/135790376 conn(0x7f30fc0fe710 msgr2=0x7f30fc100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:14.361 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.360+0000 7f31041ae640 1 -- 192.168.123.107:0/135790376 shutdown_connections 2026-03-09T20:55:14.361 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.360+0000 7f31041ae640 1 -- 192.168.123.107:0/135790376 wait complete. 2026-03-09T20:55:14.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.361+0000 7f31041ae640 1 Processor -- start 2026-03-09T20:55:14.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.361+0000 7f31041ae640 1 -- start start 2026-03-09T20:55:14.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.361+0000 7f31041ae640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30fc1029d0 0x7f30fc10f850 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:14.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.361+0000 7f31041ae640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f30fc1089d0 0x7f30fc10fd90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:14.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.361+0000 7f31041ae640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f30fc1102d0 con 0x7f30fc1029d0 2026-03-09T20:55:14.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.361+0000 7f31041ae640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f30fc110440 con 0x7f30fc1089d0 2026-03-09T20:55:14.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.361+0000 7f3101f23640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30fc1029d0 0x7f30fc10f850 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:14.362 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.361+0000 7f3101f23640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30fc1029d0 0x7f30fc10f850 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:52352/0 (socket says 192.168.123.107:52352) 2026-03-09T20:55:14.362 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.361+0000 7f3101f23640 1 -- 192.168.123.107:0/2689533199 learned_addr learned my addr 192.168.123.107:0/2689533199 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:14.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.362+0000 7f3101722640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f30fc1089d0 0x7f30fc10fd90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:14.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.362+0000 7f3101f23640 1 -- 192.168.123.107:0/2689533199 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f30fc1089d0 msgr2=0x7f30fc10fd90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:14.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.362+0000 7f3101f23640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f30fc1089d0 0x7f30fc10fd90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:14.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.362+0000 7f3101f23640 1 -- 192.168.123.107:0/2689533199 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f30ec009590 con 0x7f30fc1029d0 2026-03-09T20:55:14.363 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.362+0000 7f3101f23640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30fc1029d0 0x7f30fc10f850 secure :-1 s=READY pgs=202 cs=0 l=1 rev1=1 crypto rx=0x7f30f0005ec0 tx=0x7f30f0004300 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:14.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.362+0000 7f30eaffd640 1 -- 192.168.123.107:0/2689533199 <== mon.0 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f30f003c040 con 0x7f30fc1029d0 2026-03-09T20:55:14.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.363+0000 7f30eaffd640 1 -- 192.168.123.107:0/2689533199 <== mon.0 v2:192.168.123.107:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f30f002fc90 con 0x7f30fc1029d0 2026-03-09T20:55:14.363 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.363+0000 7f31041ae640 1 -- 192.168.123.107:0/2689533199 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f30f0009660 con 0x7f30fc1029d0 2026-03-09T20:55:14.364 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.363+0000 7f31041ae640 1 -- 192.168.123.107:0/2689533199 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f30fc1b1990 con 0x7f30fc1029d0 2026-03-09T20:55:14.364 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.363+0000 7f30eaffd640 1 -- 192.168.123.107:0/2689533199 <== mon.0 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f30f0040550 con 0x7f30fc1029d0 2026-03-09T20:55:14.365 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.364+0000 7f30eaffd640 1 -- 192.168.123.107:0/2689533199 <== mon.0 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f30f00406b0 con 0x7f30fc1029d0 
2026-03-09T20:55:14.365 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.364+0000 7f30eaffd640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f30d00778e0 0x7f30d0079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:14.365 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.365+0000 7f30eaffd640 1 -- 192.168.123.107:0/2689533199 <== mon.0 v2:192.168.123.107:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f30f00bf9f0 con 0x7f30fc1029d0 2026-03-09T20:55:14.366 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.365+0000 7f3101722640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f30d00778e0 0x7f30d0079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:14.367 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.366+0000 7f3101722640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f30d00778e0 0x7f30d0079da0 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f30fc110db0 tx=0x7f30ec009290 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:14.369 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.366+0000 7f31041ae640 1 -- 192.168.123.107:0/2689533199 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f30c4005350 con 0x7f30fc1029d0 2026-03-09T20:55:14.370 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.369+0000 7f30eaffd640 1 -- 192.168.123.107:0/2689533199 <== mon.0 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 
72+0+195034 (secure 0 0 0) 0x7f30f0088020 con 0x7f30fc1029d0 2026-03-09T20:55:14.487 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.486+0000 7f31041ae640 1 -- 192.168.123.107:0/2689533199 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 36, "format": "json"} v 0) v1 -- 0x7f30c40051c0 con 0x7f30fc1029d0 2026-03-09T20:55:14.488 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.487+0000 7f30eaffd640 1 -- 192.168.123.107:0/2689533199 <== mon.0 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 36, "format": "json"}]=0 dumped fsmap epoch 36 v37) v1 ==== 107+0+5284 (secure 0 0 0) 0x7f30f0087770 con 0x7f30fc1029d0 2026-03-09T20:55:14.491 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T20:55:14.491 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":36,"btime":"2026-03-09T20:53:11:024283+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44295,"name":"cephfs.vm10.qpltwp","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6825/4027718916","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6824","nonce":4027718916},{"type":"v1","addr":"192.168.123.110:6825","nonce":4027718916}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned 
encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":28},{"gid":44299,"name":"cephfs.vm10.hzyuyq","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.110:6827/1370091423","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6826","nonce":1370091423},{"type":"v1","addr":"192.168.123.110:6827","nonce":1370091423}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":32}],"filesystems":[{"mdsmap":{"epoch":36,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T20:44:59.885491+0000","modified":"2026-03-09T20:53:11.024282+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":86,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds 
uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":2,"in":[0,1],"up":{"mds_0":34382,"mds_1":44273},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34382":{"gid":34382,"name":"cephfs.vm07.potfau","rank":0,"incarnation":30,"state":"up:active","state_seq":7,"addr":"192.168.123.107:6829/561473714","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":561473714},{"type":"v1","addr":"192.168.123.107:6829","nonce":561473714}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}},"gid_44273":{"gid":44273,"name":"cephfs.vm07.rovdbp","rank":1,"incarnation":35,"state":"up:active","state_seq":8,"addr":"192.168.123.107:6827/2346066069","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2346066069},{"type":"v1","addr":"192.168.123.107:6827","nonce":2346066069}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34382,"qdb_cluster":[34382,44273]},"id":1}]} 2026-03-09T20:55:14.491 INFO:teuthology.orchestra.run.vm07.stderr:dumped fsmap epoch 36 2026-03-09T20:55:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.492+0000 7f31041ae640 1 -- 192.168.123.107:0/2689533199 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f30d00778e0 msgr2=0x7f30d0079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.492+0000 7f31041ae640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f30d00778e0 0x7f30d0079da0 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f30fc110db0 tx=0x7f30ec009290 comp rx=0 tx=0).stop 2026-03-09T20:55:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.492+0000 7f31041ae640 1 -- 192.168.123.107:0/2689533199 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30fc1029d0 msgr2=0x7f30fc10f850 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.492+0000 7f31041ae640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30fc1029d0 0x7f30fc10f850 secure :-1 s=READY pgs=202 cs=0 l=1 rev1=1 crypto rx=0x7f30f0005ec0 tx=0x7f30f0004300 comp rx=0 tx=0).stop 2026-03-09T20:55:14.493 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.493+0000 7f31041ae640 1 -- 192.168.123.107:0/2689533199 shutdown_connections 2026-03-09T20:55:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.493+0000 7f31041ae640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] 
conn(0x7f30d00778e0 0x7f30d0079da0 unknown :-1 s=CLOSED pgs=144 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.493+0000 7f31041ae640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f30fc1089d0 0x7f30fc10fd90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.493+0000 7f31041ae640 1 --2- 192.168.123.107:0/2689533199 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30fc1029d0 0x7f30fc10f850 unknown :-1 s=CLOSED pgs=202 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.493+0000 7f31041ae640 1 -- 192.168.123.107:0/2689533199 >> 192.168.123.107:0/2689533199 conn(0x7f30fc0fe710 msgr2=0x7f30fc1044d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.493+0000 7f31041ae640 1 -- 192.168.123.107:0/2689533199 shutdown_connections 2026-03-09T20:55:14.494 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:14.493+0000 7f31041ae640 1 -- 192.168.123.107:0/2689533199 wait complete. 2026-03-09T20:55:14.556 DEBUG:teuthology.run_tasks:Unwinding manager ceph-fuse 2026-03-09T20:55:14.559 INFO:tasks.ceph_fuse:Unmounting ceph-fuse clients... 
2026-03-09T20:55:14.559 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:55:14.559 DEBUG:teuthology.orchestra.run.vm07:> dd if=/proc/self/mounts of=/dev/stdout 2026-03-09T20:55:14.575 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:55:14.575 DEBUG:teuthology.orchestra.run.vm07:> dd if=/proc/self/mounts of=/dev/stdout 2026-03-09T20:55:14.631 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd blocklist ls 2026-03-09T20:55:14.820 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:14.844 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:14 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3216242317' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 35, "format": "json"}]: dispatch 2026-03-09T20:55:14.844 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:14 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/2689533199' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 36, "format": "json"}]: dispatch 2026-03-09T20:55:15.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:14 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3216242317' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 35, "format": "json"}]: dispatch 2026-03-09T20:55:15.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:14 vm10.local ceph-mon[103526]: from='client.? 
192.168.123.107:0/2689533199' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 36, "format": "json"}]: dispatch 2026-03-09T20:55:15.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.060+0000 7fda24899640 1 -- 192.168.123.107:0/2286819925 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fda1c0720b0 msgr2=0x7fda1c072490 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:15.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.060+0000 7fda24899640 1 --2- 192.168.123.107:0/2286819925 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fda1c0720b0 0x7fda1c072490 secure :-1 s=READY pgs=203 cs=0 l=1 rev1=1 crypto rx=0x7fda140099b0 tx=0x7fda1402f2b0 comp rx=0 tx=0).stop 2026-03-09T20:55:15.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 -- 192.168.123.107:0/2286819925 shutdown_connections 2026-03-09T20:55:15.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 --2- 192.168.123.107:0/2286819925 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fda1c0729d0 0x7fda1c10b9f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:15.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 --2- 192.168.123.107:0/2286819925 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fda1c0720b0 0x7fda1c072490 unknown :-1 s=CLOSED pgs=203 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:15.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 -- 192.168.123.107:0/2286819925 >> 192.168.123.107:0/2286819925 conn(0x7fda1c06c7e0 msgr2=0x7fda1c06cbf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:15.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 -- 192.168.123.107:0/2286819925 shutdown_connections 
2026-03-09T20:55:15.062 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 -- 192.168.123.107:0/2286819925 wait complete. 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 Processor -- start 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 -- start start 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fda1c0720b0 0x7fda1c112d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 --2- >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fda1c0729d0 0x7fda1c1132c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fda1c1139a0 con 0x7fda1c0720b0 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.061+0000 7fda24899640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fda1c1b5bd0 con 0x7fda1c0729d0 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda2260e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fda1c0720b0 0x7fda1c112d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda2260e640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fda1c0720b0 
0x7fda1c112d80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:52372/0 (socket says 192.168.123.107:52372) 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda2260e640 1 -- 192.168.123.107:0/4171292576 learned_addr learned my addr 192.168.123.107:0/4171292576 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda21e0d640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fda1c0729d0 0x7fda1c1132c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda21e0d640 1 -- 192.168.123.107:0/4171292576 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fda1c0720b0 msgr2=0x7fda1c112d80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda21e0d640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fda1c0720b0 0x7fda1c112d80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda21e0d640 1 -- 192.168.123.107:0/4171292576 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fda14009660 con 0x7fda1c0729d0 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda21e0d640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fda1c0729d0 0x7fda1c1132c0 secure :-1 s=READY pgs=85 
cs=0 l=1 rev1=1 crypto rx=0x7fda0800e940 tx=0x7fda0800ee10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda0f7fe640 1 -- 192.168.123.107:0/4171292576 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fda0800cd30 con 0x7fda1c0729d0 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda0f7fe640 1 -- 192.168.123.107:0/4171292576 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7fda0800ce90 con 0x7fda1c0729d0 2026-03-09T20:55:15.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda24899640 1 -- 192.168.123.107:0/4171292576 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fda1c1b5eb0 con 0x7fda1c0729d0 2026-03-09T20:55:15.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.062+0000 7fda24899640 1 -- 192.168.123.107:0/4171292576 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fda1c1b6400 con 0x7fda1c0729d0 2026-03-09T20:55:15.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.063+0000 7fda0f7fe640 1 -- 192.168.123.107:0/4171292576 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fda08002ea0 con 0x7fda1c0729d0 2026-03-09T20:55:15.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.063+0000 7fda0f7fe640 1 -- 192.168.123.107:0/4171292576 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fda08010640 con 0x7fda1c0729d0 2026-03-09T20:55:15.065 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.064+0000 7fda0f7fe640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fda000779b0 
0x7fda00079e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:15.065 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.064+0000 7fda0f7fe640 1 -- 192.168.123.107:0/4171292576 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7fda08014070 con 0x7fda1c0729d0 2026-03-09T20:55:15.065 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.064+0000 7fda2260e640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fda000779b0 0x7fda00079e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:15.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.065+0000 7fda2260e640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fda000779b0 0x7fda00079e70 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7fda14005ec0 tx=0x7fda1403a040 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:15.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.065+0000 7fda24899640 1 -- 192.168.123.107:0/4171292576 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd9e4005350 con 0x7fda1c0729d0 2026-03-09T20:55:15.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.068+0000 7fda0f7fe640 1 -- 192.168.123.107:0/4171292576 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fda080639c0 con 0x7fda1c0729d0 2026-03-09T20:55:15.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.167+0000 7fda24899640 1 -- 192.168.123.107:0/4171292576 --> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "osd blocklist ls"} v 0) v1 -- 0x7fd9e4005b80 con 0x7fda1c0729d0 2026-03-09T20:55:15.169 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.168+0000 7fda0f7fe640 1 -- 192.168.123.107:0/4171292576 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "osd blocklist ls"}]=0 listed 43 entries v86) v1 ==== 81+0+2664 (secure 0 0 0) 0x7fda08063110 con 0x7fda1c0729d0 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:6826/2699915815 2026-03-10T20:53:02.003589+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6829/3625324292 2026-03-10T20:52:49.559135+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6828/3625324292 2026-03-10T20:52:49.559135+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6827/2216764941 2026-03-10T20:52:40.211755+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:6827/2699915815 2026-03-10T20:53:02.003589+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6826/2216764941 2026-03-10T20:52:40.211755+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/2036814840 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/575208151 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/860262504 2026-03-10T20:42:33.181797+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/3372253495 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1552256432 2026-03-10T20:43:19.070116+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6801/810998986 
2026-03-10T20:42:43.298801+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6801/4233182156 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/2100553270 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6800/39551776 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1493737433 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/2516772307 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1594973564 2026-03-10T20:43:19.070116+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/370247461 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:6829/2207204228 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6801/1859043218 2026-03-10T20:42:33.181797+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/48614864 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1007394955 2026-03-10T20:42:43.298801+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6800/4233182156 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6801/4166937886 2026-03-10T20:43:19.070116+0000 2026-03-09T20:55:15.170 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6801/39551776 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6800/810998986 2026-03-10T20:42:43.298801+0000 2026-03-09T20:55:15.171 
INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1632839730 2026-03-10T20:42:33.181797+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/1902639438 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6800/4166937886 2026-03-10T20:43:19.070116+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1862677490 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1293769731 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/4075654402 2026-03-10T20:42:43.298801+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/3147220177 2026-03-10T20:43:19.070116+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/216055426 2026-03-10T20:42:33.181797+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:6828/2207204228 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/2967433424 2026-03-10T20:42:43.298801+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6800/1859043218 2026-03-10T20:42:33.181797+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/3773928085 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/4216245667 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/2106845752 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1991130546 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/831337671 
2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.171 INFO:teuthology.orchestra.run.vm07.stderr:listed 43 entries 2026-03-09T20:55:15.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.171+0000 7fda24899640 1 -- 192.168.123.107:0/4171292576 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fda000779b0 msgr2=0x7fda00079e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:15.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.171+0000 7fda24899640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fda000779b0 0x7fda00079e70 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7fda14005ec0 tx=0x7fda1403a040 comp rx=0 tx=0).stop 2026-03-09T20:55:15.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.171+0000 7fda24899640 1 -- 192.168.123.107:0/4171292576 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fda1c0729d0 msgr2=0x7fda1c1132c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:15.172 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.171+0000 7fda24899640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fda1c0729d0 0x7fda1c1132c0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7fda0800e940 tx=0x7fda0800ee10 comp rx=0 tx=0).stop 2026-03-09T20:55:15.173 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.171+0000 7fda24899640 1 -- 192.168.123.107:0/4171292576 shutdown_connections 2026-03-09T20:55:15.173 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.171+0000 7fda24899640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7fda000779b0 0x7fda00079e70 unknown :-1 s=CLOSED pgs=145 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:15.173 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.171+0000 7fda24899640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7fda1c0729d0 0x7fda1c1132c0 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:15.173 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.171+0000 7fda24899640 1 --2- 192.168.123.107:0/4171292576 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fda1c0720b0 0x7fda1c112d80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:15.173 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.171+0000 7fda24899640 1 -- 192.168.123.107:0/4171292576 >> 192.168.123.107:0/4171292576 conn(0x7fda1c06c7e0 msgr2=0x7fda1c070440 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:15.173 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.172+0000 7fda24899640 1 -- 192.168.123.107:0/4171292576 shutdown_connections 2026-03-09T20:55:15.173 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.172+0000 7fda24899640 1 -- 192.168.123.107:0/4171292576 wait complete. 
2026-03-09T20:55:15.213 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T20:55:15.213 DEBUG:teuthology.orchestra.run.vm07:> dd if=/proc/self/mounts of=/dev/stdout 2026-03-09T20:55:15.231 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph osd blocklist ls 2026-03-09T20:55:15.422 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config 2026-03-09T20:55:15.681 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.680+0000 7f032a25f640 1 -- 192.168.123.107:0/2785041855 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03240ffcf0 msgr2=0x7f03241039d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:15.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.680+0000 7f032a25f640 1 --2- 192.168.123.107:0/2785041855 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03240ffcf0 0x7f03241039d0 secure :-1 s=READY pgs=204 cs=0 l=1 rev1=1 crypto rx=0x7f03100099b0 tx=0x7f031002f220 comp rx=0 tx=0).stop 2026-03-09T20:55:15.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.681+0000 7f032a25f640 1 -- 192.168.123.107:0/2785041855 shutdown_connections 2026-03-09T20:55:15.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.681+0000 7f032a25f640 1 --2- 192.168.123.107:0/2785041855 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03240ffcf0 0x7f03241039d0 unknown :-1 s=CLOSED pgs=204 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:15.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.681+0000 7f032a25f640 1 --2- 192.168.123.107:0/2785041855 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f03240ff340 0x7f03240ff720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:15.682 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.681+0000 7f032a25f640 1 -- 192.168.123.107:0/2785041855 >> 192.168.123.107:0/2785041855 conn(0x7f03240fa5e0 msgr2=0x7f03240fca00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:15.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.681+0000 7f032a25f640 1 -- 192.168.123.107:0/2785041855 shutdown_connections 2026-03-09T20:55:15.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.681+0000 7f032a25f640 1 -- 192.168.123.107:0/2785041855 wait complete. 2026-03-09T20:55:15.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.681+0000 7f032a25f640 1 Processor -- start 2026-03-09T20:55:15.683 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.682+0000 7f032a25f640 1 -- start start 2026-03-09T20:55:15.683 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.682+0000 7f032a25f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03240ff340 0x7f03241a06a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:15.683 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.682+0000 7f03237fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03240ff340 0x7f03241a06a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:15.683 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.682+0000 7f03237fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03240ff340 0x7f03241a06a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:52378/0 (socket says 192.168.123.107:52378) 2026-03-09T20:55:15.683 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.682+0000 7f032a25f640 1 --2- >> 
[v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f03240ffcf0 0x7f03241a0be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:15.683 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.682+0000 7f032a25f640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f032419a790 con 0x7f03240ff340 2026-03-09T20:55:15.683 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.682+0000 7f032a25f640 1 -- --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f032419a900 con 0x7f03240ffcf0 2026-03-09T20:55:15.683 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.682+0000 7f03237fe640 1 -- 192.168.123.107:0/3474359659 learned_addr learned my addr 192.168.123.107:0/3474359659 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-09T20:55:15.683 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.683+0000 7f0322ffd640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f03240ffcf0 0x7f03241a0be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:15.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.683+0000 7f0322ffd640 1 -- 192.168.123.107:0/3474359659 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03240ff340 msgr2=0x7f03241a06a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:15.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.683+0000 7f0322ffd640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03240ff340 0x7f03241a06a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T20:55:15.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.683+0000 7f0322ffd640 1 -- 
192.168.123.107:0/3474359659 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0310009660 con 0x7f03240ffcf0 2026-03-09T20:55:15.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.683+0000 7f03237fe640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03240ff340 0x7f03241a06a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T20:55:15.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.683+0000 7f0322ffd640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f03240ffcf0 0x7f03241a0be0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f03100099b0 tx=0x7f0310004290 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:15.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.683+0000 7f0320ff9640 1 -- 192.168.123.107:0/3474359659 <== mon.1 v2:192.168.123.110:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f031003d070 con 0x7f03240ffcf0 2026-03-09T20:55:15.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.683+0000 7f032a25f640 1 -- 192.168.123.107:0/3474359659 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f032419ab80 con 0x7f03240ffcf0 2026-03-09T20:55:15.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.683+0000 7f0320ff9640 1 -- 192.168.123.107:0/3474359659 <== mon.1 v2:192.168.123.110:3300/0 2 ==== config(27 keys) v1 ==== 1108+0+0 (secure 0 0 0) 0x7f0310038730 con 0x7f03240ffcf0 2026-03-09T20:55:15.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.683+0000 7f0320ff9640 1 -- 192.168.123.107:0/3474359659 <== mon.1 v2:192.168.123.110:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0310041600 con 0x7f03240ffcf0 
2026-03-09T20:55:15.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.684+0000 7f032a25f640 1 -- 192.168.123.107:0/3474359659 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f032419b070 con 0x7f03240ffcf0 2026-03-09T20:55:15.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.685+0000 7f032a25f640 1 -- 192.168.123.107:0/3474359659 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f02e8005350 con 0x7f03240ffcf0 2026-03-09T20:55:15.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.685+0000 7f0320ff9640 1 -- 192.168.123.107:0/3474359659 <== mon.1 v2:192.168.123.110:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f0310063b60 con 0x7f03240ffcf0 2026-03-09T20:55:15.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.686+0000 7f0320ff9640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f02f80778e0 0x7f02f8079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T20:55:15.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.686+0000 7f0320ff9640 1 -- 192.168.123.107:0/3474359659 <== mon.1 v2:192.168.123.110:3300/0 5 ==== osd_map(86..86 src has 1..86) v4 ==== 6652+0+0 (secure 0 0 0) 0x7f03100be820 con 0x7f03240ffcf0 2026-03-09T20:55:15.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.686+0000 7f03237fe640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f02f80778e0 0x7f02f8079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T20:55:15.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.686+0000 7f03237fe640 1 --2- 192.168.123.107:0/3474359659 >> 
[v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f02f80778e0 0x7f02f8079da0 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7f0308005fd0 tx=0x7f03080074e0 comp rx=0 tx=0).ready entity=mgr.34100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T20:55:15.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.688+0000 7f0320ff9640 1 -- 192.168.123.107:0/3474359659 <== mon.1 v2:192.168.123.110:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0310086f80 con 0x7f03240ffcf0 2026-03-09T20:55:15.731 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:15 vm07.local ceph-mon[112105]: pgmap v247: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.0 KiB/s rd, 4 op/s 2026-03-09T20:55:15.731 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:15 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/4171292576' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T20:55:15.790 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.789+0000 7f032a25f640 1 -- 192.168.123.107:0/3474359659 --> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] -- mon_command({"prefix": "osd blocklist ls"} v 0) v1 -- 0x7f02e80051c0 con 0x7f03240ffcf0 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.790+0000 7f0320ff9640 1 -- 192.168.123.107:0/3474359659 <== mon.1 v2:192.168.123.110:3300/0 7 ==== mon_command_ack([{"prefix": "osd blocklist ls"}]=0 listed 43 entries v86) v1 ==== 81+0+2664 (secure 0 0 0) 0x7f03100866d0 con 0x7f03240ffcf0 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:6826/2699915815 2026-03-10T20:53:02.003589+0000 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6829/3625324292 2026-03-10T20:52:49.559135+0000 2026-03-09T20:55:15.791 
INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6828/3625324292 2026-03-10T20:52:49.559135+0000 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6827/2216764941 2026-03-10T20:52:40.211755+0000 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:6827/2699915815 2026-03-10T20:53:02.003589+0000 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6826/2216764941 2026-03-10T20:52:40.211755+0000 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/2036814840 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/575208151 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/860262504 2026-03-10T20:42:33.181797+0000 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/3372253495 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1552256432 2026-03-10T20:43:19.070116+0000 2026-03-09T20:55:15.791 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6801/810998986 2026-03-10T20:42:43.298801+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6801/4233182156 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/2100553270 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6800/39551776 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1493737433 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/2516772307 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1594973564 
2026-03-10T20:43:19.070116+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/370247461 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:6829/2207204228 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6801/1859043218 2026-03-10T20:42:33.181797+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/48614864 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1007394955 2026-03-10T20:42:43.298801+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6800/4233182156 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6801/4166937886 2026-03-10T20:43:19.070116+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6801/39551776 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6800/810998986 2026-03-10T20:42:43.298801+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1632839730 2026-03-10T20:42:33.181797+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/1902639438 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6800/4166937886 2026-03-10T20:43:19.070116+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1862677490 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1293769731 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/4075654402 2026-03-10T20:42:43.298801+0000 2026-03-09T20:55:15.792 
INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/3147220177 2026-03-10T20:43:19.070116+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/216055426 2026-03-10T20:42:33.181797+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:6828/2207204228 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/2967433424 2026-03-10T20:42:43.298801+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:6800/1859043218 2026-03-10T20:42:33.181797+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.110:0/3773928085 2026-03-10T20:47:49.841254+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/4216245667 2026-03-10T20:48:24.416950+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/2106845752 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/1991130546 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stdout:192.168.123.107:0/831337671 2026-03-10T20:47:24.773436+0000 2026-03-09T20:55:15.792 INFO:teuthology.orchestra.run.vm07.stderr:listed 43 entries 2026-03-09T20:55:15.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.792+0000 7f032a25f640 1 -- 192.168.123.107:0/3474359659 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f02f80778e0 msgr2=0x7f02f8079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:15.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.792+0000 7f032a25f640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f02f80778e0 0x7f02f8079da0 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7f0308005fd0 tx=0x7f03080074e0 comp rx=0 
tx=0).stop
2026-03-09T20:55:15.793 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.793+0000 7f032a25f640 1 -- 192.168.123.107:0/3474359659 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f03240ffcf0 msgr2=0x7f03241a0be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:55:15.794 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.793+0000 7f032a25f640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f03240ffcf0 0x7f03241a0be0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f03100099b0 tx=0x7f0310004290 comp rx=0 tx=0).stop
2026-03-09T20:55:15.794 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.793+0000 7f032a25f640 1 -- 192.168.123.107:0/3474359659 shutdown_connections
2026-03-09T20:55:15.794 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.793+0000 7f032a25f640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.107:6800/2064434004,v1:192.168.123.107:6801/2064434004] conn(0x7f02f80778e0 0x7f02f8079da0 unknown :-1 s=CLOSED pgs=146 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:55:15.794 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.793+0000 7f032a25f640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] conn(0x7f03240ffcf0 0x7f03241a0be0 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:55:15.794 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.793+0000 7f032a25f640 1 --2- 192.168.123.107:0/3474359659 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f03240ff340 0x7f03241a06a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T20:55:15.794 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.793+0000 7f032a25f640 1 -- 192.168.123.107:0/3474359659 >> 192.168.123.107:0/3474359659 conn(0x7f03240fa5e0 msgr2=0x7f03240fc520 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:55:15.794 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.794+0000 7f032a25f640 1 -- 192.168.123.107:0/3474359659 shutdown_connections
2026-03-09T20:55:15.795 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-09T20:55:15.794+0000 7f032a25f640 1 -- 192.168.123.107:0/3474359659 wait complete.
2026-03-09T20:55:15.857 INFO:tasks.cephfs.fuse_mount:Running fusermount -u on ubuntu@vm07.local...
2026-03-09T20:55:15.857 INFO:teuthology.orchestra.run:Running command with timeout 300
2026-03-09T20:55:15.857 DEBUG:teuthology.orchestra.run.vm07:> sudo fusermount -u /home/ubuntu/cephtest/mnt.0
2026-03-09T20:55:15.886 INFO:teuthology.orchestra.run:waiting for 300
2026-03-09T20:55:16.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:15 vm10.local ceph-mon[103526]: pgmap v247: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.0 KiB/s rd, 4 op/s
2026-03-09T20:55:16.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:15 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/4171292576' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T20:55:17.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:16 vm10.local ceph-mon[103526]: from='client.? 192.168.123.107:0/3474359659' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T20:55:17.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:16 vm07.local ceph-mon[112105]: from='client.? 192.168.123.107:0/3474359659' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T20:55:18.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:17 vm10.local ceph-mon[103526]: pgmap v248: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.0 KiB/s rd, 3 op/s
2026-03-09T20:55:18.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:17 vm07.local ceph-mon[112105]: pgmap v248: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.0 KiB/s rd, 3 op/s
2026-03-09T20:55:20.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:19 vm10.local ceph-mon[103526]: pgmap v249: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.0 KiB/s rd, 3 op/s
2026-03-09T20:55:20.061 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:19 vm07.local ceph-mon[112105]: pgmap v249: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.0 KiB/s rd, 3 op/s
2026-03-09T20:55:21.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:21 vm07.local ceph-mon[112105]: pgmap v250: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:22.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:21 vm10.local ceph-mon[103526]: pgmap v250: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:24.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:23 vm10.local ceph-mon[103526]: pgmap v251: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:55:24.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:23 vm07.local ceph-mon[112105]: pgmap v251: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:55:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:24 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:55:25.117 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:55:26.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:25 vm10.local ceph-mon[103526]: pgmap v252: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:26.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:25 vm07.local ceph-mon[112105]: pgmap v252: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:28.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:27 vm10.local ceph-mon[103526]: pgmap v253: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:28.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:27 vm07.local ceph-mon[112105]: pgmap v253: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:29 vm10.local ceph-mon[103526]: pgmap v254: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:30.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:29 vm07.local ceph-mon[112105]: pgmap v254: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:31.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:31 vm07.local ceph-mon[112105]: pgmap v255: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:32.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:31 vm10.local ceph-mon[103526]: pgmap v255: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:34.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:33 vm07.local ceph-mon[112105]: pgmap v256: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:55:34.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:33 vm10.local ceph-mon[103526]: pgmap v256: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:55:36.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:35 vm07.local ceph-mon[112105]: pgmap v257: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:36.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:35 vm10.local ceph-mon[103526]: pgmap v257: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:38.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:37 vm07.local ceph-mon[112105]: pgmap v258: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:38.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:37 vm10.local ceph-mon[103526]: pgmap v258: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:40.078 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:39 vm07.local ceph-mon[112105]: pgmap v259: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:40.078 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:55:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:39 vm10.local ceph-mon[103526]: pgmap v259: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:55:41.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:41 vm07.local ceph-mon[112105]: pgmap v260: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:42.089 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:41 vm10.local ceph-mon[103526]: pgmap v260: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:44.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:43 vm07.local ceph-mon[112105]: pgmap v261: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:55:44.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:43 vm10.local ceph-mon[103526]: pgmap v261: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:55:46.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:45 vm07.local ceph-mon[112105]: pgmap v262: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:46.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:45 vm10.local ceph-mon[103526]: pgmap v262: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:48.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:47 vm07.local ceph-mon[112105]: pgmap v263: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:48.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:47 vm10.local ceph-mon[103526]: pgmap v263: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:50.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:49 vm07.local ceph-mon[112105]: pgmap v264: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:49 vm10.local ceph-mon[103526]: pgmap v264: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:51 vm07.local ceph-mon[112105]: pgmap v265: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:55:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:55:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:55:51.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:55:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:51 vm10.local ceph-mon[103526]: pgmap v265: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:55:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:55:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:55:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:55:54.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:53 vm07.local ceph-mon[112105]: pgmap v266: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:55:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:53 vm10.local ceph-mon[103526]: pgmap v266: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:55:55.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:55:55.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:55:56.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:55 vm07.local ceph-mon[112105]: pgmap v267: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:56.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:55 vm10.local ceph-mon[103526]: pgmap v267: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:55:58.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:57 vm07.local ceph-mon[112105]: pgmap v268: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:55:58.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:57 vm10.local ceph-mon[103526]: pgmap v268: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:00.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:55:59 vm07.local ceph-mon[112105]: pgmap v269: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:00.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:55:59 vm10.local ceph-mon[103526]: pgmap v269: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:02.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:01 vm07.local ceph-mon[112105]: pgmap v270: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:02.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:01 vm10.local ceph-mon[103526]: pgmap v270: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:03.933 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:03 vm07.local ceph-mon[112105]: pgmap v271: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:04.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:03 vm10.local ceph-mon[103526]: pgmap v271: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:06.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:05 vm10.local ceph-mon[103526]: pgmap v272: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:06.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:05 vm07.local ceph-mon[112105]: pgmap v272: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:08.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:07 vm10.local ceph-mon[103526]: pgmap v273: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:08.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:07 vm07.local ceph-mon[112105]: pgmap v273: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:10.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:09 vm10.local ceph-mon[103526]: pgmap v274: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:10.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:56:10.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:09 vm07.local ceph-mon[112105]: pgmap v274: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:10.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:56:12.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:11 vm07.local ceph-mon[112105]: pgmap v275: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:12.222 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:11 vm10.local ceph-mon[103526]: pgmap v275: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:14.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:13 vm10.local ceph-mon[103526]: pgmap v276: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:14.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:13 vm07.local ceph-mon[112105]: pgmap v276: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:16.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:15 vm10.local ceph-mon[103526]: pgmap v277: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:16.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:15 vm07.local ceph-mon[112105]: pgmap v277: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:18.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:17 vm10.local ceph-mon[103526]: pgmap v278: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:18.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:17 vm07.local ceph-mon[112105]: pgmap v278: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:20.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:19 vm10.local ceph-mon[103526]: pgmap v279: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:20.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:19 vm07.local ceph-mon[112105]: pgmap v279: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:22.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:21 vm10.local ceph-mon[103526]: pgmap v280: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:22.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:21 vm07.local ceph-mon[112105]: pgmap v280: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:24.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:23 vm10.local ceph-mon[103526]: pgmap v281: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:24.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:23 vm07.local ceph-mon[112105]: pgmap v281: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:25.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:24 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:56:25.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:56:26.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:26 vm07.local ceph-mon[112105]: pgmap v282: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:26.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:26 vm10.local ceph-mon[103526]: pgmap v282: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:28.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:28 vm07.local ceph-mon[112105]: pgmap v283: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:28.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:28 vm10.local ceph-mon[103526]: pgmap v283: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:30.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:30 vm07.local ceph-mon[112105]: pgmap v284: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:30.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:30 vm10.local ceph-mon[103526]: pgmap v284: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:32.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:32 vm07.local ceph-mon[112105]: pgmap v285: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:32.444 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:32 vm10.local ceph-mon[103526]: pgmap v285: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:34.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:34 vm07.local ceph-mon[112105]: pgmap v286: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:34.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:34 vm10.local ceph-mon[103526]: pgmap v286: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:36.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:36 vm10.local ceph-mon[103526]: pgmap v287: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:36.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:36 vm07.local ceph-mon[112105]: pgmap v287: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:38 vm10.local ceph-mon[103526]: pgmap v288: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:38.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:38 vm07.local ceph-mon[112105]: pgmap v288: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:40.435 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:40 vm07.local ceph-mon[112105]: pgmap v289: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:40.435 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:56:40.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:40 vm10.local ceph-mon[103526]: pgmap v289: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:40.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:56:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:42 vm10.local ceph-mon[103526]: pgmap v290: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:42.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:42 vm07.local ceph-mon[112105]: pgmap v290: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:44.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:44 vm10.local ceph-mon[103526]: pgmap v291: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:44.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:44 vm07.local ceph-mon[112105]: pgmap v291: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:46.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:46 vm07.local ceph-mon[112105]: pgmap v292: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:46.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:46 vm10.local ceph-mon[103526]: pgmap v292: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:48.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:48 vm07.local ceph-mon[112105]: pgmap v293: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:48.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:48 vm10.local ceph-mon[103526]: pgmap v293: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:50.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:50 vm07.local ceph-mon[112105]: pgmap v294: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:50.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:50 vm10.local ceph-mon[103526]: pgmap v294: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:51.617 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:51 vm07.local ceph-mon[112105]: pgmap v295: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:51.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:51 vm10.local ceph-mon[103526]: pgmap v295: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:52.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:56:52.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:56:52.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:56:52.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:56:52.668 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:56:52.668 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:56:52.668 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:56:52.668 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:56:53.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:53 vm07.local ceph-mon[112105]: pgmap v296: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:53.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:53 vm10.local ceph-mon[103526]: pgmap v296: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:56:54.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:54 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:56:55.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:54 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:56:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:55 vm07.local ceph-mon[112105]: pgmap v297: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:56.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:55 vm10.local ceph-mon[103526]: pgmap v297: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:56:58.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:57 vm10.local ceph-mon[103526]: pgmap v298: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:56:58.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:57 vm07.local ceph-mon[112105]: pgmap v298: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:56:59 vm10.local ceph-mon[103526]: pgmap v299: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:00.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:56:59 vm07.local ceph-mon[112105]: pgmap v299: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:01.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:01 vm07.local ceph-mon[112105]: pgmap v300: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:02.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:01 vm10.local ceph-mon[103526]: pgmap v300: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:03 vm10.local ceph-mon[103526]: pgmap v301: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:57:04.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:03 vm07.local ceph-mon[112105]: pgmap v301: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:57:06.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:05 vm10.local ceph-mon[103526]: pgmap v302: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:06.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:05 vm07.local ceph-mon[112105]: pgmap v302: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:07 vm10.local ceph-mon[103526]: pgmap v303: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:08.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:07 vm07.local ceph-mon[112105]: pgmap v303: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:09 vm10.local ceph-mon[103526]: pgmap v304: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:57:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:09 vm07.local ceph-mon[112105]: pgmap v304: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:57:11.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:11 vm07.local ceph-mon[112105]: pgmap v305: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:12.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:11 vm10.local ceph-mon[103526]: pgmap v305: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:14.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:13 vm10.local ceph-mon[103526]: pgmap v306: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:57:14.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:13 vm07.local ceph-mon[112105]: pgmap v306: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:57:16.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:15 vm10.local ceph-mon[103526]: pgmap v307: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:16.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:15 vm07.local ceph-mon[112105]: pgmap v307: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:18.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:17 vm10.local ceph-mon[103526]: pgmap v308: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:18.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:17 vm07.local ceph-mon[112105]: pgmap v308: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:20.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:19 vm10.local ceph-mon[103526]: pgmap v309: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:20.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:19 vm07.local ceph-mon[112105]: pgmap v309: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:22.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:21 vm10.local ceph-mon[103526]: pgmap v310: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:22.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:21 vm07.local ceph-mon[112105]: pgmap v310: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:24.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:23 vm10.local ceph-mon[103526]: pgmap v311: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:57:24.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:23 vm07.local ceph-mon[112105]: pgmap v311: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:57:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:24 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:57:25.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:57:26.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:25 vm10.local ceph-mon[103526]: pgmap v312: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:26.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:25 vm07.local ceph-mon[112105]: pgmap v312: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:28.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:27 vm10.local ceph-mon[103526]: pgmap v313: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:28.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:27 vm07.local ceph-mon[112105]: pgmap v313: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:30.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:29 vm07.local ceph-mon[112105]: pgmap v314: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:30.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:29 vm10.local ceph-mon[103526]: pgmap v314: 65
pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:57:32.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:31 vm07.local ceph-mon[112105]: pgmap v315: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:57:32.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:31 vm10.local ceph-mon[103526]: pgmap v315: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:57:34.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:33 vm07.local ceph-mon[112105]: pgmap v316: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:57:34.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:33 vm10.local ceph-mon[103526]: pgmap v316: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:57:36.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:35 vm07.local ceph-mon[112105]: pgmap v317: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:57:36.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:35 vm10.local ceph-mon[103526]: pgmap v317: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:57:38.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:37 vm07.local ceph-mon[112105]: pgmap v318: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:57:38.162 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:37 vm10.local ceph-mon[103526]: pgmap v318: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:57:40.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:39 vm07.local ceph-mon[112105]: pgmap v319: 65 
pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:57:40.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:39 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:57:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:39 vm10.local ceph-mon[103526]: pgmap v319: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:57:40.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:39 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:57:42.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:41 vm07.local ceph-mon[112105]: pgmap v320: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:57:42.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:41 vm10.local ceph-mon[103526]: pgmap v320: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:57:44.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:43 vm07.local ceph-mon[112105]: pgmap v321: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:57:44.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:43 vm10.local ceph-mon[103526]: pgmap v321: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:57:46.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:45 vm10.local ceph-mon[103526]: pgmap v322: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:57:46.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 
20:57:45 vm07.local ceph-mon[112105]: pgmap v322: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:57:48.271 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:47 vm10.local ceph-mon[103526]: pgmap v323: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:57:48.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:47 vm07.local ceph-mon[112105]: pgmap v323: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:57:50.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:49 vm10.local ceph-mon[103526]: pgmap v324: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:57:50.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:49 vm07.local ceph-mon[112105]: pgmap v324: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:57:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:51 vm10.local ceph-mon[103526]: pgmap v325: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:57:52.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:51 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:57:52.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:51 vm07.local ceph-mon[112105]: pgmap v325: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:57:52.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:51 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:57:53.287 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:57:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:57:53.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:52 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:57:53.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:57:53.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:57:53.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:52 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:57:54.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:54 vm10.local ceph-mon[103526]: pgmap v326: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:57:54.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:54 vm07.local ceph-mon[112105]: pgmap v326: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:57:55.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T20:57:55.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:57:56.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:56 vm10.local ceph-mon[103526]: pgmap v327: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:56.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:56 vm07.local ceph-mon[112105]: pgmap v327: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:57:58.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:57:58 vm10.local ceph-mon[103526]: pgmap v328: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:57:58.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:57:58 vm07.local ceph-mon[112105]: pgmap v328: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:00.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:00 vm10.local ceph-mon[103526]: pgmap v329: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:00.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:00 vm07.local ceph-mon[112105]: pgmap v329: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:02.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:02 vm10.local ceph-mon[103526]: pgmap v330: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:02.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:02 vm07.local ceph-mon[112105]: pgmap v330: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:04.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:04 vm10.local ceph-mon[103526]: pgmap v331: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:04.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:04 vm07.local ceph-mon[112105]: pgmap v331: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:06.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:06 vm10.local ceph-mon[103526]: pgmap v332: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:06.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:06 vm07.local ceph-mon[112105]: pgmap v332: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:08.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:08 vm10.local ceph-mon[103526]: pgmap v333: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:08.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:08 vm07.local ceph-mon[112105]: pgmap v333: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:10.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:10 vm10.local ceph-mon[103526]: pgmap v334: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:10.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:58:10.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:10 vm07.local ceph-mon[112105]: pgmap v334: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:10.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:58:12.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:12 vm07.local ceph-mon[112105]: pgmap v335: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:12.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:12 vm10.local ceph-mon[103526]: pgmap v335: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:14.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:14 vm07.local ceph-mon[112105]: pgmap v336: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:14.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:14 vm10.local ceph-mon[103526]: pgmap v336: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:16.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:16 vm07.local ceph-mon[112105]: pgmap v337: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:16.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:16 vm10.local ceph-mon[103526]: pgmap v337: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:17.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:17 vm07.local ceph-mon[112105]: pgmap v338: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:17.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:17 vm10.local ceph-mon[103526]: pgmap v338: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:19.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:19 vm07.local ceph-mon[112105]: pgmap v339: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:20.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:19 vm10.local ceph-mon[103526]: pgmap v339: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:21.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:21 vm07.local ceph-mon[112105]: pgmap v340: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:22.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:21 vm10.local ceph-mon[103526]: pgmap v340: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:24.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:23 vm10.local ceph-mon[103526]: pgmap v341: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:24.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:23 vm07.local ceph-mon[112105]: pgmap v341: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:25.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:24 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:58:25.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:58:26.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:25 vm10.local ceph-mon[103526]: pgmap v342: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:26.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:25 vm07.local ceph-mon[112105]: pgmap v342: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:28.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:27 vm10.local ceph-mon[103526]: pgmap v343: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:28.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:27 vm07.local ceph-mon[112105]: pgmap v343: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:30.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:29 vm10.local ceph-mon[103526]: pgmap v344: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:30.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:29 vm07.local ceph-mon[112105]: pgmap v344: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:32.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:32 vm07.local ceph-mon[112105]: pgmap v345: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:32.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:32 vm10.local ceph-mon[103526]: pgmap v345: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:34.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:34 vm07.local ceph-mon[112105]: pgmap v346: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:34.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:34 vm10.local ceph-mon[103526]: pgmap v346: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:36.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:36 vm07.local ceph-mon[112105]: pgmap v347: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:36.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:36 vm10.local ceph-mon[103526]: pgmap v347: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:38.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:38 vm07.local ceph-mon[112105]: pgmap v348: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:38 vm10.local ceph-mon[103526]: pgmap v348: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:40.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:40 vm07.local ceph-mon[112105]: pgmap v349: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:40.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:58:40.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:40 vm10.local ceph-mon[103526]: pgmap v349: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:40.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:58:42.271 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:42 vm07.local ceph-mon[112105]: pgmap v350: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:42 vm10.local ceph-mon[103526]: pgmap v350: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:44.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:44 vm10.local ceph-mon[103526]: pgmap v351: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:44.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:44 vm07.local ceph-mon[112105]: pgmap v351: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:46.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:46 vm07.local ceph-mon[112105]: pgmap v352: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:46.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:46 vm10.local ceph-mon[103526]: pgmap v352: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:48.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:48 vm07.local ceph-mon[112105]: pgmap v353: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:48 vm10.local ceph-mon[103526]: pgmap v353: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:50.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:50 vm07.local ceph-mon[112105]: pgmap v354: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:50.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:50 vm10.local ceph-mon[103526]: pgmap v354: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:52.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:52 vm07.local ceph-mon[112105]: pgmap v355: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:52.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:52 vm10.local ceph-mon[103526]: pgmap v355: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:53.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:58:53.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch
2026-03-09T20:58:53.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T20:58:53.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:58:53.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:58:53.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:58:53.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:58:53.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch
2026-03-09T20:58:53.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-09T20:58:53.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T20:58:53.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T20:58:53.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch'
2026-03-09T20:58:54.287 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:54 vm10.local ceph-mon[103526]: pgmap v356: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:54.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:54 vm07.local ceph-mon[112105]: pgmap v356: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:58:55.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:58:55.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:58:56.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:56 vm10.local ceph-mon[103526]: pgmap v357: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:56.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:56 vm07.local ceph-mon[112105]: pgmap v357: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:58:58.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:58:58 vm10.local ceph-mon[103526]: pgmap v358: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:58:58.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:58:58 vm07.local ceph-mon[112105]: pgmap v358: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:00.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:00 vm10.local ceph-mon[103526]: pgmap v359: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:00.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:00 vm07.local ceph-mon[112105]: pgmap v359: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:02.531 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:02 vm07.local ceph-mon[112105]: pgmap v360: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:59:02.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:02 vm10.local ceph-mon[103526]: pgmap v360: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:59:04.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:04 vm10.local ceph-mon[103526]: pgmap v361: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:59:04.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:04 vm07.local ceph-mon[112105]: pgmap v361: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:59:06.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:06 vm07.local ceph-mon[112105]: pgmap v362: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:59:06.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:06 vm10.local ceph-mon[103526]: pgmap v362: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:59:08.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:08 vm07.local ceph-mon[112105]: pgmap v363: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:08.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:08 vm10.local ceph-mon[103526]: pgmap v363: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:10.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:10 vm07.local ceph-mon[112105]: pgmap v364: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:10.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:10 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:59:10.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:10 vm10.local ceph-mon[103526]: pgmap v364: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:10.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:10 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:59:12.589 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:12 vm10.local ceph-mon[103526]: pgmap v365: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:59:12.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:12 vm07.local ceph-mon[112105]: pgmap v365: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:59:14.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:14 vm10.local ceph-mon[103526]: pgmap v366: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:59:14.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:14 vm07.local ceph-mon[112105]: pgmap v366: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:59:15.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:15 vm07.local ceph-mon[112105]: pgmap v367: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:59:15.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:15 vm10.local ceph-mon[103526]: pgmap v367: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:59:18.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:17 vm10.local ceph-mon[103526]: pgmap v368: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:18.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:17 vm07.local ceph-mon[112105]: pgmap v368: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:20.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:19 vm10.local ceph-mon[103526]: pgmap v369: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:20.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:19 vm07.local ceph-mon[112105]: pgmap v369: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s
2026-03-09T20:59:21.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:21 vm07.local ceph-mon[112105]: pgmap v370: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:59:22.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:21 vm10.local ceph-mon[103526]: pgmap v370: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s
2026-03-09T20:59:24.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:23 vm10.local ceph-mon[103526]: pgmap v371: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:59:24.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:23 vm07.local ceph-mon[112105]: pgmap v371: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s
2026-03-09T20:59:25.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:24 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:59:25.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar
09 20:59:24 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:59:26.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:25 vm10.local ceph-mon[103526]: pgmap v372: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:26.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:25 vm07.local ceph-mon[112105]: pgmap v372: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:28.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:27 vm10.local ceph-mon[103526]: pgmap v373: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:28.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:27 vm07.local ceph-mon[112105]: pgmap v373: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:30.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:29 vm10.local ceph-mon[103526]: pgmap v374: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:30.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:29 vm07.local ceph-mon[112105]: pgmap v374: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:31.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:31 vm07.local ceph-mon[112105]: pgmap v375: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:32.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:31 vm10.local ceph-mon[103526]: pgmap v375: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:34.037 
INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:33 vm10.local ceph-mon[103526]: pgmap v376: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:59:34.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:33 vm07.local ceph-mon[112105]: pgmap v376: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:59:36.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:35 vm10.local ceph-mon[103526]: pgmap v377: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:36.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:35 vm07.local ceph-mon[112105]: pgmap v377: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:38.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:38 vm07.local ceph-mon[112105]: pgmap v378: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:38.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:38 vm10.local ceph-mon[103526]: pgmap v378: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:40.383 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:40 vm07.local ceph-mon[112105]: pgmap v379: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:40.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:40 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:59:40.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:40 vm10.local ceph-mon[103526]: pgmap v379: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 
2026-03-09T20:59:40.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:40 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:59:42.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:42 vm10.local ceph-mon[103526]: pgmap v380: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:42.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:42 vm07.local ceph-mon[112105]: pgmap v380: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:44.527 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:44 vm10.local ceph-mon[103526]: pgmap v381: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:59:44.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:44 vm07.local ceph-mon[112105]: pgmap v381: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:59:46.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:46 vm10.local ceph-mon[103526]: pgmap v382: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:46.633 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:46 vm07.local ceph-mon[112105]: pgmap v382: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:48.384 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:48 vm07.local ceph-mon[112105]: pgmap v383: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:48.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:48 vm10.local ceph-mon[103526]: pgmap v383: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB 
avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:50.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:50 vm10.local ceph-mon[103526]: pgmap v384: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:50.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:50 vm07.local ceph-mon[112105]: pgmap v384: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:52.537 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:52 vm10.local ceph-mon[103526]: pgmap v385: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:52.634 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:52 vm07.local ceph-mon[112105]: pgmap v385: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:53.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:59:53.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:59:53.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:59:53.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:53 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:59:53.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' 
entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:59:53.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:59:53.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:59:53.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:53 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' 2026-03-09T20:59:54.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:54 vm10.local ceph-mon[103526]: pgmap v386: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:59:54.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:54 vm07.local ceph-mon[112105]: pgmap v386: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T20:59:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:55 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:59:55.787 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:55 vm10.local ceph-mon[103526]: pgmap v387: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:55 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:59:55.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:55 
vm07.local ceph-mon[112105]: pgmap v387: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T20:59:58.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:57 vm10.local ceph-mon[103526]: pgmap v388: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T20:59:58.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:57 vm07.local ceph-mon[112105]: pgmap v388: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T21:00:00.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 20:59:59 vm10.local ceph-mon[103526]: pgmap v389: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T21:00:00.134 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 20:59:59 vm07.local ceph-mon[112105]: pgmap v389: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T21:00:01.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:00 vm10.local ceph-mon[103526]: overall HEALTH_OK 2026-03-09T21:00:01.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 21:00:00 vm07.local ceph-mon[112105]: overall HEALTH_OK 2026-03-09T21:00:01.884 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 21:00:01 vm07.local ceph-mon[112105]: pgmap v390: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T21:00:02.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:01 vm10.local ceph-mon[103526]: pgmap v390: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T21:00:04.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:03 vm10.local ceph-mon[103526]: pgmap v391: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T21:00:04.133 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 21:00:03 vm07.local ceph-mon[112105]: pgmap v391: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T21:00:06.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:05 vm10.local ceph-mon[103526]: pgmap v392: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T21:00:06.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 21:00:05 vm07.local ceph-mon[112105]: pgmap v392: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T21:00:08.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:07 vm10.local ceph-mon[103526]: pgmap v393: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T21:00:08.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 21:00:07 vm07.local ceph-mon[112105]: pgmap v393: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T21:00:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:09 vm10.local ceph-mon[103526]: pgmap v394: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T21:00:10.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:09 vm10.local ceph-mon[103526]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T21:00:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 21:00:09 vm07.local ceph-mon[112105]: pgmap v394: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 3 op/s 2026-03-09T21:00:10.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 21:00:09 vm07.local ceph-mon[112105]: from='mgr.34100 192.168.123.107:0/354379591' entity='mgr.vm07.xjrvch' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-09T21:00:11.883 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 21:00:11 vm07.local ceph-mon[112105]: pgmap v395: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T21:00:12.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:11 vm10.local ceph-mon[103526]: pgmap v395: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 2.1 KiB/s rd, 4 op/s 2026-03-09T21:00:14.037 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:13 vm10.local ceph-mon[103526]: pgmap v396: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T21:00:14.133 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 21:00:13 vm07.local ceph-mon[112105]: pgmap v396: 65 pgs: 65 active+clean; 211 MiB data, 955 MiB used, 119 GiB / 120 GiB avail; 1.7 KiB/s rd, 3 op/s 2026-03-09T21:00:14.932 ERROR:tasks.cephfs.fuse_mount:process failed to terminate after unmount. This probably indicates a bug within ceph-fuse. 
2026-03-09T21:00:14.932 ERROR:teuthology.run_tasks:Manager failed: ceph-fuse
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-09T21:00:14.933 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-09T21:00:14.936 INFO:tasks.cephadm:Teardown begin
2026-03-09T21:00:14.936 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 2252, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-09T21:00:14.937 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T21:00:14.963 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T21:00:14.991 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-09T21:00:14.991 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 -- ceph mgr module disable cephadm
2026-03-09T21:00:15.153 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/mon.vm07/config
2026-03-09T21:00:15.357 INFO:teuthology.orchestra.run.vm07.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory
2026-03-09T21:00:15.375 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-09T21:00:15.376 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-09T21:00:15.376 DEBUG:teuthology.orchestra.run.vm07:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T21:00:15.390 DEBUG:teuthology.orchestra.run.vm10:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T21:00:15.406 INFO:tasks.cephadm:Stopping all daemons...
2026-03-09T21:00:15.407 INFO:tasks.cephadm.mon.vm07:Stopping mon.vm07...
2026-03-09T21:00:15.407 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm07 2026-03-09T21:00:15.553 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 09 21:00:15 vm07.local systemd[1]: Stopping Ceph mon.vm07 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T21:00:15.700 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm07.service' 2026-03-09T21:00:15.735 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:00:15.735 INFO:tasks.cephadm.mon.vm07:Stopped mon.vm07 2026-03-09T21:00:15.735 INFO:tasks.cephadm.mon.vm10:Stopping mon.vm10... 2026-03-09T21:00:15.735 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm10 2026-03-09T21:00:16.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:15 vm10.local systemd[1]: Stopping Ceph mon.vm10 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T21:00:16.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:15 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10[103522]: 2026-03-09T21:00:15.847+0000 7ff2f9a4d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm10 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T21:00:16.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:15 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10[103522]: 2026-03-09T21:00:15.847+0000 7ff2f9a4d640 -1 mon.vm10@1(peon) e3 *** Got Signal Terminated *** 2026-03-09T21:00:16.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:15 vm10.local podman[143116]: 2026-03-09 21:00:15.951921388 +0000 UTC m=+0.120256907 container died 
4428cf7f0607bdeb22f587a4124e71f15446c241c91010a8a32efe73a21b0707 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_REF=squid) 2026-03-09T21:00:16.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:15 vm10.local podman[143116]: 2026-03-09 21:00:15.973710331 +0000 UTC m=+0.142045830 container remove 4428cf7f0607bdeb22f587a4124e71f15446c241c91010a8a32efe73a21b0707 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default) 2026-03-09T21:00:16.026 INFO:journalctl@ceph.mon.vm10.vm10.stdout:Mar 09 21:00:15 vm10.local 
bash[143116]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-mon-vm10 2026-03-09T21:00:16.035 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@mon.vm10.service' 2026-03-09T21:00:16.072 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:00:16.072 INFO:tasks.cephadm.mon.vm10:Stopped mon.vm10 2026-03-09T21:00:16.072 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-09T21:00:16.072 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.0 2026-03-09T21:00:16.383 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:16 vm07.local systemd[1]: Stopping Ceph osd.0 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T21:00:16.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:16 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[120060]: 2026-03-09T21:00:16.174+0000 7ffbcd7bd640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T21:00:16.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:16 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[120060]: 2026-03-09T21:00:16.174+0000 7ffbcd7bd640 -1 osd.0 86 *** Got signal Terminated *** 2026-03-09T21:00:16.384 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:16 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0[120060]: 2026-03-09T21:00:16.175+0000 7ffbcd7bd640 -1 osd.0 86 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T21:00:21.499 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:21 vm07.local podman[169256]: 2026-03-09 21:00:21.217753393 +0000 UTC m=+5.057769776 container died 1da9d2cdbdc33dbc96a5b0f9c60e8be480f5e4f62b5eb0aeb537c3483e1d2366 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20260223) 2026-03-09T21:00:21.499 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:21 vm07.local podman[169256]: 2026-03-09 21:00:21.247439688 +0000 UTC m=+5.087456071 container remove 1da9d2cdbdc33dbc96a5b0f9c60e8be480f5e4f62b5eb0aeb537c3483e1d2366 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True) 2026-03-09T21:00:21.499 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:21 vm07.local bash[169256]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0 2026-03-09T21:00:21.499 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:21 vm07.local podman[169323]: 2026-03-09 
21:00:21.407614522 +0000 UTC m=+0.017366015 container create e61fa4ff3f1d4df46969b8564fb2c840be253136ef1000e595aba1c0443551a5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-deactivate, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS) 2026-03-09T21:00:21.499 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:21 vm07.local podman[169323]: 2026-03-09 21:00:21.450586003 +0000 UTC m=+0.060337507 container init e61fa4ff3f1d4df46969b8564fb2c840be253136ef1000e595aba1c0443551a5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True) 2026-03-09T21:00:21.499 
INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:21 vm07.local podman[169323]: 2026-03-09 21:00:21.457638774 +0000 UTC m=+0.067390267 container start e61fa4ff3f1d4df46969b8564fb2c840be253136ef1000e595aba1c0443551a5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-deactivate, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223) 2026-03-09T21:00:21.499 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 21:00:21 vm07.local podman[169323]: 2026-03-09 21:00:21.459441959 +0000 UTC m=+0.069193452 container attach e61fa4ff3f1d4df46969b8564fb2c840be253136ef1000e595aba1c0443551a5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-0-deactivate, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS) 2026-03-09T21:00:21.672 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.0.service' 2026-03-09T21:00:21.706 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:00:21.707 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T21:00:21.707 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-09T21:00:21.707 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.1 2026-03-09T21:00:22.133 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:21 vm07.local systemd[1]: Stopping Ceph osd.1 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T21:00:22.134 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:21 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[125895]: 2026-03-09T21:00:21.853+0000 7f8490b5d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T21:00:22.134 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:21 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[125895]: 2026-03-09T21:00:21.853+0000 7f8490b5d640 -1 osd.1 86 *** Got signal Terminated *** 2026-03-09T21:00:22.134 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:21 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1[125895]: 2026-03-09T21:00:21.853+0000 7f8490b5d640 -1 osd.1 86 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T21:00:27.173 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:26 vm07.local podman[169421]: 2026-03-09 21:00:26.892263013 +0000 UTC m=+5.053576030 container died 95f518bf664f65fc3388230a6cd58163a3d87db7bb0ddb79b98abeec73692ba7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2) 2026-03-09T21:00:27.173 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:26 vm07.local podman[169421]: 2026-03-09 21:00:26.920878665 +0000 UTC m=+5.082191682 container remove 95f518bf664f65fc3388230a6cd58163a3d87db7bb0ddb79b98abeec73692ba7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T21:00:27.173 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:26 vm07.local bash[169421]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1 2026-03-09T21:00:27.174 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:27 vm07.local podman[169501]: 2026-03-09 
21:00:27.080432707 +0000 UTC m=+0.017568375 container create 6bc48af60b18920ad771f36e35765da7f927b6b808ed21913d901d42ce25c4c2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T21:00:27.174 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:27 vm07.local podman[169501]: 2026-03-09 21:00:27.121033571 +0000 UTC m=+0.058169249 container init 6bc48af60b18920ad771f36e35765da7f927b6b808ed21913d901d42ce25c4c2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-deactivate, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T21:00:27.174 
INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:27 vm07.local podman[169501]: 2026-03-09 21:00:27.124521379 +0000 UTC m=+0.061657047 container start 6bc48af60b18920ad771f36e35765da7f927b6b808ed21913d901d42ce25c4c2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-deactivate, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T21:00:27.174 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:27 vm07.local podman[169501]: 2026-03-09 21:00:27.125693614 +0000 UTC m=+0.062829272 container attach 6bc48af60b18920ad771f36e35765da7f927b6b808ed21913d901d42ce25c4c2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-1-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default) 2026-03-09T21:00:27.174 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 21:00:27 vm07.local podman[169501]: 2026-03-09 21:00:27.073244914 +0000 UTC m=+0.010380591 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T21:00:27.348 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.1.service' 2026-03-09T21:00:27.380 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:00:27.380 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T21:00:27.380 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-09T21:00:27.380 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.2 2026-03-09T21:00:27.884 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:27 vm07.local systemd[1]: Stopping Ceph osd.2 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 
2026-03-09T21:00:27.884 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:27 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[132432]: 2026-03-09T21:00:27.540+0000 7f242aebb640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T21:00:27.884 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:27 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[132432]: 2026-03-09T21:00:27.540+0000 7f242aebb640 -1 osd.2 86 *** Got signal Terminated *** 2026-03-09T21:00:27.884 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:27 vm07.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2[132432]: 2026-03-09T21:00:27.540+0000 7f242aebb640 -1 osd.2 86 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T21:00:32.854 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:32 vm07.local podman[169597]: 2026-03-09 21:00:32.575566553 +0000 UTC m=+5.049646875 container died 0d3aa63353bb720a934b8ab3b3781c190354c49c4674e62a7b7553aa98e4161f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid) 2026-03-09T21:00:32.854 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 
21:00:32 vm07.local podman[169597]: 2026-03-09 21:00:32.610200659 +0000 UTC m=+5.084280991 container remove 0d3aa63353bb720a934b8ab3b3781c190354c49c4674e62a7b7553aa98e4161f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.41.3) 2026-03-09T21:00:32.854 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:32 vm07.local bash[169597]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2 2026-03-09T21:00:32.855 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:32 vm07.local podman[169664]: 2026-03-09 21:00:32.762912114 +0000 UTC m=+0.015550928 container create 2e94eb73c9cdee412f84d54b66af30724ccc50b5c5b1dc34c97a76b0f1be65d7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS) 2026-03-09T21:00:32.855 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:32 vm07.local podman[169664]: 2026-03-09 21:00:32.80876073 +0000 UTC m=+0.061399544 container init 2e94eb73c9cdee412f84d54b66af30724ccc50b5c5b1dc34c97a76b0f1be65d7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-deactivate, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20260223, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T21:00:32.855 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:32 vm07.local podman[169664]: 2026-03-09 21:00:32.812627978 +0000 UTC m=+0.065266783 container start 2e94eb73c9cdee412f84d54b66af30724ccc50b5c5b1dc34c97a76b0f1be65d7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T21:00:32.855 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:32 vm07.local podman[169664]: 2026-03-09 21:00:32.813803669 +0000 UTC m=+0.066442473 container attach 2e94eb73c9cdee412f84d54b66af30724ccc50b5c5b1dc34c97a76b0f1be65d7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-2-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS) 2026-03-09T21:00:32.855 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 21:00:32 vm07.local podman[169664]: 2026-03-09 21:00:32.755897585 +0000 UTC m=+0.008536399 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T21:00:33.025 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.2.service' 2026-03-09T21:00:33.060 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:00:33.060 INFO:tasks.cephadm.osd.2:Stopped osd.2 
2026-03-09T21:00:33.060 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-09T21:00:33.060 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.3 2026-03-09T21:00:33.537 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 21:00:33 vm10.local systemd[1]: Stopping Ceph osd.3 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T21:00:33.537 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 21:00:33 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[113999]: 2026-03-09T21:00:33.154+0000 7fce32a1c640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T21:00:33.537 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 21:00:33 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[113999]: 2026-03-09T21:00:33.154+0000 7fce32a1c640 -1 osd.3 86 *** Got signal Terminated *** 2026-03-09T21:00:33.537 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 21:00:33 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3[113999]: 2026-03-09T21:00:33.154+0000 7fce32a1c640 -1 osd.3 86 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T21:00:38.455 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 21:00:38 vm10.local podman[143222]: 2026-03-09 21:00:38.200988977 +0000 UTC m=+5.059590464 container died c8d2b453e9e22355c1b85786b88570c44183fda41bdcad0752dc98ae5026bf72 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , ceph=True, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T21:00:38.455 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 21:00:38 vm10.local podman[143222]: 2026-03-09 21:00:38.262680645 +0000 UTC m=+5.121282123 container remove c8d2b453e9e22355c1b85786b88570c44183fda41bdcad0752dc98ae5026bf72 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T21:00:38.455 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 21:00:38 vm10.local bash[143222]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3 2026-03-09T21:00:38.455 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 21:00:38 vm10.local podman[143289]: 2026-03-09 21:00:38.411022679 +0000 UTC m=+0.018219883 container create ad7d9dca8fea9a06dc7c09d1f919461c2271347997bd752d91e2a8d331fa9852 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid) 2026-03-09T21:00:38.455 INFO:journalctl@ceph.osd.3.vm10.stdout:Mar 09 21:00:38 vm10.local podman[143289]: 2026-03-09 21:00:38.450850716 +0000 UTC m=+0.058047940 container init ad7d9dca8fea9a06dc7c09d1f919461c2271347997bd752d91e2a8d331fa9852 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-3-deactivate, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T21:00:38.630 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.3.service' 2026-03-09T21:00:38.669 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:00:38.669 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-09T21:00:38.669 INFO:tasks.cephadm.osd.4:Stopping osd.4... 
2026-03-09T21:00:38.669 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.4 2026-03-09T21:00:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:38 vm10.local systemd[1]: Stopping Ceph osd.4 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 2026-03-09T21:00:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:38 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[119365]: 2026-03-09T21:00:38.828+0000 7f20d64ca640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T21:00:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:38 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[119365]: 2026-03-09T21:00:38.828+0000 7f20d64ca640 -1 osd.4 86 *** Got signal Terminated *** 2026-03-09T21:00:39.037 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:38 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4[119365]: 2026-03-09T21:00:38.828+0000 7f20d64ca640 -1 osd.4 86 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T21:00:44.112 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:43 vm10.local podman[143385]: 2026-03-09 21:00:43.870497721 +0000 UTC m=+5.056484400 container died d0231a0cf2beb1439d52638e900a919bf5408d14564808ad1a36ef0067ef9297 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T21:00:44.112 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:43 vm10.local podman[143385]: 2026-03-09 21:00:43.905440613 +0000 UTC m=+5.091427303 container remove d0231a0cf2beb1439d52638e900a919bf5408d14564808ad1a36ef0067ef9297 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T21:00:44.112 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:43 vm10.local bash[143385]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4 2026-03-09T21:00:44.112 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:44 vm10.local podman[143464]: 2026-03-09 21:00:44.065695895 +0000 UTC m=+0.023846795 container create 3a7330cf1c3d670ca70b28abaa541d38ab237d5175c86860039fbe14d44d1e1c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team , 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T21:00:44.112 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:43 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:43.855+0000 7fe9989d3640 -1 osd.5 86 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T21:00:19.456956+0000 front 2026-03-09T21:00:19.456990+0000 (oldest deadline 2026-03-09T21:00:43.556711+0000) 2026-03-09T21:00:44.367 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:44 vm10.local podman[143464]: 2026-03-09 21:00:44.112220867 +0000 UTC m=+0.070371767 container init 3a7330cf1c3d670ca70b28abaa541d38ab237d5175c86860039fbe14d44d1e1c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS) 2026-03-09T21:00:44.367 
INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:44 vm10.local podman[143464]: 2026-03-09 21:00:44.117045778 +0000 UTC m=+0.075196678 container start 3a7330cf1c3d670ca70b28abaa541d38ab237d5175c86860039fbe14d44d1e1c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0) 2026-03-09T21:00:44.367 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:44 vm10.local podman[143464]: 2026-03-09 21:00:44.118046371 +0000 UTC m=+0.076197271 container attach 3a7330cf1c3d670ca70b28abaa541d38ab237d5175c86860039fbe14d44d1e1c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team , 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-09T21:00:44.367 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:44 vm10.local podman[143464]: 2026-03-09 21:00:44.056078234 +0000 UTC m=+0.014229144 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T21:00:44.367 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:44 vm10.local podman[143464]: 2026-03-09 21:00:44.275206509 +0000 UTC m=+0.233357409 container died 3a7330cf1c3d670ca70b28abaa541d38ab237d5175c86860039fbe14d44d1e1c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS) 2026-03-09T21:00:44.367 INFO:journalctl@ceph.osd.4.vm10.stdout:Mar 09 21:00:44 vm10.local podman[143464]: 2026-03-09 21:00:44.358579849 +0000 UTC m=+0.316730749 container remove 3a7330cf1c3d670ca70b28abaa541d38ab237d5175c86860039fbe14d44d1e1c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-4-deactivate, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T21:00:44.372 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.4.service' 2026-03-09T21:00:44.417 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:00:44.417 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-09T21:00:44.417 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-09T21:00:44.417 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.5 2026-03-09T21:00:44.787 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:44 vm10.local systemd[1]: Stopping Ceph osd.5 for 589eab88-1bf8-11f1-9e50-71f3ab1833c4... 
2026-03-09T21:00:44.787 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:44 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:44.592+0000 7fe99cbcc640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T21:00:44.787 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:44 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:44.592+0000 7fe99cbcc640 -1 osd.5 86 *** Got signal Terminated *** 2026-03-09T21:00:44.787 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:44 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:44.592+0000 7fe99cbcc640 -1 osd.5 86 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T21:00:45.171 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:44 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:44.897+0000 7fe9989d3640 -1 osd.5 86 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T21:00:19.456956+0000 front 2026-03-09T21:00:19.456990+0000 (oldest deadline 2026-03-09T21:00:43.556711+0000) 2026-03-09T21:00:46.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:45 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:45.869+0000 7fe9989d3640 -1 osd.5 86 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T21:00:19.456956+0000 front 2026-03-09T21:00:19.456990+0000 (oldest deadline 2026-03-09T21:00:43.556711+0000) 2026-03-09T21:00:47.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:46 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:46.909+0000 7fe9989d3640 -1 osd.5 86 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T21:00:19.456956+0000 front 
2026-03-09T21:00:19.456990+0000 (oldest deadline 2026-03-09T21:00:43.556711+0000) 2026-03-09T21:00:48.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:47 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:47.952+0000 7fe9989d3640 -1 osd.5 86 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T21:00:19.456956+0000 front 2026-03-09T21:00:19.456990+0000 (oldest deadline 2026-03-09T21:00:43.556711+0000) 2026-03-09T21:00:48.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:47 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:47.952+0000 7fe9989d3640 -1 osd.5 86 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T21:00:23.557120+0000 front 2026-03-09T21:00:23.557098+0000 (oldest deadline 2026-03-09T21:00:47.656936+0000) 2026-03-09T21:00:49.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:48 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:48.959+0000 7fe9989d3640 -1 osd.5 86 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T21:00:19.456956+0000 front 2026-03-09T21:00:19.456990+0000 (oldest deadline 2026-03-09T21:00:43.556711+0000) 2026-03-09T21:00:49.287 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:48 vm10.local ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5[125129]: 2026-03-09T21:00:48.959+0000 7fe9989d3640 -1 osd.5 86 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T21:00:23.557120+0000 front 2026-03-09T21:00:23.557098+0000 (oldest deadline 2026-03-09T21:00:47.656936+0000) 2026-03-09T21:00:49.909 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:49 vm10.local podman[143559]: 2026-03-09 21:00:49.625862872 +0000 UTC m=+5.049428993 container died 7489b8a43e7f5b9d1f86178b6701dbeb684ee31dde054546b00e8f6dc3552838 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T21:00:49.909 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:49 vm10.local podman[143559]: 2026-03-09 21:00:49.652583695 +0000 UTC m=+5.076149816 container remove 7489b8a43e7f5b9d1f86178b6701dbeb684ee31dde054546b00e8f6dc3552838 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) 2026-03-09T21:00:49.909 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:49 vm10.local bash[143559]: ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5 2026-03-09T21:00:49.909 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:49 vm10.local podman[143626]: 2026-03-09 
21:00:49.816560684 +0000 UTC m=+0.018594115 container create c874fa65b70e5806fff1ddf0bb180a762332f547d6a39d724be9de8bc4a99fa9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default) 2026-03-09T21:00:49.909 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:49 vm10.local podman[143626]: 2026-03-09 21:00:49.865903528 +0000 UTC m=+0.067936970 container init c874fa65b70e5806fff1ddf0bb180a762332f547d6a39d724be9de8bc4a99fa9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.build-date=20260223, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default) 2026-03-09T21:00:49.909 
INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:49 vm10.local podman[143626]: 2026-03-09 21:00:49.86902434 +0000 UTC m=+0.071057771 container start c874fa65b70e5806fff1ddf0bb180a762332f547d6a39d724be9de8bc4a99fa9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-deactivate, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T21:00:49.909 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:49 vm10.local podman[143626]: 2026-03-09 21:00:49.869998343 +0000 UTC m=+0.072031774 container attach c874fa65b70e5806fff1ddf0bb180a762332f547d6a39d724be9de8bc4a99fa9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4-osd-5-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3) 2026-03-09T21:00:49.909 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 09 21:00:49 vm10.local podman[143626]: 2026-03-09 21:00:49.809335038 +0000 UTC m=+0.011368470 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T21:00:50.062 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-589eab88-1bf8-11f1-9e50-71f3ab1833c4@osd.5.service' 2026-03-09T21:00:50.104 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T21:00:50.104 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-09T21:00:50.104 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 --force --keep-logs 2026-03-09T21:00:50.222 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T21:00:51.802 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm07.stderr:ceph-fuse[96587]: fuse finished with error 0 and tester_r 0 2026-03-09T21:00:59.362 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 --force --keep-logs 2026-03-09T21:00:59.462 INFO:teuthology.orchestra.run.vm10.stdout:Deleting cluster with fsid: 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T21:01:05.080 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T21:01:05.119 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T21:01:05.152 INFO:tasks.cephadm:Archiving crash dumps... 
2026-03-09T21:01:05.152 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/647/remote/vm07/crash 2026-03-09T21:01:05.152 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/crash -- . 2026-03-09T21:01:05.187 INFO:teuthology.orchestra.run.vm07.stderr:tar: /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/crash: Cannot open: No such file or directory 2026-03-09T21:01:05.187 INFO:teuthology.orchestra.run.vm07.stderr:tar: Error is not recoverable: exiting now 2026-03-09T21:01:05.188 DEBUG:teuthology.misc:Transferring archived files from vm10:/var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/647/remote/vm10/crash 2026-03-09T21:01:05.188 DEBUG:teuthology.orchestra.run.vm10:> sudo tar c -f - -C /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/crash -- . 2026-03-09T21:01:05.224 INFO:teuthology.orchestra.run.vm10.stderr:tar: /var/lib/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/crash: Cannot open: No such file or directory 2026-03-09T21:01:05.224 INFO:teuthology.orchestra.run.vm10.stderr:tar: Error is not recoverable: exiting now 2026-03-09T21:01:05.226 INFO:tasks.cephadm:Checking cluster log for badness... 
2026-03-09T21:01:05.226 DEBUG:teuthology.orchestra.run.vm07:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v FS_DEGRADED | egrep -v 'filesystem is degraded' | egrep -v FS_INLINE_DATA_DEPRECATED | egrep -v FS_WITH_FAILED_MDS | egrep -v MDS_ALL_DOWN | egrep -v 'filesystem is offline' | egrep -v 'is offline because no MDS' | egrep -v MDS_DAMAGE | egrep -v MDS_DEGRADED | egrep -v MDS_FAILED | egrep -v MDS_INSUFFICIENT_STANDBY | egrep -v MDS_UP_LESS_THAN_MAX | egrep -v 'online, but wants' | egrep -v 'filesystem is online with fewer MDS than max_mds' | egrep -v POOL_APP_NOT_ENABLED | egrep -v 'do not have an application enabled' | egrep -v 'overall HEALTH_' | egrep -v 'Replacing daemon' | egrep -v 'deprecated feature inline_data' | egrep -v MGR_MODULE_ERROR | egrep -v OSD_DOWN | egrep -v 'osds down' | egrep -v 'overall HEALTH_' | egrep -v '\(OSD_DOWN\)' | egrep -v '\(OSD_' | egrep -v 'but it is still running' | egrep -v 'is not responding' | egrep -v MON_DOWN | egrep -v PG_AVAILABILITY | egrep -v PG_DEGRADED | egrep -v 'Reduced data availability' | egrep -v 'Degraded data redundancy' | egrep -v 'pg .* is stuck inactive' | egrep -v 'pg .* is .*degraded' | egrep -v 'pg .* is stuck peering' | head -n 1 2026-03-09T21:01:05.292 INFO:tasks.cephadm:Compressing logs... 
2026-03-09T21:01:05.292 DEBUG:teuthology.orchestra.run.vm07:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T21:01:05.293 DEBUG:teuthology.orchestra.run.vm10:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T21:01:05.317 INFO:teuthology.orchestra.run.vm07.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T21:01:05.317 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T21:01:05.318 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mon.vm07.log 2026-03-09T21:01:05.318 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.log 2026-03-09T21:01:05.323 INFO:teuthology.orchestra.run.vm10.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T21:01:05.323 INFO:teuthology.orchestra.run.vm10.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T21:01:05.324 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-volume.log 2026-03-09T21:01:05.325 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-client.ceph-exporter.vm10.log 2026-03-09T21:01:05.326 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-volume.log: 92.7% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T21:01:05.326 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mgr.vm10.byqahe.log 2026-03-09T21:01:05.327 
INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mon.vm10.log 2026-03-09T21:01:05.331 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mon.vm07.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mgr.vm07.xjrvch.log 2026-03-09T21:01:05.332 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.log: 90.7% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T21:01:05.333 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mgr.vm10.byqahe.log: /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-client.ceph-exporter.vm10.log: 93.9% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-client.ceph-exporter.vm10.log.gz 2026-03-09T21:01:05.333 INFO:teuthology.orchestra.run.vm07.stderr: 87.4% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.log.gz 2026-03-09T21:01:05.333 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.audit.log 2026-03-09T21:01:05.334 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mgr.vm07.xjrvch.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.cephadm.log 2026-03-09T21:01:05.337 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.audit.log 2026-03-09T21:01:05.344 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mon.vm10.log: 92.4% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-volume.log.gz 2026-03-09T21:01:05.345 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.audit.log: 91.5% -- replaced with 
/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.audit.log.gz 2026-03-09T21:01:05.345 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.log 2026-03-09T21:01:05.345 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-volume.log 2026-03-09T21:01:05.347 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.cephadm.log: 85.3% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.cephadm.log.gz 2026-03-09T21:01:05.347 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.audit.log: 89.4% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mgr.vm10.byqahe.log.gz 2026-03-09T21:01:05.349 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.cephadm.log 2026-03-09T21:01:05.350 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.3.log 2026-03-09T21:01:05.351 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-client.ceph-exporter.vm07.log 2026-03-09T21:01:05.351 INFO:teuthology.orchestra.run.vm10.stderr: 91.7% 87.4% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.log.gz 2026-03-09T21:01:05.351 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.cephadm.log: -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.audit.log.gz 2026-03-09T21:01:05.352 INFO:teuthology.orchestra.run.vm10.stderr: 85.0% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph.cephadm.log.gz 2026-03-09T21:01:05.352 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 
--verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.4.log 2026-03-09T21:01:05.352 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.5.log 2026-03-09T21:01:05.360 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.0.log 2026-03-09T21:01:05.361 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-client.ceph-exporter.vm07.log: 93.9% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-client.ceph-exporter.vm07.log.gz 2026-03-09T21:01:05.372 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.4.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm10.qpltwp.log 2026-03-09T21:01:05.374 INFO:teuthology.orchestra.run.vm07.stderr: 92.7% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-volume.log.gz 2026-03-09T21:01:05.374 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.1.log 2026-03-09T21:01:05.378 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.2.log 2026-03-09T21:01:05.385 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm10.hzyuyq.log 2026-03-09T21:01:05.389 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.1.log: gzip -5 --verbose -- 
/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm07.rovdbp.log 2026-03-09T21:01:05.394 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm10.qpltwp.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.1.log 2026-03-09T21:01:05.403 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm07.potfau.log 2026-03-09T21:01:05.410 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm07.rovdbp.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.0.log 2026-03-09T21:01:05.914 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm10.hzyuyq.log: /var/log/ceph/ceph-client.1.log: 92.1% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mon.vm10.log.gz 2026-03-09T21:01:05.967 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm07.potfau.log: /var/log/ceph/ceph-client.0.log: 89.4% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mgr.vm07.xjrvch.log.gz 2026-03-09T21:01:06.931 INFO:teuthology.orchestra.run.vm07.stderr: 90.6% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mon.vm07.log.gz 2026-03-09T21:01:14.854 INFO:teuthology.orchestra.run.vm10.stderr: 93.6% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.4.log.gz 2026-03-09T21:01:15.893 INFO:teuthology.orchestra.run.vm07.stderr: 93.6% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.2.log.gz 2026-03-09T21:01:16.542 INFO:teuthology.orchestra.run.vm10.stderr: 94.7% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm10.qpltwp.log.gz 2026-03-09T21:01:16.917 
INFO:teuthology.orchestra.run.vm07.stderr: 93.7% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.0.log.gz 2026-03-09T21:01:17.424 INFO:teuthology.orchestra.run.vm10.stderr: 93.8% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.5.log.gz 2026-03-09T21:01:17.761 INFO:teuthology.orchestra.run.vm07.stderr: 93.7% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.1.log.gz 2026-03-09T21:01:18.482 INFO:teuthology.orchestra.run.vm10.stderr: 93.7% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-osd.3.log.gz 2026-03-09T21:01:22.579 INFO:teuthology.orchestra.run.vm10.stderr: 94.9% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm10.hzyuyq.log.gz 2026-03-09T21:01:22.661 INFO:teuthology.orchestra.run.vm10.stderr:gzip: /var/log/ceph/ceph-client.1.log: file size changed while zipping 2026-03-09T21:01:22.661 INFO:teuthology.orchestra.run.vm10.stderr: 93.3% -- replaced with /var/log/ceph/ceph-client.1.log.gz 2026-03-09T21:01:22.663 INFO:teuthology.orchestra.run.vm10.stderr: 2026-03-09T21:01:22.663 INFO:teuthology.orchestra.run.vm10.stderr:real 0m17.352s 2026-03-09T21:01:22.663 INFO:teuthology.orchestra.run.vm10.stderr:user 0m32.788s 2026-03-09T21:01:22.663 INFO:teuthology.orchestra.run.vm10.stderr:sys 0m1.697s 2026-03-09T21:01:23.982 INFO:teuthology.orchestra.run.vm07.stderr: 94.8% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm07.potfau.log.gz 2026-03-09T21:01:25.449 INFO:teuthology.orchestra.run.vm07.stderr:gzip: /var/log/ceph/ceph-client.0.log: file size changed while zipping 2026-03-09T21:01:25.583 INFO:teuthology.orchestra.run.vm07.stderr: 93.3% -- replaced with /var/log/ceph/ceph-client.0.log.gz 2026-03-09T21:02:21.701 INFO:teuthology.orchestra.run.vm07.stderr: 92.9% -- replaced with /var/log/ceph/589eab88-1bf8-11f1-9e50-71f3ab1833c4/ceph-mds.cephfs.vm07.rovdbp.log.gz 2026-03-09T21:02:21.703 
INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T21:02:21.703 INFO:teuthology.orchestra.run.vm07.stderr:real 1m16.396s 2026-03-09T21:02:21.703 INFO:teuthology.orchestra.run.vm07.stderr:user 1m28.046s 2026-03-09T21:02:21.703 INFO:teuthology.orchestra.run.vm07.stderr:sys 0m6.132s 2026-03-09T21:02:21.703 INFO:tasks.cephadm:Archiving logs... 2026-03-09T21:02:21.704 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/647/remote/vm07/log 2026-03-09T21:02:21.704 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T21:02:26.950 DEBUG:teuthology.misc:Transferring archived files from vm10:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/647/remote/vm10/log 2026-03-09T21:02:26.951 DEBUG:teuthology.orchestra.run.vm10:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T21:02:28.680 INFO:tasks.cephadm:Removing cluster... 2026-03-09T21:02:28.680 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 --force 2026-03-09T21:02:28.841 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T21:02:29.231 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 589eab88-1bf8-11f1-9e50-71f3ab1833c4 --force 2026-03-09T21:02:29.359 INFO:teuthology.orchestra.run.vm10.stdout:Deleting cluster with fsid: 589eab88-1bf8-11f1-9e50-71f3ab1833c4 2026-03-09T21:02:29.663 INFO:tasks.cephadm:Removing cephadm ... 
2026-03-09T21:02:29.663 DEBUG:teuthology.orchestra.run.vm07:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T21:02:29.682 DEBUG:teuthology.orchestra.run.vm10:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T21:02:29.702 INFO:tasks.cephadm:Teardown complete 2026-03-09T21:02:29.702 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-09T21:02:29.706 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 644, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-09T21:02:29.706 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 
2026-03-09T21:02:29.706 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T21:02:29.724 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T21:02:29.787 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-09T21:02:29.787 DEBUG:teuthology.orchestra.run.vm07:> 2026-03-09T21:02:29.787 DEBUG:teuthology.orchestra.run.vm07:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-09T21:02:29.787 DEBUG:teuthology.orchestra.run.vm07:> sudo yum -y remove $d || true 2026-03-09T21:02:29.787 DEBUG:teuthology.orchestra.run.vm07:> done 2026-03-09T21:02:29.795 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 
2026-03-09T21:02:29.795 DEBUG:teuthology.orchestra.run.vm10:> 2026-03-09T21:02:29.795 DEBUG:teuthology.orchestra.run.vm10:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-09T21:02:29.795 DEBUG:teuthology.orchestra.run.vm10:> sudo yum -y remove $d || true 2026-03-09T21:02:29.795 DEBUG:teuthology.orchestra.run.vm10:> done 2026-03-09T21:02:30.210 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:30.210 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:30.210 INFO:teuthology.orchestra.run.vm10.stdout: Package Arch Version Repository Size 2026-03-09T21:02:30.210 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:30.210 INFO:teuthology.orchestra.run.vm10.stdout:Removing: 2026-03-09T21:02:30.210 INFO:teuthology.orchestra.run.vm10.stdout: ceph-radosgw x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 31 M 2026-03-09T21:02:30.210 INFO:teuthology.orchestra.run.vm10.stdout:Removing unused dependencies: 2026-03-09T21:02:30.210 INFO:teuthology.orchestra.run.vm10.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-09T21:02:30.211 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:30.211 INFO:teuthology.orchestra.run.vm10.stdout:Transaction Summary 2026-03-09T21:02:30.211 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:30.211 INFO:teuthology.orchestra.run.vm10.stdout:Remove 2 Packages 2026-03-09T21:02:30.211 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:30.211 
INFO:teuthology.orchestra.run.vm10.stdout:Freed space: 31 M 2026-03-09T21:02:30.211 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction check 2026-03-09T21:02:30.221 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm10.stdout:Transaction check succeeded. 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction test 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 31 M 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies: 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout:Remove 2 Packages 2026-03-09T21:02:30.222 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:30.223 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 31 M 2026-03-09T21:02:30.223 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-03-09T21:02:30.235 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 
2026-03-09T21:02:30.236 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-03-09T21:02:30.243 INFO:teuthology.orchestra.run.vm10.stdout:Transaction test succeeded. 2026-03-09T21:02:30.244 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction 2026-03-09T21:02:30.256 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 2026-03-09T21:02:30.256 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-03-09T21:02:30.336 INFO:teuthology.orchestra.run.vm10.stdout: Preparing : 1/1 2026-03-09T21:02:30.342 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-03-09T21:02:30.360 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:30.360 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T21:02:30.360 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-09T21:02:30.360 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-09T21:02:30.360 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-09T21:02:30.360 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:30.362 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:30.368 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:30.368 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T21:02:30.368 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 
2026-03-09T21:02:30.368 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-09T21:02:30.368 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-09T21:02:30.368 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:30.370 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:30.389 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:30.389 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:30.410 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T21:02:30.414 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T21:02:30.533 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T21:02:30.533 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:30.535 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T21:02:30.535 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:30.595 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T21:02:30.595 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:30.595 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-03-09T21:02:30.595 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-09T21:02:30.595 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:30.595 
INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:30.597 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T21:02:30.597 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:30.597 INFO:teuthology.orchestra.run.vm10.stdout:Removed: 2026-03-09T21:02:30.597 INFO:teuthology.orchestra.run.vm10.stdout: ceph-radosgw-2:18.2.7-1055.gab47f43c.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-09T21:02:30.597 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:30.597 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:30.956 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 149 M 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies: 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-03-09T21:02:30.957 
INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout:Remove 4 Packages 2026-03-09T21:02:30.957 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:30.958 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 151 M 2026-03-09T21:02:30.958 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-03-09T21:02:30.961 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-03-09T21:02:30.961 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-03-09T21:02:30.989 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 2026-03-09T21:02:30.989 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout: Package Arch Version Repository Size 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout:Removing: 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout: ceph-test x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 149 M 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout:Removing unused dependencies: 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k 2026-03-09T21:02:31.013 
INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout:Transaction Summary 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout:Remove 4 Packages 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.013 INFO:teuthology.orchestra.run.vm10.stdout:Freed space: 151 M 2026-03-09T21:02:31.014 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction check 2026-03-09T21:02:31.017 INFO:teuthology.orchestra.run.vm10.stdout:Transaction check succeeded. 2026-03-09T21:02:31.017 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction test 2026-03-09T21:02:31.043 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-03-09T21:02:31.046 INFO:teuthology.orchestra.run.vm10.stdout:Transaction test succeeded. 2026-03-09T21:02:31.046 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction 2026-03-09T21:02:31.052 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64 1/4 2026-03-09T21:02:31.056 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4 2026-03-09T21:02:31.061 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4 2026-03-09T21:02:31.079 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-09T21:02:31.105 INFO:teuthology.orchestra.run.vm10.stdout: Preparing : 1/1 2026-03-09T21:02:31.114 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64 1/4 2026-03-09T21:02:31.116 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4 2026-03-09T21:02:31.120 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4 2026-03-09T21:02:31.137 
INFO:teuthology.orchestra.run.vm10.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-09T21:02:31.165 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-09T21:02:31.165 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64 1/4 2026-03-09T21:02:31.165 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4 2026-03-09T21:02:31.165 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4 2026-03-09T21:02:31.221 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-09T21:02:31.221 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64 1/4 2026-03-09T21:02:31.221 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4 2026-03-09T21:02:31.221 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4 2026-03-09T21:02:31.234 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4 2026-03-09T21:02:31.234 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:31.234 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-03-09T21:02:31.234 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 2026-03-09T21:02:31.234 INFO:teuthology.orchestra.run.vm07.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T21:02:31.234 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:31.234 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 
2026-03-09T21:02:31.295 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4 2026-03-09T21:02:31.295 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.295 INFO:teuthology.orchestra.run.vm10.stdout:Removed: 2026-03-09T21:02:31.295 INFO:teuthology.orchestra.run.vm10.stdout: ceph-test-2:18.2.7-1055.gab47f43c.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 2026-03-09T21:02:31.295 INFO:teuthology.orchestra.run.vm10.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T21:02:31.296 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.296 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:31.510 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:31.510 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:31.510 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size 2026-03-09T21:02:31.510 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:31.510 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-03-09T21:02:31.510 INFO:teuthology.orchestra.run.vm07.stdout: ceph x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 0 2026-03-09T21:02:31.510 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies: 2026-03-09T21:02:31.510 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 6.4 M 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 18 M 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 58 M 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout: libconfig 
x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout:Remove 8 Packages 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 84 M 2026-03-09T21:02:31.511 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-03-09T21:02:31.514 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-03-09T21:02:31.514 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-03-09T21:02:31.541 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 2026-03-09T21:02:31.541 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-03-09T21:02:31.568 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 
2026-03-09T21:02:31.568 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:31.568 INFO:teuthology.orchestra.run.vm10.stdout: Package Arch Version Repository Size 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout:Removing: 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout: ceph x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 0 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout:Removing unused dependencies: 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mds x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 6.4 M 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mon x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 18 M 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout: ceph-osd x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 58 M 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout:Transaction Summary 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout:Remove 8 Packages 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.569 
INFO:teuthology.orchestra.run.vm10.stdout:Freed space: 84 M 2026-03-09T21:02:31.569 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction check 2026-03-09T21:02:31.572 INFO:teuthology.orchestra.run.vm10.stdout:Transaction check succeeded. 2026-03-09T21:02:31.572 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction test 2026-03-09T21:02:31.583 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-03-09T21:02:31.584 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-2:18.2.7-1055.gab47f43c.el9.x86_64 1/8 2026-03-09T21:02:31.605 INFO:teuthology.orchestra.run.vm10.stdout:Transaction test succeeded. 2026-03-09T21:02:31.605 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction 2026-03-09T21:02:31.610 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 2/8 2026-03-09T21:02:31.610 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T21:02:31.610 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T21:02:31.610 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-09T21:02:31.610 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-09T21:02:31.610 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:31.612 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 2/8 2026-03-09T21:02:31.622 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 2/8 2026-03-09T21:02:31.637 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-09T21:02:31.637 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 
2026-03-09T21:02:31.637 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:31.638 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-09T21:02:31.650 INFO:teuthology.orchestra.run.vm10.stdout: Preparing : 1/1 2026-03-09T21:02:31.652 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-2:18.2.7-1055.gab47f43c.el9.x86_64 1/8 2026-03-09T21:02:31.663 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-09T21:02:31.667 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 4/8 2026-03-09T21:02:31.670 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 5/8 2026-03-09T21:02:31.673 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 6/8 2026-03-09T21:02:31.674 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 2/8 2026-03-09T21:02:31.674 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T21:02:31.674 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T21:02:31.674 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-09T21:02:31.674 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 
2026-03-09T21:02:31.675 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.677 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 2/8 2026-03-09T21:02:31.688 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 2/8 2026-03-09T21:02:31.700 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 7/8 2026-03-09T21:02:31.700 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T21:02:31.700 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-09T21:02:31.700 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-09T21:02:31.700 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 2026-03-09T21:02:31.700 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:31.701 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 7/8 2026-03-09T21:02:31.707 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-09T21:02:31.708 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 
2026-03-09T21:02:31.708 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.709 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-09T21:02:31.711 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 7/8 2026-03-09T21:02:31.736 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-09T21:02:31.738 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 8/8 2026-03-09T21:02:31.739 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T21:02:31.739 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T21:02:31.739 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-09T21:02:31.739 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 2026-03-09T21:02:31.739 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:31.739 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 8/8 2026-03-09T21:02:31.741 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 4/8 2026-03-09T21:02:31.744 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 5/8 2026-03-09T21:02:31.754 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 6/8 2026-03-09T21:02:31.776 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 7/8 2026-03-09T21:02:31.776 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-09T21:02:31.776 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-09T21:02:31.776 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-09T21:02:31.776 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 2026-03-09T21:02:31.776 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.777 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 7/8 2026-03-09T21:02:31.787 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 7/8 2026-03-09T21:02:31.808 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 8/8 2026-03-09T21:02:31.808 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T21:02:31.808 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T21:02:31.808 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-09T21:02:31.808 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 
2026-03-09T21:02:31.808 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.809 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 8/8 2026-03-09T21:02:31.876 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 8/8 2026-03-09T21:02:31.876 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-2:18.2.7-1055.gab47f43c.el9.x86_64 1/8 2026-03-09T21:02:31.876 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 2/8 2026-03-09T21:02:31.876 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 3/8 2026-03-09T21:02:31.877 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 4/8 2026-03-09T21:02:31.877 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 5/8 2026-03-09T21:02:31.877 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 6/8 2026-03-09T21:02:31.877 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 7/8 2026-03-09T21:02:31.928 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 8/8 2026-03-09T21:02:31.928 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-2:18.2.7-1055.gab47f43c.el9.x86_64 1/8 2026-03-09T21:02:31.928 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 2/8 2026-03-09T21:02:31.928 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 3/8 2026-03-09T21:02:31.928 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 4/8 2026-03-09T21:02:31.928 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 5/8 2026-03-09T21:02:31.928 INFO:teuthology.orchestra.run.vm10.stdout: 
Verifying : libconfig-1.7.2-9.el9.x86_64 6/8 2026-03-09T21:02:31.928 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 7/8 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 8/8 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: ceph-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:31.943 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 
2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 8/8 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout:Removed: 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: ceph-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mds-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mon-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: ceph-osd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:31.993 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:32.198 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 
2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout:============================================================================================ 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout:============================================================================================ 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 22 M 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages: 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 395 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 4.5 M 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 736 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 87 M 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 66 M 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 563 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 12 M 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies: 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 71 M 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards noarch 
2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 355 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 1.5 M 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 52 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 138 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: fmt x86_64 8.1.1-5.el9 @epel 337 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 438 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-09T21:02:32.205 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 1.5 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-09T21:02:32.206 
INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 640 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 
2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: 
python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-09T21:02:32.206 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-09T21:02:32.207 
INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout:============================================================================================ 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout:Remove 84 Packages 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 515 M 2026-03-09T21:02:32.207 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-03-09T21:02:32.233 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 
2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout:============================================================================================ 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout: Package Arch Version Repository Size 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout:============================================================================================ 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout:Removing: 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout: ceph-base x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 22 M 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout:Removing dependent packages: 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout: ceph-immutable-object-cache x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 395 k 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 4.5 M 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-cephadm noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 736 k 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-dashboard noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 87 M 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 66 M 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-rook noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 563 k 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout: rbd-mirror x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 12 M 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout:Removing unused dependencies: 2026-03-09T21:02:32.267 INFO:teuthology.orchestra.run.vm10.stdout: ceph-common x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 71 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: ceph-grafana-dashboards noarch 
2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 355 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 1.5 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: ceph-prometheus-alerts noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 52 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: ceph-selinux x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 138 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: fmt x86_64 8.1.1-5.el9 @epel 337 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: libcephsqlite x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 438 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1 x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 1.5 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-09T21:02:32.268 
INFO:teuthology.orchestra.run.vm10.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-common x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 640 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 
2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-09T21:02:32.268 INFO:teuthology.orchestra.run.vm10.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: 
python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-09T21:02:32.269 
INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout:Transaction Summary 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout:============================================================================================ 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout:Remove 84 Packages 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout:Freed space: 515 M 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction check 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout:Transaction check succeeded. 2026-03-09T21:02:32.269 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction test 2026-03-09T21:02:32.389 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 
2026-03-09T21:02:32.389 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-03-09T21:02:32.406 INFO:teuthology.orchestra.run.vm10.stdout:Transaction test succeeded. 2026-03-09T21:02:32.406 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction 2026-03-09T21:02:32.556 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-03-09T21:02:32.557 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 1/84 2026-03-09T21:02:32.557 INFO:teuthology.orchestra.run.vm10.stdout: Preparing : 1/1 2026-03-09T21:02:32.557 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 1/84 2026-03-09T21:02:32.567 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 1/84 2026-03-09T21:02:32.568 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 1/84 2026-03-09T21:02:32.588 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 2/84 2026-03-09T21:02:32.588 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T21:02:32.589 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T21:02:32.589 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-09T21:02:32.589 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-09T21:02:32.589 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:32.589 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 2/84 2026-03-09T21:02:32.592 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 2/84 2026-03-09T21:02:32.592 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T21:02:32.592 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T21:02:32.592 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-09T21:02:32.592 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-03-09T21:02:32.592 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:32.593 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 2/84 2026-03-09T21:02:32.605 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 2/84 2026-03-09T21:02:32.609 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 2/84 2026-03-09T21:02:32.666 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el9. 3/84 2026-03-09T21:02:32.666 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el9. 
3/84 2026-03-09T21:02:32.693 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 4/84 2026-03-09T21:02:32.693 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.noa 5/84 2026-03-09T21:02:32.696 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 4/84 2026-03-09T21:02:32.696 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.noa 5/84 2026-03-09T21:02:32.707 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.noa 5/84 2026-03-09T21:02:32.710 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.noa 5/84 2026-03-09T21:02:32.712 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/84 2026-03-09T21:02:32.712 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noarc 7/84 2026-03-09T21:02:32.715 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/84 2026-03-09T21:02:32.715 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noarc 7/84 2026-03-09T21:02:32.727 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noarc 7/84 2026-03-09T21:02:32.729 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noarc 7/84 2026-03-09T21:02:32.735 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/84 2026-03-09T21:02:32.738 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/84 2026-03-09T21:02:32.739 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/84 2026-03-09T21:02:32.742 
INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 10/84 2026-03-09T21:02:32.742 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/84 2026-03-09T21:02:32.745 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 10/84 2026-03-09T21:02:32.748 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 11/84 2026-03-09T21:02:32.751 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 11/84 2026-03-09T21:02:32.754 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 12/84 2026-03-09T21:02:32.757 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 12/84 2026-03-09T21:02:32.764 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 13/84 2026-03-09T21:02:32.767 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 13/84 2026-03-09T21:02:32.778 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 14/84 2026-03-09T21:02:32.781 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 14/84 2026-03-09T21:02:32.786 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 15/84 2026-03-09T21:02:32.789 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 15/84 2026-03-09T21:02:32.798 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 16/84 2026-03-09T21:02:32.801 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 16/84 2026-03-09T21:02:32.805 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 17/84 2026-03-09T21:02:32.809 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : 
python3-pyasn1-modules-0.4.8-7.el9.noarch 17/84
2026-03-09T21:02:32.841 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 18/84
2026-03-09T21:02:32.845 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 18/84
2026-03-09T21:02:32.877 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 19/84
2026-03-09T21:02:32.877 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 19/84
2026-03-09T21:02:32.882 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 20/84
2026-03-09T21:02:32.883 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 20/84
2026-03-09T21:02:32.895 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 21/84
2026-03-09T21:02:32.895 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 21/84
2026-03-09T21:02:32.904 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 22/84
2026-03-09T21:02:32.904 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-mgr-diskprediction-local-2:18.2.7-1055.gab47f 23/84
2026-03-09T21:02:32.906 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 22/84
2026-03-09T21:02:32.906 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-diskprediction-local-2:18.2.7-1055.gab47f 23/84
2026-03-09T21:02:32.916 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.7-1055.gab47f 23/84
2026-03-09T21:02:32.918 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.7-1055.gab47f 23/84
2026-03-09T21:02:33.026 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 24/84
2026-03-09T21:02:33.038 INFO:teuthology.orchestra.run.vm07.stdout: Erasing
: python3-jsonpatch-1.21-16.el9.noarch 24/84
2026-03-09T21:02:33.060 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 25/84
2026-03-09T21:02:33.073 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 25/84
2026-03-09T21:02:33.080 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 26/84
2026-03-09T21:02:33.087 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 27/84
2026-03-09T21:02:33.090 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 28/84
2026-03-09T21:02:33.093 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 26/84
2026-03-09T21:02:33.102 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 27/84
2026-03-09T21:02:33.105 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 28/84
2026-03-09T21:02:33.112 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 29/84
2026-03-09T21:02:33.112 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T21:02:33.112 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-09T21:02:33.112 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-09T21:02:33.112 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-09T21:02:33.112 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:02:33.113 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 29/84
2026-03-09T21:02:33.126 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 29/84
2026-03-09T21:02:33.130 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 29/84
2026-03-09T21:02:33.131 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T21:02:33.131 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-09T21:02:33.131 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-09T21:02:33.131 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-09T21:02:33.131 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:02:33.131 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 30/84
2026-03-09T21:02:33.132 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 29/84
2026-03-09T21:02:33.134 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 31/84
2026-03-09T21:02:33.137 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 32/84
2026-03-09T21:02:33.140 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 33/84
2026-03-09T21:02:33.145 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 34/84
2026-03-09T21:02:33.149 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 29/84
2026-03-09T21:02:33.151 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 35/84
2026-03-09T21:02:33.156 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 30/84
2026-03-09T21:02:33.157 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 36/84
2026-03-09T21:02:33.160 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 31/84
2026-03-09T21:02:33.164 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 32/84
2026-03-09T21:02:33.167 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 33/84
2026-03-09T21:02:33.171 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 34/84
2026-03-09T21:02:33.177 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 35/84
2026-03-09T21:02:33.183 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64
36/84
2026-03-09T21:02:33.212 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 37/84
2026-03-09T21:02:33.228 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 38/84
2026-03-09T21:02:33.232 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 39/84
2026-03-09T21:02:33.236 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 40/84
2026-03-09T21:02:33.239 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 37/84
2026-03-09T21:02:33.240 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 41/84
2026-03-09T21:02:33.243 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 42/84
2026-03-09T21:02:33.253 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 38/84
2026-03-09T21:02:33.255 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 39/84
2026-03-09T21:02:33.258 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 40/84
2026-03-09T21:02:33.261 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 41/84
2026-03-09T21:02:33.263 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 42/84
2026-03-09T21:02:33.272 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.7-1055.gab47f43 43/84
2026-03-09T21:02:33.272 INFO:teuthology.orchestra.run.vm10.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T21:02:33.272 INFO:teuthology.orchestra.run.vm10.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-09T21:02:33.272 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:02:33.272 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-immutable-object-cache-2:18.2.7-1055.gab47f43 43/84
2026-03-09T21:02:33.283 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.7-1055.gab47f43 43/84
2026-03-09T21:02:33.286 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.7-1055.gab47f43 43/84
2026-03-09T21:02:33.286 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T21:02:33.286 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-09T21:02:33.286 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:02:33.286 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 44/84
2026-03-09T21:02:33.286 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-immutable-object-cache-2:18.2.7-1055.gab47f43 43/84
2026-03-09T21:02:33.288 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 45/84
2026-03-09T21:02:33.292 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-ply-3.11-14.el9.noarch 46/84
2026-03-09T21:02:33.295 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.7-1055.gab47f43 43/84
2026-03-09T21:02:33.296 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 47/84
2026-03-09T21:02:33.298 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 44/84
2026-03-09T21:02:33.300 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 48/84
2026-03-09T21:02:33.300 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 45/84
2026-03-09T21:02:33.303
INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 49/84
2026-03-09T21:02:33.305 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-ply-3.11-14.el9.noarch 46/84
2026-03-09T21:02:33.307 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 50/84
2026-03-09T21:02:33.310 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 47/84
2026-03-09T21:02:33.312 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 51/84
2026-03-09T21:02:33.313 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 48/84
2026-03-09T21:02:33.318 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 49/84
2026-03-09T21:02:33.322 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 52/84
2026-03-09T21:02:33.323 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 50/84
2026-03-09T21:02:33.327 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 51/84
2026-03-09T21:02:33.329 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 53/84
2026-03-09T21:02:33.332 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 54/84
2026-03-09T21:02:33.336 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 55/84
2026-03-09T21:02:33.339 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 52/84
2026-03-09T21:02:33.340 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 56/84
2026-03-09T21:02:33.346 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 57/84
2026-03-09T21:02:33.348 INFO:teuthology.orchestra.run.vm07.stdout: Erasing :
python3-devel-3.9.25-3.el9.x86_64 53/84
2026-03-09T21:02:33.352 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 54/84
2026-03-09T21:02:33.354 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 58/84
2026-03-09T21:02:33.357 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 55/84
2026-03-09T21:02:33.361 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 56/84
2026-03-09T21:02:33.363 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 59/84
2026-03-09T21:02:33.368 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 57/84
2026-03-09T21:02:33.371 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 60/84
2026-03-09T21:02:33.376 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 58/84
2026-03-09T21:02:33.380 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 61/84
2026-03-09T21:02:33.382 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 59/84
2026-03-09T21:02:33.386 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 62/84
2026-03-09T21:02:33.386 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 60/84
2026-03-09T21:02:33.391 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 63/84
2026-03-09T21:02:33.394 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 61/84
2026-03-09T21:02:33.396 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 64/84
2026-03-09T21:02:33.400 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 62/84
2026-03-09T21:02:33.405
INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 63/84
2026-03-09T21:02:33.407 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 65/84
2026-03-09T21:02:33.410 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 64/84
2026-03-09T21:02:33.413 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 66/84
2026-03-09T21:02:33.415 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c.el 67/84
2026-03-09T21:02:33.423 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 65/84
2026-03-09T21:02:33.423 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.el9 68/84
2026-03-09T21:02:33.428 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 66/84
2026-03-09T21:02:33.431 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c.el 67/84
2026-03-09T21:02:33.432 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 69/84
2026-03-09T21:02:33.438 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.el9 68/84
2026-03-09T21:02:33.438 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 70/84
2026-03-09T21:02:33.444 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 71/84
2026-03-09T21:02:33.445 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 69/84
2026-03-09T21:02:33.450 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 70/84
2026-03-09T21:02:33.453 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 71/84
2026-03-09T21:02:33.474
INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 72/84
2026-03-09T21:02:33.474 INFO:teuthology.orchestra.run.vm10.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-09T21:02:33.474 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:02:33.477 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 72/84
2026-03-09T21:02:33.477 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-09T21:02:33.477 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:02:33.483 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 72/84
2026-03-09T21:02:33.485 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 72/84
2026-03-09T21:02:33.508 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 72/84
2026-03-09T21:02:33.509 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 73/84
2026-03-09T21:02:33.512 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 72/84
2026-03-09T21:02:33.512 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 73/84
2026-03-09T21:02:40.260 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 73/84
2026-03-09T21:02:40.260 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /sys
2026-03-09T21:02:40.260 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /proc
2026-03-09T21:02:40.260 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /mnt
2026-03-09T21:02:40.260 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /var/tmp
2026-03-09T21:02:40.260
INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /home
2026-03-09T21:02:40.260 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /root
2026-03-09T21:02:40.260 INFO:teuthology.orchestra.run.vm10.stdout:skipping the directory /tmp
2026-03-09T21:02:40.260 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-09T21:02:40.275 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 74/84
2026-03-09T21:02:40.307 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 74/84
2026-03-09T21:02:40.310 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-ceph-common-2:18.2.7-1055.gab47f43c.el9.x8 75/84
2026-03-09T21:02:40.313 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 76/84
2026-03-09T21:02:40.317 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 77/84
2026-03-09T21:02:40.320 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 78/84
2026-03-09T21:02:40.323 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 79/84
2026-03-09T21:02:40.323 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86_6 80/84
2026-03-09T21:02:40.339 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86_6 80/84
2026-03-09T21:02:40.344 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : fmt-8.1.1-5.el9.x86_64 81/84
2026-03-09T21:02:40.348 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 82/84
2026-03-09T21:02:40.352 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 83/84
2026-03-09T21:02:40.352 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 84/84
2026-03-09T21:02:40.434
INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 73/84
2026-03-09T21:02:40.434 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /sys
2026-03-09T21:02:40.434 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /proc
2026-03-09T21:02:40.434 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /mnt
2026-03-09T21:02:40.434 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /var/tmp
2026-03-09T21:02:40.434 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /home
2026-03-09T21:02:40.434 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /root
2026-03-09T21:02:40.434 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /tmp
2026-03-09T21:02:40.434 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-09T21:02:40.447 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 74/84
2026-03-09T21:02:40.470 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 84/84
2026-03-09T21:02:40.470 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 1/84
2026-03-09T21:02:40.470 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 2/84
2026-03-09T21:02:40.470 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c.el 3/84
2026-03-09T21:02:40.470 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-immutable-object-cache-2:18.2.7-1055.gab47f43 4/84
2026-03-09T21:02:40.470 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 5/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noarc 6/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying :
ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.noa 7/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.7-1055.gab47f 8/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el9. 9/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 10/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.el9 11/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 12/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 13/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 14/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 15/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 16/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 17/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 18/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 19/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 20/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 21/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86_6 22/84
2026-03-09T21:02:40.471
INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 23/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 24/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 25/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 26/84
2026-03-09T21:02:40.471 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 27/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 28/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 29/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 30/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 31/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-ceph-common-2:18.2.7-1055.gab47f43c.el9.x8 32/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 33/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 34/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 35/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 36/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 37/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 38/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout:
Verifying : python3-devel-3.9.25-3.el9.x86_64 39/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 40/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 41/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 42/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 43/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 44/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 45/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 46/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 47/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 48/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 49/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 50/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 51/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 52/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 53/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 54/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying :
python3-more-itertools-8.12.0-2.el9.noarch 55/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 56/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 57/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 58/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 59/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 60/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-ply-3.11-14.el9.noarch 61/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 62/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 63/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 64/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 65/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 66/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 67/84
2026-03-09T21:02:40.473 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 68/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 69/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 70/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 71/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 72/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 73/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 74/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 75/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 76/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 77/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 78/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 79/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 80/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 81/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 82/84
2026-03-09T21:02:40.474 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 83/84
2026-03-09T21:02:40.482 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 74/84
2026-03-09T21:02:40.489 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-ceph-common-2:18.2.7-1055.gab47f43c.el9.x8 75/84
2026-03-09T21:02:40.502 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 76/84
2026-03-09T21:02:40.507 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 77/84
2026-03-09T21:02:40.511 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 78/84 2026-03-09T21:02:40.514 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 79/84 2026-03-09T21:02:40.514 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86_6 80/84 2026-03-09T21:02:40.532 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86_6 80/84 2026-03-09T21:02:40.536 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : fmt-8.1.1-5.el9.x86_64 81/84 2026-03-09T21:02:40.541 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 82/84 2026-03-09T21:02:40.546 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 83/84 2026-03-09T21:02:40.546 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 84/84 2026-03-09T21:02:40.569 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 84/84 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout:Removed: 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-immutable-object-cache-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.570 
INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-diskprediction-local-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T21:02:40.570 
INFO:teuthology.orchestra.run.vm10.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T21:02:40.570 
INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T21:02:40.570 INFO:teuthology.orchestra.run.vm10.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T21:02:40.571 
INFO:teuthology.orchestra.run.vm10.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T21:02:40.571 
INFO:teuthology.orchestra.run.vm10.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:40.571 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:40.662 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 84/84 2026-03-09T21:02:40.662 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 1/84 2026-03-09T21:02:40.662 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 2/84 2026-03-09T21:02:40.662 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c.el 3/84 2026-03-09T21:02:40.662 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-immutable-object-cache-2:18.2.7-1055.gab47f43 4/84 2026-03-09T21:02:40.662 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 5/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noarc 6/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.noa 7/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.7-1055.gab47f 8/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el9. 
9/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 10/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.el9 11/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 12/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 13/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 14/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 15/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 16/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 17/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 18/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 19/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 20/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 21/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86_6 22/84 2026-03-09T21:02:40.663 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 23/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 24/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 25/84 2026-03-09T21:02:40.664 
INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 26/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 27/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 28/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 29/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 30/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 31/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ceph-common-2:18.2.7-1055.gab47f43c.el9.x8 32/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 33/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 34/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 35/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 36/84 2026-03-09T21:02:40.664 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 37/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 38/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 39/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 40/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 41/84 2026-03-09T21:02:40.665 
INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 42/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 43/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 44/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 45/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 46/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 47/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 48/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 49/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 50/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 51/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 52/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 53/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 54/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 55/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 56/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 57/84 2026-03-09T21:02:40.665 
INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 58/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 59/84 2026-03-09T21:02:40.665 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 60/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ply-3.11-14.el9.noarch 61/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 62/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 63/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 64/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 65/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 66/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 67/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 68/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 69/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 70/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 71/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 72/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 73/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: 
Verifying : python3-rsa-4.9-2.el9.noarch 74/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 75/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 76/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 77/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 78/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 79/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 80/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 81/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 82/84 2026-03-09T21:02:40.666 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 83/84 2026-03-09T21:02:40.747 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 84/84 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: 
ceph-mgr-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: 
libunwind-1.6.2-1.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T21:02:40.748 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: 
python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T21:02:40.749 
INFO:teuthology.orchestra.run.vm07.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T21:02:40.749 
INFO:teuthology.orchestra.run.vm07.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:40.749 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:40.822 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout: Package Arch Version Repository Size 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout:Removing: 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout: cephadm noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 218 k 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout:Transaction Summary 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout:Remove 1 Package 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout:Freed space: 218 k 2026-03-09T21:02:40.823 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction check 2026-03-09T21:02:40.825 
INFO:teuthology.orchestra.run.vm10.stdout:Transaction check succeeded. 2026-03-09T21:02:40.825 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction test 2026-03-09T21:02:40.827 INFO:teuthology.orchestra.run.vm10.stdout:Transaction test succeeded. 2026-03-09T21:02:40.827 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction 2026-03-09T21:02:40.844 INFO:teuthology.orchestra.run.vm10.stdout: Preparing : 1/1 2026-03-09T21:02:40.845 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 1/1 2026-03-09T21:02:40.987 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 1/1 2026-03-09T21:02:40.991 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout: cephadm noarch 2:18.2.7-1055.gab47f43c.el9 @ceph-noarch 218 k 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout:Remove 1 Package 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 218 k 2026-03-09T21:02:40.992 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 
check 2026-03-09T21:02:40.993 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-03-09T21:02:40.993 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-03-09T21:02:40.995 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 2026-03-09T21:02:40.995 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-03-09T21:02:41.017 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-03-09T21:02:41.017 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 1/1 2026-03-09T21:02:41.039 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 1/1 2026-03-09T21:02:41.039 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:41.039 INFO:teuthology.orchestra.run.vm10.stdout:Removed: 2026-03-09T21:02:41.039 INFO:teuthology.orchestra.run.vm10.stdout: cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:41.039 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:41.039 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:41.129 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 1/1 2026-03-09T21:02:41.175 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 1/1 2026-03-09T21:02:41.175 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:41.175 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-03-09T21:02:41.175 INFO:teuthology.orchestra.run.vm07.stdout: cephadm-2:18.2.7-1055.gab47f43c.el9.noarch 2026-03-09T21:02:41.175 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:41.175 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:41.254 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: ceph-immutable-object-cache 2026-03-09T21:02:41.254 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 
2026-03-09T21:02:41.257 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:41.258 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:41.258 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:41.389 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-immutable-object-cache 2026-03-09T21:02:41.389 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:41.392 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:41.393 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:41.393 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:41.422 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: ceph-mgr 2026-03-09T21:02:41.423 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:41.425 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:41.426 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:41.426 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:41.596 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr 2026-03-09T21:02:41.596 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:41.599 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:41.600 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:41.600 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:41.622 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: ceph-mgr-dashboard 2026-03-09T21:02:41.622 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:41.626 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:41.627 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 
2026-03-09T21:02:41.627 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:41.848 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-dashboard 2026-03-09T21:02:41.849 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:41.852 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:41.852 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:41.852 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:41.906 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-09T21:02:41.906 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:41.909 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:41.909 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:41.909 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:42.040 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-09T21:02:42.040 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:42.043 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:42.044 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:42.044 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:42.074 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: ceph-mgr-rook 2026-03-09T21:02:42.074 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:42.077 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:42.082 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:42.082 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 
2026-03-09T21:02:42.229 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-rook 2026-03-09T21:02:42.229 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:42.232 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:42.232 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:42.232 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:42.324 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: ceph-mgr-cephadm 2026-03-09T21:02:42.324 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:42.327 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:42.328 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:42.328 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:42.510 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-cephadm 2026-03-09T21:02:42.510 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:42.514 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:42.515 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:42.515 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:42.550 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 
2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout: Package Arch Version Repository Size 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout:Removing: 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout: ceph-fuse x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 2.5 M 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout:Transaction Summary 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout:Remove 1 Package 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout:Freed space: 2.5 M 2026-03-09T21:02:42.551 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction check 2026-03-09T21:02:42.552 INFO:teuthology.orchestra.run.vm10.stdout:Transaction check succeeded. 2026-03-09T21:02:42.553 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction test 2026-03-09T21:02:42.562 INFO:teuthology.orchestra.run.vm10.stdout:Transaction test succeeded. 
2026-03-09T21:02:42.562 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction 2026-03-09T21:02:42.587 INFO:teuthology.orchestra.run.vm10.stdout: Preparing : 1/1 2026-03-09T21:02:42.602 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 1/1 2026-03-09T21:02:42.682 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 1/1 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 2.5 M 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout:Remove 1 Package 2026-03-09T21:02:42.716 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:42.717 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 2.5 M 2026-03-09T21:02:42.717 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-03-09T21:02:42.719 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 
2026-03-09T21:02:42.719 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-03-09T21:02:42.730 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 2026-03-09T21:02:42.730 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-03-09T21:02:42.740 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 1/1 2026-03-09T21:02:42.740 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:42.740 INFO:teuthology.orchestra.run.vm10.stdout:Removed: 2026-03-09T21:02:42.740 INFO:teuthology.orchestra.run.vm10.stdout: ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:42.740 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:42.740 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:42.762 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-03-09T21:02:42.777 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 1/1 2026-03-09T21:02:42.842 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 1/1 2026-03-09T21:02:42.885 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 1/1 2026-03-09T21:02:42.885 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:42.885 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-03-09T21:02:42.885 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:42.885 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:42.885 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 
2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout: Package Arch Version Repo Size 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout:Removing: 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout: librados-devel x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 456 k 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout:Removing dependent packages: 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs-devel x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 139 k 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout:Transaction Summary 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout:Remove 2 Packages 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:42.958 INFO:teuthology.orchestra.run.vm10.stdout:Freed space: 595 k 2026-03-09T21:02:42.959 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction check 2026-03-09T21:02:42.961 INFO:teuthology.orchestra.run.vm10.stdout:Transaction check succeeded. 2026-03-09T21:02:42.961 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction test 2026-03-09T21:02:42.972 INFO:teuthology.orchestra.run.vm10.stdout:Transaction test succeeded. 
2026-03-09T21:02:42.972 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction 2026-03-09T21:02:43.000 INFO:teuthology.orchestra.run.vm10.stdout: Preparing : 1/1 2026-03-09T21:02:43.003 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:43.017 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2/2 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repo Size 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 456 k 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages: 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 139 k 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout:Remove 2 Packages 2026-03-09T21:02:43.079 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:43.080 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 595 k 2026-03-09T21:02:43.080 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-03-09T21:02:43.082 
INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-03-09T21:02:43.082 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-03-09T21:02:43.093 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 2026-03-09T21:02:43.093 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-03-09T21:02:43.094 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2/2 2026-03-09T21:02:43.094 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:43.121 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-03-09T21:02:43.124 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:43.142 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2/2 2026-03-09T21:02:43.154 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2/2 2026-03-09T21:02:43.154 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:43.154 INFO:teuthology.orchestra.run.vm10.stdout:Removed: 2026-03-09T21:02:43.154 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:43.154 INFO:teuthology.orchestra.run.vm10.stdout: librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:43.155 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:43.155 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 
2026-03-09T21:02:43.217 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2/2 2026-03-09T21:02:43.217 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 1/2 2026-03-09T21:02:43.276 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2/2 2026-03-09T21:02:43.276 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:43.277 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-03-09T21:02:43.277 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:43.277 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:43.277 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:43.277 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:43.361 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 
2026-03-09T21:02:43.362 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:43.362 INFO:teuthology.orchestra.run.vm10.stdout: Package Arch Version Repo Size 2026-03-09T21:02:43.362 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:43.362 INFO:teuthology.orchestra.run.vm10.stdout:Removing: 2026-03-09T21:02:43.362 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs2 x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 2.0 M 2026-03-09T21:02:43.362 INFO:teuthology.orchestra.run.vm10.stdout:Removing dependent packages: 2026-03-09T21:02:43.362 INFO:teuthology.orchestra.run.vm10.stdout: python3-cephfs x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 505 k 2026-03-09T21:02:43.362 INFO:teuthology.orchestra.run.vm10.stdout:Removing unused dependencies: 2026-03-09T21:02:43.363 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 186 k 2026-03-09T21:02:43.363 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:43.363 INFO:teuthology.orchestra.run.vm10.stdout:Transaction Summary 2026-03-09T21:02:43.363 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:43.363 INFO:teuthology.orchestra.run.vm10.stdout:Remove 3 Packages 2026-03-09T21:02:43.363 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:43.363 INFO:teuthology.orchestra.run.vm10.stdout:Freed space: 2.6 M 2026-03-09T21:02:43.363 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction check 2026-03-09T21:02:43.364 INFO:teuthology.orchestra.run.vm10.stdout:Transaction check succeeded. 2026-03-09T21:02:43.365 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction test 2026-03-09T21:02:43.377 INFO:teuthology.orchestra.run.vm10.stdout:Transaction test succeeded. 
2026-03-09T21:02:43.378 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction 2026-03-09T21:02:43.408 INFO:teuthology.orchestra.run.vm10.stdout: Preparing : 1/1 2026-03-09T21:02:43.410 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_64 1/3 2026-03-09T21:02:43.412 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el9.x8 2/3 2026-03-09T21:02:43.412 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 3/3 2026-03-09T21:02:43.484 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 3/3 2026-03-09T21:02:43.484 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 1/3 2026-03-09T21:02:43.484 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el9.x8 2/3 2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 
2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repo Size 2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2 x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 2.0 M 2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages: 2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 505 k 2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies: 2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 186 k 2026-03-09T21:02:43.516 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:43.517 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-03-09T21:02:43.517 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:43.517 INFO:teuthology.orchestra.run.vm07.stdout:Remove 3 Packages 2026-03-09T21:02:43.517 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:43.517 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 2.6 M 2026-03-09T21:02:43.517 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-03-09T21:02:43.519 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 
2026-03-09T21:02:43.519 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-03-09T21:02:43.531 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_64 3/3 2026-03-09T21:02:43.531 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:43.531 INFO:teuthology.orchestra.run.vm10.stdout:Removed: 2026-03-09T21:02:43.531 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:43.531 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:43.531 INFO:teuthology.orchestra.run.vm10.stdout: python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:43.531 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:43.531 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:43.534 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 2026-03-09T21:02:43.534 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-03-09T21:02:43.563 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-03-09T21:02:43.565 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_64 1/3 2026-03-09T21:02:43.567 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el9.x8 2/3 2026-03-09T21:02:43.567 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 3/3 2026-03-09T21:02:43.640 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 3/3 2026-03-09T21:02:43.640 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 1/3 2026-03-09T21:02:43.641 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el9.x8 2/3 2026-03-09T21:02:43.689 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : 
python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_64 3/3 2026-03-09T21:02:43.689 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:43.689 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-03-09T21:02:43.689 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:43.689 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:43.689 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:43.689 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:43.689 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:43.750 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: libcephfs-devel 2026-03-09T21:02:43.751 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:43.754 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:43.754 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:43.754 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:43.901 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: libcephfs-devel 2026-03-09T21:02:43.901 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:43.904 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:43.905 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:43.905 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:43.969 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 
2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: Package Arch Version Repository Size 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout:Removing: 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: librados2 x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 12 M 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout:Removing dependent packages: 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: python3-rados x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 1.1 M 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: python3-rbd x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 1.1 M 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: python3-rgw x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 265 k 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: rbd-fuse x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 231 k 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: rbd-nbd x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 490 k 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout:Removing unused dependencies: 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k 
2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: librbd1 x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 12 M 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: librgw2 x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 16 M 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M 2026-03-09T21:02:43.971 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:43.972 INFO:teuthology.orchestra.run.vm10.stdout:Transaction Summary 2026-03-09T21:02:43.972 INFO:teuthology.orchestra.run.vm10.stdout:================================================================================ 2026-03-09T21:02:43.972 INFO:teuthology.orchestra.run.vm10.stdout:Remove 19 Packages 2026-03-09T21:02:43.972 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:43.972 INFO:teuthology.orchestra.run.vm10.stdout:Freed space: 73 M 2026-03-09T21:02:43.972 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction check 2026-03-09T21:02:43.976 INFO:teuthology.orchestra.run.vm10.stdout:Transaction check succeeded. 2026-03-09T21:02:43.976 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction test 2026-03-09T21:02:44.002 INFO:teuthology.orchestra.run.vm10.stdout:Transaction test succeeded. 
2026-03-09T21:02:44.002 INFO:teuthology.orchestra.run.vm10.stdout:Running transaction 2026-03-09T21:02:44.049 INFO:teuthology.orchestra.run.vm10.stdout: Preparing : 1/1 2026-03-09T21:02:44.052 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64 1/19 2026-03-09T21:02:44.055 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 2/19 2026-03-09T21:02:44.057 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64 3/19 2026-03-09T21:02:44.057 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 4/19 2026-03-09T21:02:44.070 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 4/19 2026-03-09T21:02:44.072 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/19 2026-03-09T21:02:44.075 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64 6/19 2026-03-09T21:02:44.077 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 7/19 2026-03-09T21:02:44.079 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/19 2026-03-09T21:02:44.083 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/19 2026-03-09T21:02:44.083 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 10/19 2026-03-09T21:02:44.102 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 10/19 2026-03-09T21:02:44.103 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 11/19 2026-03-09T21:02:44.103 INFO:teuthology.orchestra.run.vm10.stdout:warning: file /etc/ceph: remove failed: No such file or directory 2026-03-09T21:02:44.103 
INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:44.119 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 11/19 2026-03-09T21:02:44.122 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/19 2026-03-09T21:02:44.127 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/19 2026-03-09T21:02:44.131 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/19 2026-03-09T21:02:44.131 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: librados2 x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 12 M 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages: 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 1.1 M 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 1.1 M 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 265 k 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 231 k 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: 
rbd-nbd x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 490 k 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies: 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: librbd1 x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 12 M 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: librgw2 x86_64 2:18.2.7-1055.gab47f43c.el9 @ceph 16 M 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout:Remove 19 Packages 2026-03-09T21:02:44.133 INFO:teuthology.orchestra.run.vm07.stdout: 
2026-03-09T21:02:44.134 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 73 M 2026-03-09T21:02:44.134 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-03-09T21:02:44.134 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/19 2026-03-09T21:02:44.137 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 16/19 2026-03-09T21:02:44.138 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-03-09T21:02:44.138 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-03-09T21:02:44.139 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 17/19 2026-03-09T21:02:44.142 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 18/19 2026-03-09T21:02:44.157 INFO:teuthology.orchestra.run.vm10.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 19/19 2026-03-09T21:02:44.163 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 
2026-03-09T21:02:44.163 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-03-09T21:02:44.208 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-03-09T21:02:44.211 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64 1/19 2026-03-09T21:02:44.213 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 2/19 2026-03-09T21:02:44.216 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64 3/19 2026-03-09T21:02:44.216 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 4/19 2026-03-09T21:02:44.225 INFO:teuthology.orchestra.run.vm10.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 19/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 4/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 5/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 6/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 7/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 8/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 9/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 10/19 
2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 11/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 12/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64 13/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64 14/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 15/19 2026-03-09T21:02:44.226 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 16/19 2026-03-09T21:02:44.227 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64 17/19 2026-03-09T21:02:44.227 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : re2-1:20211101-20.el9.x86_64 18/19 2026-03-09T21:02:44.232 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 4/19 2026-03-09T21:02:44.235 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/19 2026-03-09T21:02:44.237 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64 6/19 2026-03-09T21:02:44.240 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 7/19 2026-03-09T21:02:44.242 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/19 2026-03-09T21:02:44.245 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/19 2026-03-09T21:02:44.245 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 10/19 2026-03-09T21:02:44.261 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: 
librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 10/19 2026-03-09T21:02:44.261 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 11/19 2026-03-09T21:02:44.261 INFO:teuthology.orchestra.run.vm07.stdout:warning: file /etc/ceph: remove failed: No such file or directory 2026-03-09T21:02:44.261 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 19/19 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout:Removed: 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64 
2026-03-09T21:02:44.269 INFO:teuthology.orchestra.run.vm10.stdout: python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.270 INFO:teuthology.orchestra.run.vm10.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-09T21:02:44.270 INFO:teuthology.orchestra.run.vm10.stdout: rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.270 INFO:teuthology.orchestra.run.vm10.stdout: rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.270 INFO:teuthology.orchestra.run.vm10.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T21:02:44.270 INFO:teuthology.orchestra.run.vm10.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T21:02:44.270 INFO:teuthology.orchestra.run.vm10.stdout: 2026-03-09T21:02:44.270 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:44.277 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 11/19 2026-03-09T21:02:44.280 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/19 2026-03-09T21:02:44.283 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/19 2026-03-09T21:02:44.287 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/19 2026-03-09T21:02:44.289 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/19 2026-03-09T21:02:44.292 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 16/19 2026-03-09T21:02:44.294 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 17/19 2026-03-09T21:02:44.297 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 18/19 2026-03-09T21:02:44.311 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 19/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 19/19 2026-03-09T21:02:44.381 
INFO:teuthology.orchestra.run.vm07.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 4/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 5/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 6/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 7/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 8/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 9/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 10/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 11/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 12/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64 13/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64 14/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 15/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 16/19 2026-03-09T21:02:44.381 
INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64 17/19 2026-03-09T21:02:44.381 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : re2-1:20211101-20.el9.x86_64 18/19 2026-03-09T21:02:44.427 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 19/19 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: librados2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: librbd1-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: librgw2-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.428 
INFO:teuthology.orchestra.run.vm07.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd-2:18.2.7-1055.gab47f43c.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T21:02:44.428 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:44.460 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: librbd1 2026-03-09T21:02:44.460 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:44.464 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:44.464 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:44.465 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:44.657 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: librbd1 2026-03-09T21:02:44.657 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:44.660 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:44.661 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:44.661 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:44.670 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: python3-rados 2026-03-09T21:02:44.671 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:44.676 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:44.677 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:44.677 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 
2026-03-09T21:02:44.888 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-rados 2026-03-09T21:02:44.888 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:44.892 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:44.893 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:44.893 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:44.914 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: python3-rgw 2026-03-09T21:02:44.914 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:44.918 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:44.918 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:44.918 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:45.100 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-rgw 2026-03-09T21:02:45.100 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:45.104 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:45.104 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:45.104 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:45.181 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: python3-cephfs 2026-03-09T21:02:45.181 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:45.185 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:45.185 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:45.185 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:45.303 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-cephfs 2026-03-09T21:02:45.303 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 
2026-03-09T21:02:45.306 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:45.306 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:45.307 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:45.379 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: python3-rbd 2026-03-09T21:02:45.379 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:45.382 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:45.383 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:45.383 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:45.503 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-rbd 2026-03-09T21:02:45.504 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:45.507 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:45.507 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:45.507 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:45.578 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: rbd-fuse 2026-03-09T21:02:45.578 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:45.581 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:45.582 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:45.582 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:45.711 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: rbd-fuse 2026-03-09T21:02:45.711 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:45.715 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:45.716 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 
2026-03-09T21:02:45.716 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:45.769 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: rbd-mirror 2026-03-09T21:02:45.769 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:45.772 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:45.773 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:45.773 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:45.930 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: rbd-mirror 2026-03-09T21:02:45.930 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:45.933 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:45.934 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-03-09T21:02:45.934 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:45.970 INFO:teuthology.orchestra.run.vm10.stdout:No match for argument: rbd-nbd 2026-03-09T21:02:45.970 INFO:teuthology.orchestra.run.vm10.stderr:No packages marked for removal. 2026-03-09T21:02:45.974 INFO:teuthology.orchestra.run.vm10.stdout:Dependencies resolved. 2026-03-09T21:02:45.974 INFO:teuthology.orchestra.run.vm10.stdout:Nothing to do. 2026-03-09T21:02:45.974 INFO:teuthology.orchestra.run.vm10.stdout:Complete! 2026-03-09T21:02:45.998 DEBUG:teuthology.orchestra.run.vm10:> sudo yum clean all 2026-03-09T21:02:46.126 INFO:teuthology.orchestra.run.vm10.stdout:56 files removed 2026-03-09T21:02:46.132 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: rbd-nbd 2026-03-09T21:02:46.132 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-03-09T21:02:46.136 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-03-09T21:02:46.136 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 
2026-03-09T21:02:46.137 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-03-09T21:02:46.146 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T21:02:46.164 DEBUG:teuthology.orchestra.run.vm07:> sudo yum clean all 2026-03-09T21:02:46.173 DEBUG:teuthology.orchestra.run.vm10:> sudo yum clean expire-cache 2026-03-09T21:02:46.304 INFO:teuthology.orchestra.run.vm07.stdout:56 files removed 2026-03-09T21:02:46.335 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T21:02:46.345 INFO:teuthology.orchestra.run.vm10.stdout:Cache was expired 2026-03-09T21:02:46.346 INFO:teuthology.orchestra.run.vm10.stdout:0 files removed 2026-03-09T21:02:46.363 DEBUG:teuthology.orchestra.run.vm07:> sudo yum clean expire-cache 2026-03-09T21:02:46.375 DEBUG:teuthology.parallel:result is None 2026-03-09T21:02:46.534 INFO:teuthology.orchestra.run.vm07.stdout:Cache was expired 2026-03-09T21:02:46.534 INFO:teuthology.orchestra.run.vm07.stdout:0 files removed 2026-03-09T21:02:46.561 DEBUG:teuthology.parallel:result is None 2026-03-09T21:02:46.561 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm07.local 2026-03-09T21:02:46.561 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm10.local 2026-03-09T21:02:46.561 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T21:02:46.561 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T21:02:46.590 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-09T21:02:46.591 DEBUG:teuthology.orchestra.run.vm10:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-09T21:02:46.660 DEBUG:teuthology.parallel:result is None 2026-03-09T21:02:46.663 DEBUG:teuthology.parallel:result is None 2026-03-09T21:02:46.663 DEBUG:teuthology.run_tasks:Unwinding manager clock 
2026-03-09T21:02:46.666 INFO:teuthology.task.clock:Checking final clock skew... 2026-03-09T21:02:46.666 DEBUG:teuthology.orchestra.run.vm07:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T21:02:46.702 DEBUG:teuthology.orchestra.run.vm10:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T21:02:46.718 INFO:teuthology.orchestra.run.vm07.stderr:bash: line 1: ntpq: command not found 2026-03-09T21:02:46.723 INFO:teuthology.orchestra.run.vm10.stderr:bash: line 1: ntpq: command not found 2026-03-09T21:02:46.835 INFO:teuthology.orchestra.run.vm10.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-09T21:02:46.835 INFO:teuthology.orchestra.run.vm10.stdout:=============================================================================== 2026-03-09T21:02:46.835 INFO:teuthology.orchestra.run.vm10.stdout:^- 104-167-24-26.lunoxia.fc> 2 6 377 22 +2879us[+2879us] +/- 36ms 2026-03-09T21:02:46.835 INFO:teuthology.orchestra.run.vm10.stdout:^- 217.160.19.219 2 8 377 20 +4143us[+4143us] +/- 50ms 2026-03-09T21:02:46.835 INFO:teuthology.orchestra.run.vm10.stdout:^+ vps-fra1.orleans.ddnss.de 2 6 377 86 +133us[ +133us] +/- 12ms 2026-03-09T21:02:46.835 INFO:teuthology.orchestra.run.vm10.stdout:^* 79.133.44.136 1 7 377 90 -9456ns[-6919ns] +/- 10ms 2026-03-09T21:02:46.835 INFO:teuthology.orchestra.run.vm07.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-09T21:02:46.836 INFO:teuthology.orchestra.run.vm07.stdout:=============================================================================== 2026-03-09T21:02:46.836 INFO:teuthology.orchestra.run.vm07.stdout:^- 104-167-24-26.lunoxia.fc> 2 7 377 87 +2842us[+2881us] +/- 36ms 2026-03-09T21:02:46.836 INFO:teuthology.orchestra.run.vm07.stdout:^- 217.160.19.219 2 7 377 20 +4053us[+4053us] +/- 50ms 2026-03-09T21:02:46.836 INFO:teuthology.orchestra.run.vm07.stdout:^+ vps-fra1.orleans.ddnss.de 2 7 377 151 +134us[ +173us] +/- 
12ms 2026-03-09T21:02:46.836 INFO:teuthology.orchestra.run.vm07.stdout:^* 79.133.44.136 1 7 377 21 -51us[ -13us] +/- 10ms 2026-03-09T21:02:46.836 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-09T21:02:46.840 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-09T21:02:46.840 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-09T21:02:46.843 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-09T21:02:46.846 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-09T21:02:46.849 INFO:teuthology.task.internal:Duration was 1750.773901 seconds 2026-03-09T21:02:46.849 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-09T21:02:46.852 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-09T21:02:46.852 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T21:02:46.879 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T21:02:46.920 INFO:teuthology.orchestra.run.vm07.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-09T21:02:46.924 INFO:teuthology.orchestra.run.vm10.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-09T21:02:47.420 INFO:teuthology.task.internal.syslog:Checking logs for errors... 
2026-03-09T21:02:47.420 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm07.local
2026-03-09T21:02:47.420 DEBUG:teuthology.orchestra.run.vm07:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T21:02:47.452 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm10.local
2026-03-09T21:02:47.452 DEBUG:teuthology.orchestra.run.vm10:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T21:02:47.479 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-09T21:02:47.479 DEBUG:teuthology.orchestra.run.vm07:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T21:02:47.494 DEBUG:teuthology.orchestra.run.vm10:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T21:02:48.248 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-09T21:02:48.248 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T21:02:48.249 DEBUG:teuthology.orchestra.run.vm10:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T21:02:48.276 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T21:02:48.277 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T21:02:48.277 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T21:02:48.277 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T21:02:48.278 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz/home/ubuntu/cephtest/archive/syslog/journalctl.log:
2026-03-09T21:02:48.279 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T21:02:48.279 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T21:02:48.280 INFO:teuthology.orchestra.run.vm10.stderr:gzip/home/ubuntu/cephtest/archive/syslog/kern.log: -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T21:02:48.280 INFO:teuthology.orchestra.run.vm10.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T21:02:48.280 INFO:teuthology.orchestra.run.vm10.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T21:02:48.417 INFO:teuthology.orchestra.run.vm10.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.7% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T21:02:48.461 INFO:teuthology.orchestra.run.vm07.stderr: 96.7% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T21:02:48.463 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-09T21:02:48.466 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-09T21:02:48.467 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T21:02:48.528 DEBUG:teuthology.orchestra.run.vm10:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T21:02:48.559 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-09T21:02:48.563 DEBUG:teuthology.orchestra.run.vm07:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T21:02:48.570 DEBUG:teuthology.orchestra.run.vm10:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T21:02:48.599 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = core
2026-03-09T21:02:48.633 INFO:teuthology.orchestra.run.vm10.stdout:kernel.core_pattern = core
2026-03-09T21:02:48.651 DEBUG:teuthology.orchestra.run.vm07:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T21:02:48.672 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T21:02:48.673 DEBUG:teuthology.orchestra.run.vm10:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T21:02:48.707 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T21:02:48.707 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-09T21:02:48.711 INFO:teuthology.task.internal:Transferring archived files...
2026-03-09T21:02:48.711 DEBUG:teuthology.misc:Transferring archived files from vm07:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/647/remote/vm07
2026-03-09T21:02:48.711 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T21:02:48.752 DEBUG:teuthology.misc:Transferring archived files from vm10:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/647/remote/vm10
2026-03-09T21:02:48.752 DEBUG:teuthology.orchestra.run.vm10:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T21:02:48.786 INFO:teuthology.task.internal:Removing archive directory...
2026-03-09T21:02:48.786 DEBUG:teuthology.orchestra.run.vm07:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T21:02:48.792 DEBUG:teuthology.orchestra.run.vm10:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T21:02:48.842 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-09T21:02:48.858 INFO:teuthology.task.internal:Not uploading archives.
2026-03-09T21:02:48.859 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-09T21:02:48.862 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-09T21:02:48.862 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T21:02:48.864 DEBUG:teuthology.orchestra.run.vm10:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T21:02:48.883 INFO:teuthology.orchestra.run.vm07.stdout:  8532097  0 drwxr-xr-x  3 ubuntu ubuntu  19 Mar  9 21:02 /home/ubuntu/cephtest
2026-03-09T21:02:48.883 INFO:teuthology.orchestra.run.vm07.stdout: 33604751  0 d---------  2 ubuntu ubuntu   6 Mar  9 20:45 /home/ubuntu/cephtest/mnt.0
2026-03-09T21:02:48.883 INFO:teuthology.orchestra.run.vm07.stderr:find: ‘/home/ubuntu/cephtest/mnt.0’: Permission denied
2026-03-09T21:02:48.884 INFO:teuthology.orchestra.run.vm07.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-09T21:02:48.885 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T21:02:48.885 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 48, in base
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm07 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-09T21:02:48.885 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T21:02:48.889 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-09T21:02:48.890 INFO:teuthology.run:Summary data:
description: orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}}
duration: 1750.7739012241364
failure_reason: reached maximum tries (50) after waiting for 300 seconds
flavor: default
owner: kyr
status: fail
success: false
2026-03-09T21:02:48.890 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T21:02:48.948 INFO:teuthology.run:FAIL
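The root failure above is `MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds`: `umount_wait` polled the fuse daemon 50 times over 5 minutes without it exiting, leaving `mnt.0` behind and cascading into the `rmdir` failure. A simplified sketch of that bounded-retry pattern (a stand-in exception class and hypothetical `wait_until` helper; the real `teuthology.contextutil` helper also tracks elapsed wall time, hence "after waiting for 300 seconds"):

```python
import time

class MaxWhileTries(Exception):
    """Stand-in for teuthology.exceptions.MaxWhileTries."""

def wait_until(predicate, tries=50, sleep=0.0):
    """Poll `predicate` up to `tries` times, sleeping between attempts;
    raise MaxWhileTries once the retry budget is exhausted.
    Returns the 1-based attempt on which the predicate first succeeded."""
    for attempt in range(1, tries + 1):
        if predicate():
            return attempt
        time.sleep(sleep)
    raise MaxWhileTries(f'reached maximum tries ({tries})')
```

Bounding the wait is what turns a hung fuse unmount into a reported test failure instead of a job that blocks the worker forever.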